• 0 Posts
  • 16 Comments
Joined 11 months ago
Cake day: December 27th, 2023


  • you’re welcome.

    what i’d suggest… a general rule i like to always follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    let’s say you have your mailbox and want to try fetching new mails from it using fetchmail. first, you can use the uidl mechanism to fetch every mail only once and otherwise leave them all on the server. but i like it a bit more secure: create a second email address/account at your mail provider’s service just for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; i once had an email account with a cheap provider that deadlocked the inbox when full…). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done.

    every change (not only fetchmail things) can be tested this way before going live. filtering could be done with procmail, for example, but if the mda that procmail calls somehow exits with success even though the email really wasn’t delivered, the email might get lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
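
    a rough sketch of the fetchmail side of such a test account (server name, account and password are placeholders; the keywords are the usual fetchmail run-control options):

        # ~/.fetchmailrc  (chmod 600)
        poll pop.example.com protocol POP3
            user "testaccount" password "secret"
            keep                              # leave all mail on the server
            uidl                              # remember which mails were already fetched
            mda "/usr/bin/procmail -d %T"     # hand each mail to procmail for delivery/filtering

    once the test account behaves as expected, only the poll/user/password lines change (or keep a second file and run fetchmail -f testaccount.rc while testing).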

    have fun !


  • it’s possible to tell your mta (like postfix) to use another mta for all mails, or only for some domains etc., so using a third party as the internet-facing service, then fetching the mails with fetchmail and storing them in a dovecot server, is easy. on the sending side you can use your standard email client (i.e. thunderbird on pc or k9-mail on smartphone) to hand mail to your postfix instance that also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it according to rules, which could simply be relaying all outgoing emails through the mta of your freemailer using the username/password of your account. i am doing this, but the “external” mail system is my own servers as well; i just don’t want emails to stay too long on VMs in the datacenter where i have no access to the physical disks in case something goes wrong.
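
    a minimal sketch of that relaying part in postfix (host name and paths are placeholders; routing only some domains elsewhere would go into transport_maps instead of relayhost):

        # /etc/postfix/main.cf
        relayhost = [smtp.freemailer.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
        [smtp.freemailer.example]:587    yourusername:yourpassword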

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for emails only i’d say a 3 or older would do too). adding a disk via usb makes storage huge and cheap; i use two usb SSDs in a raid1 for storage. that server could be reachable only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9mail and thunderbird, so it fits seamlessly to connect through a haproxy that authenticates these before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configure download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server if needed.
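
    the mirrored usb storage part could look roughly like this with mdadm (device names are placeholders - check with lsblk first, this overwrites the partitions you pass in):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        mkfs.ext4 /dev/md0
        mount /dev/md0 /srv/mail    # e.g. where dovecot keeps its maildirs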

    maybe adjust the maximum mail size of your own mta to exactly match (or be slightly less than) that of the freemailer you use, to prevent surprises with big but then unsent emails.
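
    in postfix that is a single parameter, something like (25 MiB here is just an assumed freemailer limit):

        postconf -e 'message_size_limit = 26214400'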

    it’s possible to have a nextcloud instance on that same pi act as a webmailer, just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a bad idea either ;-)

    i also suggest setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)
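
    one possible sketch for that, using rsync hard-link snapshots to a remote host (paths and the host name are placeholders; borg or rsnapshot would do the same job):

        #!/bin/sh
        # nightly offsite snapshot of the maildirs, run from cron
        # (configs like /etc could be added the same way)
        DATE=$(date +%F)
        rsync -a --delete --link-dest=/backups/pi/latest /var/mail/ offsite:/backups/pi/$DATE/
        ssh offsite "ln -sfn /backups/pi/$DATE /backups/pi/latest"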



  • smb@lemmy.ml to Programmer Humor@programming.dev · "prompt engineering" · 7 months ago

    that a moderately clever human can talk them into doing pretty much anything.

    besides that, LLMs are good enough to let moderately clever humans believe that they actually got an answer that was more than guessing and probabilities based on millions of trolls’ messages, advertising lies, fantasy books, scammer webpages, fake news, astroturfing, propaganda of past centuries including the current made-up narratives, and a quite long prompt invisible to that human.

    cheerio!




  • after looking at the ticket myself i think the relevant things IMHO are:

    • a person filed a bug report because they did not see which change in the new version caused the different behaviour
    • that person seemed pushy: first telling the dev where patches should be sent (is this normal? i guess not; better let the dev decide where patches go or - in this case - whether patches are needed at all), then coming up with ceo-style wording (highly visible, customer experience of an untested but nevertheless released-to-live product is bad due to this (implicitly “your”) bug)
    • the pushiness is counterbalanced by a “please help”
    • free-of-charge consulting was given by the one pointing out that such changes are likely visible in the changelog (i did not look, though), who nevertheless pointed to the parameter in question - which assumes RTFM (if the docs were indeed updated): a default value had changed and its behaviour could be adjusted by using that parameter.

    up to there that person - belonging to M$ or not (don’t know and don’t care) - behaved IMHO rather correctly: submitting a bug report for something that looked like one, being a bit pushy, wanting priority, trying to command, but still formally at least “asking” for help. at that point, though, the “bug” seemed resolved to me; it looks like the person either did not read the manual and changelog, or maybe the manual or changelog lacks that information, but the latter was not claimed later on, so i guess that person just read neither the changelog nor the manual.

    instead - so it seems to me - that person demanded immediate and free-of-charge consulting on how exactly the switch should be used in that specific use case, which would imply the dev looks into the example files and maybe does some trial and error himself, just so that that person neither needs to invest the time to learn the software the company depends on, nor to hire a consultant to do that work.

    i think (intentional or not) abusing a bug tracker to demand free-of-charge end-user consulting from a dev is a bad idea, unless one wants(!) to actively waste the precious time of the dev (the very time that high-priority ticket for the highly visible, already-released product relies on) or has even worse intentions, like:

    • uploading example files with exploits in them, pointing to the exact versions that include the RCE vulnerability the sample file would abuse, where the “bug” was only reported because it fits the version needed for exploitation, and pressure was made by naming big companies to maybe get the dev to run a vulnerable version on his workstation before someone finds out, so that an upstream attack could take place directly on the dev’s workstation. but that’s just a fictive worst-case scenario.

    to me this clearly looks like a “different culture” problem. in companies where everyone is paid by basically the same employer, abusing an internal bug tracker for quick internal consulting would probably be seen as normal and even best practice, because the dev who knows and actually works on the code likely has the solution right at hand without much thinking, while the other person, who is in charge of quick-fixing an untested product already released to customers, does not have sufficient knowledge of how the thing works and is given neither the time to learn it or at least read changelogs and the manual, nor the time to learn the basics of general upstream software culture.

    in companies the https://en.m.wikipedia.org/wiki/Peter_principle could be a problem that imho likely leads to such situations, but this is a guess, as i know nobody working there and i am not convinced that that person is in fact working for the named company; the ticket shows a name that i would take as a reason not to rely too much on names in ticket systems always being real names.

    the behaviour that caused the bad postings here in this lemmy thread is to me likely “just” a culture problem, and that person would be well advised to get to know open source culture, netiquette etc. and to learn to behave differently depending on who they communicate with, where and how, what to expect, and how to interact productively to the benefit of their upstream too, which is so often the “real price” in open source. it could be that in the company that rolled out the untested product it is seen as best practice to immediately grab the dev who knows a software and let him help you with whatever you can’t do on your own (for whatever reason), whenever you manage to encounter one =]

    i assume the pushiness likely comes from their hierarchy. it is not uncommon that so-called leaders just create pressure downwards, maybe because they have no clue about the thing and don’t want to gain that clue; but i cannot know that, it’s just a picture in my head. in a company that seems to put pressure on releasing an untested product to customers, though, i guess i am not too wrong with the direction of that assumption. what the company maybe should learn is that releasing untested and/or unfinished products to live is a bad habit. but i also assume that if they wanted to learn that, they would have started learning it roughly two decades ago. again, i do not know what company that person works - or worked - for; it could also be just a subcontractor of the named one. and it could also be that the pushiness (telling it’s for m$, that it’s live, has impact on customers etc.) was really decided by someone up the ladder with literally no experience at all in how to handle upstream in such situations. hierarchies can be very dysfunctional sometimes, and in companies saying “impact on customers” is sometimes just the same as saying “boss says asap”.

    what i would suggest their customers (those who were given a beta version as production-ready) should learn: when someone (maybe) continuously delivers differently than advertised, then after experiencing this a few times the customer would be insane to assume that this bad behaviour will vanish through pure hope plus throwing money into hands where money apparently has not helped improve their habits for what is presumably decades. and when feeding the ever-hungry with money does not resolve the problems, maybe looking towards those who have a grown-up culture that does not depend on money could actually provide more genuinely usable products.

    evaluating new solutions (which one would really be best for a specific use case, for example) or testing new versions before really rolling them out to production might be costly, especially when done thoroughly, but it can provide a lot of really valuable stability, otherwise unreachable by those who only throw money at shareholders of brands and maybe rely on pure hope for all the rest. especially when that brand maybe even officially announced the removal of its testing department ;+) what should a sane and educated customer expect then? but again, to note: i do not know which companies really are involved and how exactly. from the ticket i do not see which company that person directly works for, nor whether the claim that m$ is involved is a fact or just a false claim in hope of quicker help (companies already too desperate to test products before going live could become desperate again and need even more help once their bad habits have piled up too long and begin falling on their heads).


  • the xz vulnerability was introduced through a superfluous dependency on systemd; xz was only the library that was abused to exploit systemd’s superfluous dependency hell. sshd does not use xz, but systemd does depend on it. sshd does not need systemd, yet it was attacked through that library dependency.

    we should remove any pointless dependencies that can be found on a system, to prevent such attacks in the future by reducing dependency-based attack vectors to a minimum.
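
    checking what your sshd actually links against is a quick way to spot such indirect dependencies (the binary’s path may differ per distro):

        ldd /usr/sbin/sshd | grep -E 'systemd|lzma'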

    also, we should increase the overall level of privilege separation, where systemd is a good bad example - just look at the init binary and its capability zoo.

    The company that hired “the” systemd developer should IMHO start to really fix these issues !

    so please hold your “$they have fixed it” back until the root cause that made the xz dependency-level attack possible in the first place has really been fixed =)

    Of course pointing it out was good, but now the root cause should be fixed, not just a random symptom that happened to be the first visible attack using this attack vector introduced by systemd.



  • there was a study saying that there is no single “best” way of learning; it is best to combine multiple ways, like with an app, by book, listening to audio only (i listened to radio stations via the internet and got some listening practice for free), a bit of talking, visiting a country that only speaks that language, and so on - trying everything a bit, in parallel.

    that is because our brain learns better when given more different types of “connections” to learn with.

    i started with duolingo (website only, not the app, and only the free parts) 4 years ago and now i speak quite fluently. but i also partly read a grammar book, visited a spanish-speaking country (more than once), watched movies with subtitles only in my language and had lots of phone calls in spanish only.

    my advice is:

    look at free apps, whatever pleases you, take chances, listen to the sound (movies, radio), try to speak, and read easy books or go through exercise books.

    duolingo is good for keeping going while not really motivated, as the smallest effort that still counts is really only minutes, and one can choose to do something that is already easy. this way at least continuity is kept, even if the pace is down for a while. and it is much easier to pick the pace back up when you never really stopped.


  • i am happy with a raspberry pi setup connected to a VLAN switch: the internet is behind a modem (in bridged mode) connected by ethernet to one switch port, while the raspi routes everything through one tagged physical gigabit switch port. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds of delay when that happens; so voip calls basically recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing, for the rather specialized setup i have.
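
    the static dns results are just a few lines in unbound, for example (zone, name and address are placeholders; the drop-in path is the debian-style layout):

        # /etc/unbound/unbound.conf.d/local.conf
        server:
            local-zone: "home.lan." static
            local-data: "nas.home.lan. IN A 192.168.10.5"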

    my wifi is done by an openwrt device, but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with its own rules, hardware redundancy and special tweaking. everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QOS, traffic dumps or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) are transferring when calling home, and much more.

    however, regarding ddns, it sometimes feels safer and more reliable to have a somehow reserved IP that does not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product that is this flexible. however, to me the best ready-made router system seems to be openwrt: you are not bound to a hardware vendor, get security updates longer than with any commercial product, can copy your config 1:1 to a new device even if the hardware changes, and have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very worth looking at and has some similarities to openwrt while being different.


  • i have to admit that my point ‘just don’t do it’ does not in reality guarantee to prevent any trouble. it is still possible to be sued for things someone else did.

    also one suggestion to think about:

    if the seller just sprays some random changes over a book for every sold copy, there would be differences between “every” sold version and every other sold version. by blindly changing those parts to something else you could reveal which exact two/three versions you had for diffing.

    UPDATE: someone else here had the same thought a bit earlier…

    my suggestion to not do it stays the same ;-)

    it could be interesting to figure out how these things work and what could be done to prevent or circumvent such prevention, but actually doing it seems risky no matter what.


  • have a look at “snowdrop” (search together with “steganography”); it is basically the opposite of what you want, but worth mentioning here. watermarks can be placed into whitespace (not limited to actual spaces or linebreaks); intentionally changed usage of paragraphs, tabs or even page boundaries could possibly be detected after scanning and even after OCR. IMHO snowdrop uses - depending on the chosen operation mode - small errors like misspelled words, commas etc., but it also has a mode that comes along with fine grammar and without misspelled words…

    how do you make sure that by diffing two versions you cover “everything” that has been deliberately placed into both documents even though they share literally the same information?

    let’s say you bought two copies at two different stores with two different watermarks. if the watermark contains the date and time of the purchase and the only difference were the minutes, because you bought them within the same hour, then the remaining common part of the watermark would point to all buyers that bought exactly this book in that hour - worldwide. it could still be “very” precise, depending on all the other(!) buyers, if they exist at all within that timeframe. what if the watermark includes the unix epoch? then the part which is the same in both watermarks would not be bounded by hours, but by seconds, 10 seconds, 100 seconds etc.

    and you could not know whether there were other hidden watermarks that just happened to be the same for your two (three?) purchases (same country, continent, payment method, credit card holder name, name of the internet provider used during purchase, browser used etc.). it fully depends on the creator of the watermark what is included and what is not. if you happen to know all of that (without any possible exemptions) you might be on the safe side, but if not…

    my general suggestion here is:

    • if you want to be sure not to get into trouble, then just don’t do it.
    • if that book is too expensive compared to its content, just not buying it possibly also helps the market to fix the problem.
    • save that time and instead help those who already fight for a better world.
    • search for books that are already licence-free (or e.g. “cc”-licensed) and promote those instead; help improve free resources like openstreetmap and wiki*, but do not publish licence-poisoned content there - write it yourself, always.
    • write your own book and publish it free.

    just to mention… the “safe” side sometimes seems limited but maybe is actually not, if you really look at it.