• 2 Posts
  • 41 Comments
Joined 1 year ago
Cake day: October 2nd, 2023


  • bbuez@lemmy.world to Selfhosted@lemmy.world · 2real4me · 1 month ago

    The best code it's given me, I've been able to search for and find where it was taken from. Hey, it helped me discover some real human blogs with vastly more helpful information.

    (If you’re curious, it was circa the weird infighting at openclosedAI with Altman. I prompted it for code to find the rotational inertia per axis, and to my surprise and suspicion the answer made too much sense. Back-searching, I found where I believe it got this answer from.)
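For reference, a back-of-envelope version of what I asked for (my own toy numbers, not the model's output): the moment of inertia about each axis for a set of point masses is I_axis = sum(m * d^2), with d the perpendicular distance from that axis.

```python
# Toy point-mass system; masses in kg, positions in meters.
masses = [1.0, 2.0, 0.5]
points = [(1.0, 0.0, 0.0),
          (0.0, 1.0, 0.0),
          (0.0, 0.0, 2.0)]

# Perpendicular distance from the x-axis is sqrt(y^2 + z^2), etc.
Ix = sum(m * (y**2 + z**2) for m, (x, y, z) in zip(masses, points))
Iy = sum(m * (x**2 + z**2) for m, (x, y, z) in zip(masses, points))
Iz = sum(m * (x**2 + y**2) for m, (x, y, z) in zip(masses, points))
print(Ix, Iy, Iz)  # 4.0 3.0 3.0
```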


  • +1 for Proxmox, it has been a fun experience as there are plenty of resources and helper scripts to get you off the ground. Jellyfin was the first thing I migrated from my PC; hardware encoding may give you a bit of a tussle, but nothing unsolvable. Also note Proxmox is Debian under the hood, so you may find it easy to work with. I looked into Unraid, and it seems great if all you’re doing for the most part is storage; if you want Linux containers and virtual machines, Proxmox is your bet.

    I got a small 4-bay 2U server from a friend on the cheap; $1000 should get you relatively nice new or slightly older used hardware. Even just a PC with a nice number of drive bays will get you started. And drives are cheap; a RAID 1 setup was one of the first things I did.

    In the end I’ll likely get a separate NAS rack server just to segregate functions, but as of now I simply have a Proxmox LXC mounted to my NAS drives, running Samba to expose them.

    Tailscale is a nice set-and-forget solution for VPN access. I ended up going the route of getting an SSL-certified domain and beefing up my firewall a bit; from the bit I’ve messed with it, it certainly has a learning curve greater than OpenVPN, but it is much more hardened and versatile.

    As for Pi-hole, I’ve found AdGuard Home to be just about a suitable replacement, and it can be installed alongside OpenWrt, though I have a bit of an unconventional router with 512MB of RAM, so YMMV.




  • Gramps needed his Excel icon - on the monitor, I might add - or else. Debloat and activation scripts got him his Windows 7 and Office 2007 experience back, and he was very appreciative of my “hack”, merely for the same experience he paid for some 15 years ago.

    He doesn’t know what a “Linux” is, but I am grateful that people are still invested enough to make utilities that return Windows to a more user-centric experience - even if I certainly don’t care to go back.



  • Well, you offhandedly gave “elon bad” memes precedence over the actual critiques being offered. Nobody who actually cares about this moon mission gives a damn about Elon memes, so I expect to discuss the mission plan on its merits alone.

    SmarterEveryDay was largely talking about the culture at NASA, from what I remember of that video. That and the lack of hypergolics.

    It may be a long watch, but please actually watch the whole thing; he’s very well spoken and ultimately optimistic (as am I) about going back. But I am certain he had more to mention than just hypergolics. I can list a few:

    • astronaut access to the surface
    • stability on landing with a high COM
    • number of refuels necessary given nominal boiloff
    • lack of a mockup vehicle for astronaut training
    • undemonstrated orbital refueling (no, bleeding the header tank is not a fuel transfer, as per Flight 3)
    • yes, the hypergolics - you don’t want to be stuck on the moon
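On the COM point, a quick sketch of why a high center of mass matters: a rigid lander resting on a slope starts to tip once the slope angle exceeds atan(base_half_width / com_height). The numbers below are made up for illustration, not actual LM or HLS figures.

```python
import math

def tip_angle_deg(base_half_width_m: float, com_height_m: float) -> float:
    """Slope angle at which the COM passes over the edge of the footprint."""
    return math.degrees(math.atan2(base_half_width_m, com_height_m))

squat = tip_angle_deg(2.0, 3.0)    # short, wide lander: generous margin
tall = tip_angle_deg(4.5, 25.0)    # tall vehicle, high COM: much less margin
print(round(squat, 1), round(tall, 1))  # 33.7 10.2
```

Same footprint scale, but the tall vehicle tolerates roughly a third of the slope before tipping.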

    If these are “intentionally obtuse” points, well then welcome to aerospace engineering; it’s called rocket science for a reason.

    And Destin’s point about the culture? People aren’t speaking their critiques when they’re most necessary to hear; people are afraid to speak. How does that help a program which may or may not have flaws (flaws that could be remedied) when none are even pointed out? Well, look at Boeing for one.

    > The fact you don’t know how risky Apollo was to the astronauts shows you don’t know much about this

    I mentioned Apollo 1, right? I’m pretty sure I mentioned Apollo 1, how the crew perished on the pad, and how it nearly stopped the program. Now, if you’re going to be intentionally obtuse, then I bid you a good day.


  • Lost me at the second paragraph. Elon most certainly can be a complete moron while SpaceX remains a competent launch provider, but to ignore his track record and business dealings when considering HLS would be a lapse in judgement.

    Aside from the man, the plan for Starship is vague at best, and given that $2 billion in public funds is planned to be spent on Starship this year alone, I would certainly like to know more details… as NASA does too:

    20 launches, up from Musk’s initial 8, will be needed to fuel the craft.

    Contracts have deadlines and astronauts need assurances

    > It’s really cringe

    If NASA is at a point where healthy critique is considered cringe, then I doubt we’ll be on the moon for long. Sure, there’s some rashness, but in the public’s eye, do you think Apollo could’ve succeeded if they had dismissed hardware failures as RUDs?

    Apollo 1 nearly ended the program; yes, it was the deaths of those astronauts that prompted that, but it’s that necessary rigor that prevents another such accident. An inherent con of the trial-by-fire method SpaceX has used is the potential to miss something that wasn’t an immediate issue. This can be mitigated, but it is a valid source of concern for the engineer.

    I, however, am not nearly qualified to make that call. But I feel as though this video from the channel SmarterEveryDay (whose family was involved in Apollo) sums up a set of valid concerns that anybody with an interest in this should at least hear.

    I want us to go back to the moon as much as the next person, but remember: Apollo cost some $200B in today’s money, and part of that cost was the extensive checks needed to avert tragedy. We must be sure we’re not cutting those; it’s only a natural concern. And we can’t make heroes of men while we’re at it; nobody is infallible. If the proposal is solid, it will be the one to take us regardless of who’s running the show. And if it’s not, we cannot afford to have made mission proposals personal.




  • We do not have a rigorous model of the brain, yet we have designed LLMs. Experts with decades in ML recognize that there is no intelligence happening here, because yes, we don’t understand intelligence - certainly not enough to build one.

    If we want to take from definitions, here is Merriam-Webster:

    > (1) : the ability to learn or understand or to deal with new or trying situations : reason
    > also : the skilled use of reason
    > (2) : the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)

    The context stack is the closest thing we have to being able to retain and apply old info to newer context; the rest is in the name: Generative Pre-Trained language models. Their output is baked by a statistical model finding similar text - also dubbed “stochastic parrots” by some ML researchers, which I find a more fitting name. There’s also no doubt of their potential (and already practiced) utility, but they’re a long shot from being considered a person by law.
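To make the "parrot" point concrete, here is about the smallest stochastic parrot possible: a bigram model over a tiny made-up corpus. Real GPTs are transformers with billions of parameters, but the generative step is the same in spirit - sample the next token from a learned distribution conditioned on context.

```python
import random

corpus = "the cat sat on the mat and the cat ate the rat".split()

# "Training": count which word follows which.
model = {}  # word -> list of words observed to follow it
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

# "Generation": repeatedly sample a plausible next word.
random.seed(0)
word = "the"
out = [word]
for _ in range(5):
    word = random.choice(model.get(word, corpus))  # fall back if no bigram
    out.append(word)
print(" ".join(out))
```

The output is locally plausible English with no understanding anywhere in the loop - the difference with a GPT is scale and architecture, not kind of operation.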


  • I don’t want to spam this link, but seriously, watch this 3Blue1Brown video on how text transformers work. You’re right on that last part, but it’s a far cry from an intelligence - just a very clever use of statistical methods. And it’s precisely for that reason that it can be “convinced”: parameters restraining its output have to be weighed into the model, so it’s just a statistic that will fail.

    I’m not intending to downplay the significance of GPTs, but we need to baseline the hype around them before we can discuss where AI goes next and what it can mean for people. And that should happen far before we use them for any secure services, because we’ve already seen what can happen.


  • Building my own training set is something I would certainly want to do eventually. I’ve been messing with Mistral Instruct using GPT4All, and it’s genuinely impressive how quickly my 2060 can hallucinate relatively accurate information, but the limitations are also evident. E.g., if I tell it I do not want to use AWS or any other cloud hosting service, it will just return a list of suggested services not including AWS. Most certainly a limit of its training data, but still impressive.

    Anyone suggesting we use LLMs to manage people or resources is better off flipping a coin on every decision; more than likely, companies that insist on it will go belly up soon enough.


  • The fallout of image generation will be even more incredible, IMO. Even if models do become more capable, post-'21 training data will be increasingly polluted and difficult to distinguish as models improve their output, which inevitably leads to model collapse. At least until we have a standardized way of flagging generated images as opposed to real ones, but I don’t really like that future.
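A toy version of that collapse loop: fit a Gaussian to samples, then - like a generative model favoring its high-probability outputs - keep only samples near the mode before refitting, and repeat. The distribution's variance collapses within a few generations. Real models and the real feedback loop are vastly more complex; this is only an analogy.

```python
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0  # the "real data" distribution we start from
for generation in range(5):
    # Generate from the current model of the data.
    samples = [random.gauss(mu, sigma) for _ in range(500)]
    # Mimic mode-seeking generation: drop the tails before refitting.
    kept = [s for s in samples if abs(s - mu) <= sigma]
    mu = statistics.fmean(kept)
    sigma = statistics.stdev(kept)
print(round(sigma, 3))  # well under the original sigma of 1.0
```

Each generation trains on a slightly narrower slice of reality than the last, which is the worry with post-'21 web scrapes.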

    Just on a tangent, OpenAI claiming video models will help “AGI” understand the world around it is laughable to me. 3Blue1Brown released a very informative video on how text transformers work, and in principle all “AI” is at the moment is very clever statistics and lots of matrix multiplication. How our minds process and retain information is far more complicated; we don’t fully understand ourselves yet, and we are a grand leap away from emulating a true mind.
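"Clever statistics and matrix multiplication" is not an exaggeration: the core of a transformer layer is scaled dot-product attention, softmax(Q·Kᵀ/√d)·V. Here it is written out for a toy 3-token, 2-dimensional case with made-up numbers.

```python
import math

def matmul(A, B):
    """Plain-Python matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x - max(row)) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # queries (toy numbers)
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # keys
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # values

K_T = [list(col) for col in zip(*K)]
scores = matmul(Q, K_T)                               # Q K^T
weights = [softmax([s / math.sqrt(2) for s in row])   # scale by sqrt(d_k)
           for row in scores]
out = matmul(weights, V)                              # weighted mix of values
print([[round(x, 2) for x in row] for row in out])
```

Every output row is just a probability-weighted average of the value rows - statistics on top of matrix products, end to end.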

    All that to say: I can’t wait for people to realize, oh hey, that’s just Silicon Valley trying to replace talent in film production.