  • That’s a very interesting point of view, and indeed well formulated in the video!

    I don’t necessarily agree with it though. As a human being, I have grown up and learned from my own experience and from the experiences of previous humans that were documented or directly communicated to me. I see no inherent difference between that and an artificial intelligence learning from the same data.

    I never did all the experiments, nor the research previous scientists did, but I trust their reproducibility and logical conclusions. In the same way, I think artificial intelligence could theoretically also learn these things from previously documented findings. That would be an ideal “general intelligence” AI.

    The main problem, I think, is that AI would need to be far more computationally intensive and complex to reach these advanced levels of understanding. And at that point, I see it as a fun theoretical exercise without actual practical benefit: the cost (in money, time and energy) seems far too large to eventually create something that we humans can already do ourselves.

    The current state of LLMs is a very basic “semblance” of understanding, close to what you describe as probability-based conversation.

    I feel that AI is best at very specific tasks, where the problem space is small enough for it to actually learn the underlying model. In the same way, I think LLMs are best at language: rewriting text or generating things. What companies seem to think, though, is that because a model is good at producing realistic language, it is also competent at the content of what it writes. And again, for that to be true, it would need a much more advanced method of computation than is currently available.

    Take this all with a grain of salt though, as I am no expert on the matter. I am an electrical engineer who no longer works in the sector due to mental issues, but with an interest in computer science.



  • Honestly, I am often a passionate little toddler explaining stuff that amazed me when I learned it.

    There was an issue around male behaviour at an inclusive boxing gym where I train, and I suddenly became super conscious of it, afraid that I was part of the problem. But luckily I had some really nice conversations with some of the women training there, who explained to me that this was not an issue. Of course I am no saint and have also made mistakes, but being passionate about things is not the same as mansplaining, and luckily others are able to tell the difference.

    Something I did learn from that situation is how women / non-men are often given less space to speak, and as a man who also loves to speak, I’ve become more conscious about giving space to everyone. That has actually been a nice experience, as you learn far more about others and the world when you’re not talking :). It’s a peaceful experience.



  • While I understand where you’re coming from, I believe that view distracts from a massive positive effect the GPL has: the way it ensures collaboration. Many people contribute to GPL software in the knowledge that they are building something great together. I myself have felt discouraged from contributing to MIT-licensed software, because I know that others might just take all the hard work, turn it into something proprietary, and give nothing back.

    I see the GPL as a sort of public transaction. It is indeed more limiting than MIT, and offers less pure freedom in that sense, but I love how it uses copyright not to enforce licensing payments to some private entity, but to enforce a contribution to the community as a whole. I find that quite beautiful.




  • Thank you for taking the time to respond. By “siphoning money” I mean not giving actual value in return. The NFT market was a clear example of this: get some hype going, sell the promise of great gains on your investment, and once the ball gets rolling, make sure you’re out before people realise it’s actually worth nothing. In the end, some smart and cunning people sucked a lot of money out of often poor and misinformed small investors.

    I think I have an inherent idea of value: the value something has in a human life and the amount of effort needed to produce it. This has become very detached from economic value, where you have speculation, pumped-up valuations and all that other crap. That’s what frustrates me about the current financial climate: I just want to be able to pay the people who helped produce the product I buy fairly, with respect to how much time and work they put in. Currently, however, so much money is being transferred to people “just for having money”. The idea that money in and of itself can make more money is such a horrible perversion of the original idea of trade…


  • Your last paragraph is not how money should work at all. Money should represent value that ideally doesn’t change, so that the money I receive for selling a can is worth a can, not a Lambo and not a grain of sand. What you’re describing is closer to speculation and pyramid schemes (NFTs, for example).

    Either try to explain to me how BTC could be an ideal currency that fixes the problems in existing currencies, or try to explain to me how it’s really cool as an investment vehicle for siphoning money from others, but don’t try to do both at the same time.



  • I think the issue is not whether it’s sentient or not, it’s how much agency you give it to control stuff.

    Even before the AI craze this was an issue. Imagine you were to create an automatic turret that kills living beings on sight: you would have to make sure to add a kill switch, or you yourself wouldn’t be able to turn it off without getting shot.

    The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.

    An atomic bomb doesn’t pass a Turing test, but it’s a fucking scary thing nonetheless.