People already think in Satoshis. It has been this way for many, many years.
The Lightning Network is where most people should be transacting with Bitcoin. It doesn't congest no matter how many users it has, transactions are instant (under a second), and fees are very low (even when on-chain Bitcoin fees are high). Nearly everyone who talks confidently about how Bitcoin can't scale is unaware of Lightning and the other developments of the last five years.
There isn't a government-run currency on the planet that has come anywhere remotely close to performing as well as Bitcoin over the last decade. Give me something that goes up in value over something that only "slowly" falls in value, like the euro or the US dollar.
It must be nice to have such an innocent view of the world. When powerful people stop average people from having conversations, it is not a good thing. It doesn’t matter how much the internet tells you socialism and dictatorships are good, stopping discussion is bad.
Yeah, except I have no agenda. I just find it interesting that people act like a disgusting regime is doing the Lord's work by shutting millions of people out of discussions, all because they don't like Musk's childish attitude. It's just hip to see who can hate Musk the most. X is a popular way to get what you want out to millions of people immediately. Some governments don't want people to openly discuss things, and siding with stopping discussion is disgusting.
It helps all of us if there are fewer echo chambers. I know, I know, to you it’s not an echo chamber, but sadly that’s exactly what it is when anyone who disagrees with the majority is ridiculed and their points are ignored. I understand the comfort of just agreeing with the majority to try to feel like you are always right, but it’s simply not real.
Sure, I have no problem with that.
Hypotheticals are not bad. Thought experiments are not bad. Repeating things you hear from people just because they tell you to, that's bad.
Clearly. The 3x lol really makes that obvious.
Hypothetical. Look it up. It is commonly used in discussions. Don’t let new words scare you.
Yeah, commenting on someone’s comment they made about my comment isn’t personal. It’s interesting how you know you dislike what I said but can’t come up with anything to actually say about it. Maybe you should take that as an indication that you don’t actually know why you parrot the things you do.
You are absolutely right. Brazil is not banning the internet. Reading comprehension is not your strong suit.
Also, if the only way you can feel like you are “winning” a discussion is by changing what other people say, then you simply are not winning at all. You are writing fan fiction about yourself to try to feel clever. It backfired.
It is really great that you can make stuff up. Hypothetical questions have been useful for as long as people have discussed things. I'm sorry that this hypothetical question offended you so much, but it is entirely different from "making stuff up".
Would you support Brazil if their government ordered a complete internet ban?
Been using it for decades, never an issue for me. What in the world are you trying to download over there??
You don’t have to be delusional to self-sacrifice to try to make a difference. I’m so sick of people pretending like there is nothing they could possibly do to help, so they just keep hurting others. It’s just like every discussion on factory farms. At least try to help. It will make you feel better, and you can quit getting all defensive when people point out things that can be done.
I think there may be some confusion about how much energy it takes to respond to a single query or generate boilerplate code. I can run Llama 3 on my computer, and it handles those tasks no problem. My computer would use about 6 kWh if I ran it for 24 hours; a person, for comparison, uses about half of that. If my computer spends 4 hours answering queries and writing code, that's about 1 kWh, and that covers a whole lot of code and answers. The "powering a small town" figure is a one-time cost of training the model, so to judge whether it's worth it, that cost has to be amortized over everyone who ends up using the resulting model. The math for that is a bit trickier.
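The arithmetic above can be sketched in a few lines. The wattages and the training/user figures here are assumptions for illustration (a ~250 W desktop matches the "6 kWh per day" number; the training energy and user count are made up), not measurements:

```python
# Back-of-the-envelope energy math for running a local model.
# All numbers are assumptions, not measurements.

PC_WATTS = 250      # assumed average draw of a desktop running Llama 3
HUMAN_WATTS = 100   # rough human metabolic power

pc_day_kwh = PC_WATTS * 24 / 1000        # energy for 24h of uptime
human_day_kwh = HUMAN_WATTS * 24 / 1000  # a person, for comparison
inference_kwh = PC_WATTS * 4 / 1000      # 4 hours of answering queries

print(pc_day_kwh, human_day_kwh, inference_kwh)  # 6.0 2.4 1.0

# Amortizing a one-time training cost over everyone who uses the model:
TRAINING_MWH = 500   # hypothetical total training energy
USERS = 1_000_000    # hypothetical number of eventual users
per_user_kwh = TRAINING_MWH * 1000 / USERS
print(per_user_kwh)  # 0.5 kWh of training energy per user
```

With assumptions like these, the one-time training cost per user is smaller than a single day of running the model locally, which is the amortization point the comment is making.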
Compared to the amount of energy it would take to produce a group of people who can answer questions and write code, I'm fairly certain the AI model method uses considerably less. Hopefully we don't start deciding which one to produce based on energy efficiency. We might, though: if the people who choose the fate of the masses see us as livestock, we may end up having our numbers reduced in the name of efficiency. When cars were invented, horses didn't all end up living in paradise. There were just a whole lot fewer of them around.
This is an issue with many humans I've hired, though. Maybe they try to cut corners and do a shitty job, but I occasionally check; if they are bad at their job, I warn them, correct them, and maybe eventually fire them. For lots of stuff, AI can be interacted with in a very similar way.
This is so similar to many people's complaints about self-driving cars. Sure, accidents will still happen; they are not perfect, but neither are human drivers. If we hold AI to some standard that is way beyond people, then no, it's not there. But if we say it just needs to be better than people, then it is there for many applications, and more importantly, it is rapidly improving. Even if it were only as good as people at something, it would still be way cheaper and faster. For some things, it's worth it even before it's as good as people.
I have very few issues with hallucinations anymore. When I use an LLM for anything involving facts, I always tell it to give sources for everything, and I can have another agent independently verify the sources before I see them. Oftentimes I provide the books or papers that I want it to source from specifically. Even if I then check all the sources myself, it is still way more efficient than doing the whole thing myself. The thing is, with the setups I use, I literally never have it make up sources anymore. I remember that kind of thing happening back in the days when AI didn't have internet access and there really weren't agents yet. I realize some people are still back there, but in the future (that many of us are in) it's basically solved. There are still logic mistakes and such, so it can't be 100% depended on, but if you have a team of agents going back and forth to find an answer, then pass it to another team of agents to independently verify the answer, and cycle it back if a flaw is found, many issues just go away. Maybe some mistakes make it through this whole process, but the same thing happens sometimes with people.
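The draft-then-verify cycle described above can be sketched as a simple loop. `draft_answer` and `verify` here are hypothetical stand-ins for real LLM calls (the actual agents would be prompts against a model); they are stubbed so the control flow runs on its own:

```python
# Minimal sketch of a draft/verify agent loop. The two "agents" are
# stand-in functions, not real LLM calls.

def draft_answer(question, feedback=None):
    # Stand-in for the drafting agent; a real version would call an
    # LLM and fold the verifier's feedback into the next prompt.
    return {"text": f"answer to: {question}", "sources": ["paper-sec-2"]}

def verify(answer):
    # Stand-in for an independent verifier agent that checks every
    # cited source actually exists and supports the claim.
    if not answer["sources"]:
        return False, "no sources cited"
    return True, None

def answer_with_verification(question, max_rounds=3):
    feedback = None
    for _ in range(max_rounds):
        answer = draft_answer(question, feedback)
        ok, feedback = verify(answer)
        if ok:
            return answer  # only verified answers reach the user
    raise RuntimeError("could not produce a verified answer")

result = answer_with_verification("When was the Lightning whitepaper published?")
print(result["text"])
```

The point of the structure is that the verifier never shares state with the drafter, so a fabricated source has to fool an independent check rather than just slipping past the model that invented it.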
I don't have the link on hand, but there have been studies showing GPT-3.5 working in agentic cycles performing as well as or better than GPT-4 out of the box. The article I saw that in was saying that, basically, people are already using what GPT-5 will most likely be, just by running teams of agents on the latest models.
Believe it or not, it can be and is used as both. If you have any other questions or things you think won't work, just ask! A lot has happened in the last five years, and most of the issues people have with Bitcoin were solved long ago. It is incredible how different things are now; lots of old annoyances (like dealing with amounts such as 0.0000015 BTC) are no longer a problem.
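The awkward-decimals point comes down to denominating in satoshis, since 1 BTC is defined as 100,000,000 sats. A tiny illustrative conversion (the function names are my own, not from any library):

```python
# Converting between BTC and satoshis (1 BTC = 100,000,000 sats).
# Working in integer sats avoids awkward decimals like 0.0000015 BTC.

SATS_PER_BTC = 100_000_000

def btc_to_sats(btc):
    # round() guards against floating-point noise in the BTC amount
    return round(btc * SATS_PER_BTC)

def sats_to_btc(sats):
    return sats / SATS_PER_BTC

print(btc_to_sats(0.0000015))  # 150 sats -- much easier to read
print(sats_to_btc(150))        # back to 1.5e-06 BTC
```

This is also why Lightning invoices and most modern wallets quote amounts in sats by default: integers are easier to read and safer to do arithmetic on than tiny floats.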