This is why sideloading addons is so important. They’ve recently removed the bypass-paywalls-clean addon too.
On the desktop version you can easily sideload addons but on the mobile version they forbid this :(
Meh, in Holland we are in NATO and Russia sends Tu-95 “Bear” bombers into our airspace every few months. They are escorted by fighters until they leave. I think they do the same with neighbouring countries like Denmark. It hardly even makes the news anymore, it’s so common.
Bloomberg - Are you a robot?
Lol 😆 No, I’m not a robot 🤖
The question is always: What do you want to use it for?
When the Raspberry Pi launched, the landscape was very different. Small computer boards were expensive; now there’s the N100 if you need a tiny cheap computer. Microcontrollers were really dumb and unconnected; now there’s the ESP32, which has WiFi and Bluetooth and decent performance. Right in the middle of this wide spectrum sits the Raspberry Pi and its clones.
This is a very different situation from the introduction era, when PCs were heavy and expensive and microcontrollers were dumb. There was a much wider niche for the Raspberry Pi then. For a small server I would now get a $100 N100 from AliExpress. For embedded electronics I would grab a $10 ESP32. Only in the middle is the Raspberry Pi, but the problem is, it’s only in the middle in terms of performance, not price. A Raspberry Pi with case, PSU, storage etc. costs more than a decked-out N100 while actually being slower.
The only remaining use case I see for a Pi 5 would be an electronics project that needs more compute than a microcontroller can provide, like a machine vision project. Otherwise:
Yes, I was just writing that. I would love to see more integrations that can talk to Ollama.
One thing I’d love to see in Firefox is a way to offload the translation engine to my local Ollama server. That way I’d get much better translations but still keep everything private.
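For anyone wanting to tinker with this idea, here’s a rough sketch of what such an offload could look like against Ollama’s standard REST API. The endpoint and request fields are Ollama’s documented `/api/generate` interface; the prompt wording, model name and server URL are just placeholder assumptions:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_translation_request(text: str, target_lang: str, model: str = "llama3") -> bytes:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": (
            f"Translate the following text to {target_lang}. "
            f"Reply with only the translation:\n\n{text}"
        ),
        "stream": False,  # ask for one complete response instead of a token stream
    }
    return json.dumps(payload).encode("utf-8")

def translate(text: str, target_lang: str) -> str:
    """Send the request to a local Ollama server and return the translated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_translation_request(text, target_lang),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```

A browser integration would essentially do the same thing: intercept the page text, POST it to the local server, and swap in the response, so nothing ever leaves your machine.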
If you had a visual disability you would certainly think otherwise.
Or maybe Affinity Designer? I bought that a few years ago for Mac and it was really good.
I don’t think it will.
Microsoft’s endgame is being the lord and master of AI. AI thrives on knowing more data about the user. What good is an assistant if it doesn’t know your habits, your wishes and desires, your schedule and your attitude towards each person in your life?
This is not really a feature primarily aimed at helping the user directly (even though it’s currently marketed as such), but to have the AI build up a repository of knowledge about you. Which is hopefully used locally only. For now this seems to be the case, but knowing Microsoft, once they have established themselves as the leading product they will start monetising it in every way possible.
Of course I’m very unhappy with this too. I’d like to have an AI assistant, but it has to be FOSS, and owned and operated by me. I don’t trust Microsoft in any way. I’m already playing around with Ollama, RAG scripting etc. It won’t be as good as simply signing up with OpenAI, Google or Microsoft, but at least it will be mine.
Yep this is one of the reasons I kept deleting my account even before the whole spez drama.
Too bad they don’t do OpenPGP like YubiKeys do. I still need that even more (much more!) than FIDO2. Sites are so slow at adopting FIDO2.
I don’t use it for email, but I use it for SSH and my password manager (“pass”). And yes, I know SSH can use FIDO2 natively as well, but there are many embedded SSH daemons that don’t support that yet.
Luckily Yubico is still around, but I’m betting on them going down the drain (subscription models etc.) soon because they were taken over by a venture capital firm :(
Even all the telemetry?
I really hate dealing with group policies (and I work in enterprise endpoint management; I prefer more modern management). AD group policies can only be updated on-site or over VPN, and they’re really just instructions for registry settings anyway.
But I’ll try that out. I don’t have a windows server though, nor do I want one. But I guess I could use gpedit.
I have LTSC 2021 officially (MSDN) and I have to say I’m not very impressed. You still can’t turn off the telemetry crap. There is still a Windows Store. There’s a bit less bundled scamware, but besides that it’s a bit overrated IMO.
Yeah that slogan really captured very well the intentions at the world economic forum.
I know it’s not what they officially stated (they’ve since walked it back and said it was only meant to “describe emerging trends”), but it really captured the intentions of what happens when they all come to Davos and divide the world between them.
But I don’t believe “as a service” models are more sustainable. They will just enable more rent-seeking behaviour meaning we will get even less for our money.
I didn’t think it was super creepy but I thought the voice was so overly enthusiastic and overacted and soooo sugary. bleh.
This won’t work for me unless that can be customised and toned down a lot.
The audio from the AI also seemed to cut out a lot during the demo, so it does look like a genuine live demo to me, no shenanigans.
It depends on your prompt/context size too. The more you have the more memory you need. Try to check the memory usage of your GPU with GPU-Z with different models and scenarios.
Hmmm weird. I have a 4090 / Ryzen 5800X3D and 64GB and it runs really well. Admittedly it’s the 8B model because the intermediate sizes aren’t out yet and 70B simply won’t fly on a single GPU.
But it really screams. Much faster than I can read. PS: Ollama is just llama.cpp under the hood.
Edit: Ah, wait, I know what’s going wrong here. The 22B parameter model is probably too big for your VRAM. Then it gets extremely slow yes.
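As a rough sanity check, you can estimate whether a model fits in VRAM from the quantised weight size plus the KV cache (which is what grows with context length). The constants below — roughly 4-bit quantisation, the layer/head counts, the context length — are illustrative assumptions, not exact figures for any particular model:

```python
def estimate_vram_gb(n_params_b: float, bytes_per_weight: float = 0.55,
                     n_layers: int = 32, n_kv_heads: int = 8, head_dim: int = 128,
                     context_len: int = 8192, kv_bytes: int = 2) -> float:
    """Rough VRAM estimate in GB: quantised weights + KV cache.

    bytes_per_weight ~0.55 approximates a ~4-bit quantisation;
    KV cache = 2 (K and V) * layers * kv_heads * head_dim * context * bytes.
    All defaults are ballpark assumptions, not specs of a real model.
    """
    weights = n_params_b * 1e9 * bytes_per_weight
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes
    return (weights + kv_cache) / 1e9
```

By this kind of estimate an 8B model at ~4-bit sits well inside a 24 GB card, while a 22B model plus a long context starts crowding out smaller GPUs — and once weights spill into system RAM, generation speed falls off a cliff, which matches what you’re seeing.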
Training your own will be very difficult. You will need to gather so much data to get a model that has basic language understanding.
What I would do (and am doing) is just taking something like llama3 or mistral and adding your own content using RAG techniques.
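To make the RAG idea concrete, here’s a toy sketch of the retrieval-and-prompt-assembly step. It uses a bag-of-words similarity purely for illustration; a real setup would use a proper embedding model and vector store, and the function names and prompt format are my own made-up placeholders:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real RAG uses an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context ahead of the question for llama3/mistral."""
    context = "\n---\n".join(retrieve(query, docs))
    return f"Use only this context to answer:\n{context}\n\nQuestion: {query}"
```

The point is that the base model stays frozen: your own content only enters through the retrieved context in the prompt, which is vastly cheaper than training or even fine-tuning.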
But fair play if you do manage to train a real model!
Yeah, she clarified exactly that; it’s just not linked in the article:
https://twitter.com/RealSexyCyborg/status/1677480809450835969
I can’t find the source of her saying it was about the IME thing but I recall reading that from a person close to her. She had just raised it before all this happened. Edit: Oh wait, that’s here: https://skepchick.org/2023/08/maker-naomi-wu-is-silenced-by-chinese-authorities-and-why-i-blame-elon-musk/ (This was linked on wikipedia)
And yes she’s a great person, she was often criticised for being a CCP stooge but that was BS. She was as outspoken as one can be being in China (and unfortunately, clearly a bit more than that).
Hopefully it will remain the “Trump era” and not the “first Trump era”. This is my main worry right now.