I see, so there is indeed broader context than the burning alone: it was accompanied by additional verbal hatred, and possibly the location and the overall intention matter too. I think this makes it clearer. Thanks
Not familiar with the guy himself, who may well deserve criticism and prison, but about the Quran burning: is it genuinely fair to sentence someone to prison for that? Is it equivalent to burning a cross? The Swedish flag? I might be missing broader context, but I don't feel like someone burning my symbol or flag should be punished with prison. Am I alone? I would hate it, don't get me wrong, but I still feel it falls under freedom of expression.
I didn't say it can't. But I'm not sure how well it is optimized for it. From my initial testing, it queues queries and submits them to the model one after another; I have not seen it batch-compute the queries, but maybe it's a setup issue on my side. vLLM, on the other hand, is designed specifically for the concurrent multi-user use case and has multiple optimizations for it.
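To illustrate what I mean by the concurrent case, here is a minimal sketch of firing several queries at a vLLM server at once, assuming it was started with its OpenAI-compatible endpoint on the default port; the base URL and model name are placeholders for whatever you actually serve:

```python
# Minimal sketch: several queries in flight against a vLLM server at once.
# Assumes vLLM was started with its OpenAI-compatible server, e.g.:
#   vllm serve mistralai/Mistral-Nemo-Instruct-2407
# The base_url, port and model name are assumptions; adjust to your setup.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="unused")

async def ask(prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="mistralai/Mistral-Nemo-Instruct-2407",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def main():
    prompts = ["Summarize RAG in one line.",
               "What is KV caching?",
               "What does continuous batching do?"]
    # All requests are in flight simultaneously; vLLM's continuous batching
    # can process them together rather than strictly one after another.
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for a in answers:
        print(a)

asyncio.run(main())
```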
I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (solely based on my own experience). Go for q4_K models, as they will run many times faster than higher-precision quantizations with little quality degradation. They typically come in S (Small), M (Medium) and L (Large) variants; pick the largest that fits in your GPU memory (see the sketch at the end of this comment). If you go below q4, you may see more severe and noticeable quality degradation.
If you need to serve only one user at a time, ollama + open-webui works great. If you need multiple users at the same time, check out vLLM.
Edit: I'm simplifying things a lot, but hopefully it is simple and actionable as a starting point. I've also seen great stuff from Gemma2-27B.
Edit2: added links
Edit3: a decent GPU in terms of bang for buck IMO is the RTX 3060 with 12GB. It can be found on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would like to propose AMD GPUs, as they offer much more GPU memory for the price, but they are not as well supported with ROCm and I'm not sure about compatibility with these tools, so perhaps others can chime in.
Edit4: you can also use open-webui with VSCode via the continue.dev extension, so you can have a Copilot-style LLM in your editor.
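Edit5: as promised above, a minimal sketch of pulling and querying a q4_K quantized model with the ollama Python client. The exact model tag is an assumption on my part: check the ollama library page for the q4_K_M tags that actually exist and fit your VRAM.

```python
# Minimal sketch using the ollama Python client (pip install ollama).
# The model tag below is an assumption; browse https://ollama.com/library
# for the q4_K_M tag that actually fits your GPU memory.
import ollama

tag = "mistral-nemo:12b-instruct-2407-q4_K_M"  # ~7GB, fits a 12GB GPU

ollama.pull(tag)  # downloads the quantized weights if not already present
resp = ollama.chat(
    model=tag,
    messages=[{"role": "user", "content": "Explain q4_K_M quantization in two sentences."}],
)
print(resp["message"]["content"])
```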
Sure, anytime. Create a new post and tag me if you need me specifically to have a look. I've used docker on Synology for years and have gone through major updates, and while I'm certainly no expert, I've learned some things which could be helpful.
I know what you’re talking about, happens to us all when we’re learning something new.
Want to share the details of a specific issue that's blocking you?
I understand your position. There is a learning curve to containers, but I can assure you that getting the basics down will open a whole new world of possibilities and also make everything much easier for yourself. The vast majority of people run containers, which makes services less brittle because each one has its own tailored environment and doesn't depend on the host's libraries and packages. It also brings increased security: services can't easily escape their boundaries, so their potential vulnerabilities are less of an issue than when running the same services bare metal.
I started on Synology too. There is a website called Marius Hosting which focuses on tutorials for containers on Synology, but his instructions have been updated over the last few years to focus on spinning up containers manually rather than through the UI, which makes it more intimidating than it needs to be for beginners… I'll link it here just as a reference. I'll check whether the Wayback Machine has the easier way and report back if I find something.
Edit: yes, here is an original tutorial for Jellyfin (this method still works for me and is still how I use docker today): https://web.archive.org/web/20210305002024/https://mariushosting.com/how-to-install-jellyfin-on-your-synology-nas/
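Edit2: for reference, here is a rough sketch of the same kind of container spun up with the docker Python SDK instead of the UI; the host paths are placeholders for your own NAS shares.

```python
# Minimal sketch with the docker Python SDK (pip install docker).
# Host paths are placeholders: point them at your own shares first.
import docker

client = docker.from_env()
client.containers.run(
    "jellyfin/jellyfin:latest",
    name="jellyfin",
    detach=True,
    ports={"8096/tcp": 8096},  # web UI: http://<nas-ip>:8096
    volumes={
        "/volume1/docker/jellyfin/config": {"bind": "/config", "mode": "rw"},
        "/volume1/media": {"bind": "/media", "mode": "ro"},
    },
    restart_policy={"Name": "unless-stopped"},
)
```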
To answer your question more specifically, most people set up the Pi with docker, using services which have a front end accessible in the browser. They basically use their browser to navigate to the front end of the service they want to use and administer it that way. For instance Portainer to manage their docker containers, or Pi-hole for network-wide ad blocking at the DNS level, or even Jellyfin for their media, which provides both the web interface to consume the media and an administrator dashboard.
Edit: this complements something like Tailscale, which basically allows you to access these services away from home. They work in conjunction.
Tailscale is a good option.
Edit: I'm assuming you mean away from home, but if you mean on your local network, just use SSH?
Indeed, quite surprising. You got to “stroke their fur the right way” so to speak haha
Also, I'm increasingly impressed with the rapid progress of open-weights models: initially I was playing with Llama3.1-8B, which is already quite useful for simple queries. Lately I've been trying out Mistral-Nemo (12B) and Mistral-Small (22B) and they are much more capable. I have a 12GB GPU and so far those are the most powerful models I can run decently. I'm using them to help me write Ansible tasks, learn the inner workings of the Linux kernel and some bootloader stuff. I find them quite helpful!
Someone recently referred me to this blog post about using RAG in open-webui. I have not tested it, but the author seems to reach a good setup.
Perhaps this is of use to you?
I have no idea if ollama can handle multi-GPU. The 70B in its q2_K quantized form already requires 26GB of memory, so you would need at least that much just to fit it on GPU; and even fitting it entirely (the best-case scenario) says nothing about the speed you'd get.
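For a rough sense of where that 26GB comes from, here is a back-of-envelope estimate; the bits-per-weight numbers are approximate averages for llama.cpp quant formats, not exact figures:

```python
# Back-of-envelope VRAM estimate: parameters x bits-per-weight / 8,
# plus some overhead for the runtime and KV cache. The bits-per-weight
# values are rough averages for llama.cpp quant formats.
def est_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    return params_billion * bits_per_weight / 8 * overhead

for name, bpw in [("q2_K", 2.6), ("q4_K_M", 4.8), ("q8_0", 8.5), ("fp16", 16.0)]:
    print(f"70B @ {name:7s} ~ {est_gb(70, bpw):5.1f} GB")
# q2_K lands around 25GB: roughly the 26GB figure above, before long contexts.
```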
I know some people with apple silicon who have enough memory to run the 70B model and for them it runs fast enough to be usable. You may be able to find more info about it online.
I wish I could. I have an RTX 3060 12GB, I run mostly llama3.1 8B versions in fp8, at 30-35 tokens/s.
Sure! It can be a bit of a steep learning curve at times, but there are heaps of resources online, and LLMs can also be useful, even if just to point you in the right direction for further reading. Regardless, you can reach out to me or other great folks from !localllama@sh.itjust.works or similar AI, ML and related communities!
Enjoy :)
For RAG, there are some tools available in open-webui, which are documented here: https://docs.openwebui.com/tutorials/features/rag They have plans for how to expand and improve it, which they describe here: https://docs.openwebui.com/roadmap#information-retrieval-rag-
For fine-tuning, I think this is (at least for now) out of scope. They focus on inference. I think the direction is to eventually help you create and manage the data you produce while using LLMs through Open-WebUI, but actually fine-tuning is not possible (yet) using either ollama or open-webui.
I have not used the RAG function yet, but besides following the setup instructions, your experience with RAG may also be limited by which embedding model you use. You may have to look around for a good one (ideally small and fast enough to re-scan your documents, yet powerful enough to generate meaningful embeddings). Also, in case you didn't know, the embeddings you generate are specific to an embedding model, so if you change that model you'll have to re-index your whole document library.
Edit: RAG seems a bit limited by the supported file types. You can see them here: https://github.com/open-webui/open-webui/blob/2fa94956f4e500bf5c42263124c758d8613ee05e/backend/apps/rag/main.py#L328 It does not seem to support Word documents or PDFs, so it is mostly incompatible with documents that have advanced formatting or are WYSIWYG.
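Edit2: for anyone curious what the embedding side looks like, here is a minimal retrieval sketch with sentence-transformers. The model choice is just an example of a small, fast embedding model, and it shows why switching models forces a full re-index: vectors from different models are not comparable.

```python
# Minimal retrieval sketch (pip install sentence-transformers numpy).
# The model name is just one common small embedding model; embeddings
# produced by a different model live in a different vector space, so
# changing models means re-embedding every document.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Ollama runs models locally as a background service.",
    "vLLM batches concurrent requests for multi-user serving.",
    "Open-WebUI is a browser front end for local LLMs.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query_vec = model.encode(["Which tool serves many users at once?"],
                         normalize_embeddings=True)
scores = doc_vecs @ query_vec.T  # cosine similarity (vectors are normalized)
print(docs[int(np.argmax(scores))])  # -> the vLLM sentence
```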
The interface, called open-webui, can run in a container, but ollama runs as a service on your system, from my understanding.
The models are local and only answer queries by default; it all happens on your system without any additional tools. Now, if you want to give them internet access, you can: it is an option you have to set up, and open-webui makes that possible, though I have not tried it myself. I've just seen the option.
I have never heard of any LLM that would "answer base queries offline before contacting their provider for support". It's almost impossible for the LLM to do that by itself; you would have to set things up for it that way.
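If you want to convince yourself nothing leaves your machine, here is a minimal sketch querying the local ollama service directly over its REST API on localhost; the model name is a placeholder for whatever you have pulled:

```python
# Minimal sketch: talking to the local ollama service on its default
# port 11434. Everything stays on localhost; the model name is an
# assumption, substitute any model you have pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2:3b", "prompt": "Say hi in five words.", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```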
What's great is that with ollama and open-webui, you can just as easily run it all locally on one computer using the open-webui pip package, or on a remote server using the container version of open-webui.
I've run both, and the web UI is really well done. It offers a number of advanced options, like the system prompt but also memory features, documents for RAG, and even a built-in Python IDE for when you want to execute Python functions. You can even enable web browsing for your model.
I'm personally very pleased with open-webui and ollama; they work wonders together. Highly recommend them! And the latest llama3.1 (in 8B and 70B variants) and llama3.2 (in 1B and 3B variants) work very well; the latter even runs fine on CPU only! Give it a shot, it is so easy to set up :)
They don’t have to have a backdoor. They are most likely in possession of a master key to decrypt your data:
Indeed, totally an Apple approach to modularity: it is a proprietary Apple SSD…