I have several devices passed over, so for me I just pass through /dev/disk and /run/udev along with the configs. You could start there?
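Roughly what that looks like in a compose file; the service name and image are placeholders, and depending on the app you may also need to pass specific device nodes (or run privileged) rather than just bind-mounting the paths:

```yaml
services:
  disk-monitor:                      # placeholder name/image for whatever reads the devices
    image: example/disk-monitor:latest
    volumes:
      - /run/udev:/run/udev:ro       # udev metadata so the app can identify devices
      - /dev/disk:/dev/disk:ro       # the by-id/by-uuid symlink tree
      - ./config:/config             # app configs
```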
Little bit of everything!
Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech )
Gaming (Mass Effect, Witcher, and too much Satisfactory)
Sci-fi
I live for 90s TV sitcoms
You read about the teenager who fell in love with a Daenerys Targaryen chatbot that convinced him to join her, so he killed himself? Yeah, the public was not ready for AI.
Situation: you run a Discord server for alt-right, weirdly military-obsessed dudes. You have a secret clearance. Do you:
[ ] Share secret intel with people who have no need to see or know about it
[ ] Do nothing
Note: there are no political or military reasons to share this info, only boosting your own ego. 30 seconds on the clock.
oh freaking awesome, this looks amazing! Thank you so much for this!
Time to start self hosting these for my friends
I’ll say hot take here, but I’m not sure this is the right solution, or at least it should be decided on a case-by-case basis depending on how unsafe the person is. They should be removed from society for a time, absolutely, but not prison.
Stalking is a mental health issue, a fascination and obsession. I think it warrants being put into a mental health institution for rehabilitation. Prison isn’t going to help; it just removes the problem, but rehabilitation might.
To most of you suggesting hosting a repository: yes, but
host Forgejo. Just host the git mirror; it comes with a package registry out of the box, so you get both the source code and the Docker images (compose sketch below).
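A minimal sketch of what that can look like; the image tag, ports, and paths are assumptions, so check the Forgejo docs for the current recommendation:

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pin whatever release you're mirroring against
    environment:
      - USER_UID=1000
      - USER_GID=1000
    volumes:
      - ./forgejo-data:/data                # repos, config, and the package registry all live here
    ports:
      - "3000:3000"                         # web UI
      - "2222:22"                           # SSH for git clone/push
    restart: unless-stopped
```

Once it’s up, the mirrored repos and the built-in package registry (including container images) are served from the same instance.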
Generally there aren’t LLMs that do this all in one go, but you can build up a workflow: you speak, one service takes in the audio and transcribes it to text, you feed that into an LLM, it responds in text, and another service turns that text back into audio (rough sketch after this comment).
Home Assistant is the easiest way to get them all put together.
https://www.home-assistant.io/integrations/assist_pipeline
Edit: agree with others below. Use the apps that are made for it.
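If you do want to see the plumbing, here’s a rough Python sketch of that chain. Every endpoint here is an assumption (a local speech-to-text server, any OpenAI-compatible LLM API, a text-to-speech server), not a specific project’s real API; Home Assistant’s assist pipeline wires the same stages together for you.

```python
# Rough sketch of the speech -> text -> LLM -> speech chain described above.
import requests

STT_URL = "http://localhost:9000/transcribe"            # hypothetical speech-to-text service
LLM_URL = "http://localhost:5000/v1/chat/completions"   # any OpenAI-compatible endpoint
TTS_URL = "http://localhost:10200/synthesize"           # hypothetical text-to-speech service

def transcribe(wav_bytes: bytes) -> str:
    # Send the recorded audio to the STT service and get plain text back
    r = requests.post(STT_URL, files={"audio": ("speech.wav", wav_bytes, "audio/wav")})
    return r.json()["text"]

def ask_llm(prompt: str) -> str:
    # Forward the transcript to the LLM and return its text reply
    r = requests.post(LLM_URL, json={
        "model": "local",
        "messages": [{"role": "user", "content": prompt}],
    })
    return r.json()["choices"][0]["message"]["content"]

def speak(text: str) -> bytes:
    # Turn the LLM's reply back into audio
    r = requests.post(TTS_URL, json={"text": text})
    return r.content

if __name__ == "__main__":
    with open("question.wav", "rb") as f:
        reply = ask_llm(transcribe(f.read()))
    with open("reply.wav", "wb") as out:
        out.write(speak(reply))
```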
This post is written as if there’s only one “community”. Why does there need to be a primary? I’m here and I’m happy. If I have questions I search online or ask here, same as any other community
Could you at least change the title so we know what it’s about, like “my developer suggestions”?
Seconded on Matrix. All I’m waiting for is for someone to make a Discord/Revolt-style UI frontend for Matrix 2.0 and it’ll be a drop-in replacement.
Oh yeah, critical component. And VRAM; in fact I would only consider running LLMs on a 3000-series or newer card right now, they require quite a bit of VRAM.
Nvidia is great in a server; drivers are a pain but doable. I have a 3000-series card that I use regularly and pass into my Kubernetes cluster (sketch below). Nvidia on a Linux gaming rig is fine, but there is more overhead with the drivers.
AMD is great for gaming, but doesn’t have CUDA, so it’s not as useful in a server environment in my experience, at least if you’re thinking of doing CUDA workloads like hosting LLMs.
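For the Kubernetes side, handing the card to a pod is roughly this, assuming the NVIDIA device plugin (or GPU Operator) is already installed on the cluster; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: llm-worker                         # placeholder
spec:
  runtimeClassName: nvidia                 # only if your cluster uses a dedicated NVIDIA runtime class
  containers:
    - name: llm
      image: example/llm-server:latest     # placeholder
      resources:
        limits:
          nvidia.com/gpu: 1                # request one GPU from the device plugin
```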
A 1060 will be a noticeable step up for Jellyfin hardware transcoding.
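Wiring an Nvidia card into Jellyfin under compose looks roughly like this; it assumes the NVIDIA Container Toolkit is installed on the host, and the actual transcoding options still get enabled in the Jellyfin dashboard:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /mnt/media:/media:ro               # placeholder media path
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]          # exposes the card for NVENC/NVDEC
```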
DNS is of course the preferred approach
I wouldn’t worry about mounting your NFS shares directly on those hosts unless you need to. Compose can define NFS volumes itself (similar to how k8s handles volumes), so Docker manages the shares, which is insanely useful if you lose your host; then you don’t have piles of scripts to mount them.
```yaml
version: "3.2"

services:
  rsyslog:
    image: jumanjiman/rsyslog
    ports:
      - "514:514"
      - "514:514/udp"
    volumes:
      - type: volume
        source: example
        target: /nfs
        volume:
          nocopy: true

volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=10.40.0.199,nolock,soft,rw"
      device: ":/docker/example"
```
Come to uplifting news! We take news that seems good and still make you feel horrible about it!
Can we please just celebrate partial wins? Please?
Just another 2 years bro we’re so close we’re almost there it’s in beta bro trust me it’s almost there
LLMs use a ton of VRAM; the more VRAM you have the better (rough numbers below).
If you just need an API, then TabbyAPI is pretty great.
If you need a full UI, then Oobabooga’s text-generation-webui is a good place to start.
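A back-of-the-envelope way to see why VRAM dominates; this only counts the weights, and real usage is higher once you add the KV cache and context, so treat it as a floor:

```python
def weight_vram_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate GiB needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total / 1024**3

# e.g. a 7B model: ~13 GiB at fp16, ~3.3 GiB at 4-bit quantization (weights only)
print(round(weight_vram_gib(7, 16), 1), round(weight_vram_gib(7, 4), 1))
```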
We’re already in the dystopia.
Most of this has already been a thing for a while. This is pure marketing fluff