• 1 Post
  • 42 Comments
Joined 1 year ago
Cake day: August 8th, 2023


  • Python is quite slow, so it will use more CPU cycles than many other languages. If you’re doing data-heavy stuff, it’ll probably also use more RAM than, say, C, where you can control the types and memory layout of structs.

    That being said, for services, I typically use FastAPI, because it’s just so quick to develop stuff in Python. I don’t do heavy stuff in Python; that’s done by packages that wrap binaries compiled from C, C++, Fortran, or CUDA. If I need tight loops, I either switch entirely to a different language (Rust, lately), or I write a library and call into it with ctypes.
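    The ctypes pattern is a minimal sketch like the following. For brevity it loads the system math library instead of a custom shared object; for a real tight loop you’d compile your own `.so`/`.dll` (e.g. `ctypes.CDLL("./libhot.so")`, a hypothetical name) and declare its signatures the same way.

    ```python
    import ctypes
    import ctypes.util

    # Load a compiled C library. Here: the system libm; in practice,
    # this would be your own compiled library of hot-loop functions.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))

    # Declare the C signature so ctypes converts arguments correctly.
    libm.sqrt.argtypes = [ctypes.c_double]
    libm.sqrt.restype = ctypes.c_double

    print(libm.sqrt(2.0))  # calls C's sqrt, not Python's math.sqrt
    ```

    The overhead per call is non-trivial, so the win comes from pushing the whole loop into C, not from calling a C function once per iteration.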


  • I’ve used them as a proxy for a web app at the last place I worked. Was just hoping they’d block unwanted/malicious traffic (not sure if it was needed, and it wasn’t my choice). I, personally, didn’t have any problems with their service.

    Now, if you take a step back, and look at the big picture, they are so big and ubiquitous that they are a threat to the WWW itself. They are probably one of the most valuable targets for malicious actors and nation states. Even if Cloudflare is able to defend against infiltration and attacks in perpetuity, they have much of the net locked-in, and will enshittify to keep profits increasing in a market they’ve almost completely saturated.

    Also, CAPTCHAs are annoying.


  • IDK, looks like 48GB cloud pricing would be $0.35/hr => ~$255/month. Used 3090s go for $700. Two 3090s would give you 48GB of VRAM, and cost $1400 (I’m assuming you can do “model-parallel” with Llama; never tried running an LLM, but it should be possible and work well). So, the break-even point would be <6 months. Hmm, but if Serverless works well, that could be pretty cheap. Would probably take a few minutes to process and load a ~48GB model every cold start though?
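
    The back-of-the-envelope math above, sketched out (using the numbers from the comment, which are assumptions about current prices, and a 30-day month):

    ```python
    # Assumed prices from the comment; check current rates before deciding.
    cloud_rate = 0.35            # $/hr for a 48GB cloud GPU
    hours_per_month = 24 * 30    # 720 hours, assuming 24/7 usage

    cloud_monthly = cloud_rate * hours_per_month   # ~$252/month
    local_cost = 2 * 700                           # two used 3090s = 48GB VRAM

    break_even_months = local_cost / cloud_monthly
    print(f"${cloud_monthly:.0f}/month, break-even in {break_even_months:.1f} months")
    ```

    Note this assumes the cloud instance runs 24/7; with serverless or intermittent use, the cloud side gets much cheaper and the break-even point moves out accordingly.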