I hear it actually significantly increases the chance of the miracle occurring when you pass the array into multiple threads. It’s a very mysterious algorithm.
Personally, I’ve found Poetry somewhat painful for developing medium-sized or larger applications (which I guess Python really isn’t made for to begin with, but yeah).
A big problem is that its dependency resolution is probably an order of magnitude slower than it should be. Anytime we changed something about the dependencies, you’d wait more than a minute for its verdict, which is particularly painful when you have to resolve version conflicts.
The other big pain point is that it doesn’t support workspaces, multi-project builds, or whatever you want to call them: where you can have multiple related applications or libraries in the same repo, directly depending on each other, without needing to publish a new version of the libraries each time you make a change.
When we started our last big Python project, none of the Python tooling supported workspaces out of the box. Now, there’s Rye, which does. But yeah, I don’t have experience yet with how well it works.
Python never had much of a central design team. People mostly just scratched their own itch, so you get lots of different tools that do only a small part each, and aren’t necessarily compatible.
Well, for reasons, I happen to know that this person is a student, who has effectively no experience dealing with real-world codebases.
It’s possible that the LLM produced good results for the small codebases and well-known exercises that they had to deal with so far.
I’m also guessing they’re learning what a PR is for the first time just now. And then they’re being taught by Microsoft that you can just fire off PRs without a care in the world. Like, yeah, how should they know any better?
Yeah, Google likes to guess the language preference based on the IP address, which thankfully never goes wrong.
Tangentially related rant: We had a new contributor open up a pull request today and I gave their changes an initial look to make sure no malicious code was included.
I couldn’t see anything wrong with it. The PR was certainly a bit short, but the task they tackled was pretty much a matter of either it works or it doesn’t. And I figured that if they open a PR, they’d have a working solution.
…well, I tell the CI/CD runner to get going and it immediately runs into a compile error. Not some exotic compile error either; the person who submitted the PR had clearly never even tried to compile it.
Then it dawned on me. They had included a link to a GitHub Copilot workspace, supposedly just for context.
In reality, they had asked the dumbass LLM to do the change described in the ticket and figured it would produce a working PR right off the bat. No need to even check it, just let the maintainer do the validation.
In an attempt to give them constructive feedback, I tried to figure out whether this GitHub Copilot workspace thingamabob had a compile button that they just forgot to click, so I actually watched Microsoft’s ad video for it.
And sure enough, I saw right then and there who was really at fault for this abomination of a PR.
The ad showed exactly that. Just chat a bit with the LLM and then directly create a PR. Which, yes, there is a theoretical chance of this possibly making sense, like when rewording the documentation. But for any actual code changes? Fuck no.
So, most sincerely: Fuck you, Microsoft.
Business intelligence is a term from the analytics space. It means something very different from “business logic”, in case you’re thinking they’re synonyms…
“Infrastructure as code” is what the strategy is typically called. You use one of the many tools for orchestrating configuration of hosts (Ansible, OpenTofu, Puppet, Saltstack, Chef, etc.). These allow you to provide configuration files and code for setting up your hosts in a central place. This place is typically a Git repo, allowing you to keep track of when which change was made.
Depending on the tool you use, you trigger applying the configuration on your dev PC, or there’s a hosted CI/CD server which automatically rolls out the changes when a new commit is pushed.
I find that difficult. Aside from code reviews, oftentimes your job as a maintainer is:
A required review slows all of these tasks to a crawl. I do agree that the kernel is important enough that it might be worth the trade-off.
But at the same time, I do not feel like I could do my (non-kernel) maintainer job without direct commit access…
Sounds like you’ll always have to do this little dance for any string you want to pass through, so I can definitely see how that could become quite annoying.
As for not being able to combine serde-derive and cxx FFI on the same struct, there’s a simple trick that can be used in many such situations:
use serde::{Deserialize, Serialize};

struct CxxThingamabob { ... }

// Transparent newtype wrapper that carries the serde implementation:
#[derive(Serialize, Deserialize)]
#[serde(transparent)]
struct SerializableCxxThingamabob(CxxThingamabob);
That just moves the Serde implementation to a different struct, so that you can choose which one you want by either wrapping or unwrapping it.
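For illustration, usage could look roughly like this (serde_json is just an example format here, and the roundtrip function is purely something I made up):

fn roundtrip(value: CxxThingamabob) -> serde_json::Result<CxxThingamabob> {
    // Serialize by wrapping the FFI struct in the serde-enabled newtype...
    let json = serde_json::to_string(&SerializableCxxThingamabob(value))?;
    // ...and deserialize into the wrapper, then unwrap back to the FFI struct.
    let parsed: SerializableCxxThingamabob = serde_json::from_str(&json)?;
    Ok(parsed.0)
}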
Probably not going to happen. I will say that it’s less bad than you might think, because there is more-or-less an unofficial extended stdlib, i.e. high-quality, widely used libraries which are maintained by people in the Rust team.
But yeah, I’m involved in a somewhat larger project and we cracked 1,000 transitive dependencies a few weeks ago, and I can tell you for free that I don’t personally know the maintainers of all of those.
If this was more of a security-critical project, there’s probably a dozen or so direct dependencies that we would have implemented ourselves instead.
I’m a bit surprised that it’s supposed to be this bad, given that Mozilla uses it in Firefox and there’s the whole CXX toolchain.
Granted, Rust was not designed from the ground up to be C++-like, but I’m really not sure that’s a good idea anyways.
Wanting bug-free programs without wanting functional programming paradigms is a bit like:
Of course, if we’re able to migrate a lot of old C++ codebases to a slightly better standard relatively easily, then that is still something…
The inherent problem with this kind of solution is that if you don’t break backwards compatibility, you don’t get rid of all the insecure code.
And if you do break backwards compatibility, there’s not much reason to stick to C++ rather than going for Rust with its established ecosystem…
I’m not sure C# wants to hear the response to this…
Well, to quote the article’s response to that:
Nice idea in theory, but these built-in blockers pale in comparison to the real thing.
Vivaldi users point out that the built-in blocker is noticeably worse than uBlock Origin, with some guessing that Vivaldi doesn’t fully support uBlock Origin filter lists (Vivaldi is closed source, so it’s harder for users to investigate).
Recently saw a streamer talk about this, and not a tech streamer or anything. She didn’t get quite all the details right, but I think that’s Google’s massive fuckup here: if you don’t get the details right, your interpretation is simply that Google is killing ad blocking.
She’s actually using Opera and understands that it’s Chromium-based, so her takeaway was that Firefox might be the only option left.
In that vein, she also talked about how Google Search is now just ads and bad results, and when someone mentioned DuckDuckGo, she responded that she’s genuinely been thinking about switching.
Like, damn, I know Google is big and this alone won’t kill them. But her talking about it still felt like the initial drop before the rollercoaster goes downhill. I don’t think I’ve heard a non-techie talk so negatively about Google, possibly ever…
What has helped me a lot with pushing deeper into the language’s innards is having people to explain things to.
Last week, for example, one of our students asked what closures are.
Explaining that was no problem, and I was also able to differentiate them from function pointers, but then she asked what the Rust traits/interfaces Fn, FnMut and FnOnce did (which are implemented by different closures).
And yep, she struck right into a blank spot of my knowledge with that.
I have enough of an idea of them to just fill in something and let the compiler tell me off when I did it wrong.
Even when designing an API, I’ve worked out that you should start with an FnOnce and only progress to FnMut, then Fn, and then a function pointer, as the compiler shouts at you (basically they’re more specific and more restrictive for what the implementer of the closure is allowed to do).
But yeah, these rules of thumb just don’t suffice for an actual explanation.
I couldn’t tell you why these different traits are necessary or what the precise differences are.
So, we’ve been learning about them together and I have a much better understanding now.
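To give a rough idea of where we ended up, here’s my own minimal sketch (not the actual lesson material): FnOnce closures may consume what they capture, FnMut closures may mutate it, Fn closures only read it, and every Fn closure is also FnMut, just like every FnMut closure is also FnOnce.

fn call_once(f: impl FnOnce() -> String) -> String {
    f() // may consume its captures, so it can only be invoked once
}

fn call_twice(mut f: impl FnMut()) {
    f();
    f(); // may mutate its captures, so it needs mut, but can be invoked repeatedly
}

fn call_freely(f: impl Fn()) {
    f();
    f(); // only reads its captures (or captures nothing at all)
}

fn main() {
    // This closure gives away its captured String when called, so it is only FnOnce.
    let name = String::from("world");
    println!("{}", call_once(move || name));

    // Mutates a captured counter: FnMut (and therefore also FnOnce).
    let mut count = 0;
    call_twice(|| count += 1);
    println!("{count}"); // prints 2

    // Only reads a captured value: Fn (and therefore also FnMut and FnOnce).
    let greeting = "hi";
    call_freely(|| println!("{greeting}"));
}

Which is also why the rule of thumb above works: a signature taking impl FnOnce accepts the most closures, and you only tighten it to FnMut or Fn when your function actually needs to call the closure more than once, or through a shared reference.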
Even in terms of closures in general (independent of the language), where I thought I had a pretty good idea, I had the epiphany that closures have two ways of receiving parameters: one for the implementer (captured from the surrounding context) and one for the caller (the parameter list).
Obviously, I was aware of that on some level, as I had been using it plenty of times, but I never had as clear an idea of it before.
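A tiny sketch of what I mean (my own example, nothing from the actual discussion):

fn main() {
    // Provided by the implementer: captured from the surrounding scope.
    let tax_rate = 0.19;

    // Provided by the caller: the price parameter.
    let with_tax = |price: f64| price * (1.0 + tax_rate);

    println!("{}", with_tax(100.0)); // the caller only ever passes the price
}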
Not nipple milk then? I’m not sure that’s better…
Yup. Commit messages are often shown in truncated form, which is where the dot helps you tell whether you’re seeing the whole message or not.
Well, and every so often, I’ll use the commit message to document why a change was made, which requires multiple sentences. Then the dot just serves its usual purpose of separating sentences.
The post was ultimately just an ad for an Indian tech company. And yeah, looked very ChatGPT, so not worth reading in my opinion.