  • This really reads to me like the perspective of a business major whose only concept of productivity is what looks good on paper. He seems to think it’s a desirable goal for EVERY project to be completed with zero latency. That’s absurd. If every single incoming requirement is a “top priority, this needs to go out as soon as possible,” that’s a management failure. They either need to ACTUALLY prioritize requirements properly, or they need to bring in more people.

    For the Chuck and Patty example, he describes Chuck finishing a task and sending it to Patty for review, and Patty not picking it up because she’s “busy.” Busy with what? If this task is the higher priority, why is she not switching to it as soon as it’s ready? Do either Chuck or Patty not know that this task is the current highest priority? Sounds like a management failure. Is there not a system in place (whether automatic or not) for notifying people when high-priority tasks are assigned? Also sounds like a management failure. Is Patty just incapable of switching tasks within 30-60 minutes? Then she needs to work on her organization skills, or management isn’t providing sufficient tooling for multitasking.

    When a top-priority “this needs to go out ASAP” task is in play on my team, I’m either working on it, or I know it’s coming my way soon, and who it’s coming from, because my Project Lead has already coordinated that among all of us. Because that’s her job.

    From the article…

    Project A should take around 2 weeks

    Project B should take around 2 weeks

    That’s 4 weeks to complete them both

    But only if they’re done in sequence!

    If you try to do them at the same time, with the same team, don’t be surprised if it ends up taking 6 weeks!

    Nonsense. If these are both top priorities, the team has proper leadership, and the 2-week estimates are actually accurate, then 4 weeks is entirely achievable. If these are not top priorities, and the team has other work as well, then yeah, no shit it might be 6 weeks. You can’t just ignore the 2 weeks from Project C if it’s prioritized similarly to A and B. If A and B NEED to go out in 4 weeks, then prioritize them higher, and coordinate your team to make that happen.


  • It’s the capability of a program to “reflect” upon itself, i.e. to inspect and understand its own code.

    As an example, in C# you can write a class…

    public class MyClass
    {
        public void MyMethod()
        {
            // ... (method body)
        }
    }
    

    …and you can create an instance of it, and use it, like this…

    var myClass = new MyClass();
    myClass.MyMethod();
    

    Simple enough, nothing we haven’t all seen before.

    But you can do the same thing with reflection, as such…

    // Note: GetType wants the namespace-qualified name; MyClass here is
    // assumed to live in the global namespace.
    var type = System.Reflection.Assembly.GetExecutingAssembly()
        .GetType("MyClass");
    
    var constructor = type.GetConstructor(Array.Empty<Type>());
    
    var instance = constructor.Invoke(Array.Empty<object>());
    
    var method = type.GetMethod("MyMethod");
    
    // "delegate" is a reserved word in C#, so the variable gets another name
    var action = method.CreateDelegate(typeof(Action), instance);
    
    action.DynamicInvoke(Array.Empty<object>());
    

    Obnoxious and verbose and tossing basically all type safety out the window, but it does enable some pretty crazy interesting things. Like self-discovery and dynamic loading of plugins, or self-configuration of apps. Also often useful when messing with generics. I could dig up some practical use-cases, if you’re curious.
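
    Here’s a minimal sketch of the plugin use-case, assuming a hypothetical IPlugin interface (all names here are illustrative, not from any real library):

    using System;
    using System.Linq;
    using System.Reflection;
    
    public interface IPlugin
    {
        void Initialize();
    }
    
    public static class PluginLoader
    {
        public static void LoadAll()
        {
            // Scan the running assembly for concrete types implementing IPlugin.
            var pluginTypes = Assembly.GetExecutingAssembly()
                .GetTypes()
                .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                            && !t.IsAbstract
                            && !t.IsInterface);
    
            foreach (var type in pluginTypes)
            {
                // Assumes each plugin has a public parameterless constructor.
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Initialize();
            }
        }
    }

    The app never has to name its plugin classes anywhere; dropping a new IPlugin implementation into the assembly is enough for it to get picked up.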

  • #4 for me.

    Proper HTTP status code for semantic identification. Duplicating that in the response body would be silly.

    User-friendly “message” value for the lazy, who just wanna toss that up to the user. Also, ideally, this would be what a dev looks at in logs for troubleshooting.

    Tightly-controlled unique identifier “code” for the error, allowing consumers to build their own contextual error handling or reporting on top of this system. Also allows for more-detailed types of errors to be identified and given specific handling and recovery logic, beyond just the status code. Like, sure, there’s probably not gonna be multiple sub-types of 403 error, but there may be a bunch of different useful sub-types for a 400 on a form submission.
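
    For illustration, a minimal sketch of that envelope (the names here are my own, not from the original post):

    // "Code" is the tightly-controlled unique identifier; "Message" is the
    // user-friendly text that also lands in the logs. The HTTP status code
    // itself stays on the response line, not duplicated in the body.
    public sealed record ApiError(string Code, string Message);
    
    // A 400 on a form submission might then serialize as:
    // { "code": "FORM_FIELD_REQUIRED", "message": "The 'email' field is required." }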

  • Main difference there being that switching cities probably means switching ISPs. You can absolutely carry your IP address over when you move, as long as you stay with the same provider and it’s part of your service plan; with some ISPs it may well happen even without it being part of your plan. There just isn’t much need for people to carry a static IP, except for some businesses, and I’d say the main reason is that people don’t visit websites by memorizing and typing in an IP. They do memorize and type in phone numbers.

  • I think it’s a fallacy to say that you can or should build an application layer that’s completely DBMS-agnostic. Even if you are very careful to only write SQL queries with features that are part of the official SQL standard, you’re still coupled to your particular DBMS’s internal implementations for query compilation, planning, optimization, etc. At enterprise scale, there are still going to be plenty of queries that suddenly perform like crap after a DBMS swap.

    In my mind, standardization in things like ODBC or Hibernate or Entity Framework or whatever else isn’t meant to abstract away the underlying DBMS; it’s meant to promote compatibility.
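
    As a sketch of what I mean by compatibility: ADO.NET’s abstract base classes in System.Data.Common let calling code accept any provider’s connection, while the SQL itself (and even the runtime types coming back) can still vary per DBMS. The table name below is made up for illustration:

    using System;
    using System.Data.Common;
    
    public static class UserQueries
    {
        // Works with SqlConnection, OracleConnection, NpgsqlConnection, etc.
        // Assumes the caller hands in an already-open connection and that a
        // "users" table exists.
        public static long CountUsers(DbConnection connection)
        {
            using var command = connection.CreateCommand();
            command.CommandText = "SELECT COUNT(*) FROM users";
    
            // Even this trivial query isn't fully provider-agnostic:
            // COUNT(*) comes back as int on SQL Server but decimal on Oracle,
            // hence the Convert instead of a direct cast.
            return Convert.ToInt64(command.ExecuteScalar());
        }
    }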

    Not to mention that you’re tying your own hands by locking yourself out of non-standard DBMS features that could be REALLY useful to you, if you have the right use-cases. JSON generation and indexing is the big one that comes to mind, as in the sketch below. Also, geospatial data tables.
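
    For example (a hedged sketch; the table, column, and JSON path are made up), pulling a value out of a JSON column with JSON_VALUE, which Oracle and SQL Server both offer, but with per-DBMS differences in syntax, behavior, and indexing:

    using System.Data.Common;
    
    public static class OrderQueries
    {
        // Assumes an open connection and an "orders" table whose "payload"
        // column holds JSON documents.
        public static string FirstCustomerEmail(DbConnection connection)
        {
            using var command = connection.CreateCommand();
            command.CommandText =
                "SELECT JSON_VALUE(payload, '$.customer.email') FROM orders";
            return command.ExecuteScalar() as string;
        }
    }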

    For context, my professional work for the past 6 years has been an Oracle/.NET/browser application, and we are HEAVILY invested in Oracle. Most notably, we do a LOT of ETL, and that all runs exclusively in the DBMS itself, in PL/SQL procedures orchestrated by the Oracle job scheduler. Attempting to do this kind of data manipulation by round-tripping it into .NET code would make things significantly worse.

    So, my opinion could definitely be a result of what’s been normalized for me in my day job. But I’ve also had a few other collaborative side projects where I think the “don’t try to abstract away the DBMS” advice holds true.