I’ve been a Java engineer in the web development industry for several years now. I’ve heard countless times that X is good because of SOLID principles, or that Y is bad because it breaks them, and I’ve memorized the “good” way to do everything before interviews. But the more I dig into the real reason for doing something in a particular way, the harder I find these claims to justify.

One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now a mindless OOP circlejerk.

There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

What are your opinions on this?

  • FizzyOrange@programming.dev · 10 points · 6 days ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    Sounds like you’ve learned the answer!

    Virtually all programming principles like that are things you should never apply blindly. You basically need to develop taste through experience… and caring about code quality (lots of people have experience but don’t give a shit what they’re excreting).

    Stuff like DRY and SOLID are guidelines, not rules.

  • douglasg14b@lemmy.world · 7 points · 6 days ago

    The principles are perfectly fine. It’s the mindless following of them that’s the problem.

    Your take is the same one I see from every new generation of software engineers discovering that principles, patterns, and ideas have nuance to them. When they see someone applying a particular pattern without nuance, they assume that’s what the pattern means.

    • XM34@feddit.org · 1 point · 6 days ago (edited)

      And then you have Clean Code. Clean Code is like cooking with Carolina Reapers. Some people swear by it, and a tiny bit of Clean Code in your code base never hurt anyone. But use it as much as the book recommends and I’m gonna vomit all day long.

  • JackbyDev@programming.dev · 7 points · 7 days ago

    YAGNI ("you aren’t/ain’t gonna need it) is my response to making an interface for every single class. If and when we need one, we can extract an interface out. An exception to this is if I’m writing code that another team will use (as opposed to a web API) but like 99% of code I write only my team ever uses and doesn’t have any down stream dependencies.

  • deathmetal27@lemmy.world · 4 points · 6 days ago

    One example is creating an interface for every goddamn class I make because of “loose coupling” when in reality none of these classes are ever going to have an alternative implementation.

    Not only loose coupling but also performance reasons. When you initialise a class as its interface, the size of the method references you load into the method area of memory (which doesn’t get garbage collected, BTW) is reduced.

    Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now a mindless OOP circlejerk.

    In my experience, not following SOLID principles makes your application an unmaintainable mess in roughly one year. Though SOLID needs to be coupled with better modularity to be effective.

  • JackbyDev@programming.dev · 5 points · 6 days ago

    I’m making a separate comment for this, but people saying “Liskov substitution principle” instead of “behavioral subtyping” generally seem more interested in finding a set of rules to follow than in exploring what makes those rules useful. (Context: the L in SOLID is “Liskov substitution principle.”) Barbara Liskov herself has said that the proper name for it would be behavioral subtyping.

    In an interview in 2016, Liskov herself explains that what she presented in her keynote address was an “informal rule”, that Jeannette Wing later proposed that they “try to figure out precisely what this means”, which led to their joint publication [A behavioral notion of subtyping], and indeed that “technically, it’s called behavioral subtyping”.[5] During the interview, she does not use substitution terminology to discuss the concepts.

    You can watch the video interview here. It’s less than five minutes. https://youtu.be/-Z-17h3jG0A
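
    To see why behavior, not just signatures, is what matters, here is the textbook Rectangle/Square sketch in Java (hypothetical classes, not from the interview): Square type-checks as a Rectangle but breaks the contract callers reason from.

    ```java
    class Rectangle {
        protected int width, height;

        void setWidth(int w)  { this.width = w; }
        void setHeight(int h) { this.height = h; }
        int area()            { return width * height; }
    }

    class Square extends Rectangle {
        // Preserves the square invariant, but silently changes behavior the
        // supertype implied: setting the width no longer leaves the height alone.
        @Override void setWidth(int w)  { this.width = w; this.height = w; }
        @Override void setHeight(int h) { this.width = h; this.height = h; }
    }

    class LspDemo {
        static int stretch(Rectangle r) {
            r.setWidth(5);
            r.setHeight(2);
            return r.area(); // reasoning from Rectangle's contract, this is 10
        }

        public static void main(String[] args) {
            System.out.println(stretch(new Rectangle())); // 10
            System.out.println(stretch(new Square()));    // 4: behavioral subtyping violated
        }
    }
    ```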

  • Corbin@programming.dev · 2 points · 6 days ago

    Java is bad but object-based message-passing environments are good. Classes are bad, prototypes are also bad, and mixins are unsound. That all said, you’ve not understood SOLID yet! S and O say that just because one class is Turing-complete (with general recursion, calling itself) does not mean that one class is the optimal design; they can be seen as opinions rather than hard rules. L is literally a theorem of any non-shitty type system; the fact that it fails in Java should be seen as a fault of Java. I is merely the idea that a class doesn’t have to implement every interface or be coercible to any type; that is, there can be non-printable non-callable non-serializable objects. Finally, D is merely a consequence of objects not being functions; when we want to apply a function f to a value x but both are actually objects, both f.call(x) and x.getCalled(f) open a new stack frame with f and x local, and all of the details are encapsulation details.
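
    A rough Java rendering of that last point (hypothetical names, assumed for illustration): once the “function” and the “value” are both objects, f.call(x) and x.getCalled(f) are just two encapsulation choices over the same stack frame.

    ```java
    // A minimal object-as-function interface; hypothetical, for illustration only.
    interface Fn<A, R> {
        R call(A arg);
    }

    final class Doubler implements Fn<Integer, Integer> {
        public Integer call(Integer x) { return x * 2; }
    }

    final class Value {
        private final int x;

        Value(int x) { this.x = x; }

        // The mirror image: the value accepts the function instead.
        int getCalled(Fn<Integer, Integer> f) { return f.call(x); }
    }

    class MessageDemo {
        public static void main(String[] args) {
            Fn<Integer, Integer> f = new Doubler();
            System.out.println(f.call(21));                 // f.call(x) -> 42
            System.out.println(new Value(21).getCalled(f)); // x.getCalled(f) -> 42
        }
    }
    ```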

    So, 40%, maybe? S really is not that unreasonable on its own; it reminds me of a classic movie moment from “Meet the Parents” about how a suitcase manufacturer may have produced more than one suitcase. We do intend to allocate more than one object in the course of operating the system! But also it perhaps goes too far in encouraging folks to break up objects that are fine as-is. O makes a lot of sense from the perspective that code is sometimes write-once immutable such that a new version of a package can add new classes to a system but cannot change existing classes. Outside of that perspective, it’s not at all helpful, because sometimes it really does make sense to refactor a codebase in order to more efficiently use some improved interface.

  • dejected_warp_core@lemmy.world · 5 points · 7 days ago

    Also, the more I get into languages like Rust, the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now a mindless OOP circlejerk.

    There are definitely occasions when these principles do make sense, especially in an OOP environment, and they can also make some design patterns really satisfying and easy.

    Congratulations. This is where you wind up long after learning the basics, once you start interacting with lots of code in the wild. You are not alone.

    Implementing things with pragmatism, when it comes to conventions and design patterns, is how it’s really done.

  • termaxima@slrpnk.net · 4 points · 7 days ago

    99% of code is too complicated for what it does because of principles like SOLID, and because of OOP.

    Algorithms can be complex, but the way a system is put together should never be complicated. Computers are incredibly stupid, and will always perform better on linear code that batches similar operations together, which is not so coincidentally also what we understand best.

    Our main issue in this industry is not premature optimisation anymore, but premature and excessive abstraction.
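
    For what it’s worth, a small Java sketch of what “linear code that batches similar operations” can look like in practice (hypothetical names): one flat primitive array per attribute and a single predictable loop, instead of chasing references through an object graph.

    ```java
    // Data laid out linearly, one array per attribute ("structure of arrays").
    final class Particles {
        final double[] x;
        final double[] vx;

        Particles(int n) {
            x = new double[n];
            vx = new double[n];
        }

        // One pass over contiguous memory: easy for the CPU and for the reader.
        void step(double dt) {
            for (int i = 0; i < x.length; i++) {
                x[i] += vx[i] * dt;
            }
        }
    }
    ```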

    • douglasg14b@lemmy.world · 2 points · 6 days ago (edited)

      This is crazy misattribution.

      99% of code is too complicated because inexperienced programmers made it too complicated, not because of the principles they mislabel and misunderstand.

      Just because I forcefully and incorrectly apply a particular pattern to a problem it isn’t suited to solve doesn’t mean the pattern is the problem. In this case, I, the developer, am the problem.

      Everything has nuance and you should only use in your project the things that make sense for the problems you face.

      Crowbarring a solution for a problem a project isn’t dealing with into that project is going to lead to pain.

      Why that isn’t a predictable outcome baffles me. And why the blame then goes to the pattern that was misapplied baffles me even further.

      • termaxima@slrpnk.net · 1 point · 6 days ago

        No. These principles are supposedly designed to help those inexperienced programmers, but in my experience, they tend to do the opposite.

        The rules are too complicated, and of dubious usefulness at best. Inexperienced programmers really need to be taught to keep things radically simple, and I don’t mean “single responsibility” or “short functions”.

        I mean “stop trying to be clever”.

  • HaraldvonBlauzahn@feddit.org · 3 points · 7 days ago

    I think that OOP is most useful in two domains: device drivers and graphical user interfaces. The Linux kernel is object-oriented in this sense.

    OOP might also be useful for data structures. But you can just as well think of them as “data structures with operations that keep invariants” (which is an older concept than OOP).
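
    A tiny Java sketch of that older concept, a data structure whose operations keep an invariant (hypothetical example): the field is private, so every mutation goes through methods that preserve “the balance never goes negative.”

    ```java
    final class Account {
        private long balanceCents; // invariant: always >= 0

        void deposit(long cents) {
            if (cents < 0) throw new IllegalArgumentException("negative deposit");
            balanceCents += cents;
        }

        // Refuses any withdrawal that would break the invariant.
        boolean withdraw(long cents) {
            if (cents < 0 || cents > balanceCents) return false;
            balanceCents -= cents;
            return true;
        }

        long balanceCents() { return balanceCents; }
    }
    ```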

  • melsaskca@lemmy.ca · 2 points · 7 days ago

    OOP is good in a vacuum. In real life, where deadlines apply, you’re going to get some ugly stuff under the hood, even though the app or system seems to work.

  • Feyd@programming.dev · 46 points · 8 days ago (edited)

    If it makes the code easier to maintain it’s good. If it doesn’t make the code easier to maintain it is bad.

    Making interfaces for everything, or getters and setters for everything, just in case you change something in the future, makes the code harder to maintain.

    This might make sense for a library, but it doesn’t make sense for application code that you can refactor at will. Even if you do have to change something and it means a refactor that touches a lot, it’ll still be a lot less work than bloating the entire codebase with needless indirection every day.
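
    A quick Java sketch of the indirection being argued against (hypothetical names): the interface adds an extra file and an extra hop for every change, while there is still exactly one implementation.

    ```java
    // The "just in case" version: three places to touch per change.
    interface UserServiceInterface {
        String displayName(long userId);
    }

    final class UserServiceImpl implements UserServiceInterface {
        public String displayName(long userId) { return "user-" + userId; }
    }

    // The version that is enough until a second implementation actually exists:
    final class UserService {
        String displayName(long userId) { return "user-" + userId; }
    }
    ```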

    • termaxima@slrpnk.net · 1 point · 7 days ago

      Getters and setters are superfluous in most cases, because you do not actually want to hide complexity from your users.

      To use the usual trivial example: if you change your circle’s circumference from a property to a function, I need to know! You just replaced a memory access with some arithmetic; depending on my behaviour as a user, this could be either great or really bad for my performance.
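
      In Java terms, a minimal sketch of that example (hypothetical Circle class): the same accessor shape can hide a stored value or a recomputation, and a caller in a hot loop can’t tell which from the call site.

      ```java
      final class Circle {
          private final double radius;
          private final double cachedCircumference;

          Circle(double radius) {
              this.radius = radius;
              this.cachedCircumference = 2.0 * Math.PI * radius; // computed once
          }

          // Variant A: effectively a memory read.
          double circumferenceCached() { return cachedCircumference; }

          // Variant B: arithmetic on every call, behind the same kind of signature.
          double circumferenceComputed() { return 2.0 * Math.PI * radius; }
      }
      ```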

    • ExLisper · 1 point · 7 days ago

      Exactly this. And to know what code is easy to maintain, you have to see how a couple of projects evolve over time. Your perspective on this changes as you gain experience.

    • Valmond@lemmy.world · 8 points · 8 days ago

      I remember the recommendation to use a typedef (or #define 😱) for integers, like INT32.

      In case you recompile it on a weird CPU or something, I guess. What a stupid idea. At least where I worked it was dumb; if someone knows any benefits, I’d gladly hear them!

      • SilverShark@programming.dev · 7 points · 8 days ago

        We had it because we needed to compile for Windows and Linux on both 32- and 64-bit processors. So we defined all our Int32, Int64, uint32, uint64 and so on. There were a bunch of these definitions in the core header file, with #ifndef and such.

        • Valmond@lemmy.world · 4 points · 8 days ago

          But you can use a 64-bit int on a 32-bit Linux, and vice versa. I never understood the benefit of tagging the stuff. You have to go pretty far back in time to find a platform where an int isn’t compiled to a 32-bit signed int. And there were already long long and size_t… why make new ones?

          Readability, maybe?

          • Consti@lemmy.world · 2 points · 8 days ago

            Very often you need to choose a type based on the data it needs to hold. If you know you’ll need to store numbers of a certain size, use an integer type that can actually hold them; don’t make it dependent on a platform definition. Always using int can lead to really insidious bugs where a function works on one platform and not on another due to overflow.
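
            In Java the width at least doesn’t vary by platform (int is always 32 bits), but the overflow itself is just as silent; a minimal sketch:

            ```java
            class OverflowDemo {
                public static void main(String[] args) {
                    int a = 2_000_000_000;
                    int b = 1_000_000_000;

                    System.out.println(a + b);        // -1294967296: silent wraparound
                    System.out.println((long) a + b); // 3000000000: widened before adding

                    // Math.addExact throws instead of wrapping silently.
                    try {
                        Math.addExact(a, b);
                    } catch (ArithmeticException e) {
                        System.out.println("overflow detected: " + e.getMessage());
                    }
                }
            }
            ```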

            • Valmond@lemmy.world · 2 points · 8 days ago

              Show me one.

              I mean, I have worked on 16-bit platforms, but nobody would use that code straight out of the box on some other, incompatible platform; it doesn’t even make sense.

              • Consti@lemmy.world · 2 points · 7 days ago

                Basically anything low level. When you need a byte, you don’t use an int, you use a uint8_t (reminder that char is actually not defined to be signed or unsigned: “Plain char may be signed or unsigned; this depends on the compiler, the machine in use, and its operating system”). Any time you need to interact with another system, like hardware or networking, it is incredibly important to know how many bits the other side uses, to avoid mismatches.

                For purely the size of an int, the most famous example is the Ariane 5 launch failure, where an integer overflow (a 64-bit floating-point value converted to a 16-bit integer) brought down the rocket. OWASP (the Open Worldwide Application Security Project) lists integer overflows as a security concern, though not ranked very highly, since they mostly cause problems when combined with buffer accesses (using user input in some arithmetic operation that may overflow into unexpected ranges).

                • Valmond@lemmy.world · 1 point · 7 days ago

                  And the byte wasn’t obliged to have 8 bits, either.

                  Nice example, but I’d say it’s kind of niche 😁 It reminds me of the underflow in a video game that turned the most peaceful NPC into a warmongering lunatic. But that wouldn’t have been prevented by defines.

          • SilverShark@programming.dev · 1 point · 8 days ago

            It was a while ago indeed, and readability does play a big role. Also, it becomes easier to just type it out. Of course autocomplete helps, but it’s just easier.

  • iii@mander.xyz · 20 points · 8 days ago (edited)

    Yes, OOP and all the patterns are more often than not bullshit. Java is especially well known for that; “Enterprise Java” is a well-known meme.

    The patterns and principles aren’t useless. It’s just that, in practice, most of the time they’re used as hammers even when there’s no nail in sight.

        • iii@mander.xyz · 3 points · 8 days ago (edited)

          Can I bring my own AbstractSingletonBeanFactoryManager? Perhaps through some runtime dependency injection? Is there a RuntimePluginDiscoveryAndInjectorInterface I can implement for my AbstractSingletonBeanFactoryManager?

    • SinTan1729@programming.dev · 4 points · 8 days ago

      As an amateur with some experience in the functional style of programming, anything that does SOLID seems so unreadable to me. Everything is scattered, and it just doesn’t feel natural. I feel like you need to know how things are named, and what the whole thing looks like, before anything makes any sense. I thought SOLID was supposed to make code more local, but at least to my eyes, it makes everything a tangled mess.

      • iii@mander.xyz · 4 points · 8 days ago

        Especially in Java, it relies extremely heavily on the IDE to make sense to me.

        If you’re a minimalist like me, and prefer your text editor to be separate from the linter, compiler, and linker, it’s not feasible, because everything is so verbose, spread out, and coupled by convention.

        So when I do work in Java, I reluctantly bring out Eclipse. It just doesn’t make any sense without it.

        • SinTan1729@programming.dev · 3 points · 8 days ago (edited)

          Yeah, same. I like to code in Neovim, and OOP just doesn’t make any sense in there. Fortunately, I don’t have to code in Java often. I had to install Android Studio just to make a small bugfix in an app, and it was so annoying. The fix itself was easy, but I had to spend around an hour trying to figure out where exactly the relevant code was.

  • entwine@programming.dev · 15 points · 7 days ago

    I think the general path to enlightenment looks like this (in order of experience):

    1. Learn about patterns and try to apply all of them all the time
    2. Don’t use any patterns ever, and just go with a “lightweight architecture”
    3. Realize that both extremes are wrong, and focus on finding appropriate middle ground in each situation using your past experiences (aka, be an engineer rather than a code monkey)

    Eventually, you’ll end up “rediscovering” some parts of SOLID on your own, applying them appropriately, and not even realize it.

    Generally, the larger the code base and/or team (which are usually correlated), the more that strict patterns and “best practices” can have a positive impact. Sometimes you need them because those patterns help wrangle complexity, other times it’s because they help limit the amount of damage incompetent teammates can do.

    But regardless, I want to point something out:

    the more these doubts grow, leading me to believe that most of it is just dogma that has drifted far beyond its initial motivations and goals and is now a mindless OOP circlejerk.

    This attitude is a problem. It’s an attitude of ignorance, and it’s an easy hole to fall into but a difficult one to get out of. Nobody is “circlejerking OOP”. You’re making up a strawman to disregard something you failed at (e.g., successful application of SOLID principles). Instead, perform some introspection and try to analyze why you didn’t like it, without emotional language. Imagine you’re writing a postmortem for an audience of colleagues.

    I’m not saying to use SOLID principles, but drop that attitude. You don’t want to end up like those annoying guys who discovered their first native programming language, followed a Vulkan tutorial, and now act like they’re on the forefront of human endeavor because they imported a GLTF model into their “game engine” using assimp…

    A better attitude will make you a better engineer in the long run :)

    • iByteABit@programming.dev (OP) · 1 point · 4 days ago

      I get your points and agree, though my “attitude” is mostly a response to a similar amount of attitude from developers who swear by one principle to the death. When you question an extreme application of these principles, they come at you with acronyms instead of any logical argument as to why you should always create an interface for everything.

    • marzhall@lemmy.world · 3 points · 7 days ago

      I dunno, I’ve definitely rolled into “factory factory” codebases where abstraction astronauts have gone to town for a decade on classes that only ever had one real implementation, and seen how far the cargo culting can go.

      It’s the old saying: “give a developer a tool, and they’ll find a way to use it.” Having a distaste for mindless, dogmatic application of patterns is healthy for a dev, in my mind.

    • Gonzako@lemmy.world · 1 point · 7 days ago

      You’ve described my journey to a T. You eventually find your middle ground, which is sadly not universal, and thus we shall forever fight the Stack Overflow wars.