It's unclear to me what the author thinks OOP is, and what he thinks we are replacing it with. The main point of OOP to me is hiding internal state. So OOP is great for user interfaces, because there's all kinds of state there (not just the model, but the internal state of the UI element, like the scroll position of a list or the selection range of a text edit). Microservices, in fact, could be considered "network objects" and a microservice framework as network OOP. The problem there is that normally, making a function call is straightforward. The call may produce a failure result, but the call actually happens. On the network, the call might not happen, and you might not be aware that the call cannot and will not happen for some seconds. This is not likely to simplify your code...
OOP can be just about structuring code, like the Java OOP fundamentalism, where even a function must be a Runnable object (unless it's changed since Oracle took over). If there's anything that is not an object, it's a function!
Some things are not well-suited to OOP, like linear processing of information in a server. I suspect this is where the FP excitement came from. In transforming information and passing it around, no state is needed or wanted, and immutability is helpful. FP in a UI or a game is not so fun (witness all the hooks in React, which in anything complicated is difficult to follow), since both of those require considerable internal state.
Algorithms are a sort of middle ground. Some algorithms require keeping track of a bunch of things, others more or less just transform the inputs. OOP (internal to the algorithm) can make the former much clearer, while it is unhelpful for that latter.
> It's unclear to me what the author thinks OOP is, and what he thinks we are replacing it with.
The author is complaining about bloat.
The thing is, in this case, the bloat has highly tangible costs: Spreading an application across multiple computers unnecessarily adds both operation costs and development costs.
The essence of OOP to me is message-passing, which implies (hidden) local mutable state (there must be local state if a message can change future behavior). (Really, actor-based languages are purer expressions of this ideal than conventional OOP languages, including Smalltalk.) However, encapsulation is not at all unique to OOP; e.g. abstract data types are fully encapsulated but do not require all function calls to look like message passing.
I think that "OOP" is an incredibly overloaded term which makes it difficult to speak about intelligibly or usefully at this point.
Are we talking about using classes at all? Are we arguing about Monoliths vs [Micro]services?
I don't really think about "OOP" very often. I also don't think about microservices. What some people seem to be talking about when they say they use "OOP" seems strange and foreign to me, and I agree we shouldn't do it like that. But what _other_ people mean by "OOP" when they say they don't use it seems entirely reasonable and sane to me.
I think in terms of language features and patterns which actually mean something. OOP doesn't really mean anything to me, given that it doesn't seem to mean anything consistent in the industry.
Of course I work with classes, inheritance, interfaces, overloading, whatever quite frequently. Sometimes, I eschew their usage because the situation doesn't call for it or because I am working in something which also eschews such things.
What I don't do is care about "OOP" as a concept in and of itself.
Well.. I don't understand how you can read a confused and muddled article by someone who doesn't want to know the difference between JavaTM and one of its notable choices in the many dimensions of language choices and not wish to be a little more enlightened as to the difference between hiring an OOP monkey or a VMware jockey to smash some bits about.. The article is like a poster child for taking an hour to learn what your profession is about.
Respectfully, it's not clear to me what you're saying. You're clearly displeased with both the author of the article and with myself, but beyond that, I'm not sure what your thesis is.
We give features of a profession terms so we can refer to them independently and make reasonable discussions that don't confuse people as to which traits we think something that is neither Java nor Python (and so need not match either of them on every dimension) should have.
For example, I hate Java because of OOP, but strong typing can make a lot of bad things in a language tolerable. Does the writer of the article agree with me? They don't seem able to understand whether they do.
I think this is some combination of strawman, and subset of all cases. You can lament complications of OOP. You can also lament the complications of docker, kubernetes, HTTP APIs etc. These aren't mutually exclusive, and they don't span the breadth of programming techniques. I prefer avoiding all of these.
Anecdotally, I've replaced OOP with plain data structures and functions.
>> _Anecdotally, I've replaced OOP with plain data structures and functions._
I think this is why FP is becoming more popular these days but I'm not sure some people get why.
The problem with OOP is you take a data set and spread it all over a 'system' of stateful (mutable) objects and wonder why it doesn't/can't all fit back into place when you need it to. OOP looks great on paper and I love the premise but...
With FP you take a data set and pass it through a pipeline of functions that give back the same dataset or you take a part of that data out, work on it and put it straight back. All your state lives in one place, mutable changes are performed at the edges, not internally somewhere in a mass of 'instances'.
I think micro services et al try to alleviate this by spreading the OO system's instances into silos but that just moves the problems elsewhere.
IME microservices solve engineering process problems (i.e. synchronization, enforcement of interface boundaries, build and test scale issues), not technical problems in the product.
I agree, very true when used for the purposes you noted. I guess my point was more about using them as a way to solve the underlying problems a large OO system can develop.
Microservices force you to package data sets for transport, which is very functional if you only take the data and transport into consideration; the mess can still happen within the microservice, though.
>> Anecdotally, I've replaced OOP with plain data structures and functions.
Agreed. I think objects/classes (C++) should be for software subsystems and not so much for user data. Programs manipulate data, not the other way around - polymorphism and overloading can be bad for performance.
Objects/classes work best for datastructures (IMO).
Outside that usecase, I think polymorphism via inheritance is generally a mistake.
Programs manipulate data and datastructures organize that data in a way that's algorithmically efficient.
The main issue with OOP is that without a very clear abstraction, it can be almost impossible to reason about code as you end up needing to know too much about the hierarchy of code to correctly understand what will happen next. As it turns out, most programmers are pretty bad at managing that abstraction boundary.
OOP is like alcohol: enjoyable in moderation but dangerous in excess.
In moderation, an object is a data structure with associated functions (methods) that acts as a kind of namespace. If your data structure and functions are separate, you might start having function name collisions.
I'm not really qualified to talk about either topic at length, but my impression is that the Microservice crowd is kind of a different group than the anti-OOP crowd.
As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.
I get not wanting crazy multi tier class inheritance, that seems like a disaster.
In my case, I wanted to do CRUD endpoints which were programmatically generated based on database schema. Turns out - it's super hard without an ORM or at least some kind of object layer. I got halfway through it before I realized what I was making was actually an ORM.
Please feel free to let me know why this is all an awful idea, or why I'm doing it wrong, I genuinely am just winging it.
OOP does not simplify CRUD or DB ops, because you want to batch.
You don’t want lazy loading. You don’t want to load 1 thing. You don’t want to update 1 thing.
You want to actually exploit RETURNING and not have the transaction fail on a single element in batch.
If you care about performance you do not want ORM at all.
You want to load the response buffer and not hydrate objects.
If you ignore ORM you will realize CRUD is easy. You could even batch the actual HTTP requests instead of processing them 1 by 1. Try to do that with a bunch of objects.
I would personally never use ORM or dependency injection (toposort+annotations). Both approaches in my opinion do not solve hard problems and in most cases you don’t even want to have the problems they solve.
It's fashionable to dunk on OOP (because most examples - like employee being a subtype of person - are stupid) and ORM (because yes you need to hand write queries of any real complexity).
But there's a reason large projects rely on them. When used properly they are powerful, useful, time-saving and complexity-reducing abstractions.
Code hipsters always push new techniques and disparage the old ones, then eventually realise that there were good reasons for the status quo.
Case in point the arrival of NoSQL and wild uptake of MongoDB and the like last decade. Today people have re-learned the value of the R part of RDBMS.
Large projects benefited from OOP because large projects need abstraction and modularization. But OOP is not unique in providing those benefits, and it includes some constructs (e.g. inheritance, strictly-dynamic polymorphism) that have proven harmful over time.
It may be extreme, but it's very common. It's probably the single most common argument used against OOP. If you drop inheritance, most of the complaints about OO fall away.
Almost all languages have some sort of object representation, right? Classes with their own behavior, DTOs, records, structs, etc. What language are you working in? If you're coupled to a specific database provider anyway, there's usually a system table you can query to get your list of tables, column names, etc., so you could almost just use one data source and only need to deal with its structure to provide all your endpoints (not really recommending this approach).
> As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.
I've heard this a lot in my career. I can agree that most object-oriented languages have had to do a lot of work to make CRUD and database operations easy to do, because they are common needs. ORM libraries are common because mapping between objects and relations (SQL) is a common need.
It doesn't necessarily mean that object-oriented programming is the best for CRUD because ORMs exist. You can find just as many complaints that ORMs obfuscate how database operations really work/think. The reason you need to map from the relational world to the object world is because they are different worlds. SQL is not an object-oriented language and doesn't follow object-oriented ideals. (At least, not out of the box as a standardized language; many practical database systems have object-oriented underpinnings and/or present object-oriented scripting language extensions to SQL.)
> it's super hard without an ORM or at least some kind of object layer
This seems like you might have got caught in something of a tautological loop: because you were working in a language with "object layers", it seemed easiest to work in one, and thus work with an ORM.
It might also be confusing the concepts of "data structure" and "object". Which most object-oriented languages generally do, and have good reason to. A good OOP language wants every data structure to be an object.
The functional programming world still makes heavy use of data structures. It's hard to program in any language without data structures. FP CRUD can be as simple as four functions `create`, `read`, `update`, and `delete`, but still needs some mapping to data structures/data types. That may still sound object-oriented if you are used to thinking of all data structures as "objects". But beyond that, it should still sound relatively "easy" from an FP perspective: CRUD is just functions that take data structures and make database operations or make database operations and return data structures.
A difference between FP and OOP's view of data structures is where "behaviors" live. An object is a data structure with "attached" behaviors which often modify a data structure in place. FP generally relies on functions that take one data structure and return the next data structure. If you aren't using much in the way of class inheritance, if your "objects" out of your ORM have few methods of their own, you may be closer to FP than you think. (The boundary is slippery.)
> This seems like you might have got caught in something of a tautological loop: because you were working in a language with "object layers", it seemed easiest to work in one, and thus work with an ORM.
I mean, I think this is likely the case. So, I tried this, for example in Go, which is not really a proper functional programming language as I understand it, but is definitely not object-oriented.
So for my use case, I wanted to be able to take a database schema and programmatically create a set of CRUD endpoints in a TUI. Based on my pretty limited knowledge of Go, I found this to be pretty challenging. At first, I built it with Soda / Pop, the ORM from Buffalo framework. It worked fairly well.
Then I got frustrated with using Soda outside Buffalo, and yoinked the ORM to try and remove a layer. Using vanilla Go, it seems like the accepted pattern is that you create separate functions for C R U and D, as you referred to. However, it seems like this is pretty challenging to do programmatically, particularly without sophisticated metaprogramming, and even if you had a language which had complex macros or something, that is objectively significantly harder than object.get() and object.save().
Finally, I put GORM back in, and it worked fine. And GORM is a nice library, even though I think having an ORM is not the "Go" way of doing things in the first place. But also, GORM is basically using function magic to feel like OOP. And maybe the problem with this idea is that it's not "proper Go" to make a thing like this; it would be better to just code it. There's an admin panel in the Pagoda Go stack which relies on the ent ORM to function as well. I can only assume the developer motivations, but I assume they are along the same lines as my experience.
I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.
In the real world, in business logic, objects do things. They aren't just data structures.
To summarize, CRUD seems pretty easy in any language, programmatically doing CRUD seems super hard in FP. Classes make that a lot easier. Maybe we shouldn't do that ever, and that's fine, but I'm a Django guy, I love my admin panels. Just my experience.
> I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.
Having methods at all makes a language OOP. Class inheritance is almost a side quest in OOP. (There are OOP languages with no class inheritance.)
Go seems quite object-oriented to me. I would definitely assume it is easier to use an ORM in Go than to not use an ORM.
I don't use a lot of Go, so I can't speak to anything about what the "proper Go" way of doing things is.
I could try to describe some of the non-ORM, functional programming ways of working with databases as I've seen in languages like F#, Haskell, or Lisp, but I'm not sure how helpful that would be to show that CRUD is not "super hard" in FP, especially if you aren't familiar with those languages.
The thing I'm mostly picking up from your post here is that you like OOP and are comfortable with it, and that's great. Use what you like and use what you are comfortable with. OOP is great in that a lot of people also like it and feel comfortable with it.
I think that's fair, and I generally prefer a lighter stack for CRUD, but I still love Django and Rails. Maybe just having "objects" is not enough to qualify as OOP but for many use cases, the convenience offered by "Batteries Included" is worth the trade off in "overengineering".
If I have to build an app, I'm going for rails. If I'm building a back end, I'm reaching for Go. If I need to integrate with Python libraries, Django is great.
But ask me again when I get to the other side of some OCaml projects.
Even in video games, I avoid inheritance, I always much prefer composition. Build a complex object from many small objects, then vary behavior with parameters rather than deriving a child class and overriding methods.
Maybe I am not exactly what you mentioned, but I do feel OOP set us back a decade or two, and I do think the general concept of microservices is a good idea. But maybe to your point, these beliefs are completely orthogonal to one another, and why they are mentioned as being related baffled me. To be honest the whole post baffled me, and I am disappointed I cannot downvote the submission. Anyway, more to your topic: OOP in the early 2000s was put on a massive pedestal, and trying to point out its flaws would often get you chastised or shunned, and labeled as someone who just didn't get OOP. But the object hierarchies often became their own source of inflexibility, and shoehorning something new into them could be very difficult, often involving an hour or three of debate/meetings on how best to make the change.
Microservices are more about making very concrete borders between components, with an actual network in between them... and really a contract that has to be negotiated across teams. I feel the best thing this did was force a real conversation around the API boundary and contract. Monoliths turn into a big ball of mud once a change slips through that passes in an entire object when just a field is needed, and after a few of these everything is fairly tightly coupled. Modern practices with PRs could prevent a lot of this, but there is still a lot of rubber stamping going on and they don't catch everything.

Objects themselves are fine ideas, and I think OOP is great when you focus on composition over inheritance, and bonus points if the objects map cleanly into a relational database schema; once you start getting inheritance hierarchies, they often do not.

If I had to guess, your experience with OOP is mostly using ORMs where you define the data and it spits out a table for you and some accessor methods, and that works... until it doesn't. At a certain level of complexity the ORM falls apart. What I have seen in nearly every place I have worked is that at some point some innocuous change gets included and all of a sudden a query does not use an index properly; it works fine in dev, but then you push it to prod and the DB lights on fire and it's really difficult to understand what happened.

The style of programming you are talking about would be derided by some old heads as "C with objects" and not "really" OOP. But I do think you are onto something by taking the best parts and avoiding the bad.
"Micro" services aren't great when they are taken to their utmost tiny size, but the idea of a problem domain being well constrained into a deployable unit usually leads to better long term outcomes than a monolith, though its also very true that for under $10k you can get 32 cores of xeons and about 256 gigs of ram, and unless you are building something with intense compute requirements, that is going to get you a VERY long way in terms of concurrent users.
From one rambler to another: I’m not sure who or what this aimed at, as it goes all over the place.
Cloud: Separating resources from what gets deployed is a classic separation of concerns.
I don’t miss the days where I had to negotiate with the IT team on hardware, what gets run, and so on.
Personally, I believe the next evolution is a rebalkanization into private clouds. Mid-to-large companies have zero reason to tie their entire computing to third-party hosts and expose information to them.
OpenAPI: The industry went through a number of false starts on formal remoting calls (CORBA, DCOM, SOAP). Those days sucked.
The RESTful APIs caught on, and of course at some point, the need for a formal contract was recognized.
But note how decoupled it is from the underlying stack: It forces the engineers to think about the contract as a separate concern.
The problem here is how fragile the web protocol and its security actually are, but the past alternatives offer no solution here.
No, "we" are not replacing OOP with something worse. "We" are replacing layers of stupid shit that got layered on top of, and associated with OOP, with different renderings of the same stupid shit.
I have been programming since 1967. Early in my college days, when I was programming in FORTRAN and ALGOL-W, I came across structured programming. The core idea was that a language should provide direct support for frequently used patterns. Implementing what we now call while loops using IFs and GOTOs? How about adding a while loop to the language itself? And while we're at it, GOTO is never a good idea, don't use it even if your language provides it.
Then there were Abstract Datatypes, which provided my first encounter with the idea that the interface to an ADT was what you should program with, and that the implementation behind that interface was a separate (and maybe even inaccessible) thing. The canonical example of the day was a stack. You have PUSH and POP at the interface, and the implementation could be a linked list, or an array, or a circular array, or something else.
And then the next step in that evolution, a few years later, was OOP. The idea was not that big a step from ADTs and structured programming. Here are some common patterns (modularization, encapsulation, inheritance), and some programming language ideas to provide them directly. (As originally conceived, OOP also had a way of objects interacting, through messages. That is certainly not present in all OO languages.)
And that's all folks.
All the glop that was added later -- Factories, FactoryFactories, GoF patterns, services, microservices -- that's not OOP as originally proposed. A bunch of often questionable ideas were expressed using OO, but they were not part of OO.
The OOP hatred has always been bizarre to me, and I think mostly motivated by these false associations. The essential OOP ideas are uncontroversial. They are just programming language constructs designed to support programming practices that are pretty widely recognized as good ones, regardless of your language choices. Pick your language, use the OO parts or not, it isn't that big a deal. And if your language doesn't have OO bits, then good programming often involves reimplementing them in a systematic way.
These pro- and anti-OOP discussions, which can get pretty voluminous and heated, seem a lot like religious wars. Look, we can all agree that the Golden Rule is a pretty good idea, regardless of the layers of terrible ideas that get piled onto different religions incorporating that rule.
> Other bright sparks jumped in on the action: what if this separation did not rely on the personal hygiene of the programmers - something that should always be called into question for public health reasons - and was instead enforced by the language? Components might hide their implementation by default and communicate only through a set of public functions, and the language might reject programs that tried to skip around these barricades. How quaint.
Sounds like C.
When was the last time you did OO against a .h file without even needing access to the .c file?
> And so, the process/network boundary naturally became that highest and thickest wall
I've had it both ways. Probably everyone here has. It's difficult to make changes with microservices. You gotta open new routes, and wait for people to start using those routes before you close the old ones. But it's impossible to make changes to a monolith: other teams aren't using your routes, they're using your services and database tables.
Data hiding is just one of the concepts of OOP. Polymorphism is another one.
How that's implemented is another question. You can do OOP in plain C, several libraries kinda did that, like GTK. Other languages tried to support these concepts with less boilerplate, giving rise to classes and such. But OOP is not about language features, it's fundamentally a way of designing software.
Polymorphism is trivial in C and even less restrictive than in other OOP-first languages. The reason GTK (actually GObject) is so overengineered is not the classes, but that it allows creating classes and types at runtime.
Many great design patterns have come from OOP and have found their home in functional languages or libraries.
Dependency injection has to be the most successful one, but there are at least another dozen good ideas that came from the OO world and have been found to be solid.
What has rarely proven to be a good idea instead is inheritance at behavior level. It's fine for interfaces, but that's it. Same for stateful classes, beyond simple data containers like refs.
You can even have classes in functional programming word, it's irrelevant, it's an implementation detail, what matters is that your computations are pure, and side effects are implemented in an encoded form that can be combined in a pure way (an IO or Effect data type works, but so can a simple lazy function encoding).
I think that the author is confusing OOP with Java. And I agree, Java is great and had most of the things we now do with overbloated infrastructures at least a couple of decades ago. We like to reinvent the wheel, that's what we do, but each time we give it another name. Then we complain about the good old days when everything could run on a Pentium and 64MB of RAM.
I think it would be very nice if the author provided citations for their assertions about how OOP was adopted. I was alive and gigging as a coder in the 80s when people had interminable arguments about Structured Programming, unstructured programming, OOP and every now and again LISP or FORTH. None of what the author mentions rings true. Standard interfaces came DECADES before anyone started talking about OOP. Rationalizing standard interfaces was mentioned in mythical man month. Structured Development was all the rage in the early 80s when I started selling 6502 op codes. Half the people I talked to in the 80s insisted C++ WAS OOP while the other half found that quote from Alan Kay who said C++ wasn't what he was thinking of when he invented the term "Object Oriented Programming."
I think the author is correctly picking up on how messy changes in best common practice can be. Also, different communities / verticals convert to the true religion on different schedules. The custom enterprise app guys are WAAAAY different than games programmers. I'm not sure you'll ever get those communities to speak the same language.
> Every call across components accrues failure modes, requires a slow march through (de)serialisation libraries, a long trek through the kernel’s scheduler. A TLB cache invalidation here, a socket poll there. Perhaps a sneaky HTTP request to localhost for dessert.
And this has tangible costs, too. I saved more than $10k a month in hosting costs for a small startup by combining a few microservices (hosted on separate VMs) into a single service. The savings in development time from eliminating all of the serialization layers were also appreciable.
Microservices are a bad pattern. Function as a service is better, because that architecture tends to assume vertical integration within the function, and you can leverage a shared business logic library across an organization. Having service-to-service communication is the thing that makes microservices suck; if you can give people autonomy while having a shared systems language and keeping computation on one system, it works OK.
Nah, I’ll agree with parent: they’re objectively bad. They turn what could be IPC into network calls, and because everything uses frameworks and ORMs, it’s all slow as hell.
“We can move faster” (but at the cost of our product being slower).
Microservices are a great solution to the problems you run into at high scale.
But they come at great cost. If you don't actually HAVE the problems they solve, do everything in your power to avoid them. If you can just throw money at larger servers, you should not use microservices.
OOP today is really CBP (class-based programming). It's really just imperative code hidden in methods that work on mutable state. It's gross. This is most common in Java, PHP and the likes.
> To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
I had a job interview where the guy asked me something along the lines of: do I have experience with OOP or microservices, or both. So I answered how I like both and feel that MS are Objects at the network layer, the dude cut me off and told me to stick to the question. Absolutely gut wrenching, I was being evaluated by who knows who, in a slot of 30 minutes; maybe his entire job was repeating the same set of questions 16 times per day and he was unironically writing down yes/no next to "Microservices?" "OOP?"
>> So I answered how I like both and feel that MS are Objects at the network layer, the dude cut me off and told me to stick to the question.
OMG! As an interviewer I would have asked you to elaborate on that response. Perfect opportunity to see if and how the candidate thinks and how deep some pocket of understanding goes.
Don't dwell on the bad feeling from this; the guy was not at the level to understand the abstraction both give. While I have felt this sentiment towards things being reinvented, we have to assume some innovation is incremental, and the new thing looks like the old but allows cool new stuff such as networking.
This is so much nonsense. Contracts and interfaces have little to do with OOP. If you don't like inheritance, then talk about inheritance. It's fine. Many people, including myself, don't like inheritance. On the other hand, if it's interfaces and contracts that have you bothered, parse-don't-validate to your heart's content, if the domain allows it. Just don't drag unrelated concepts into this particular discussion.
OOP is about messages, according to the inventor, and messages are about the interface. People get confused by objects, which languages without messages don't have (you can get them, but they are not first class in the language).
Even with plain objects, though, the point isn't the inheritance! The point is to put an interface on the data. Inheritance is sometimes useful, but there is a reason we keep screaming "prefer composition to inheritance" (even though few listen).
TL;DR: OOP is being replaced by microservices (and even more extreme ideas), which is much worse. Yeah, I have to agree with this author.
I will say that service-oriented architecture does have some advantages, and thus sometimes it's the right choice. Parallelism is pretty free and natural, you can run services on different machines, and that can also give you scalability if you need it. However, in my experience that architecture tends to be used in myriad situations where it clearly isn't needed and is a net negative. I have seen it happen.
There is a kernel of insight here that is pretty valuable. Interfaces are a form of security and are a result of a loss of trust. It's interesting to think about how far trust can be extended before it breaks down.
Nonsense. Network service layer separation solves a different problem than OOP. It doesn't replace it. Services and containers bring features and capabilities that OOP doesn't provide. It's orthogonal.
But I think that's really the point: it gets applied to problems it was clearly not meant to solve. The article doesn't really get into that, but that's how I'm reading it because I've seen it happen. To some programmers, every component of a program is its own service, even when there's no need to have multiple processes. The only tool they have is that hammer, so everything has to be a nail.
It's unclear to me what the author thinks OOP is, and what he thinks we are replacing it with. The main point of OOP to me is hiding internal state. So OOP is great for user interfaces, because there's all kinds of state there (not just the model, but the internal state of the UI element, like the scroll position of a list or the selection range of a text edit). Microservices, in fact, could be considered "network objects" and a microservice framework as network OOP. The problem there is that making a normal function call is straightforward. The call may produce a failure result, but the call actually happens. On the network, the call might not happen, and you might not be aware for some seconds that the call cannot and will not happen. This is not likely to simplify your code...
OOP can be just about structuring code, like the Java OOP fundamentalism, where even a function must be a Runnable object (unless it's changed since Oracle took over). If there's anything that is not an object, it's a function!
Some things are not well-suited to OOP, like linear processing of information in a server. I suspect this is where the FP excitement came from. In transforming information and passing it around, no state is needed or wanted, and immutability is helpful. FP in a UI or a game is not so fun (witness all the hooks in React, which in anything complicated is difficult to follow), since both of those require considerable internal state.
Algorithms are a sort of middle ground. Some algorithms require keeping track of a bunch of things, others more or less just transform the inputs. OOP (internal to the algorithm) can make the former much clearer, while it is unhelpful for the latter.
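To make the network-call point concrete, here's a small Python sketch (the delays and function names are invented): a local call always actually runs, while a call across a network boundary forces every call site to handle a "this may simply never complete" case.

```python
import concurrent.futures
import time

def local_increment(x):
    # A local call: it always runs; the only failure mode is an exception.
    return x + 1

def flaky_remote_increment(x, delay=0.2):
    # Stand-in for a network call that may stall (hypothetical service).
    time.sleep(delay)
    return x + 1

def call_with_timeout(fn, *args, timeout=0.05):
    # On a network boundary the call might simply never complete,
    # so every call site needs a timeout-and-fallback policy.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, *args)
        try:
            return ("ok", future.result(timeout=timeout))
        except concurrent.futures.TimeoutError:
            return ("timeout", None)

print(local_increment(1))                            # the easy case
print(call_with_timeout(flaky_remote_increment, 1))  # ("timeout", None)
```

The second failure mode has no local analogue, which is why "microservices as network objects" complicates rather than simplifies the calling code.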
> It's unclear to me what the author thinks OOP is
I rather liked the old post "Object Oriented Programming is an Expensive Disaster that Must End" written over 10 years ago.
https://medium.com/@jacobfriedman/object-oriented-programmin...
Many complained the post was too long, and then debated all kinds of things brought up in the article (such is the way of the internet).
But the one thing I really liked is how it laid out that everyone has a different definition of what OOP is and so it is difficult to talk about.
> It's unclear to me what the author thinks OOP is, and what he thinks we are replacing it with.
The author is complaining about bloat.
The thing is, in this case, the bloat has highly tangible costs: Spreading an application across multiple computers unnecessarily adds both operation costs and development costs.
The essence of OOP to me is message-passing, which implies (hidden) local mutable state (there must be local state if a message can change future behavior). (Really, actor-based languages are purer expressions of this ideal than conventional OOP languages, including Smalltalk.) However, encapsulation is not at all unique to OOP; e.g. abstract data types are fully encapsulated but do not require all function calls to look like message passing.
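A tiny Python sketch of that message-passing ideal (the actor and its messages are made up for illustration): the state is invisible from outside, and a message can change future behaviour.

```python
import queue
import threading

class CounterActor:
    """Minimal actor: state is hidden; the only access is via messages."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # local mutable state, invisible from outside
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self._mailbox.get()
            if msg == "inc":
                self._count += 1      # a message changes future behaviour
            elif msg == "get":
                reply.put(self._count)
            elif msg == "stop":
                return

    def send(self, msg):
        self._mailbox.put((msg, None))

    def ask(self, msg):
        reply = queue.Queue()
        self._mailbox.put((msg, reply))
        return reply.get()

actor = CounterActor()
actor.send("inc")
actor.send("inc")
print(actor.ask("get"))  # 2
actor.send("stop")
```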
I think that "OOP" is an incredibly overloaded term which makes it difficult to speak about intelligibly or usefully at this point.
Are we talking about using classes at all? Are we arguing about Monoliths vs [Micro]services?
I don't really think about "OOP" very often. I also don't think about microservices. What some people seem to be talking about when they say they use "OOP" seems strange and foreign to me, and I agree we shouldn't do it like that. But what _other_ people mean by "OOP" when they say they don't use it seems entirely reasonable and sane to me.
> overloaded
I see what you did there
"I don't really think about "OOP" very often. I also don't think about microservices."
Why even comment on an article about those topics then?
Primarily poor wording on my part.
I think in terms of language features and patterns which actually mean something. OOP doesn't really mean anything to me, given that it doesn't seem to mean anything consistent in the industry.
Of course I work with classes, inheritance, interfaces, overloading, whatever quite frequently. Sometimes, I eschew their usage because the situation doesn't call for it or because I am working in something which also eschews such things.
What I don't do is care about "OOP" is a concept in and of itself.
Well.. I don't understand how you can read a confused and muddled article by someone who doesn't want to know the difference between Java™ and one of its notable choices among the many dimensions of language design, and not wish to be a little more enlightened as to the difference between hiring an OOP monkey or a VMware jockey to smash some bits about. The article is like a poster child for taking an hour to learn what your profession is about.
Respectfully, it's not clear to me what you're saying. You're clearly displeased with both the author of the article and with myself, but beyond that, I'm not sure what your thesis is.
We give features of a profession names so we can refer to them independently and have reasonable discussions that don't confuse people about which traits we think something that is neither Java nor Python (and so need not match either of them on every dimension) should have.
For example, I hate Java because of OOP, but strong typing can make a lot of what's bad in a language tolerable. Does the writer of the article agree with me? They don't seem able to understand whether they do.
I think this is some combination of strawman, and subset of all cases. You can lament complications of OOP. You can also lament the complications of docker, kubernetes, HTTP APIs etc. These aren't mutually exclusive, and they don't span the breadth of programming techniques. I prefer avoiding all of these.
Anecdotally, I've replaced OOP with plain data structures and functions.
>> _Anecdotally, I've replaced OOP with plain data structures and functions._
I think this is why FP is becoming more popular these days but I'm not sure some people get why. The problem with OOP is you take a data set and spread it all over a 'system' of stateful (mutable) objects and wonder why it doesn't/can't all fit back into place when you need it to. OOP looks great on paper and I love the premise but...
With FP you take a data set and pass it through a pipeline of functions that give back the same dataset or you take a part of that data out, work on it and put it straight back. All your state lives in one place, mutable changes are performed at the edges, not internally somewhere in a mass of 'instances'.
I think micro services et al try to alleviate this by spreading the OO system's instances into silos but that just moves the problems elsewhere.
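The FP shape described above can be sketched in a few lines of Python (the transformations are invented): state enters at one edge, flows through pure steps, and nothing is mutated in the middle.

```python
# All state enters at one edge, flows through pure transformations,
# and mutation (printing, saving) happens only at the other edge.
def parse(raw):
    return [int(x) for x in raw.split(",")]

def keep_even(numbers):
    return [n for n in numbers if n % 2 == 0]

def total(numbers):
    return sum(numbers)

def pipeline(raw, *steps):
    value = raw
    for step in steps:
        value = step(value)   # each step returns a new value; nothing in place
    return value

print(pipeline("1,2,3,4,5,6", parse, keep_even, total))  # 12
```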
IME microservices solve engineering process problems (i.e. synchronization, enforcement of interface boundaries, build and test scale issues), not technical problems in the product.
I agree, very true when used for the purposes you noted. I guess my point was more about using them as a way to solve the underlying problems a large OO system can develop. Microservices force you to package data sets for transport; it's very functional if you only take the data and transport into consideration, but the mess can still happen within the microservice.
It is somewhat interesting to realize microservices are conceptually solving the same problem as OOP, despite working in such different areas.
Though OOP is just one step - structured programming works on the same problem.
>> Anecdotally, I've replaced OOP with plain data structures and functions.
Agreed. I think objects/classes (C++) should be for software subsystems and not so much for user data. Programs manipulate data, not the other way around - polymorphism and overloading can be bad for performance.
Objects/classes work best for datastructures (IMO).
Outside that usecase, I think polymorphism via inheritance is generally a mistake.
Programs manipulate data and datastructures organize that data in a way that's algorithmically efficient.
The main issue with OOP is that without a very clear abstraction, it can be almost impossible to reason about code as you end up needing to know too much about the hierarchy of code to correctly understand what will happen next. As it turns out, most programmers are pretty bad at managing that abstraction boundary.
OOP is like alcohol: enjoyable in moderation but dangerous in excess.
In moderation, an object is a data structure with associated functions (methods) that acts as a kind of namespace. If your data structure and functions are separate, you might start having function name collisions.
Hopefully we won't see a prohibition against OOP.
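The namespacing point is easy to show; a Python sketch with invented types:

```python
from dataclasses import dataclass

# Free functions collide, so they need name prefixes...
def render_invoice(invoice): ...
def render_chart(chart): ...

# ...whereas methods are namespaced by their type:
@dataclass
class Invoice:
    total: float
    def render(self):
        return f"Invoice: ${self.total:.2f}"

@dataclass
class Chart:
    points: list
    def render(self):
        return f"Chart with {len(self.points)} points"

print(Invoice(9.5).render())      # Invoice: $9.50
print(Chart([1, 2, 3]).render())  # Chart with 3 points
```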
I'm not really qualified to talk about either topic at length, but my impression is that the Microservice crowd is kind of a different group than the anti-OOP crowd.
As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.
I get not wanting crazy multi tier class inheritance, that seems like a disaster.
In my case, I wanted to do CRUD endpoints which were programmatically generated based on database schema. Turns out - it's super hard without an ORM or at least some kind of object layer. I got halfway through it before I realized what I was making was actually an ORM.
Please feel free to let me know why this is all an awful idea, or why I'm doing it wrong, I genuinely am just winging it.
OOP is not simplifying CRUD or DB ops because you want to batch.
You don’t want lazy loading. You don’t want to load 1 thing. You don’t want to update 1 thing.
You want to actually exploit RETURNING and not have the transaction fail on a single element in batch.
If you care about performance you do not want ORM at all. You want to load the response buffer and not hydrate objects.
If you ignore ORM you will realize CRUD is easy. You could even batch the actual HTTP requests instead of processing them 1 by 1. Try to do that with a bunch of objects.
I would personally never use ORM or dependency injection (toposort+annotations). Both approaches in my opinion do not solve hard problems and in most cases you don’t even want to have the problems they solve.
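A minimal illustration of batch-first CRUD without an ORM, using Python's stdlib sqlite3 (with PostgreSQL you could tack RETURNING onto the INSERT to get the generated keys back, as the parent suggests):

```python
import sqlite3

# Batch-first CRUD with no ORM: one statement, many rows, no per-object
# lazy loading. (With PostgreSQL you'd add RETURNING id to get keys back.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

rows = [("alice",), ("bob",), ("carol",)]
conn.executemany("INSERT INTO users (name) VALUES (?)", rows)

names = [name for (name,) in conn.execute("SELECT name FROM users ORDER BY id")]
print(names)  # ['alice', 'bob', 'carol']
```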
You're not wrong.
It's fashionable to dunk on OOP (because most examples - like employee being a subtype of person - are stupid) and ORM (because yes you need to hand write queries of any real complexity).
But there's a reason large projects rely on them. When used properly they are powerful, useful, time-saving and complexity-reducing abstractions.
Code hipsters always push new techniques and disparage the old ones, then eventually realise that there were good reasons for the status quo.
Case in point the arrival of NoSQL and wild uptake of MongoDB and the like last decade. Today people have re-learned the value of the R part of RDBMS.
Large projects benefited from OOP because large projects need abstraction and modularization. But OOP is not unique in providing those benefits, and it includes some constructs (e.g. inheritance, strictly-dynamic polymorphism) that have proven harmful over time.
Inheritance == harmful is quite an extreme position.
It may be extreme, but it's very common. It's probably the single most common argument used against OOP. If you drop out inheritance, most of the complaints about OO fall away.
Almost all languages have some sort of object representation, right? Classes with their own behavior, DTOs, records, structs, etc. What language are you working in? If you're coupled to a specific database provider anyway, there's usually a system table you can query to get your list of tables, column names, etc., so you could almost just use one data source and only need to deal with its structure to provide all your endpoints (not really recommending this approach).
This is probably the correct solution for this use case, but obviously and objectively much harder than object.get(id=1).
I was mainly doing this in Go, posted more in a side post.
> As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.
I've heard this a lot in my career. I can agree that most object-oriented languages have had to do a lot of work to make CRUD and database operations easy to do, because they are common needs. ORM libraries are common because mapping between objects and relations (SQL) is a common need.
It doesn't necessarily mean that object-oriented programming is the best for CRUD because ORMs exist. You can find just as many complaints that ORMs obfuscate how database operations really work/think. The reason you need to map from the relational world to the object world is because they are different worlds. SQL is not an object-oriented language and doesn't follow object-oriented ideals. (At least, not out of the box as a standardized language; many practical database systems have object-oriented underpinnings and/or present object-oriented scripting language extensions to SQL.)
> it's super hard without an ORM or at least some kind of object layer
This seems like you might have gotten caught in something of a tautological loop: because you were working in a language with "object layers", it seemed easiest to work in one, and thus to work with an ORM.
It might also be confusing the concepts of "data structure" and "object". Which most object-oriented languages generally do, and have good reason to. A good OOP language wants every data structure to be an object.
The functional programming world still makes heavy use of data structures. It's hard to program in any language without data structures. FP CRUD can be as simple as four functions `create`, `read`, `update`, and `delete`, but still needs some mapping to data structures/data types. That may still sound object-oriented if you are used to thinking of all data structures as "objects". But beyond that, it should still sound relatively "easy" from an FP perspective: CRUD is just functions that take data structures and make database operations or make database operations and return data structures.
A difference between FP and OOP's view of data structures is where "behaviors" live. An object is a data structure with "attached" behaviors which often modify a data structure in place. FP generally relies on functions that take one data structure and return the next data structure. If you aren't using much in the way of class inheritance, if your "objects" out of your ORM have few methods of their own, you may be closer to FP than you think. (The boundary is slippery.)
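One way to see the difference: a Python sketch in which the "behavior" is a plain function that takes one value and returns the next, rather than a method mutating in place (names invented).

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

# OOP style would mutate self.balance in place; the FP style below
# returns a new value and leaves the old one untouched.
def deposit(account, amount):
    return replace(account, balance=account.balance + amount)

before = Account("ada", 100)
after = deposit(before, 50)
print(before.balance, after.balance)  # 100 150
```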
> This seems like you might have got caught in something of a tautological loop situation that because you were working in a language with "object layers" it seemed easiest to work in one, and thus work with an ORM.
I mean, I think this is likely the case. So, I tried this, for example in Go, which is not really a proper functional programming language as I understand it, but is definitely not object-oriented.
So for my use case, I wanted to be able to take a database schema and programmatically create a set of CRUD endpoints in a TUI. Based on my pretty limited knowledge of Go, I found this to be pretty challenging. At first, I built it with Soda / Pop, the ORM from Buffalo framework. It worked fairly well.
Then I got frustrated with using Soda outside Buffalo, and yoinked the ORM to try and remove a layer. Using vanilla Go, it seems like the accepted pattern is that you create separate functions for C R U and D, as you referred to. However, it seems like this is pretty challenging to do programmatically, particularly without sophisticated metaprogramming, and even if you had a language which had complex macros or something, that is objectively significantly harder than object.get() and object.save().
Finally, I put GORM back in, and it worked fine. And GORM is a nice library, even though I think having an ORM is not the "Go" way of doing things in the first place. But also, GORM is basically using function magic to feel like OOP. And maybe the problem with this idea is that it's not "proper Go" to make a thing like this; it would be better to just code it. There's an admin panel in the Pagoda Go stack which relies on the ent ORM to function as well. I can only assume the developer motivations, but I assume they are along the same lines as my experience.
I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.
In the real world, in business logic, objects do things. They aren't just data structures.
To summarize, CRUD seems pretty easy in any language, programmatically doing CRUD seems super hard in FP. Classes make that a lot easier. Maybe we shouldn't do that ever, and that's fine, but I'm a Django guy, I love my admin panels. Just my experience.
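For what it's worth, in a language with runtime introspection, deriving CRUD from a schema programmatically is less daunting; here's a Python sketch (not an ORM, and every name is hypothetical) using stdlib dataclasses and sqlite3:

```python
import sqlite3
from dataclasses import dataclass, fields, astuple

# A sketch of deriving CRUD from a schema via introspection;
# the Book type and helper names are made up for illustration.
@dataclass
class Book:
    id: int
    title: str

def make_crud(conn, cls):
    table = cls.__name__.lower()
    cols = [f.name for f in fields(cls)]
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} ({', '.join(cols)})")

    def save(obj):
        marks = ", ".join("?" for _ in cols)
        conn.execute(f"INSERT INTO {table} VALUES ({marks})", astuple(obj))

    def get(id):
        row = conn.execute(f"SELECT * FROM {table} WHERE id = ?", (id,)).fetchone()
        return cls(*row) if row else None

    return save, get

conn = sqlite3.connect(":memory:")
save_book, get_book = make_crud(conn, Book)
save_book(Book(1, "SICP"))
print(get_book(1))  # Book(id=1, title='SICP')
```

This is roughly the "halfway to writing an ORM" point you describe; the difference is that it stays a pair of plain functions over plain data.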
> I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.
Methods at all make a language OOP. Class inheritance is almost a side quest in OOP. (There are OOP languages with no class inheritance.)
Go seems quite object-oriented to me. I would definitely assume it is easier to use an ORM in Go than to not use an ORM.
I don't use a lot of Go, so I can't speak to anything about what the "proper Go" way of doing things is.
I could try to describe some of the non-ORM, functional programming ways of working with databases as I've seen in languages like F#, Haskell, or Lisp, but I'm not sure how helpful that would be to show that CRUD is not "super hard" in FP especially because you won't be familiar with those languages.
The thing I'm mostly picking up from your post here is that you like OOP and are comfortable with it, and that's great. Use what you like and use what you are comfortable with. OOP is great in that a lot of people also like it and feel comfortable with it.
I get how to do CRUD in FP - I don't get how to generate endpoints automatically in CRUD. Is anybody doing that?
"OOP actually makes a ton of sense for CRUD and database operations."
Not at all. OOP is great at simulations, videogames, emergent behaviour in general. If you do CRUD with OOP you will complain about overengineering.
I think that's fair, and I generally prefer a lighter stack for CRUD, but I still love Django and Rails. Maybe just having "objects" is not enough to qualify as OOP but for many use cases, the convenience offered by "Batteries Included" is worth the trade off in "overengineering".
If I have to build an app, I'm going for rails. If I'm building a back end, I'm reaching for Go. If I need to integrate with Python libraries, Django is great.
But ask me again when I get to the other side of some OCaml projects
Even in video games, I avoid inheritance, I always much prefer composition. Build a complex object from many small objects, then vary behavior with parameters rather than deriving a child class and overriding methods.
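A sketch of that composition style in Python (the components are invented): build entities from small parts and vary behaviour with parameters, not subclasses.

```python
from dataclasses import dataclass

# Composition: a game entity is built from small parts, and behaviour is
# varied with parameters instead of overriding methods in a subclass.
@dataclass
class Health:
    points: int
    def damage(self, amount):
        self.points = max(0, self.points - amount)

@dataclass
class Movement:
    speed: float
    def step(self, x):
        return x + self.speed

@dataclass
class Entity:
    health: Health
    movement: Movement

goblin = Entity(Health(30), Movement(1.5))
boss = Entity(Health(300), Movement(0.5))   # same parts, different parameters

goblin.health.damage(10)
print(goblin.health.points, boss.movement.step(0.0))  # 20 0.5
```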
Right that's still OOP.
Maybe I am not exactly what you mentioned, but I do feel OOP set us back about a decade or two, and I do think the general concept of microservices is a good idea. But maybe to your point, these beliefs are completely orthogonal to one another, and why they are mentioned as being related baffled me. To be honest the whole post baffled me, and I am disappointed I cannot downvote the submission.

Anyway, more to your topic: OOP in the early 2000s was put on a massive pedestal, and trying to point out its flaws would often get you chastised or shunned, and labeled as someone who just didn't get OOP. But the object hierarchies often became their own source of inflexibility, and shoehorning something new into them could be very difficult, often involving an hour or three of debate/meetings on how to best make the change.
Microservices are more about making very concrete borders between components, with an actual network in between them... and really a contract that has to be negotiated across teams. I feel the best thing this did was force a real conversation around the API boundary and contract. Monoliths turn into a big ball of mud once a change slips through that passes in an entire object when just a field is needed, and after a few of these everything is fairly tightly coupled. Modern practices with PRs could prevent a lot of this, but there is still a lot of rubber stamping going on, and they don't catch everything.

Objects themselves are fine ideas, and I think OOP is great when you focus on composition over inheritance, with bonus points if the objects map cleanly into a relational database schema; once you start getting inheritance hierarchies, they often do not.

If I had to guess, your experience with OOP is mostly using ORMs where you define the data and it spits out a table for you and some accessor methods, and that works... until it doesn't. At a certain level of complexity the ORM falls apart, and what I have seen at nearly every place I have worked is that at some point some innocuous change gets included, and all of a sudden a query does not use an index properly. It works fine in dev, but then you push it to prod and the DB lights on fire, and it's really difficult to understand what happened.

The style of programming you are talking about would be derided by some old heads as "C with objects" and not "really" OOP. But I do think you are onto something by taking the best parts and avoiding the bad.
"Micro" services aren't great when they are taken to their utmost tiny size, but the idea of a problem domain being well constrained into a deployable unit usually leads to better long term outcomes than a monolith, though its also very true that for under $10k you can get 32 cores of xeons and about 256 gigs of ram, and unless you are building something with intense compute requirements, that is going to get you a VERY long way in terms of concurrent users.
From one rambler to another: I’m not sure who or what this is aimed at, as it goes all over the place.
Cloud: Separating resources from what gets deployed is a classic separation of concerns.
I don’t miss the days where I had to negotiate with the IT team on hardware, what gets run, and so on.
Personally, I believe the next evolution is a rebalkanization into private clouds. Mid-to-large companies have zero reason to tie their entire computing to, and expose information to, third-party hosts.
OpenAPI: The industry went through a number of false starts on formal remoting calls (corba, dcom, soap). Those days sucked.
The RESTful APIs caught on, and of course at some point, the need for a formal contract was recognized.
But note how decoupled it is from the underlying stack: It forces the engineers to think about the contract as a separate concern.
The problem here is how fragile the web protocol and its security actually are, but the past alternatives offered no solution here either.
> At around the same time, some bright spark realised that programmers - a population of people not known for good hygiene [ ... ]
OK, I'm out.
No, "we" are not replacing OOP with something worse. "We" are replacing layers of stupid shit that got layered on top of, and associated with OOP, with different renderings of the same stupid shit.
I have been programming since 1967. Early in my college days, when I was programming in FORTRAN and ALGOL-W, I came across structured programming. The core idea was that a language should provide direct support for frequently used patterns. Implementing what we now call while loops using IFs and GOTOs? How about adding a while loop to the language itself? And while we're at it, GOTO is never a good idea, don't use it even if your language provides it.
Then there were Abstract Datatypes, which provided my first encounter with the idea that the interface to an ADT was what you should program with, and that the implementation behind that interface was a separate (and maybe even inaccessible) thing. The canonical example of the day was a stack. You have PUSH and POP at the interface, and the implementation could be a linked list, or an array, or a circular array, or something else.
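The stack ADT from that era translates directly; a Python sketch where the interface is push/pop and the backing store is a hidden detail:

```python
class Stack:
    """ADT: push/pop/is_empty are the interface; the backing store
    (here a Python list, but it could be a linked list) is private."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop(), s.pop(), s.is_empty())  # 2 1 True
```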
And then the next step in that evolution, a few years later, was OOP. The idea was not that big a step from ADTs and structured programming. Here are some common patterns (modularization, encapsulation, inheritance), and some programming language ideas to provide them directly. (As originally conceived, OOP also had a way of objects interacting, through messages. That is certainly not present in all OO languages.)
And that's all folks.
All the glop that was added later -- Factories, FactoryFactories, GoF patterns, services, microservices -- that's not OOP as originally proposed. A bunch of often questionable ideas were expressed using OO, but they were not part of OO.
The OOP hatred has always been bizarre to me, and I think mostly motivated by these false associations. The essential OOP ideas are uncontroversial. They are just programming language constructs designed to support programming practices that are pretty widely recognized as good ones, regardless of your language choices. Pick your language, use the OO parts or not, it isn't that big a deal. And if your language doesn't have OO bits, then good programming often involves reimplementing them in a systematic way.
These pro- and anti-OOP discussions, which can get pretty voluminous and heated, seem a lot like religious wars. Look, we can all agree that the Golden Rule is a pretty good idea, regardless of the layers of terrible ideas that get piled onto different religions incorporating that rule.
> And while we're at it, GOTO is never a good idea, don't use it even if your language provides it.
Good luck with that if you're a C programmer.
Well sure, but don't use it to implement if/while/for.
> Other bright sparks jumped in on the action: what if this separation did not rely on the personal hygiene of the programmers - something that should always be called into question for public health reasons - and was instead enforced by the language? Components might hide their implementation by default and communicate only though a set of public functions, and the language might reject programs that tried to skip around these barricades. How quaint.
Sounds like C.
When was the last time you did OO against a .h file without even needing access to the .c file?
> And so, the process/network boundary naturally became that highest and thickest wall
I've had it both ways. Probably everyone here has. It's difficult to make changes with microservices. You gotta open new routes, and wait for people to start using those routes before you close the old ones. But it's impossible to make changes to a monolith: other teams aren't using your routes, they're using your services and database tables.
It's older than C. But yes, C does module encapsulation.
C is capable enough to program in a way that is basically OOP and without using non-idiomatic code. The C++ object system wasn't created in a vacuum.
> Sounds like C.
Data hiding is just one of the concepts of OOP. Polymorphism is another one.
How that's implemented is another question. You can do OOP in plain C, several libraries kinda did that, like GTK. Other languages tried to support these concepts with less boilerplate, giving rise to classes and such. But OOP is not about language features, it's fundamentally a way of designing software.
Polymorphism is trivial in C and even less restrictive than in other OOP-first languages. The reason why GTK (actually GObject) is so overengineered is not due to classes, but because it allows creating classes and types at runtime.
Many great design patterns have come from OOP and have found their home in functional languages or libraries.
Dependency injection has to be the most successful one, but there are at least another dozen good ideas that came from the OO world and have been found to be solid.
What has rarely proven to be a good idea instead is inheritance at behavior level. It's fine for interfaces, but that's it. Same for stateful classes, beyond simple data containers like refs.
You can even have classes in the functional programming world; it's irrelevant, it's an implementation detail. What matters is that your computations are pure, and side effects are implemented in an encoded form that can be combined in a pure way (an IO or Effect data type works, but so can a simple lazy function encoding).
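A minimal sketch of that "lazy function encoding" in Python (pure/bind here are illustrative, not any particular library): composing effects builds a description, and nothing runs until the edge interprets it.

```python
# Effects as data: an "effect" is just a zero-argument function (a thunk).
# Composing thunks builds a description of the program; nothing runs
# until the interpreter at the edge calls it.
def pure(value):
    return lambda: value

def bind(effect, f):
    return lambda: f(effect())()

log = []
def put_line(text):
    def run():
        log.append(text)   # the side effect, deferred until run
        return None
    return run

program = bind(pure("hello"), lambda s: put_line(s.upper()))
assert log == []   # composing did not execute anything
program()          # interpret at the edge
print(log)  # ['HELLO']
```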
I think that the author is confusing OOP with Java. And I agree, Java is great and had most of the things we do now with overbloated infrastructures at least a couple of decades ago. We like to reinvent the wheel, that's what we do, but each time we give it another name. Then we complain about the old good time when everything could run on a Pentium and 64MB RAM.
I think it would be very nice if the author provided citations for their assertions about how OOP was adopted. I was alive and gigging as a coder in the 80s when people had interminable arguments about Structured Programming, unstructured programming, OOP and every now and again LISP or FORTH. None of what the author mentions rings true. Standard interfaces came DECADES before anyone started talking about OOP. Rationalizing standard interfaces was mentioned in mythical man month. Structured Development was all the rage in the early 80s when I started selling 6502 op codes. Half the people I talked to in the 80s insisted C++ WAS OOP while the other half found that quote from Alan Kay who said C++ wasn't what he was thinking of when he invented the term "Object Oriented Programming."
I think the author is correctly picking up on how messy changes in best common practice can be. Also, different communities / verticals convert to the true religion on different schedules. The custom enterprise app guys are WAAAAY different than games programmers. I'm not sure you'll ever get those communities to speak the same language.
OOP is dead. Long live OOP.
> Every call across components acrues failure modes, requires a slow march through (de)serialisation libraries, a long trek through the kernel’s scheduler. A TLB cache invalidation here, a socket poll there. Perhaps a sneaky HTTP request to localhost for desert.
And this has tangible costs, too. I saved more than $10k a month in hosting costs for a small startup by combining a few microservices (hosted on separate VMs) into a single service. The savings in development time from eliminating all of the serialization layers are also appreciable.
I have found that Protocol-Oriented Programming is basically "OOP without classes."
Protocols have their issues, though[0]. Not exactly the same results.
[0] https://littlegreenviper.com/the-curious-case-of-the-protoco...
Microservices are a bad pattern. Function as a service is better, because that architecture tends to assume vertical integration within the function, and you can leverage a shared business-logic library across an organization. Having service-to-service communication is the thing that makes microservices suck; if you can give people autonomy while having a shared systems language and keeping computation on one system, it works OK.
Microservices are neither bad nor good; they are simply misunderstood.
They are a solution to communication and organizational challenges, not technical ones.
Like every other solution, they have cons, some of which you have outlined.
Nah, I’ll agree with parent: they’re objectively bad. They turn what could be IPC into network calls, and because everything uses frameworks and ORMs, it’s all slow as hell.
“We can move faster” (but at the cost of our product being slower).
Microservices are a great solution to the problems you run into at high scale.
But they come at great cost. If you don't actually HAVE the problems they solve, do everything in your power to avoid them. If you can just throw money at larger servers, you should not use microservices.
Biggest problem here is way too many people think they have those problems or they will have them in a second.
OOP today is really CBP (class-based programming): imperative code hidden in methods that operate on mutable state. It's gross. This is most common in Java, PHP, and the like.
There's a time and a place for all of these things. Like everything in life, many people pick the wrong time and wrong place.
Someone is bound to quote Dijkstra here:
> To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
Oh gosh I clicked "Enter the tarpit". I hope I didn't blacklist my IP or something.
Silly link, though. I highly suggest going back and clicking on it.
> Perhaps a sneaky HTTP request to localhost for desert.
Typo: dessert
I had a job interview where the guy asked me something along the lines of: do I have experience with OOP or Microservices, or both? So I answered how I like both and feel that MS are Objects at the network layer, and the dude cut me off and told me to stick to the question. Absolutely gut-wrenching. I was being evaluated by who knows who, in a slot of 30 minutes; maybe his entire job was repeating the same set of questions 16 times per day while unironically writing down yes/no next to "Microservices?" and "OOP?"
>> So I answered how I like both and feel that MS are Objects at the network layer, the dude cut me off and told me to stick to the question.
OMG! As an interviewer I would have asked you to elaborate on that response. Perfect opportunity to see if and how the candidate thinks and how deep some pocket of understanding goes.
Don't dwell on the bad feeling from this; the guy wasn't at the level to understand the abstraction both give. While I have felt this sentiment towards things being reinvented, we have to accept that some innovation is incremental, and the new thing looks like the old but allows cool new stuff such as networking.
This is so much nonsense. Contracts and interfaces have little to do with OOP. If you don't like inheritance, then complain about inheritance. That's fine. Many people, including myself, don't like inheritance. On the other hand, if it's interfaces and contracts that have you bothered, parse-don't-validate to your heart's content, if the domain allows it. Just don't drag unrelated concepts into this particular discussion.
OOP is about messages, according to the inventor, and messages are about the interface. People confuse this with objects, which languages without messages don't really have (you can approximate them, but they are not first class in the language).
Even with plain objects, though, the point isn't inheritance! The point is to put an interface on the data. Inheritance is sometimes useful, but there is a reason we keep screaming "prefer composition to inheritance" (even though few listen).
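A tiny Python sketch of that "interface on the data" point, using composition rather than inheritance (the `Car`/`Engine` names are made up for illustration):

```python
class Engine:
    def start(self) -> str:
        return "engine started"

class Car:
    # Composition: a Car *has an* Engine rather than *being* one.
    def __init__(self, engine: Engine) -> None:
        self._engine = engine  # internal state stays hidden behind the interface

    def start(self) -> str:
        # The public method is the interface; callers never touch _engine directly.
        return self._engine.start()

print(Car(Engine()).start())  # engine started
```

Here `Car` exposes the same `start()` interface it would get from subclassing `Engine`, but the engine stays swappable internal state instead of a baked-in parent class.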
TL;DR: OOP is being replaced by microservices (and even more extreme ideas), which is much worse. Yeah, I have to agree with this author.
I will say that service-oriented architecture does have some advantages, and thus sometimes it's the right choice. Parallelism is pretty free and natural, you can run services on different machines, and that can also give you scalability if you need it. However, in my experience that architecture tends to be used in myriad situations where it clearly isn't needed and is a net negative. I have seen it happen.
The idea of "we are replacing <thing> with microservices, which is much worse" is true for almost any <thing>, OOP or not.
There is a kernel of insight here that is pretty valuable. Interfaces are a form of security and are a result of a loss of trust. It's interesting to think about how far trust can be extended before it breaks down.
Nonsense. Network service layer separation solves a different problem than OOP. It doesn't replace it. Services and containers bring features and capabilities that OOP doesn't provide. It's orthogonal.
But I think that's really the point: it gets applied to problems it was clearly not meant to solve. The article doesn't really get into that, but that's how I'm reading it because I've seen it happen. To some programmers, every component of a program is its own service, even when there's no need to have multiple processes. The only tool they have is that hammer, so everything has to be a nail.