What are the practical advantages of using object orientation in the day-to-day life of a development team?

Question:

I work in a company that doesn't use object orientation, although the language we use allows (and even encourages) it.

I've studied object orientation extensively, and I use OO in my personal projects, but I don't know exactly which arguments could motivate a company (one that is in a comfort zone, from a technical point of view) to consider developing new projects in an object-oriented way.

Anyway, if I go talk to my manager, what would be the most convincing arguments in favor of object orientation?

Answer:

Standardization

Just as a child learning to read struggles to put letters together into words, a more advanced reader already reads whole words but struggles to combine them into sentences, and a proficient reader "sees" whole sentences at once, the same occurs with programming. If you've seen a for loop that iterates over a list one element at a time a million times, you don't get lost in the syntax of that particular statement; your eye goes straight to the details that matter. Likewise, if you've seen N examples online of a task being done a certain way in language X, reading similar code becomes second nature.

If a language works predominantly with OO practices, adopting those same practices in your code makes it more readable than not adopting them. Not because OO is inherently more readable than procedural, functional, etc., but because professionals experienced in that language are already conditioned to think that way. If a project is built from scratch by professionals who are no strangers to OO syntax and semantics, developing it in the most "usual" way for the platform contributes to its maintainability, especially if a group of professionals different from the one who wrote it may eventually be responsible for supporting it.

Workflow flexibility

A strictly object-oriented program has neither a "beginning" nor an "end": instead, it has a set of objects capable of exchanging messages with one another. The order in which these messages are exchanged matters, of course, since it is possible to store state (even global state) using objects. But changing the order in which certain subroutines are invoked is (at least in theory) more feasible in an OO architecture than in a procedural one, as it doesn't necessarily involve restructuring the entire code.

To put it better (since my last statement is debatable): in a system where operation A always occurs before B, certain things can be assumed about the system's state at the time B executes. If you decide to move B before A, you need to assess everything that was supposed to be true when B ran, and determine what still holds and what no longer does, and therefore needs to be adapted.

Object orientation does not "magically" solve this problem, but it helps modularize the decision process: by establishing (formally or informally) that object X has certain invariants, and programming only under the assumption that those invariants hold, you avoid falling into the habit of writing code that "only works if A, B and C have already happened". Instead, OO forces you to assess whether the objects involved in a certain computation are in a state that allows that computation, regardless of the sequence of events that brought them to that state.
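A minimal sketch of this idea (the class and method names are my own, purely illustrative): each method checks the state it actually requires instead of assuming which operations ran before it, so reordering the calling code fails loudly rather than silently.

```python
class Order:
    """Illustrative example: the object guards its own invariants,
    so callers need not know which operations ran before."""

    def __init__(self, items):
        self.items = list(items)
        self.paid = False

    def pay(self):
        # Invariant: only non-empty orders can be paid for.
        if not self.items:
            raise ValueError("cannot pay for an empty order")
        self.paid = True

    def ship(self):
        # Instead of assuming "pay() was already called somewhere",
        # the method checks the state it actually requires.
        if not self.paid:
            raise RuntimeError("order must be paid before shipping")
        return f"shipped {len(self.items)} item(s)"

order = Order(["book"])
order.pay()
print(order.ship())  # shipped 1 item(s)
```

If someone later moves the shipping step earlier in the workflow, the error surfaces at the exact point where the invariant is violated, instead of corrupting state downstream.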

Putting it simply: it is easier to write a script (in the sense of a "screenplay") that imposes an order on a system that allows N actions than to split a system expressed as a script into N independent actions…

Abstraction levels

Ideally, we should always program at the right level of abstraction for the problem at hand. That's why no one "programs a game", for example: you program a game engine, and you use that engine's scripting language to create the particular game. This holds even if you have no prior intention of reusing that code in any other game.

Some recommend using DSLs ("Domain-Specific Languages") to achieve this goal, but in general this is quite impractical: it's not easy to learn a new language, or at least not easy to quickly "get the hang of" how to do action X in new language Y. A more common language turns out to be preferable, but it doesn't make sense to restrict yourself to that language's basic primitives instead of evolving toward higher levels of abstraction.

This is where user-defined types come in handy: the concept of "object" may not be quite the same as the concept of "type", but they have a lot in common (and a "class" can be used to represent a type). By uniting data structure and functionality in the same unit, and by combining the basic operations of that unit to perform complex computations (as opposed to doing so using the basic operations of its components), you are reasoning at a higher level, one more appropriate for the problem to be solved.

Again, custom types are not exclusive to the OO paradigm, but they are one of the benefits of using it. If you encapsulate the numerical components of a matrix or vector, it is not "so that no careless programmer messes with them by mistake", but rather so that you can express your computation as a series of vector/matrix operations instead of a series of operations on scalars. If you judge that your system would benefit from greater type richness (a big "if": I personally believe there are many cases where a basic structure is more adequate, and that using classes is overkill), this may be an argument in favor of adopting this particular feature of OO (with or without inheritance and polymorphism).
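To make the vector example concrete, here is a hypothetical sketch (the class `Vector2` and its methods are my own, not from any particular library): the calling code reads as vector algebra rather than coordinate-by-coordinate arithmetic.

```python
class Vector2:
    """Illustrative 2-D vector: computations are expressed as
    vector operations instead of operations on scalars."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Component-wise addition, hidden behind the + operator.
        return Vector2(self.x + other.x, self.y + other.y)

    def scale(self, k):
        return Vector2(k * self.x, k * self.y)

    def dot(self, other):
        return self.x * other.x + self.y * other.y

# The caller reasons in terms of vectors, not individual coordinates:
v = Vector2(1, 2) + Vector2(3, 4).scale(2)
print(v.x, v.y)              # 7 10
print(v.dot(Vector2(1, 0)))  # 7
```

The point is not to protect `x` and `y`, but that the last three lines describe *what* is computed at the level of the problem domain, leaving *how* to the type itself.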

Extensibility

It is sometimes necessary for a system to be extensible, especially when it needs to support different data formats and/or interface with third-party systems. In some cases this can be done in a purely functional way (i.e., a dictionary mapping a name to the function that handles that name), but in others it may be necessary to have a set of distinct but related operations (perhaps even storing state) to perform a given task.

In that case, an abstract class/interface specifying what needs to be done, along with N concrete classes doing what was specified, seems to me a very adequate way of handling the scenario. Other paradigms (e.g., logic programming) may offer adequate means as well, but if your options are "objects" versus "chains of ifs on strings", I'd say you have a good argument in hand in defense of using inheritance and polymorphism.
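A minimal sketch of that abstract-class arrangement (the `Exporter` hierarchy and the data-format examples are hypothetical, chosen only to illustrate the shape): the code that *uses* an extension point never changes when a new format is added.

```python
import json
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Illustrative extension point: each format implements the same contract."""

    @abstractmethod
    def export(self, rows):
        ...

class CsvExporter(Exporter):
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

class JsonExporter(Exporter):
    def export(self, rows):
        return json.dumps(rows)

def save(exporter: Exporter, rows):
    # This function never changes when a new format is added:
    # polymorphic dispatch replaces a chain of "if format == ..." tests.
    return exporter.export(rows)

print(save(CsvExporter(), [[1, 2], [3, 4]]))
print(save(JsonExporter(), [[1, 2], [3, 4]]))
```

Supporting a new format means adding one class; with the "if strings" alternative, it means finding and editing every dispatch site.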
