How many of us in the software development realm have heard the term 'Object Oriented' used, abused, and 'hackneyed to hell' to the point of bile formation? It is as if somehow everyone assumes that if an application is 'Object Oriented', or that the language it is written in is 'Object Oriented', that the application will naturally be FASTER and BETTER than an application that is NOT 'Object Oriented', and that the time for development will be SHORTER, and maintenance EASIER, than "other kinds" of code.
Well, to some extent this might be true. A PROPERLY DESIGNED Object Oriented application, like ANY properly designed application, will most likely be either faster OR better, and may take less time to develop, and may even be easier to maintain, but not necessarily will all of the above be true. In fact, it seems to me that (more often than not) NONE of the above will be true! I have seen SO MANY TIMES the ABUSE of so-called 'Object Oriented design', from backwards logic to code-bloat, as well as the tendency to spend DAYS 'creating an object to do that' when an hour of coding 'the old fashioned way' would have done the job, and more efficiently at the same time.
So, if you're not entirely sure what 'Object Oriented' means, this section is for you! To
begin with, let's define what an OBJECT is.
OBJECT: (n) a collection of data and the methods that work with it.
In other words, an OBJECT is an entity, usually 'abstracted' (more on this later), that has a set
of pre-defined methods that query and alter the data associated with the 'Object'. In its purest sense,
ONLY the pre-defined methods have access to the data, with the actual data being intentionally
hidden from view.
One typical 'Object' that nearly all programmers would be familiar with is referenced by a FILE HANDLE. In other words, an open disk file qualifies as being an object. To the application, the 'File Handle' is just a number that identifies the 'Object'. Each open file would have a unique 'File Handle' that references a particular 'instance' of an 'opened file'. And whenever this number is passed to one of the many functions that accepts a 'File Handle', you can query or modify the data associated with the 'opened file', which would be the actual data on the hard drive. In this sense, the 'opened file', referenced by the 'File Handle', along with the actual data on the hard drive plus all of the file manipulation utility functions (read, write, seek, etc.), would qualify as an 'Object'.
So if you think about it, everyone who has used a disk file has already been using an 'Object'. And in the purest sense, Object-Oriented Programming (OOP) would require representing all of the program's internal data in a similar way, so that higher-level functions cannot query or modify the actual data without using one of the owning object's member functions.
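To make the analogy concrete, here is a minimal sketch in C++ using the standard C I/O calls (the file name and helper function are hypothetical). The FILE pointer plays the role of the handle, and the library routines are the only 'member functions' allowed to touch the hidden buffered data:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Sketch: a FILE * behaves like an object handle. The caller never
// touches the file's data directly; everything goes through the
// library's 'member functions'.
bool round_trip(const char *path, const char *text)
{
    std::FILE *fp = std::fopen(path, "w+");  // 'constructor': create an instance
    if (fp == NULL)
        return false;

    std::fputs(text, fp);                    // 'method': modify the hidden data
    std::rewind(fp);                         // 'method': reposition

    char buf[64] = { 0 };
    std::fgets(buf, sizeof(buf), fp);        // 'method': query the hidden data
    std::fclose(fp);                         // 'destructor': release the instance

    return std::strcmp(buf, text) == 0;
}
```

Every 'FILE *' returned by 'fopen' identifies one 'instance' of an opened file, just as described above.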
There are many languages that claim to be 'Object Oriented', some more than others, many of which are actually INTERPRETED rather than compiled. The most popular Object Oriented language is 'Java', and the second most popular is C++. Java is arguably more 'Object Oriented' than C++ because it lacks the low-level C compatibility that C++ retains. But C++ can easily be used to create 'Object Oriented' code, so for the sake of example and familiarity I will be using C++ for my examples. C++ also has the general advantage of compiling into native code, whereas Java typically compiles into byte code that runs in a Java Virtual Machine. 'JavaScript', familiar to web page designers, is purely interpreted, but has similar syntax to Java. They are not the same language.
Object Oriented languages allow you to easily create a single entity (such as a 'class') that contains both data-oriented members and member functions. A well-designed Object Oriented language will also allow for 'Polymorphism', 'Inheritance', and 'Abstraction'. Some languages extend these concepts into their own private realm, but for the moment we'll stick to the more 'classic' definitions.
The idea of 'Inheritance' is simple: you have an object, and you want it to behave the same as another
object, 'except for these differences'. Maybe you are adding new functions that did not exist in the
original, or maybe you are changing the behavior of the existing functions. To simplify implementation
of your object, AND (possibly) to allow for 'Polymorphism' later on, you use the original object as
a 'base' for your new object. In C++ the code might look like this:
```cpp
class objA {
protected:
    int data;
public:
    void Manipulate(void *);
    int Query(void);
};

class objB : public objA {
public:
    int Do_B(void *);
};
```

In this (somewhat trivial) example, 'objB' inherits all of the data members and member functions defined for 'objA', and adds its own member function 'Do_B'. The C++ access specifiers 'public', 'private', and 'protected' determine which layers have access to particular data members and member functions. In this case, all derived classes (including 'objB') can access 'data', but users of either 'objA' or 'objB' cannot access the 'data' member and must use one of the member functions. So, if you call 'Query' on either an 'objA' or an 'objB' object, it will call the 'objA' version of 'Query', passing a pointer to the 'objA' as the object pointer in that member function. This last point is a segue into the next topic.
Because 'objB' (in the above example) inherits all of the functionality and data of 'objA', you could
successfully pass a pointer to an 'objB' to any function that uses 'objA', and it would work as expected.
This is the essence of 'Polymorphism', in that any class can be treated as one of its base classes in
a method that accepts the base class. But our trivial example is missing a few key features to make it
truly 'Polymorphic', since the destructor (implicit or explicit) cannot be invoked for an 'objB' from a
place in the code where it is assumed to be an 'objA'. For this, C++ has 'virtual' members that use a
table specific to the actual class to define which version of a function is called.
```cpp
class objC {
protected:
    int data;
public:
    objC();
    virtual ~objC();
    virtual void Manipulate(void *);
    int Query(void);
};

class objD : public objC {
private:
    void *data2;
public:
    objD();
    virtual ~objD();
    virtual void Manipulate(void *);
    int Dee_D(void *);
    static objD * GetObjD(objC *);
};
```

For our new example, 'objD' inherits everything from 'objC' and adds a new method, 'Dee_D', plus a new 'static' method, 'GetObjD', which is not bound to a particular object instance. It also has a virtual destructor, '~objD', that is called in place of the 'objC' destructor '~objC' whenever you delete an object that is an 'objD', regardless of whether the calling function sees it as an 'objC' or an 'objD'. Similarly, when you call 'Manipulate' on an 'objD', you get the 'objD' version of 'Manipulate' even when the caller sees it as an 'objC'. This is true 'Polymorphism', since you are able to do everything that is possible for 'objC' (including object destruction) on an 'objD' while treating it as an 'objC'.
For well designed 'Object Oriented' programming, Polymorphism is extremely important. It allows you to design a more generalized class, maybe even a class that has no data members at all, and then use a pointer to the base class in lieu of the actual (derived) class. This allows programmers to write code that ONLY needs to operate on the base class, and it will still work with any class that was derived from this same base class. And that is a segue into the next topic.
The 'objC' and 'objD' example above can easily show how 'Abstraction' works. The 'objC' class has members that are declared 'virtual', and as such they can be called for any derived object such that the derived object's version will always be invoked. So if the 'objD' version of 'Manipulate' does something with its own data member 'data2', which is invisible when the object is treated as an 'objC', it will still be properly dealt with when you call the 'Manipulate' member function on either an 'objC' or an 'objD'. In essence, the true nature of the 'objC' is ABSTRACTED so that you don't know whether it's an 'objD' or any other derived class of 'objC'. All of the additional data needed for derived classes is hidden from view, and you only have access to the methods and data that are exposed by 'objC'.
Of course, in those cases where you DO need to know if it's an 'objD', it's probably a good idea to surface some kind of member function that returns a typed pointer to the desired derived class. That way a function could test for 'objD' support and obtain an 'objD' pointer that has the additional data members and member functions. In this case, the 'GetObjD' function serves that purpose, returning an 'objD' pointer from an 'objC' object if it is, in fact, an 'objD' object. Note that this is a trivial example illustrating the possibility, and that the implementation of this function would likely be non-trivial.
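As one possible sketch (assuming RTTI is enabled, and with the class bodies pared down to the essentials): C++'s 'dynamic_cast' performs exactly this kind of checked conversion, returning NULL when the object is not actually an 'objD'.

```cpp
#include <cassert>
#include <cstddef>

class objC {
protected:
    int data;
public:
    objC() : data(0) {}
    virtual ~objC() {}   // a virtual member makes the class polymorphic,
                         // which is what dynamic_cast requires
};

class objD : public objC {
private:
    void *data2;
public:
    objD() : data2(NULL) {}
    virtual ~objD() {}

    // Returns the 'objD' view of an 'objC', or NULL if it isn't one.
    static objD * GetObjD(objC *p) { return dynamic_cast<objD *>(p); }
};
```

A caller holding only an 'objC *' can test for 'objD' support with a single NULL check, without ever seeing 'data2'.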
I suppose it is only human nature to want to make use of your newly-learned skills, or new ideas that you have wholeheartedly committed yourself to. It's exciting! It's new! It's possibly even revolutionary! And in the somewhat emotional appeal of going down this new, adventurous path, you end up committing yourself to months of "getting nowhere fast" product development, until you reach a point where you find that a 6 month or 1 year project extends into multiple years, with no light at the end of the tunnel, no more funding from the Board of Directors (from lack of progress), and a lot of half-finished 'Objects' that haven't even been put together to form anything even RESEMBLING a functional application. Your project dies on the vine, and you now have an even tougher time selling the ideas for the NEXT project to upper level management. And you'll also be lucky if half of your department isn't laid off, because of the perception that it's just too expensive to continue, by those who have the purse strings. It's all about RESULTS to them, and you don't have any.
"What went wrong?" you say, because everyone was working overtime, and being salaried employees, there was no additional cost, only additional work being done. Everyone was scrambling, working hard, making slow (but definite) progress, and STILL the project was protracted into extinction. All of the 'right things' were being done, you say, but the project just died.
And others may be in a similar position, having bought into this 'Object Oriented Obsession', where the project IS completed on time, but only after hiring an excessively large development staff to make it work. And the product itself is a bit 'kludgy', and maintaining the code is very very expensive, and the customers aren't happy, and everybody blames YOU, the project manager, for not dealing with the situation properly.
These hypothetical scenarios, however, came about because of an almost religious obsession with 'Object Oriented' code, to the extent where EVERYTHING (including the most trivial of functionalities) had to be 'Object Oriented'. The resulting inflexibility in design requirements caused excessive development effort being spent on activities that had no real benefit OTHER than meeting the 'object oriented' design requirement. And I think all of us may have seen, at one point or another, a project 'in trouble' that resembles this scenario all too much.
Object Oriented programming is supposed to increase reliability and maintainability, and when properly
implemented, it will do just that. But if your object oriented code takes many times as long to run, or many
times as many lines of code to implement, it's 'Object Oriented Obsession' and not 'Object Oriented Programming'.
There is no need to create a 'Hello World' application that looks like this:
```cpp
class HelloWorld;
class FileObject;
class HelloWorldException;
class FileException;

int main(int argc, char *argv[])
{
    try
    {
        HelloWorld *pHW = new HelloWorld;
        FileObject *pStdOut = new FileObject;

        pStdOut->BindToStdOut();
        pHW->Print(pStdOut);
        pStdOut->Flush();

        delete pStdOut;
        delete pHW;
    }
    catch(HelloWorldException *exception)
    {
        exception->ErrorLog();
        delete exception;
        return -1;
    }
    catch(FileException *exception)
    {
        exception->ErrorLog();
        delete exception;
        return -2;
    }

    return 0;
}

// ... etc ...
```

All of that unnecessary exception handling, all of that unnecessary object creation and initialization, all of that unnecessary CRUFT (as a friend of mine would put it), in lieu of a one-line program using 'fputs' or 'printf', is EXACTLY what I mean by 'Object Oriented Obsession'. It took way too long for me to even come up with that, let alone debug any syntax mistakes I might have made, re-research how to use 'try/catch' blocks in C++ (something I don't do a lot, for various reasons), or properly call destructors within my exception handlers, if that's necessary. All of these over-complex details just increase the frustration, tension, and development time associated with what should otherwise have been a very simple 'hello world' application.
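For contrast, the functional core of that program is a single library call. It is wrapped in a function here only so the result can be checked; otherwise it would simply be the body of 'main':

```cpp
#include <cassert>
#include <cstdio>

// The entire 'hello world', minus the ceremony: one call to fputs.
int hello_world(std::FILE *out)
{
    return std::fputs("Hello world!\n", out);  // non-negative on success
}
```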
To combat the 'logical extreme' of Object Oriented Obsession, developers can keep a few concepts in mind during the initial design phase, during code reviews, and whenever a major design revision needs to be performed.
We've all heard THIS one before. And I'll say it another way: According to Occam's Razor, the simplest explanation is usually the most correct. And it applies to coding, as well. The simplest implementation is probably the correct one. If you can do something with fewer lines of code, particularly if it's easier for the casual observer to understand and for the average 'non-indoctrinated' programmer to maintain, it is more likely the correct solution. This applies to the programming language choice also, since certain operations in some languages take a lot more coding effort than in others. All other factors being equal, you should choose the methodology and programming language that keeps the project as simple as possible.
Have you ever gone back and maintained code you worked on years ago, only to find that you can't even figure out how it works because you DO NOT REMEMBER what you were thinking at the time you wrote it? Sufficient comments and self-documenting code may not be enough if the algorithm is not intuitively obvious from all of these things combined. And if the object design is bloated or convoluted, this makes things even worse. Alternately, of course, it may be someone ELSE who wrote the offending code, and now it is YOUR task to clean it up. You're supposed to be MAKING money for your employer, but this kind of code COSTS money. Code should be easy to maintain, by you or by someone else.
One of the WORST and most 'logically backwards' features ever added to an object is the ENUMERATOR. It typically requires that you invent one or more new classes to contain the various 'things' being enumerated, allocating and freeing small chunks of memory (see 'malloc madness') as you sequence through the enumeration (typically a 'for each' loop, depending upon the language), spinning its programmatic wheels in the mud the entire time while barely making forward progress. The only time you EVER need an enumerator class is when you write the object for use by a different language, either for a compiler or an interpreted language, where the object's properties and methods must be determined at compile-time (for the compiler) or at run-time (for the interpreter), and you don't care that much about efficiency any more since it's necessary to enumerate these things. But PLEASE, leave the enumerator OUT of the class definition for native code, and move it to a wrapper library that supports your compiler or interpreted language. There are much better ways to do things than to 'enumerate everything'.
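A sketch of the alternative for native code (the container 'IntBag' and its methods are hypothetical): expose a count and indexed access, and let callers write a plain loop. No enumerator object is created, and nothing is allocated per step.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical container: direct indexed access instead of an enumerator.
class IntBag {
public:
    void Add(int v)             { items.push_back(v); }
    std::size_t Count() const   { return items.size(); }
    int At(std::size_t i) const { return items[i]; }
private:
    std::vector<int> items;
};

// A plain loop replaces the enumerator; nothing is allocated per step.
int Sum(const IntBag &bag)
{
    int total = 0;
    for (std::size_t i = 0; i < bag.Count(); i++)
        total += bag.At(i);
    return total;
}
```

Enumeration support for interpreters can still live in a wrapper library, as suggested above, without burdening the native class.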
Excessive use of anything that allocates a small block of memory to instantiate 'yet another object' can quickly result in a fragmented memory pool, excessive garbage collection activity, and longer and longer delays whenever new objects are created or old objects are destroyed. Fortunately, there are methods you can use to combat this, including proper use of 'array objects' that allocate memory in blocks large enough to avoid this 'growing' problem. For a small number of objects, allocating memory for each of them is not a problem. The problem arises when you have hundreds or thousands or even millions of objects, each allocated from the local memory pool, and you continuously allocate and de-allocate them at random. This results in code that wastes a LOT of time doing 'memory arena' housekeeping activities, and you can also fragment memory badly enough to greatly increase the virtual footprint of your application over what it should have been in the first place.
In short, you should use array template classes that allocate large numbers of objects in a single block of memory at the same time, whenever you expect to create and manipulate a large number of objects. Alternately, you can overload the 'new' and 'delete' operators (particularly in C++) to plug in your own custom memory allocator. A well-designed allocator would probably consist of a series of pre-allocated memory pools that could be increased or decreased in number 'as needed'. In fact, this is the approach used internally by operating systems like Linux, whose slab allocator pre-allocates memory blocks and stores them on linked lists, increasing the number of available blocks as needed to handle demand.
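Here is a minimal sketch of the overloaded-'new' approach (the 'Particle' class, its payload, and the pool size are all hypothetical, and pool memory is deliberately never returned to the system): objects are carved out of a pre-allocated slab and recycled through a linked free list, so most calls to 'new' and 'delete' never touch the general heap.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical class with a fixed-size pool allocator.
class Particle {
public:
    double x, y, z;                              // payload

    static void *operator new(std::size_t)
    {
        if (free_list == NULL)
            grow();                              // refill from a new slab
        void *p = free_list;                     // pop a recycled slot
        free_list = *static_cast<void **>(p);
        return p;
    }

    static void operator delete(void *p)
    {
        *static_cast<void **>(p) = free_list;    // push the slot back
        free_list = p;
    }

private:
    static void grow()
    {
        // Allocate one big slab and thread a free list through it.
        // The slab itself is intentionally never freed in this sketch.
        const std::size_t count = 256;
        const std::size_t slot = sizeof(Particle);
        char *slab = new char[slot * count];
        for (std::size_t i = 0; i < count; i++) {
            void *p = slab + i * slot;
            *static_cast<void **>(p) = free_list;
            free_list = p;
        }
    }

    static void *free_list;
};

void *Particle::free_list = NULL;
```

Deleting a 'Particle' simply returns its slot to the list, so a later 'new' reuses the same memory instead of calling the system allocator.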
But sometimes an even better way would be to declare your classes as automatic or member variables
directly. A typical example might be:
```cpp
class objE {
public:
    objD d_member;
    objC c_member;
};

void MyFunction(void)
{
    objE theObj;
    // ...
    theObj.d_member.Manipulate(&something);
    // ...
}
```

By creating the objects as automatic variables, rather than as pointers allocated with 'new' and destroyed with 'delete', you accomplish several things: First, you avoid the memory allocations, so the code will be slightly faster. Second, you ensure automatic object cleanup no matter where you return from the function, since the compiler knows that 'theObj' must be destroyed before returning. Also, when 'theObj' is destroyed, its members 'd_member' and 'c_member' will be properly destroyed as well, and so forth. This hierarchical, automatic cleanup makes your code a LOT cleaner as well as more reliable. It's difficult to "forget to clean up" and leak memory when all of your members and object variables are declared this way.
The things I've pointed out here are not only 'all too common', they are also 'all too easy', and therefore you find (mostly inexperienced) programmers and managers falling into them ALL of the time. So what can you do to avoid (accidentally or otherwise) jumping into these pits? Some of the answers are obvious to the point of embarrassment. In effect, you go back to 'first principles' and start with the project's design and goals.
When I was in college I learned about 'Top Down' design, and how to apply it in a way that breaks a project up into multiple 'sub-projects', and so forth. It forms a hierarchical tree of requirements and vague descriptions of how each is accomplished, with each of the 'vague descriptions' potentially being broken into a sub-project. If done properly, the planning process is actually simple and requires relatively few hours of meetings and individual effort. If nothing else, you generate a sort of 'contract' of what the features will be and how they are (somewhat generically) to function. HOW they function is part of the definition of the sub-project, and does not belong at the top level. But the highest level specification will need to define how the various functions act together as part of the whole (this can become a lower level spec as needed, such as an "inter-communication specification", with the top level referring to it by name without the details it contains). By designing the project this way, you can spend less time in meetings and more time 'meeting the contract'. With specific goals, programmers can simply test that they meet those goals, and when the project is integrated, it is more likely to work.
I have read that NASA uses a method similar to this to build rockets. I suppose you could make the argument that this IS 'rocket science', then, to design the specifications at a level that it becomes possible to derive sub-project specifications from the topmost specification, and then build the project according to those specifications, testing each component against the specification, and then assembling the pieces together and testing the whole project via 'basic functionality' tests. If it were a rocket, you can't afford to crash very many of them. As a complex project, if all of the pieces work independently AND fit together, you should have a very reliable whole once the specifications have been met.
In many ways, Object Oriented Programming helps you to make these pieces fit. When possible you SHOULD use objects (or some similar method) for producing each separate functionality that is to be a part of the whole. This lends itself to separating tasks to various divisions or individuals, and when combining the whole you can 'stub out' any missing functionality or object in order to test what you have (and demonstrate progress to upper level management and Boards of Directors). But Object Oriented Programming does not have to be THE way of doing it. It's like learning music theory and then trying to apply it to EVERYTHING you do. Sometimes all that matters is that it SOUNDS GOOD.
Obviously other issues exist, and time wasted on triviality is never a good thing. Still, common sense ought to rule the decision-making process whenever possible. And that's ultimately my point. Obsessing over one aspect of programming will undoubtedly detract from the whole. Balance is the key. NEVER should obsession with 'Object Oriented Programming', nor obsession with any other 'buzz word of the moment', be the cause of a project's failure. Rather, taking the best of everything, and making use of whatever is most practical, makes more sense to me than anything else. There are a lot of opinions out there as to what needs to be done, and why it must be so, but the bottom line is profitability in both time and currency. Make THAT the goal, and your project should go well.