OOP practiced backwards is "POO" (github.com/raganwald)
140 points by raganwald on Dec 10, 2010 | 90 comments



Skillful practitioners of OOP, even in languages like Java, know this stuff already. It's just that the language leads newbies astray by making inheritance such a prominent concept.

To a first approximation, inheritance is always wrong. The Design Patterns book advocates composition -- building up an object out of smaller ones -- over inheritance. Yeah there's more work to delegate messages around, but if you didn't like typing you shouldn't have picked Java (or, use an editor that makes this easier).

Also, when trapped in languages like Java, define behaviour in terms of interfaces rather than inheritance. Interfaces are exactly what raganwald is talking about -- they define the ability of something to respond to methods, but are decoupled from any parent-child relations.
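
Since most of this thread talks Ruby, here's a minimal sketch of composition-plus-delegation in that spirit -- the Order class is invented for illustration, and Forwardable (from the standard library) writes the delegation boilerplate, so the extra-typing objection mostly evaporates:

  require "forwardable"

  # Composition: an Order HAS-A list of line items instead of inheriting
  # from Array; Forwardable generates the delegating methods for us.
  class Order
    extend Forwardable
    def_delegators :@items, :each, :size, :empty?

    def initialize
      @items = []
    end

    def add(item)
      @items << item
      self
    end

    def total
      @items.sum { |i| i.fetch(:price) }
    end
  end

  order = Order.new.add(price: 5).add(price: 7)
  puts order.size    # => 2
  puts order.total   # => 12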


Dynamic languages like JS, Ruby and Python could greatly benefit from explicit support for interfaces as well. I'm enjoying the fact that Clojure brings this powerful abstraction to a dynamically typed language.


This might make sense for Clojure (I haven't used it) but in my years of Ruby programming (after 10 years in Java and C#) I have never thought to myself, man I'd love to have an interface here. Ruby's lack of class-level contracts is essential to its idiomatic nature, a strength not a weakness, imo.


This is short-sighted. The key point in your statement is "I". Contracts are not about "I". They are about everyone else. You have a great design and interfaces allow you to share that.


This simply makes no sense in a Ruby idiom (and I'm not really sure this is the best way to think of it in Java either -- I can make an API public and well-documented with or without interfaces.)

I've used dozens of Ruby gems, plugins and classes over the years that have well-documented shared APIs and none of them (obviously) have had interfaces. Interfaces are good in certain environments for hiding implementation details where class-level contracts are enforced, but in Ruby this would provide no benefit at all.

Think of it this way: since Ruby has no class-level contracts, to some extent everything is already an interface. Everything is abstract. Every Ruby object has the opportunity to re-implement functionality in an abstract way.


Generally speaking I like interfaces, but Ruby is so dynamic that you couldn't use them for compile-time guarantees. Interfaces could only get you a slightly nicer way of doing dynamic duck typing. I wouldn't really call all of the dynamism a strength, though -- one of the factors limiting compile-time evaluation here is that a method can later become undefined. When was the last time you needed that?


Actually, undefining methods on an object became such an idiom that Ruby 1.9 introduced a new base class into the hierarchy: BasicObject. [1] This is now the root class, sitting above Object.

1: http://www.ruby-doc.org/core-1.9/classes/BasicObject.html
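
For the curious, a small sketch of what BasicObject buys you -- a nearly method-free "blank slate" that's handy for proxies and DSLs; the Recorder class here is invented for illustration:

  # BasicObject carries almost no methods, so nearly every call falls
  # through to method_missing -- no undef_method gymnastics required.
  class Recorder < BasicObject
    def initialize
      @calls = []
    end

    def method_missing(name, *args)
      @calls << [name, args]
      self
    end

    def calls
      @calls
    end
  end

  r = Recorder.new
  r.quack.walk(3)
  puts r.calls.inspect   # => [[:quack, []], [:walk, [3]]]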


Yeah, that is the one case, and ActiveSupport has had a BasicObject for a while now. This is another case where proponents would say it is great that you can undefine a method, but I would say that is a hack for the lack of any namespace management. Now that there is a BasicObject I am not sure there is any practical use for undefining a method -- probably only for dealing with namespace collisions, which again seems to point to a language flaw, not a feature of dynamism.


Seems to me that Ruby modules effect the same outcome as an interface without all the overhead. Just include the module and you've got the desired behaviors and data types without all the fuss.

You get exactly the same benefits as you would from an interface (minus the rigidity)...
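
For illustration, a rough sketch of a module playing the "interface" role (names invented), with methods that raise until overridden -- a trick that also comes up further down the thread:

  # A module standing in for an "interface": include it, override its
  # methods, and you get a shared, documented API with no class-level contract.
  module Growler
    def growl
      raise NotImplementedError, "#{self.class} must implement #growl"
    end
  end

  class Bear
    include Growler
    def growl
      "GRRRR"
    end
  end

  class Teapot
    include Growler   # forgot to override
  end

  puts Bear.new.growl        # => GRRRR
  begin
    Teapot.new.growl
  rescue NotImplementedError => e
    puts e.message           # => Teapot must implement #growl
  end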


Missing the point entirely. I don't want your module. I want/like your design and I want to provide my own implementation.


I get what you're saying, but Ruby already allows you to do this. If you like an API/design, then go ahead and reimplement it however you like. There are no class-level contracts anywhere in Ruby that would prevent this, and therefore interfaces provide no additional level of abstraction that Ruby doesn't already have.


Define exception throwing "not implemented" methods in the module.


Override the functions in my module with your own


I've been working in C#/.NET for a while and I've started using Ruby recently for a couple of different projects at home. It's messing with my head, because I'm starting to kind of hate the rigidity I'm forced to work with when deciding whether or not I should implement an interface.

That said, yes, Ruby with interfaces seems kind of pointless and is just throwing more code in, which is the opposite of one of the nicer points of the language.


Python has interfaces. There isn't even a need for them to be supported by the language. I don't see why Ruby or JS couldn't support them as well.

http://pypi.python.org/pypi/zope.interface

http://docs.python.org/library/abc.html


Can you give an example of where one would want to use an interface or an abstract base class in a dynamic duck-typed language?

I looked quickly at pep-3119 and it seems like it's a short-hand way of checking if an object will respond to some set of methods.

Now, I cop to having done type checking in Ruby (either with is_a? or respond_to?) but I always felt it was a code smell, a hack to solve a distraction while I moved on to something else more pressing.

I don't know a whole lot of Python, but I'm guessing that since Python has a more Java-like form of OOP (i.e., method invocation vs. the message passing of Ruby, a la Smalltalk) this kind of type-checking is quite acceptable, even though Python is ostensibly duck-typed.

Am I wrong, and by how much?


Ostensibly, Python has a more Java-like model of method invocation, but this is really only a semantic difference. There is little that you can do with Python methods that you can't do with Ruby's methods and vice-versa.

Generally, checking the type with isinstance is considered bad practice although it can be useful sometimes (I can't think of any simple examples off the top of my head though).

The main reason I can think of for using an interface or abc is simply so that if you forget to add a required method, the exception gets thrown sooner rather than later. Plus, with an abc you also allow the base class to define methods that depend upon abstract methods (for instance, if you subclass the Mapping abc, you get __contains__, keys, items, etc. in exchange for defining __getitem__). Strictly speaking, there's no reason why you can't do that without an abstract base class, but it makes it easier.

In short, there are useful reasons to have interfaces and abstract base classes in Python, but they probably aren't the same reasons you'd want them in a statically-typed language like Java.
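
The closest everyday Ruby analog to that "define one method, get the rest for free" benefit is probably Enumerable -- a mixin rather than an interface, but the shape is similar. A sketch with an invented Playlist class:

  class Playlist
    include Enumerable   # only #each is required of us

    def initialize(*songs)
      @songs = songs
    end

    def each(&block)
      @songs.each(&block)
    end
  end

  list = Playlist.new("a", "b", "c")
  puts list.include?("b")             # => true -- map, select, sort, etc. come free
  puts list.map(&:upcase).inspect     # => ["A", "B", "C"]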


About the only thing I use is_a? for routinely in Ruby code is to allow a particular function to accept many types of things as arguments. (An array? Sure! A single object? Why not? A single ActiveRecord id? Don't sweat it, I can do .find(id), too!)


You could even do away with is_a? there and just ask if the argument responds_to a particular method and then call the method if it does. I find myself doing more of this and less of calls to is_a? these days.


It's conceptually much cleaner, too. If you're asking about the type of something, you're almost always just trying to see if it has some method. Just ask about the method response directly.
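
Something like this, for the "array, a single object, or an id" case upthread -- a hedged sketch; normalize is an invented helper, not anyone's real API:

  # Normalizing an argument with respond_to? instead of is_a?:
  # accept one thing, or anything that can behave as a list of things.
  def normalize(widgets)
    if widgets.respond_to?(:to_ary)   # "acts like an array"
      widgets.to_ary
    else
      [widgets]
    end
  end

  puts normalize([1, 2, 3]).inspect   # => [1, 2, 3]
  puts normalize(42).inspect          # => [42]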


Twisted, the Python networking framework, makes extensive use of interfaces in its API. The "Interfaces and Adapters" chapter in the documentation goes into some detail about why you would want such a thing:

http://twistedmatrix.com/documents/current/core/howto/compon...

If you're in a hurry, the main motivations appear to be:

- declaring that a class supports particular functionality without risking the complications of multiple inheritance

- the ability to register adapters from one interface to another means no more calls to isinstance() (or at least, they're abstracted away to a library function)

- it's easy to make your test suite check that you haven't forgotten to implement any methods of any of the interfaces your class claims to support


"I want to say one word to you. Just one word." -- Plugins ;-)

You can pick out a set of objects that implement a certain interface from a larger set. You can do this for classes, objects, perhaps modules.

Someone already mentioned Twisted plugins. They currently use Zope interfaces. You create a class, then declare that it implements a particular interface.


There's a difference between having a language feature (or its possibility) and actively encouraging it. Far too many classes are a bag of implementation details with no thought given to design. In Clojure you can't just declare methods willy-nilly. You must implement methods that are part of a protocol (interface). This encourages two things: a) interoperability and b) the creation of new reusable designs -- implementations rightfully take a back seat.

Haskell type classes seem, to me, to take the right approach here as well. In fact I would argue their main benefit is not catching programmer error but encouraging thoughtful design.


By this logic every language should have C-style headers. There's nothing to prevent me from creating a bloated API/interface with or without the framework forcing me to declare it external to the class, in interfaces, headers or otherwise.

In Eclipse it's as simple as using the refactor tool to 'pull up interface' from any God Class of my choosing.

Neither language nor framework can force a coder to apply the Single Responsibility Principle. Coders can be willy-nilly with or without interfaces.


I'm curious why you want interfaces in languages with duck typing.


Aren't interfaces essentially a reification of duck typing? If it implements the duck interface, it's a duck. So the reason you'd want it is just to have a conventional way to define what a duck is, and ask if an object was one. (And you could probably implement this in about three lines of Ruby.)

Now, many languages with formal interfaces (e.g., Java) are statically typed, and they treat interfaces as types. Thus, for example, classes have to declare up front what interfaces they implement. That doesn't fit well with the flexible attitude of a duck-typed language. However, if interfaces were just collections of method signatures that could be tested against objects at runtime, that would be perfectly compatible with duck typing.

Languages like Modula-3 that use structural type equivalence sort of do this even in a statically typed system. Sort of.
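
In the spirit of the grandparent's "about three lines of Ruby", a sketch of an interface as nothing more than a list of messages checked at runtime -- Quacker and satisfies? are invented names:

  # An "interface" is just a collection of messages; checking it is duck typing, reified.
  Quacker = [:quack, :walk]

  def satisfies?(object, interface)
    interface.all? { |message| object.respond_to?(message) }
  end

  class RobotDuck
    def quack; "beep-quack"; end
    def walk;  "whirr";      end
  end

  puts satisfies?(RobotDuck.new, Quacker)    # => true
  puts satisfies?("just a string", Quacker)  # => false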


This presumes I ever want to ask an object what its type is, but that's generally not a good OO way to accomplish something. I'd rather tell the object what to do and assume it does the correct thing for its given type. If I must, I can always ask if it understands a given message, but that's to be avoided as well.

Interfaces are useful in defining what an implementor must provide, but an example implementation does this as well.


this.

Also, if it helps, just think of the impl as an interface with a default behaviour that should be overridden ;)

If this is too forgiving, then we can even have the impl methods throw errors ("override me!") and simulate an interface.


In Smalltalk, I can have all the methods throw

    someMethod
        self subclassResponsibility
and then use the realize class refactoring on a subclass to generate stubs for all methods I must implement, just like an interface.


I think the Ruby way wouldn't be to ask the object if it's a duck, but rather to ask if it responds to the fly method, or the swim method, or maybe even the quack method, etc. In other words, you don't really care if it's a duck, just whether it can accept the message you're about to send it.


Not "the message you're about to send it", but "the set of messages you might send it". Duck typing is not limited to a single message.

For example, a hash-based collection might require that all its members implement an equality method and a hash method. Typically in Ruby you test this by just waiting for one of those method calls to blow up somewhere in the hash table code. Instead, you could test "acts_like?(Hashable)" -- meaning "respond_to?(:==) && respond_to?(:hash)" -- on insertion. Then you could give a nice, meaningful error message at a sensible point in the code.

Note that this has nothing whatsoever to do with is_a?, which is the important difference between this and type-based interfaces as in Java.


Absolutely. That's exactly what I wanted to say, stated far better than I could. :-)


So you're checking if it implements an intersection of single-method ducktypes. Now if only there was a way to reify that intersection and give it a name...


... but this isn't done very often, and is generally considered a Bad Idea, so let's not encourage it with a special language construct.


What exactly is the bad idea here? Having more than one method in a duck-type?


is_a? and respond_to? are code smells.


Yes, I agree. I wasn't actually suggesting to sprinkle instanceof-checks throughout the code.

What I was trying to get at: if you accept the existence of single-method ducktypes (like "things that have a walk() method" or "things that have a quack() method"), then by intersecting these concepts, you arrive at things just like interfaces, whether you call them interfaces or not. If you want the pond system to work with your two-legged bird-sound robot, you perform the same mental steps as when implementing an interface. Hence, ducktypes could be seen as interfaces without an instanceof operation.


So in duck typing, a "duck interface" would just be a shortcut way of asking "is this thing I've got the kind of duck I want?" Is there a language like that? I would like that.


Because it allows programmers to define a minimum contract for behavior. Instead of having to subclass (and being implicated into a hierarchy) you can instead provide any object as long as it satisfies that interface.


Allowing a class-level contract would fundamentally alter the dynamic nature of a language like Ruby. It might be cool (although I personally don't see the benefit), but it wouldn't be Ruby.

What mechanism would enforce the contract anyway? The best you could hope for would be a runtime error thrown when the interpreter detected the incompatibility, and you can do that manually now (I do it frequently to imitate abstract classes).


  class IAnimal
    def growl
      raise NotImplementedError, "subclasses must implement #growl"
    end
  end


Yeah exactly! 'raise "not implemented"' is how I imitate abstract classes (on the rare occasion that I need them).


Although you could probably use any OOP-capable language with GoF, I think the D language fits really nicely for that book. D defines interfaces, abstract classes, subtyping, (class) mixins, and templates, all in the language, and has a safe override mechanism for when inheritance is the preferred approach in an OOP pattern.


This is one of the reasons why I love Go: the language is designed to make both composition (with 'embedding') and interfaces (with 'static duck typing') very convenient and easy, while completely avoiding the inheritance minefield.


>the abstract ontology is derived from the behaviour of the individual specimens, the ontology does not define the behaviour of the individual specimens.

In other words... Do you like Plato, or do you think Aristotle had it right?

There are a lot of cool isomorphisms between philosophy (particularly metaphysics) and software design.


A Kantian view is an improvement: there is some of both sides. We bring a particular structure of perception to the external phenomena, and it matches to some degree the 'regularities' in those external phenomena.

And this describes software: we have a preconceived set of data structures and algorithms, yet we can fit them to all kinds of structures of ad hoc business uses.

We cannot simply draw all the structure from observation. We must impose something of the material which we are modelling with. The whole task of engineering, in general and in each case, is to find a balance, a practical meeting of the two.

This is not really something that OO has 'wrong' that something else can fix. OO has weaknesses, but the deeper 'problem' is never soluble: the essence of engineering design means it is always an imperfect tradeoff.


I like the direction you're going here, but I would argue this structure is not unique to Kant. You could read Plato (or any idealist -- Schopenhauer or Philo or Hegel for that matter) the same way. The ontological structures are broadly the same, the difference lies in epistemology and the nature and origin of the formal epistemological structures to begin with.

Kant locates the origin of ideals/forms/categories in the mind only, as an essential pre-existing structure of the mind (think: hard-coded ROM), where Plato located their origins in reincarnate memory (not sure what the computer analogy would be there -- recycling a motherboard at Fry's?).


I remember in high school philosophy class describing prototype-based systems as Aristotelian and class-based as Platonic. I'm amused you've hit upon the same general idea. (For what it's worth, I like Self, apparently named after an Aristotelian philosopher with rational self-interest.)


Any chance you could expand upon this? I'd be interested to read more about this parallel (and others).


I think this is a reference to Plato's Theory of Forms, in which every object we see partakes of some idealized (static, eternal, and real) form (ie, is an instantiation of a class). Except... in Plato's theory, everything is a somewhat corrupted or imperfect image of its form. The theory has a lot of problems (the third man problem, for example).

See: http://en.wikipedia.org/wiki/Theory_of_Forms


Well, yes. There's various interpretations of what kind of existence Plato thought the Forms actually had - whether he believed in a literal "realm of the Forms" or not. And while you'd be hard pressed to find someone who actually subscribes to the theory of Forms these days, there are a lot of things in, for example, the philosophy of Mathematics that smell a lot like it.

I think the idea of Forms is still quite valuable.

Aristotle takes kind of the opposite tack - he is also very focused on building ontologies and taxonomies, but he views them as constructions of the intellect imposed upon their subjects, rather than being the most fundamental reality of the subject prior to its actual being. It's pretty much the same distinction raganwald talks about in the OP.

As for the general isomorphism between metaphysics and software, you can find it almost everywhere, depending how hard you want to look (and how far you want to stretch your metaphors...). But metaphysics is largely concerned with the types of things and entities in the world, and how they can possibly interact. If you think of the "software space" as its own universe, it's pretty easy to draw parallels. Forms and Categories are low-hanging fruit, obviously, but a few other parallels spring to mind:

- Kant's separation between Phenomena and Noumena relates to the difference between interface and implementation, as well as to the concept of abstractions in general.

- How a software component's "epistemology" (how it "knows" about other components in the system) works can be compared to different philosophers - do they operate on a consistently readable shared state (an empirical world?) or do they request information from a central broker service (God brokers sense impressions, a la Berkeley?)

- Berkeley's idea of direct impressions only in the mind has some relationship to the concept of laziness.

- etc.

I'll try to think up some more and post them later. It's less that you can write a paper on the startling isomorphism between theories, and more that it's really easy to use software metaphors to describe philosophy and vice versa (though software tends to be much more concrete, obviously).


"Aristotle takes kind of the opposite tack - he is also very focused on building ontologies and taxonomies, but he views them as constructions of the intellect imposed upon their subjects, rather than being the most fundamental reality of the subject prior to its actual being."

Well, not exactly, and in fact this would turn Aristotle into his metaphysical opposite: a nominalist in the mold of Occam.

For Aristotle the formal cause of a substance is in fact absolute, every bit as real and 'prior' as with Plato, the big difference between the two being the question of epistemology, or how we become acquainted with the form to begin with. Yes, Aristotle argues that we know the form of a thing from experience, but the form of that thing, and the abstractions we produce from many experiences from similar things, are as real as with Plato and, as in Plato, point to higher, more organized spiritual structures that comprise ultimate reality.

Call me when somebody comes up with a Whitehead-ean programming paradigm.


"Kant's seperation between Phenomena and Noumena relates to the difference between interface and implementation, as well as to the concept of abstractions in general."

I'm not sure Kant would allow that phenomena are all that abstract. They're not eidetic/formal (although later Husserl would argue that they intend towards eidetic objects); they're merely the subjective experience of a particular thing. The (purely hypothetical) relationship between noumena and phenomena is (naively and hypothetically) one-to-one, if there is any real relationship at all.

The n/p distinction is made by Kant not to emphasize the abstract nature of our mental experiences, but rather to emphasize a radical epistemic uncertainty regarding the objects that we would naively assume "exist" apart from the experience.


a bit late, but thanks for the response.


One of the biggest parallels might be: most software is not written to be understood by anyone except the author.


There is a truth behind this joke. Reading the many excited comments about parallels between philosophy and coding makes me think that another view could be needed to balance this. Western philosophy was bent toward logic, and that may be the reason we see some relations with coding, which is really related to logic. But other philosophies do not focus on logic and thus do not parallel coding so much, or at least not in the same way. I am thinking of Chinese philosophy, which I know a bit more about. Its main focus being how to help humans live together, i.e. how to civilize humans (and "is it possible?"), the above parallels vanish away, more or less.

Still, some parallels can be drawn. A Taoist coder would let things happen, let the code find its way spontaneously toward the result, like the knife of the butcher must find its way into the meat. Taoism applied to code would produce independent pieces of software that live by themselves and may have no intended usage (e.g. the yes command in coreutils).

A Confucian coder would always follow the standards (the rites), would always consider his production in the context of a bigger picture, would produce software that is very strict for himself and forgiving for others, and would write programs tending to improve the life of others.

It has been said that wise Chinese of old times were Confucian by day and Taoist by night. While this is actually because Taoism is related to sex, the parallel could be that a good hacker/coder should be Confucian at his day job, and Taoist for personal projects.

None of them would think in terms of "ontology", though, because this is something, like the idea of God, that has no deep meaning in ancient Chinese philosophies. They would not debate too much about languages and the meaning of words; they would just use them as they see fit. They would use OOP as a tool, when it helps writing better code (when one needs to glue state and actions together).

Do I contradict myself? I said "no parallels" and then "there are some parallels". But the ones I draw just apply to any activity of any human being, not just coding.


There's some Western philosophical precedents for this tack as well. Heraclitus comes to mind, later Nietzsche, Kierkegaard. William of Occam I suppose would never allow 'classes' at all, only radically individual one-off objects.

Two modern philosophers that I think may provide a better conceptual foundation for a less reified/Platonic/Aristotelian/Formalist view would be Alfred North Whitehead and Henri Bergson, both of whom described the ingression or duration of individual substances into other substances. Rather than hard borders (interfaces?) that interact in static and unchangeable ways, substances actually change each other at these borders. Bergson in particular I think would be interesting at many levels of the SDLC -- from emergent design (he advocated an 'emergent', fingertip-feel, intuitive approach to science) to the nature of objects and hierarchies.

What about the notion that an interface can change/morph according to the client that consumes it? This would move towards a more Heraclitean/Whiteheadean/Bergsonian paradigm.


It is an interesting observation that the brittleness of inheritance hierarchies stems from the fact that they have a semantic meaning in the program and are not just logical classifications. A simpler (and well-accepted) explanation for inheritance's problems is that it is a code-reuse implementation technique that people mistakenly use for software design. I suspect these explanations are related.

I'm not sure I completely followed your ideas about hierarchical classification being more useful for testing... perhaps because using animal categorization as an example was difficult to follow since people don't write tests for animals. What does it mean to have hierarchies for tests and what benefits does it provide?


I think this problem may be why I enjoy functional programming so much. I believe that functional programming tends to build from the data up: from the actual instances it then adds definitions of behavior, instead of trying to group behaviors and fit the various arrangements of data into that hierarchy.

Usually you start with some data. It might be some loose tuple in a dynamic language like Erlang, or an algebraic data type in a language like Haskell (not that there are many like it). It's like saying, I have a Cat or I have a Bat. You haven't said anything about the two yet other than that they are different. You could then add more information, such as the number of legs or color. You add behavior by defining functions that can act on that data and create changes from them. For example, two cats can reproduce, so you define a function that takes two Cats and makes a kitten (a new Cat). Bats might also reproduce, so you can write the function more generally to take any two data objects that meet the minimum prerequisites (such as identical internal representations) and produce a new one that's a combination.

When a new data type is needed, you simply add it and it automatically gains all this functionality based on its form.

Essentially, OOP seems to want to start from the behaviors and have that define the data. When writing functionally I tend to define the data, then build behavior on top of it which is driven more by the form of the data than any idealized class hierarchy.

I'm not sure I'm explaining this terribly lucidly, but hopefully this can start a discussion among the more articulate posters.
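
Rendering that data-first style in Ruby (the thread's lingua franca) rather than Haskell: the data comes first as bare records, and behavior is added afterwards as functions over them. Cat, Bat, and breed are invented for illustration:

  # Data first: plain records, no behavior yet.
  Cat = Struct.new(:name, :legs)
  Bat = Struct.new(:name, :legs)

  # Behavior added afterwards, driven by the shape of the data rather than
  # by where the type sits in a hierarchy.
  def breed(a, b)
    raise ArgumentError, "can only breed like with like" unless a.class == b.class
    a.class.new("#{a.name}-#{b.name} Jr.", a.legs)
  end

  kitten = breed(Cat.new("Tom", 4), Cat.new("Mia", 4))
  puts kitten.name    # => Tom-Mia Jr.
  pup = breed(Bat.new("Zip", 2), Bat.new("Zap", 2))
  puts pup.class      # => Bat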


Most developers that I know of that have been using OO languages for some time, including myself, also usually start with the data items being dealt with and proceed from there. In fact, it's not particularly natural to do it any other way. If you've got a window, then it's a Window class, with all of the relevant behaviors (methods) added afterwards. If you're dealing with a file, then it's a File class, and so on.


Another interesting facet of the "natural ontologies" argument is that the history of Zoology is a series of "bug fixes" where we got the ontology wrong.

Leaving out changing requirements and DNA mutations for a second, there have been massive changes to the "official" ontology as we discover new species or examine DNA and have to say "oops, turns out, the platypus doesn't derive from birds at all".


"What if objects exist as encapsulations, and the communicate via messages? What if code re-use has nothing to do with inheritance, but uses composition, delegation, even old-fashioned helper objects or any technique the programmer deems fit? The ontology does not go away, but it is decoupled from the implementation. Here's how."

This is standard OOP 101 advice. 'Prefer composition to inheritance' has been axiomatic in OO literature for over a decade. GoF and all the GoF-derived texts emphasize this.

OOP <> Inheritance

Inheritance is only one tool in the OOP toolbox, and probably one of the least important. OOP itself is only one conceptual tool in the programmer's toolbox, alongside procedural and functional tools. Learn them all, learn their strengths and weaknesses, and learn how to apply the right tool to the right problem.


"Design-by-Contract is protected by various business restrictions so it has nobody but itself to blame for its unpopularity"

What's this about? Some patent or something?

I'd guess this is a reference to Bertrand Meyer's company. Since I learned DbC from Liskov & Guttag's excellent book, which came out before Meyer's, I never had much reason to look into Meyer or Eiffel even though they became practically identified with DbC.


I like the way the C++ FAQ explains the problems of "is a" inheritance, especially the examples of Circle-is-an-Ellipse.

http://www.parashift.com/c++-faq-lite/proper-inheritance.htm...


That's a particularly accurate, bash-you-over-the-head-until-you-get-it explanation of the problems of inheritance. Nice link. I should really read through that whole FAQ/FQA some time, it's full of good stuff.

I, personally, think of inheritance as that FAQ implies near the end: if A derives from B, A must effectively be B in all B-exposed methods. It can optimize, but it cannot change; A must pass any test B would pass, or it's not actually a B.

But lots of code violates this. I think it's because a lot of people learn inheritance with the real-world connections in mind, and it's frequently taught to beginning programmers as a silver bullet. It can do a lot, but it has subtle surprises until you get better at programming.
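
"A must pass any test B would pass" can be made literal with a shared contract test. A rough sketch using minitest (bundled with Ruby); the Rectangle/Square classes and the contract module are invented, echoing the FAQ's Circle-is-an-Ellipse trap:

  require "minitest/autorun"

  # The contract every Rectangle -- and therefore every alleged subtype -- must pass.
  module RectangleContract
    def test_setting_width_leaves_height_alone
      subject.width  = 5
      subject.height = 3
      subject.width  = 10
      assert_equal 3, subject.height
    end
  end

  class Rectangle
    attr_accessor :width, :height
  end

  class Square < Rectangle   # the classic trap
    def width=(w)
      @width = @height = w
    end

    def height=(h)
      @width = @height = h
    end
  end

  class TestRectangle < Minitest::Test
    include RectangleContract
    def subject; @subject ||= Rectangle.new; end
  end

  class TestSquare < Minitest::Test   # fails the inherited contract: not a true Rectangle
    include RectangleContract
    def subject; @subject ||= Square.new; end
  end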


I couldn't agree more. To me, encapsulation is the most powerful part of OOP. I learned this from Allen Holub, and it has been part of my mantra about OOP for years. It is what makes WebObjects such a powerful web framework, and why I am happily using Grails today. This reminds me that I really need to do a screencast on why encapsulation is such an important feature of a web framework.


The essays at the beginning of Holub on Patterns completely changed my understanding of OOP.

Implementation inheritance violates encapsulation. So do getters and setters. They let code outside your object rely on your implementation and make your system brittle.


OOP in my country is poo.

Programación Orientada a Objetos.


French: Programmation Orientée Objet


Out of curiosity, what's "poo"? Or more explicitly, what (if there is one) is the common, non-vulgar, childish version of mierda?


The answer to your second question is poop.


caca, even more childish would be caquita.


Fun thing is that "object oriented programming", in French, is "programmation orientée par objets", which is usually shortened to "POO" (pronounced Pay-Oh-Oh).


The observations are an interesting expansion on the idea of Duck typing.


Indeed, duck typing and structural typing are what immediately came to mind as a solution for the brittleness of inheritance hierarchies that the OP describes.


I think you've hit on something very important, essential, in the problem with OO-as-taught. This is something that is known among skillful practitioners, but missed by most schools and beginners. I think the first part of this article is misleading, in that it waxes philosophical about the drawbacks of defining things in an ontology -- but then it proceeds to figure out a better way to model things as an ontology! The issue isn't with ontologies: yes, slapping a hierarchy or meronymy on something isn't perfect, but it's pretty useful when organizing concepts.

The fundamental problem with OO inheritance is that it is not ontological or mathematical inheritance! OO inheritance has almost no relationship to thousands of years of what "is-a" means -- i.e. subset. It's backwards: subclasses are extensions, or supersets, of their parents.

The way to implement inheritance correctly is "specialization by constraint", i.e. you define a subclass by the constraints it has on its superclass. This is what's happening in the latter part of the article, with design-by-contract and test cases for your various classes. This is also, by the way, how ontology languages like OWL work, and in part why they're so hard to learn/use -- their notion of inheritance and "subclassing" is the mathematical notion (i.e. more like subset), and the opposite of OO programming languages, so we have 25+ years of history to unlearn.

I suspect the reason OO inheritance took the path it did was for simple expediency on the part of compiler writers & language designers. Specialization by constraint ain't easy to implement declaratively.


I don't get how subclasses are supersets of their parents. The set of instances of a subclass is a (usually proper) subset of the set of instances of its superclass (in any mainstream class-OO language).

Also, writing a subclass definition can be seen as adding constraints (of the form "must also have a bla() method returning an int").


Is this an argument for prototype-based programming? I don't understand all this stuff well enough to provide a real opinion, but it seems to me that is a solution to the issues he brings up.


Yeah, it seems that prototype-based programming fits his needs better. With prototypal inheritance you make no assumption that what you inherit from is a strict superset of the class. It's more like saying "This object is like so-and-so, except..." every time you create a link in the inheritance chain, and that fits the way people classify things a little better. Steve Yegge put it best: "The most specific event can serve as a general example of a class of events."

http://steve-yegge.blogspot.com/2008/10/universal-design-pat...
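
A loose Ruby imitation of that "like so-and-so, except..." style, using singleton methods plus clone (which copies them); every name here is invented:

  duck = Object.new
  duck.define_singleton_method(:speak) { "quack" }
  duck.define_singleton_method(:legs)  { 2 }

  # A decoy is "like a duck, except it is silent" -- no class hierarchy involved.
  decoy = duck.clone   # clone copies singleton methods; dup would not
  decoy.define_singleton_method(:speak) { "..." }

  puts duck.speak    # => quack
  puts decoy.speak   # => ...
  puts decoy.legs    # => 2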


    Steve Yegge put it best.
He often does.


Not sure what abstract ontologies have to do with expressing instructions for my computer to execute. I use inheritance as a tool to share behavior across objects. It works nicely for that.


Object Oriented Programming Systems are (abbreviated) mistakes.


I wonder, did the people downvoting this not see the joke, or did they see the joke and decide it wasn't funny?


This is roughly how I program using OCaml's module system. Does anyone else prefer higher-order modules as the basic unit of program structure?


Thinking of OOP as a solution may lead to an article like this, whereas it is merely a vehicle. You may or may not use it.

The problems of OOP are explained in the Go language video by Rob Pike*; however, the very intellectually stated arguments in this article are mainly false.

http://www.youtube.com/watch?v=rKnDgT73v8s


Many of the languages I already use in an object-oriented style use message passing/signatures, not compiled-in mandatory type hierarchies.

Python, Objective-C, Smalltalk and even C++ in some ways all do this.

I suggest that if you feel like the OP does, you use one of them instead of Java or C# or whatever that dude is using.


And "Functional Programming" is an anagram for "Malfunctioning Program". Yay for rants.


It's actually called POO in Spanish: Programación Orientada a Objetos


The obvious solution is to stop getting OOP backwards, then.


Perl object-oriented programming = POOP.



