Sorry for the lag. Productive day yesterday and today my friendly neighbourhood
rock band was in a great mood early in the bloody morning.
>> No, we can't even do that. (...)
OK, well, I'm very confused. I thought our disagreement was about whether our
brains actually solve kinematic equations, or arrive at the same results by
some other means. It feels to me like we're arguing the same corner but we
don't have a common language.
>> No, I don't think you do. (...)
"I can't put my finger on it, but I know it when I see it". My claim is that
there is a difference between tacit knowledge and articulable knowledge. I
cannot articulate the knowledge I have of how I catch a ball; but I certainly
know how I catch a ball, otherwise I wouldn't be able to do it. In machine
learning, we replace explicit, articulable knowledge with examples that
represent our tacit knowledge. I might not be able to manually define the
relation between a set of pixels and a class of objects that might be found in
a picture, but I can point to a picture that includes an image of a certain
class and label it with that class. And so can everyone else, and that's how we
get tons of labelled examples to train image classifiers with, without having
to know how to hand-code an image classifier.
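To make that concrete, here is a minimal sketch (my own toy example, not from
the discussion above) of learning from labelled examples instead of hand-coded
rules: a 1-nearest-neighbour classifier in plain Python. The 2-D feature
vectors stand in for images, and the labels stand in for the tacit knowledge we
supply just by pointing at examples.

```python
def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(example, labelled_examples):
    """Predict the label of the nearest labelled example."""
    nearest, _ = min(
        ((label, distance(example, features))
         for features, label in labelled_examples),
        key=lambda pair: pair[1])
    return nearest

# Labelled examples: nobody wrote down a rule separating the two
# classes; the rule is implicit in the data we pointed at.
training = [
    ((0.0, 0.1), "cat"),
    ((0.2, 0.0), "cat"),
    ((1.0, 0.9), "dog"),
    ((0.9, 1.1), "dog"),
]

print(classify((0.1, 0.2), training))  # a point near the "cat" cluster
```

No one articulated what makes a "cat" point a cat; the classifier recovers the
boundary from the examples alone, which is the point about tacit knowledge.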
Here's a little thing I'm working on. Assume that, in order to learn any
concept, we need two things: some inductive bias, i.e. background knowledge of
the relevant concepts; and "forward knowledge" of the target concept. In
statistical machine learning the inductive bias comes in the form of neural net
architectures, function kernels, Bayesian priors, etc., and the knowledge of a
target concept comes in the form of labelled examples. Now, there are four
learning settings; tabulating:
Background   Target    Error
----------   -------   --------
Known        Known     Low
Known        Unknown   Moderate
Unknown      Known     Moderate
Unknown      Unknown   High
Where "Error" is the error of a learned hypothesis with respect to the target
theory. In the first setting, where we have knowledge of both the background and
the target, and the error is low, we're not even learning anything: just
calculating. We can equally well match the first three settings to deductive,
inductive, and abductive reasoning. You can also replace "known" and "unknown"
with "certain" and "uncertain".
Now, I'd say that the invention of kinematic equations by which we can model
the way we move our hands to catch balls etc. is in the setting where the
background theory and the target are both known: the background being our
theory of mathematics, and the target being some observations about the
behaviour of humans catching balls. I don't know if the kinematic equations you
speak of were really derived from such observations, but they could have been.
Humans are very good at modelling the world in this way.
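As a hedged illustration of the "background and target both known" setting:
with Newtonian mechanics as the background theory, we can simply calculate
where a thrown ball lands rather than learn it. This is a textbook formula, not
anyone's claim from the thread, and it ignores air resistance.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg):
    """Horizontal range of a projectile launched from ground level,
    ignoring drag: R = v^2 * sin(2 * theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / G

# Maximum range is at 45 degrees; no examples or training needed,
# because both the background theory and the target are known.
print(round(landing_distance(10.0, 45.0), 2))  # -> 10.19
```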
We're in deep trouble when we're in the last setting, where we have no idea of
the right background theory nor the target theory. And that's not a problem
solved by machine learning. We only make progress in that kind of problem very
slowly, with the scientific method, and it can take us thousands of years,
during which we're stuck with bad models. For 15 centuries, the model is
epicycles, until we have the laws of planetary motion and universal gravitation.
And, suddenly, there are no more epicycles.
This also addresses your earlier comment about betting against a scientific
upheaval in the science of computation.
Cool machine, btw, in that video. So you're a roboticist? I work on machine
learning of autonomous behaviour for mobile robotics.
> It feels to me like we're arguing the same corner but we don't have a common language.
That's possible. It's actually a deep philosophical question. Do planets "solve Newton's equations of motion" when they move? On the one hand, they move in ways that correspond to solutions to those equations, and so one could say that they "find solutions" to those equations. On the other hand, the process by which they do this is pretty clearly radically different than what a mathematician does when they solve equations.
> So you're a roboticist?
I used to be. I've been out of the field for over 20 years now. But back in the day I was pretty well known.
>> That's possible. It's actually a deep philosophical question. Do planets "solve Newton's equations of motion" when they move?
Yes, that's an interesting question, and one that I'm really not equipped to answer. Probably for the best.
>> I used to be. I've been out of the field for over 20 years now. But back in the day I was pretty well known.
I'm really new to the field so I don't know your work. In fact I wouldn't even say I am in the field as such. An academic sibling suggested I take a postdoc job and now I'm collaborating with roboticists. I'm just working on autonomous behaviour; I'm not allowed near hardware.
It's an interesting field, although I have to constantly be on my toes to avoid violating my principles. See, I'm a peacenik, but it seems with the work I do, as soon as I get that stuff working, someone will want to put it on a drone, strap a gun on its back and send it to kill people. And I'm dead set against that sort of thing.
I had a quick look at your site and you've worked with NASA. Respect! We can send autonomous rovers to explore faraway planets, and people want to keep them here wreaking havoc and death. Unbelievable.
Do you have any pointers to your work? Something you are really proud of that you did in the past? I'm curious.