
>> No, we cannot model what our brains do with kinematic equations.

I've confused you. My apologies. What I meant with this sentence:

"we can model whatever our brains do with kinematic equations"

Was that we can model whatever our brains do _while catching a ball etc_ by means of kinematic equations. I did not mean that we can model everything our brains do, i.e. the function of the brain in general. If we could model an entire brain just by kinematic equations, we wouldn't need any AI research, and I wouldn't be arguing that we don't know what our brains do when they solve problems that we solve using kinematic equations. Our disagreement is about the solutions our brains find to that kind of problem.
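To be concrete about the kind of model I mean (a toy sketch of my own, with made-up numbers; the function names are just for illustration, not anyone's actual system):

```python
# Constant-acceleration kinematics for a tossed ball: given initial
# vertical speed vy and gravity g, the ball returns to its launch
# height after t = 2*vy/g seconds, having travelled vx*t horizontally.
# That horizontal distance is where the hand has to be.

def time_of_flight(vy, g=9.81):
    """Time (s) for the ball to come back down to its launch height."""
    return 2 * vy / g

def catch_distance(vx, vy, g=9.81):
    """Horizontal distance (m) from thrower to catch point."""
    return vx * time_of_flight(vy, g)

# E.g. vy = 4.905 m/s gives a flight time of exactly 1 s,
# so with vx = 3 m/s the catch point is 3 m away.
t = time_of_flight(4.905)      # 1.0 s
x = catch_distance(3.0, 4.905) # 3.0 m
```

The point is that this is the sort of thing we can write down and compute; whether the brain does anything like it is exactly what's at issue.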

>> Not only that, but our brains can do this in two completely different ways, one of which is conscious and deliberate (what we call "doing math") and the other of which is instinctive and subconscious (developing sensory-motor skills).

That's my problem with all this - the "subconscious" part. I don't really understand what it means. When I catch a ball, I do it entirely consciously, and I know exactly what I'm doing: I'm extending my hand to catch the ball. I may not be able to articulate every little muscle movement, or describe precisely the position of my arms, my hand, my fingers, the ball, etc, but I do know with great accuracy where those objects are in space, and where they are in relation to each other. I cannot introspect into the intellectual mechanisms by which I know those things, but I do know them, so they're not "subconscious".

The difference you point out, between doing maths with pen-and-paper (or computers) and performing a task without having to do maths-with-pen-and-paper, is, I think, the difference between having a formal language that is powerful enough to describe all the objects and functions I describe above (hand position, muscle movement etc), on the one hand, and not having such a language on the other hand. Somehow humans are able to come up with formal languages with the power to describe some of the things we do, like catching balls etc, and many other things besides. As a side note, we do not have a formal language -we do not have the mathematics- to describe our ability to come up with formal languages, yet. That was one of the original goals of AI research, although it has now fallen by the wayside, in the process of chasing benchmark performance.

I digress. When I speak of "formal languages", I mean more broadly formal systems, like mathematics (of which logic is one branch, btw). When I speak of a "model" in my earlier comment, I mean a formalism that describes various kinds of human capability, like our catching-balls example. Kinematic equations, that's one such model. But a model is not the thing it, well, models. Is my claim.

I hope this is clear and apologies if it's not. Most of our discussion is not on things of my expertise so I'm trying to find the best way to say them. Also, this is a much less technical discussion and so much less precise, than I'm used to. I hope I'm not wasting your time with needless philosophising.

On the other hand, I think this kind of conversation would be much easier if we didn't assume human brains. Our ability to navigate and interact with our environment is shared, to a greater or lesser extent, with many animals that aren't human and don't have human brains. So whatever we can do with our brains thanks to that shared ability must rest on a shared underlying system, because we all evolved from the same, very distant, animal ancestors, and we must have inherited the same basic firmware, as it were.




> Was that we can model whatever our brains do _while catching a ball etc_ by means of kinematic equations.

No, we can't even do that. All we can do is observe that the results of what our brains do happen to be the solutions to kinematic equations. It does not follow that we can model the process of producing those solutions by kinematic equations. It does not even follow that the process of producing those solutions bears any resemblance to what we do when we do math to find them.

Here is an analogy: we can observe that the motions of objects obey the principle of least action [1] and that to compute the action we have to integrate the Lagrangian. It does not follow that there is anything happening in the physical mechanism that causes particles to move that is even remotely analogous to integrating a Lagrangian.
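(Writing the principle out, for concreteness: the trajectory actually followed makes the action, the time integral of the Lagrangian, stationary, which yields the Euler-Lagrange equations of motion:

```latex
S[q] \;=\; \int_{t_1}^{t_2} L\bigl(q(t), \dot{q}(t), t\bigr)\,dt,
\qquad \delta S = 0
\;\;\Longrightarrow\;\;
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}} \;-\; \frac{\partial L}{\partial q} \;=\; 0
```

and yet no particle evaluates that integral, any more than a falling rock solves a differential equation.)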

> When I catch a ball ... I know exactly what I'm doing

No, I don't think you do. If you did, you would be able to describe what you are doing to someone else, and they would be able to reproduce your actions based on that description alone. Alternatively, you would be able to render your knowledge into computer code and build a robot that could do it. But I doubt you can actually do either of those things if your only skill is catching a ball and you are not trained in math.

By way of very stark contrast, I am absolutely terrible at hand-eye coordination tasks, but I can build a machine that is much better at it than I am [2]. Just to be clear, I didn't actually build that particular machine, but I do know how. And so I can tell you that the process of learning how to build a machine that can catch a ball is radically different than the process of learning how to catch a ball yourself.

---

[1] https://en.wikipedia.org/wiki/Stationary-action_principle

[2] https://www.youtube.com/watch?v=FycDx69px8U


Sorry for the lag. Productive day yesterday and today my friendly neighbourhood rock band was in a great mood early in the bloody morning.

>> No, we can't even do that. (...)

OK well I'm very confused. I thought our disagreement was on whether our brains actually calculate actual kinematic equations, or just the same results by some other means. It feels to me like we're arguing the same corner but we don't have a common language.

>> No, I don't think you do. (...)

"I can't put my finger on it, but I know it when I see it". My claim is that there is a difference between tacit knowledge and articulable knowledge. I cannot articulate the knowledge I have of how I catch a ball; but I certainly know how I catch a ball, otherwise I wouldn't be able to do it. In machine learning, we replace explicit, articulable knowledge with examples that represent our tacit knowledge. I might not be able to manually define the relation between a set of pixels and a class of objects that might be found in a picture, but I can point to a picture that includes an image of a certain class and label it with the class. And so can everyone else, and that's how we get tons of labelled examples to train image classifiers with, without having to know how to hand-code an image classifier.
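To make that concrete (a toy sketch of my own: 1-nearest-neighbour standing in for a real image classifier, with made-up 2-D "images" in place of pixel arrays):

```python
# The classifier never receives an explicit rule relating features to
# classes; all it gets is labelled examples - our tacit knowledge made
# concrete, one pointed-at-and-labelled instance at a time.
import math

def nearest_neighbour(labelled_examples, query):
    """labelled_examples: list of (feature_vector, label) pairs.
    Returns the label of the example closest to the query vector."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    _, label = min(labelled_examples, key=lambda ex: dist(ex[0], query))
    return label

# Toy examples someone labelled by pointing, not by stating a rule.
examples = [((0.0, 0.0), "cat"), ((0.1, 0.2), "cat"),
            ((1.0, 1.0), "dog"), ((0.9, 1.1), "dog")]
print(nearest_neighbour(examples, (0.2, 0.1)))   # -> cat
print(nearest_neighbour(examples, (0.8, 0.9)))   # -> dog
```

Nobody ever told the classifier what a "cat" is; the definition lives entirely in the examples.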

Here's a little thing I'm working on. Assume that, in order to learn any concept, we need two things: some inductive bias, i.e. background knowledge of the relevant concepts; and "forward knowledge" of the target concept. In statistical machine learning the inductive bias comes in the form of neural net architectures, function kernels, Bayesian priors etc. and the knowledge of a target concept comes in the form of labelled examples. Now, there are four learning settings; tabulating:

  Background    Target      Error
  ----------    --------    -----
  Known         Known       Low
  Known         Unknown     Moderate
  Unknown       Known       Moderate
  Unknown       Unknown     High

Where "Error" is the error of a learned hypothesis with respect to the target theory. In the first setting, where we have knowledge of both the background and the target, and the error is low, we're not even learning anything: just calculating. We can equally well match the first three settings to deductive, inductive, and abductive reasoning. You can also replace "known" and "unknown" with "certain" and "uncertain".
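In code-ish form (just my own restatement of the table above, nothing more):

```python
# (background_known, target_known) -> (reasoning mode, expected error)
settings = {
    (True,  True):  ("deduction (mere calculation)", "low"),
    (True,  False): ("induction",                    "moderate"),
    (False, True):  ("abduction",                    "moderate"),
    (False, False): ("open scientific problem",      "high"),
}
```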

Now, I'd say that the invention of kinematic equations by which we can model the way we move our hands to catch balls etc is in the setting where the background theory and the target are both known: the background being our theory of mathematics, and the target being some observations about the behaviour of humans catching balls. I don't know if the kinematic equations you speak of were really derived from such observations, but they could have been. Humans are very good at modelling the world in this way.

We're in deep trouble when we're in the last setting, where we have no idea of the right background theory nor the target theory. And that's not a problem solved by machine learning. We only make progress in that kind of problem very slowly, with the scientific method, and it can take us thousands of years, during which we're stuck with bad models. For 15 centuries, the model is epicycles, until we have the laws of planetary motion and universal gravitation. And, suddenly, there are no more epicycles.

This also addresses your earlier comment about betting against a scientific upheaval in the science of computation.

Cool machine, btw, in that video. So you're a roboticist? I work on machine learning of autonomous behaviour for mobile robotics.


> Sorry for the lag.

No worries.

> It feels to me like we're arguing the same corner but we don't have a common language.

That's possible. It's actually a deep philosophical question. Do planets "solve Newton's equations of motion" when they move? On the one hand, they move in ways that correspond to solutions to those equations, and so one could say that they "find solutions" to those equations. On the other hand, the process by which they do this is pretty clearly radically different than what a mathematician does when they solve equations.

> So you're a roboticist?

I used to be. I've been out of the field for over 20 years now. But back in the day I was pretty well known.


>> That's possible. It's actually a deep philosophical question. Do planets "solve Newton's equations of motion" when they move?

Yes, that's an interesting question- that I'm really not equipped to answer. Probably for the best.

>> I used to be. I've been out of the field for over 20 years now. But back in the day I was pretty well known.

I'm really new to the field so I don't know your work. In fact I wouldn't even say I am in the field as such. An academic sibling suggested I take a postdoc job and now I'm collaborating with roboticists. I'm just working on autonomous behaviour- I'm not allowed near hardware.

It's an interesting field, although I have to constantly be on my toes to avoid violating my principles. See, I'm a peacenik, but it seems with the work I do, as soon as I get that stuff working, someone will want to put it on a drone, strap a gun on its back and send it to kill people. And I'm dead set against that sort of thing.

I had a quick look at your site and you've worked with NASA. Respect! We can send autonomous rovers to explore far away planets and people want to keep them here wreaking havoc and death. Unbelievable.

Do you have any pointers to your work? Something you are really proud of that you did in the past? I'm curious.


This is what I was mainly known for:

https://en.wikipedia.org/wiki/ATLANTIS_architecture

https://flownet.com/gat/papers/tla.pdf

I think I'm most proud of this:

https://link.springer.com/article/10.1007/BF00710855

https://flownet.com/gat/papers/tpesamr.pdf

though it didn't make nearly as much of a splash.

I was also the tech lead on the New Millennium Deep Space One Remote Agent Executive, which sounds cool, but was really kind of a disaster. See:

https://www.youtube.com/watch?v=_gZK0tW8EhQ

if you want the gory details.


Cheers! I'll have a look :)



