
>> Let's talk about that in terms of a concrete example: the big inductive bias of CNNs for vision problems is that CNNs essentially presuppose that the model should be translation-invariant. This works great — speeds up training and makes it more stable – until it doesn't and that inductive bias starts limiting your performance, which is in the large-data limit.

I don't know about that, I'll be honest. Do you have a reference? I suspect it won't disagree with what I'm saying: that neural nets just can't use a strong enough inductive bias to avoid overfitting. I didn't say that in so many words above, but that's the point of having a good inductive bias: you're not left, as a learner, at the mercy of the data.
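
To make the quoted CNN example concrete, here's a tiny numpy/scipy sketch (my own illustration, not from any particular paper) of what that bias looks like at the layer level: a convolution commutes with shifting the input, so the layer doesn't have to relearn the same feature at every position.

    import numpy as np
    from scipy.signal import correlate2d

    rng = np.random.default_rng(0)
    image = rng.normal(size=(16, 16))
    kernel = rng.normal(size=(3, 3))

    def shift_down(x, k=2):
        # circular shift along the first axis
        return np.roll(x, k, axis=0)

    # convolve-then-shift vs shift-then-convolve (wrap boundary so the shift is exact)
    a = shift_down(correlate2d(image, kernel, mode="same", boundary="wrap"))
    b = correlate2d(shift_down(image), kernel, mode="same", boundary="wrap")

    print(np.allclose(a, b))  # True: the conv layer commutes with translation

That weight sharing is the bias: far fewer free parameters than a fully connected layer, but also an assumption the data actually has to satisfy.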

>> Someone who comes at things from the perspective of mathematical logic is going to find that worldview very weird, I suspect.

No, that's absolutely a standard assumption in logic :) Think of grammars: as Chomsky likes to say, human language "makes infinite use of finite means" (quoting Wilhelm von Humboldt). Chomsky, of course, believes that human language is the result of a simple set of rules, very much like a logical theory. Personally, I have no idea, but Chomsky consistently, even today, pisses off both the linguists and the machine learning people, so he must be doing something right.
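
To illustrate the "finite means" point in code: a handful of rewrite rules licenses an unbounded set of sentences, because the rules can apply recursively. A throwaway sketch (the grammar and vocabulary here are made up, purely for illustration):

    import random

    # A toy context-free grammar: a finite set of rules, unboundedly many strings.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion lives here
        "VP": [["V", "NP"], ["V"]],
        "N":  [["cat"], ["dog"], ["linguist"]],
        "V":  [["saw"], ["chased"], ["annoyed"]],
    }

    def generate(symbol="S"):
        if symbol not in GRAMMAR:  # terminal symbol, emit it as-is
            return [symbol]
        expansion = random.choice(GRAMMAR[symbol])
        return [word for part in expansion for word in generate(part)]

    for _ in range(3):
        print(" ".join(generate()))

A dozen productions, but no longest sentence: the recursion in NP is where the "infinite use" comes from.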

Btw, I'm not coming only from the perspective of mathematical logic. It's complicated, but, for example, my MSc was in data science and my PhD in a symbolic form of machine learning. See, learning and logic, or learning and reasoning, are not incompatible; they're fundamentally the same thing.



