
I think - like a lot of media reporting on the space - this overgeneralizes (heh) artificial intelligence. The predictive aspects of ML have been in use in modern militaries for _decades_, and the opening graf hand-wavingly implies that an LLM was a major part of the perceived intelligence failure around the October 7 attack.

That an LLM is part of a system that includes a large amount of ML is not surprising; it's a great human interface. Do I believe for a second that it played a much larger role, that it was responsible in any non-negligible way for missing the attack? Of course not.

My point here is that ML continues to play a role, ML continues to both succeed and fail, and ML will continue to be imperfect, even more so as it competes against adversarial ML. Blaming imperfect tools for inevitable failures is not a useful exercise, and certainly not a "problem" when the alternative is even more failure-prone humans.




Part of the ongoing confusion, in my opinion, is that we as an industry leaned fully into calling LLMs artificial intelligence.

The phrase AI carries much more weight than we give it credit for, and using the term for LLMs cheapens it.

The average person hears AI and expects much more than an algorithm that attempts to predict and mimic the human written word, no matter how clever or impressive it is.

As an industry we seem to have agreed to call the next round of machine learning algorithms "artificial intelligence" because it sells better and raises a hell of a lot of funding. What does that do to the very real safety, moral, and ethical questions that need to be asked before we actually create an AI?


Are you unaware that the field has been called AI for decades?


Language models weren't considered "AI" until very recently.

Research, really theory, in the area of AI has been around for decades, but it focused on artificial intelligence rather than on how to weight and compress massive amounts of written language to be used by a text-prediction algorithm.


Natural Language Processing is a long-standing area of research in the field and, though it hasn't always been based on ANNs, ANNs have themselves also long been considered AI regardless of application.


My understanding has always been that language processing, language models, etc. have long been considered a necessary prerequisite to AI and research was often done as part of the AI field but was never itself considered AI in isolation.

Calling LLMs artificial intelligence is either (a) cheapening the meaning of intelligence, (2) embellishment for the sake of fundraising, or (d) a subtle acknowledgement of vastly more powerful systems behind the LLM tools than are currently being publicly described.


> (a) cheapening the meaning of intelligence

It's hard to cheapen it more than perceptrons and expert systems. The lay impression of artificial intelligence may be all Skynet and C-3PO, but AGI isn't really even a goal of most AI research, let alone representative of the current state of the art.


What is your definition of artificial intelligence?

OpenAI has an explicit goal of developing AGI for the "greater good," whatever that means. If LLMs are indeed AI, as many assume, then OpenAI would fall squarely in the space of AI research that represents the current state of the art.



> (a) […] (2) […] (d) […]

Looks like the kind of error a low-parameter LLM would make


More like an error Buzz McCallister would make when describing how boring the street he lives on is.


My read is they're complaining about the conflation of LLMs with AI in general.


Blaming the excessively grand claims that were made for those tools, however, is absolutely a useful exercise.


But grand claims made by technologists are nothing new. Granted, I don’t know, I’ve never been in the military, but aren’t people always trying to sell The Next Big Thing to the military? Is it not the responsibility of those in charge to evaluate the capabilities and limitations of new systems being integrated into their forces? If someone said “we don’t need the rigor we used to have anymore, we have AI,” I see that as a failure of the org, not an indictment of the claims being put forth by boosters.

Corporate Decision Maker #2, sure, they’ll get hoodwinked. They and their company may have only 50 years of experience and institutional memory to draw on. But state militaries? What excuse do they have? War changes, but the armed forces have a long memory, and their poor decisions cost lives. Maybe I’m off base, but I would expect each mistake to be an opportunity for that industry to learn. The industry has had plenty of lessons learned over the past 100 years. Why is the latest hype cycle to blame, and not those whose job it is to ensure they maintain capabilities and extensively game out scenarios and responses?

Bad bets on tech happen even in institutions with lifetimes of history to draw on, but I see that as a failure of the institution, not of the completely mundane hype cycles that occur naturally.

Obviously mistakes happen, and maybe that’s what the article is getting at. But if we’re going to point fingers (not saying you are), then let’s not let the decision makers off the hook, the ones whose job is to prevent the hot new thing from getting their people killed.


Yes. It is a military maxim that you will lose if you try to fight the next war with the tactics and equipment of the last war. Your future opponents have been studying that war and have invented all kinds of ways to destroy you if you use the same tactics again.

Modern military doctrine can be attributed to the Prussian General Staff that defeated Napoleon III in the Franco-Prussian War. Moltke the Elder was in charge of the Prussian army at the time. Moltke the Elder was a student of Clausewitz, who literally wrote the book on modern strategy. But Clausewitz, when he was in active service, was not some world-beating general. Clausewitz fought for the Prussians during Napoleon’s time and was actually at one point a prisoner of Napoleon. Clausewitz and his boss Scharnhorst spent the rest of their careers developing a scheme to defeat Napoleon’s tactic of massive concentration at a single point. Their work grew into modern combined arms, which Moltke later paired with a logistical backbone of railroads.


Doing so in all seriousness would collectively wipe trillions off the valuations of companies and reduce people’s net worths.


It would also redirect resources towards boring stuff like manufacturing, which actually increases real wealth. But you’re right: so much of our theoretical wealth is in hype, and a lot of people don’t want it brought down to more realistic valuations, and that’s what’s driving this.

But, you can look at the Chinese real estate market for an example of what happens if you try to keep inflating the bubble for too long.



