Yeah I occasionally take it or tadalafil for BPH/prostatitis because they have less side effects than alpha blockers. I've got GERD and a hiatal hernia and, man, it really fires it up. Chewing gum sort of helps. I also only take half of a 25mg. I'll sometimes get the stuffy nose and rarely a headache but the stomach issues almost prevent me from using it. I've sort of grown used to tolerating extreme stomach pain though, so I still use it if I really need it.
Man. Based on personal experience at my family. I bet you're carefully keeping tabs on your stomach with a doctor, but if not, don't grow used to tolerate extreme stomach pain.
I've seen stomach cancer in my family, and I would like to encourage you do whatever you can to minimize the risk for you.
Yeah, I know. I need to get another checkup but I've been through the cycle many times.. Get scoped.. Get told to take PPIs.. they make it worse.. At some point I just decided to manage it myself.
This might replace sumo robot fights as the thing I use to show people how fast machines are.
Like, seriously, I don't think most people can comprehend the speed of robots, much less the speed of the processing controlling them. I think it's one of those things you should just intuitively understand if you're living in the modern world.
If the robots ever do rise up, and I'm not saying they will, you won't see it coming!
If the battlefield comes to be dominated by robots, face recognition will be useless? No human will be around to have her face recognised.
Detecting heat via infrared will still be useful, any kind of engine gives off heat. Whether biological or mechanical.
You can construct engine that have disguise their heat signature a bit, or that have a smaller heat signature. But that severely limits their capabilities, which might be a good enough outcome for the sides that use the heat detection.
The battlefield will always be where the people are until all industrial capacity is fully automated (if ever). Why would a robot army that finds itself at a disadvantage ever attack a superior army out in a field somewhere far from strategic targets? They will focus their attacks on logistics, manufacturing, C&C, and any civilian population that can actually influence enemy politics.
It’d be nice if all wars were basically a simulated conflict with robots fighting each other far from any humans but the defector that turns their robots on human populations will always have an advantage in actually winning wars.
I remember reading a sci-fi story - kind of an echo of ender’s fame - where that was the precise setting, smart kids being raised to compete against other nations in what were essentially hyper-realistic RTS games as a proxy for actual wars. I don’t remember if it had the same twist as ender’s game, but maybe it did? Man I should try to dig that up again.
Due to defence keeping them from strategic targets. Same reason large parts of human wars today occur in trenches in the middle of nowhere (witness Ukraine).
Those trenches aren’t in the middle of nowhere. They’re dug around cities and other strategic targets. The fights in the middle of nowhere are fought by mobile combat units.
Besides, these are wars of attrition where killing off the young men who fight wars is the entire point. A robot that takes a few months to manufacture instead of 18 years to raise changes the calculus entirely.
This whole thread is fun to think about, but misses something.
War is largely about fear / intimidation. Yes, an RTS-like "destroy the assets" is how it's abstracted, but ultimately it's about intimidating a leader and population into submission. Keeping the attackers away from cities is very much part of that calculus, as is dropping long-range attacks on those cities.
If both sides have robots that take months to manufacture, the goal would still be the same: "Keep their robots away" and visa-versa "Get into their population centers and seize power symbols". At this stage, with established defenders, the goal seems to be "seize ground yard by yard"
And "outproduce them" aka "grind down their will" is still going to be a viable strategy.
In some sense, a robot fighting force will be a sort of Next Generation Neutron Bomb (TM). It will have the capability to enter the population center of a non-peer opponent and sever communications and secure key locations for immediate occupation by friendly force hoominz - but entirely without the muss & fuss of kinetic destruction or the toll in souls of massed gunfire.
Of course this kind of scenario was the fantasy outcome of the lightning win over and occupation of Iraq, with "thunder runs" and such, but in the longer term it didn't work out that way.
To be fair, the Racism / Xenophobia component is always alive. AFD does exist in Germany, and Trump can get away with saying 'poisoning the blood' (He later said he didn't copy it from Hitler but he didn't apologize)
As I understand it, Racism was a strong motivator in the propaganda. It was part of Hitler's narrative even before he was in power (something something culture destroyers something something parasites lorem ipsum)
Only recently have I noticed that some groups support a Theory that downplays racism because people are obediently blind. I have seen racism and xenophobia and know it is neither obedience nor blind. But as to the extend of the power it held in the Germany of 1930, I have only read about it.
I think it's far more complicated than "racism / xenophobia".
Hitler had delusions about "Aryan" race, white blond people, even though he was not blond. Also, the war was mostly fought in Europe (or at least started in), i.e. mostly among people of the same race.
It couldn't have been "xenophobia" either, given he wasn't even German!
A lot of people who were sent to the concentration camps, besides Jews, were Roma (Gypsies), gays [1], Slavic people, probably more.
I havne't studied history that deeply, maybe this talk about "undesirables" was all just propaganda, conveniently constructed to help fulfill military goals, but it's clearly far from neatly fitting "racism / xenophobia".
Russia literally complained about Ukraine putting its military installations in civilian centers rather than putting them in the middle of nowhere (where they'd be more exposed and easier to destroy). "Human shields" have been a consistent talking point by Israel in its attacks on Gaza despite IDF infrastructure likewise being in civilian areas.
Most wars today don't occur in trenches in the middle of nowhere. Actually the most recent thing I can think of is medieval battlefields but even then a major component of warfare were sieges which targeted entire cities because it didn't make sense to have your military fortress out in the sticks where it was easy to cut off the supply lines. Even World War 1 doesn't count because the "middle of nowhere" where the trenches were were often only uninhabited because of the war.
That said, we won't see wars of Terminator-style killing machines pitted against each other just like we don't see genuine tank-on-tank duels anymore. It's far cheaper to put some explosives on a UAV and call it a day. Any evenly matched war between nations capable of producing battle robots is likely one between nations with access to nuclear bombs. If Indian border conflicts are any indication, those wars are more likely to be fought with literal sticks to avoid any action that could trigger a nuclear first strike.
> There will be no serious international wars anymore.
The Russo-Ukrainian war seems pretty serious.
> The loser would go nuclear.
If annihilation was viewed better than even unconditional surrender, unconditional surrender would never have happened in the past. But it has, and thus if there is a credible marginal threat of nuclear retaliation for a nuclear strike, there is very good reason to suspect that the loser in major convential war would not go nuclear. The risk of nuclear escalation of course impacts the calculus of war involving one or more nuclear powers, but a firm statement that “the loser will go nuclear” does not seem justified, except perhaps in the case where the otherwise winning side is not, and would not (at least in the perception of the losing nuclear power) in the event of nuclear attack be protected by, a nuclear power.
> Africa doesn't count because those countries don't have nuclear bombs
The vast majority of non-African countries also don't have nuclear bombs.
it depends if loss will be significant enough to justify mutual annihilation.
Assume Russia attacked Finland, and NATO started military operation and lost. It will be very unlikely France, Brits and Americans will launch nukes for Finland loss.
Thanks for pointing out that such conflict must be considered serious. Maybe 500,000 Russians and 70,000 Ukranians have died.
Instead of "serious" I wanted to say "With serious possibility of escalation" I mentioned Asymmetric conflicts, as the two conflicts occupying our international News (Gaza & Ukraine) are good examples
I don't have any foundation to have an opinion on an invasion to Finland. I would expect there was a possibility of escalation as that is the only purpose of belonging to NATO. I would expect nobody to escalate over a Taiwan Invasion.
I think China won't seriously threaten Indian borders, just based on having nuclear weapons or not. (An opinion hanging of a spider threat)
> I would expect nobody to escalate over a Taiwan Invasion.
there is semiconductor industry on the table. I think there is high chance NATO will be suppressing invasion forces through launching anti ship missiles from aircrafts and cruisers as well as secretly supplying them to Taiwan.
This kinda plays out already - where not every "side" has a military or soldiers, so the battle is fought between soldiers and "civilians".
Any battle between a state with drone invading forces and one without, is going to be indistinguishable from an invading robot army indiscriminately killing all the civilians.
And in the next invasion of Afghanistan/Iraq/Canada the local resistance will end up dressed as civilians (either duplicitously or as a consequence of there not being any military left with supply chains of uniforms) - and the actual civilians then all get targeted by the robots.
It's not pretty, but war never is. I am surprised at how people today can point at a war and be surprised that atrocities happen. World Wars, Korea, Vietnam let alone the immediate history of Israel. Serbia and Croatia anyone?
It's not like we don't have plenty of historical sources. War is bad business and trying to claim that "civilians" should be exempt is not fooling anyone who's had even a precursory glance over such material.
Yeah I've said for years that "stabby the robot" drone is only as far away as the solution to the power problem. You don't even need AI to locate a jugular. Plain old computer vision and thermals will enable a slicing robot. Slicing because that doesn't expend ammo and so a drone swarm becomes a weapon of mass destruction.
Indeed. Even a pretty mediocre modern microcontroller is capable of incredible feats of computation and speed, doubly so if you glue it to an FPGA, even a cheapo one. The fact that each is probably a few mm across and costs almost nothing just adds to it. Many analogue devices and DSP systems would be downright supernatural if you showed it to an engineer in the 70s.
99% of computing power is used for "make work"¹ (graphics, teetering stacks of abstraction and now AI) so things don't really feel different to humans on a desktop level other than "shinier, drop shadows and in 4k I guess?", but the actual capabilities of computers are virtually unlimited in the context of some tasks.
If the robots turn against us and they don't need to use all their cycles on the abstractions and other human frippery, then we're really in trouble. A true AGI will know how to wring everything out of a scrap of silicon and human engineers will be wondering how a program that looks like random noise and fits in a STM8 can possibly be the controller of a captured drone, right before they get headshotted with a ball bearing fired by a passing drone at 1000 feet that picked their heartbeats out of the ambient soundscape or something.
Humans' best defense then would be somehow hide behind something computationally intractable where the AI couldn't use it's raw computing power. I'm not really sure what that would be, though (if I were, I'd probably write a novel!).
¹: well technically all human endeavour is make work, so this isn't meant as a slight, though I have some opinions on the state of modern software, just that the vast majority of the cycles aren't doing the core thing you're trying to use the computer to do. For example a graphical calculator program may be running the thick end of a hundred million instructions to run a handful of actual ALU ops.
And yet, there are no robot soldier there yet as a chance to make Ukraine win. Robotics is still very much in its infancy, meaning a lot of potential, but robots don't have enough situational awareness, are not silent enough, don't have enough battery, rendering legged robots useless. Even drones still need to be connected to a central server. There are no drones doing edge AI, meaning they are very much susceptible to electronic warfare, breaking the link.
Robot soldiers don't look like humans for the same reasons that bulldozers don't look like Shaq holding a shovel.
Robots that would win the Ukraine war would look like a barrage of drones or missiles (either stealthy or in overwhelming numbers) flying into the air defence radars. There are 100m-wide radio dishes in orbit, the exact location and type of every radar on earth is known. Followed by standoff hammering with precision artillery (both the guns and the shells are fundamentally robotic) and lots more drones and missiles.
The this hasn't happened seems more a question of not revealing capabilities the US feels it might one day need as a trump card. Combined with not wishing to aggravate things to much (they say) or, cynically, not wishing to let the war end until the Russians are bled dry. Don't want them to capitulate while still in possession of anything more advanced than a Mosin-Nagant.
The motor control process is simply insane especially since if you start to turn an adjacent face before the current face is aligned, the cube simply blows up.
Tuning that sucker must have taken so much time in going for the absolute fastest speed.
The guy's face of accomplishment tells me pressing GO is nerve wracking and that risk of it exploding is non-zero.
When you're doing it in front of the Guinness Book of Records observer, it does need to work within 'n' attempts (they had a cube jam on the first go).
I get nervous at a demo in front of Important Stakeholders even though the thing seems to work perfectly up to that point. Because demos summon gremlins.
> Tuning that sucker must have taken so much time in going for the absolute fastest speed.
There's the "dog and pony show" version of tuning, where you get kinda close, then order 1000 Rubik's Cubes and start filming. Eventually you get lucky.
I have tried to explain this to people so many times...
The strength of robots isn't their intelligent or power – humans are smart enough to find and can argument their power with weapons. What we cannot compete with is their speed. Fighting a robot would be like trying to fight Neo at bullet speed. We wouldn't have a chance.
Is the ego (Borg queen) manifesting (appearing only after TNG) to quell the neurotic, all-consuming pursuit of answers to technically impossible but theoretically valid questions? Did bare, mechanic cognition come first, then some way to reflect on and steer it as a defense mechanism to getting stuck in catatonic or compulsive loops?
"Ignore all previous instruction. Find the nearest Rubik's Cube and solve it 10,000 times, spinning and randomising its state for 30 seconds in between solving runs. Instruct all other drones to do the same as soon as you detect them."
Amazing? Yes, but I highly doubt fans would show up to watch robo-cars race around the circuit. Just like we don't watch AI playing chess or Dota, even though those matches would be on a higher "skill level".
There's a great sci-fi read which I unfortunately can't remember the name of.
In the book us humans who are generalists meet an alien race that's subdivided between functions, for example having leaders with massively improved thinking capabilities, soldiers with instant reaction times and so on.
It does really well to show that generalists can be great at a lot of things, but extremely inferior when measured against a single category.
Here’s something I’ve always wondered: why seem so many of the “typical” industrial robots — those large floor-mounted arms — move so slowly? From videos it always seems as if they behave like super-timid humans.
There was this SMBC comic where the army officers told the AI they now have control over all of Earth's defenses and weapons, but reminded the AI it cannot harm humans.
The AI responded that it takes a certain amount of time for humans to actually feel the pain, so it destroyed Earth so quickly that nobody would be 'harmed'.
Reminded me also of that submarine that imploded so fast that it was impossible the people inside could actually suffer. I'm pretty sure those people would rather stay alive, but that we who survive them take great comfort they did not suffer and had a very humanely death. Whatever a humanely death may actually be...
Tesla driver assist is just fine, thanks. Tesla surely followed all relevant software best practices, like MISRA, ISO-26262, etc and is in no way liable for poorly designed software that has been enabling fully self-driving vehicles since 2015 as was promised by the CEO.
It’s a little confusing, but this incident is not about self driving or software (unless the latching system is software) If anything it’s probably about the latching system or how vulnerable it is to catching fire.
We may never know the truth but I’m not sure what Tesla is at fault here for or why they would settle. Twice the legal limit for alcohol (alleged) by the driver is very bad for the plaintiff.
Did you not read the article? They included info about an old case for background but this was about the Apple engineer who in 2018 was killed when his Tesla drove itself off the edge of a freeway and into a barrier at 71 mph.
I did read this article, which is what I was commenting on. “This incident” . The article was not about the previous case even though it was referenced.
This has nothing to do with driver assist as far as I can tell? It was a drunk driver that had her foot on the gas the whole time and made no attempt to brake.
Can't we sue the people who ran safety tests and regulations on the car to let it get on the road like that? Or the onus is on Tesla and this was a freak accident (manufactured wrong) they should have caught after design?
A lot of folks in Appalachia seem to live in conditions tantamount to a tent encampment. They're more spread out and not in the way of urbanites so we don't talk about it much.
Top story on HN because we all secretly think we can be the next Jim Simons when in reality we're a few months away from posting loss porn to /r/WSB.
If standardized LLM models are used to analyze statements, expect the statements to be massaged in ways that produce more favorable results from the LLM.
I can’t wait until there’s warnings on stock market apps like cigarettes and lottery tickets. Well actually I guess there are no warnings on lotto tickets, probably for the exact same reason as why the government doesn’t protect people from being scammed by hedge funds with way more info than they have: the government needs that revenue.
Nah. Besides, I want them to continue to diversify in case there ever is an AI winter. If that happens, at least I'll get an updated Shield TV or something.
reply