
The bill only applies to new models which meet these criteria:

(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.

(2) The artificial intelligence model was trained using a quantity of computing power sufficiently large that it could reasonably be expected to have similar or greater performance as an artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024 as assessed using benchmarks commonly used to quantify the general performance of state-of-the-art foundation models.

…and have the following:

“Hazardous capability” means the capability of a covered model to be used to enable any of the following harms in a way that would be significantly more difficult to cause without access to a covered model:

(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

(B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.

(C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.

(D) Other threats to public safety and security that are of comparable severity to the harms described in paragraphs (A) to (C), inclusive.

…in which case the organization creating the model must apply for one of these:

“Limited duty exemption” means an exemption, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.




Pretty much all models, including today's models, already fall foul of the "hazardous capability" clause. These models can be used to craft persuasive emails or blog posts, analyse code for security problems, and so forth. Whether such a thing is done as part of a process that leads to lots of damage depends on the context, not on the model.

So in practice, only the FLOPs criterion matters, which means only giant companies with well-funded legal departments, or large states, can build these models. That increases centralization and control, and makes full model access a scarce resource worth fighting over.
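For a sense of where the 10^26 threshold lands, here is a rough back-of-envelope sketch using the common ~6 × parameters × tokens estimate of training compute. Both that heuristic and the example model/token sizes are my own illustrative assumptions, not anything taken from the bill:

```python
# Rough check against the bill's 10^26 FLOP threshold.
# Uses the common approximation: training FLOPs ≈ 6 * params * tokens.
# The model sizes and token counts below are illustrative guesses, not official figures.

THRESHOLD = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

examples = {
    "7B model, 2T tokens":    (7e9,    2e12),
    "70B model, 15T tokens":  (70e9,   15e12),
    "1.8T model, 13T tokens": (1.8e12, 13e12),
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    status = "covered" if flops > THRESHOLD else "not covered"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

On that estimate, only frontier-scale training runs clear the bar today, which is the point above: in practice the compute criterion, not the capability language, does the real work.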


Really I feel the opposite way: none of today's models, or anything foreseeable, meets the hazardous capability criteria. Some may provide automation, but I don't see any concrete examples where LLMs produce an actual step change in what's possible. The problem is that it's all in the interpretation. I imagine some people will think a 7B model is "dangerous" because it can give a bullet-point list of how to make a bomb (step 1: research explosives) or write a phishing email that sounds like a person wrote it. In reality the bar should be a lot higher: uniquely making something possible that wouldn't otherwise be, with concrete examples of it working or being reasonably likely to work, not just the spectre of targeted emails.

I've actually been thinking there should be a bounty for identifying a real hazardous use of AI. The hard part would be defining "hazardous" (which would hopefully itself spur conversation). On one end I can imagine trivial "hazards" like what we test models with today (such as asking how to build a bomb), and on the other it's easy to see the goalposts shifting, where we keep finding reasons that something which technically meets the hazard criteria isn't really hazardous.


Very similar to what the White House put out [1], in that applicability is based on dual use and size. It is hard not to see this as a push for regulatory capture, specifically an attempt to chill open-source development in favor of a few well-funded, closed-source industry groups that can adhere to these regulations.

A harms-based approach, regardless of which model is used, seems easier to put into practice.

[1] https://www.whitehouse.gov/briefing-room/presidential-action...


I don’t understand how a law can expect someone to foresee and quantify potential future damage. I understand the impetus to hold companies responsible, but that is simply impossible to know.


This sounds entirely reasonable!


640kb should be enough for anyone!



