
> that's more of "having no sense of other people's privacy"

Sufficiently advanced incompetence is indistinguishable from malice.


It's not particularly advanced; it's the same thing that means the supermajority of websites have opted for "click here to consent to our 1200 partners processing everything you do on our website" rather than "why do we need 1200 partners anyway?"

It's still bad, don't get me wrong, it's just something I can distinguish.


I think those websites actually have only one partner, one of the tiny oligopoly of advertisement brokers. That partner (*cough*Google*cough*), in turn, bows to the fig leaf of user consent via those interminable dialogs. So the site owners' question should probably be "Why do we need to partner with this behemoth that shackles us to 1200 'partners'?"

If it fools billions of people and does significant damage to their lives, then it's plenty advanced to me, even if it happens through a simpler or more savant-like process than something that looks obviously deliberate.

I don't think the cookies thing is a good example. That's passive incompetence, to avoid the work of changing their business models. Altman actively does more work to erode people's rights.

> It's still bad, don't get me wrong, it's just something I can distinguish.

Can you? Plausible deniability is one of the first things in any malicious actor's playbook. "I meant well…" If there's no way to know, then you can only assess the pattern of behavior.

But realistically, nobody sapient accidentally spends multiple years building elaborate systems for laundering other people's IP, privacy, and likeness, and accidentally continues when they are made aware of the harms and explicitly asked multiple times to stop…


In the USA, you mean.

Incidentally, this dialogue works equally well, if not better, with David Chalmers versus B.F. Skinner, as with the Simpsons characters.


> New function attribute null_terminated_string_arg(PARAM_IDX) for indicating parameters that are expected to be null-terminated strings.

Why is this not a type attribute?


Because most of the time the parameter is going to be a plain char*.


They’re asking why it’s not an attribute on the type instead of on the function. I don’t believe your answer explains that unless I’m overlooking some obvious implication of your statement.


For reasons that are too hairy to go into here, C doesn't "really" have a string type (yeah, yeah, pedants, I know about fixed strings etc.).

It just has arrays of char, which you pinky promise will end in a \0 for things that expect strings.

By declaring that yes, this function really must have that terminating \0, a sufficiently smart compiler can statically flag some errant uses of functions expecting terminated strings. I haven't looked into this new feature, but I assume that's what it's doing.

If you mean why the "__attribute__" syntax isn't declaring such a thing adjacent to the function, the answer is that this allows for shoving the extended syntax into standard C in a mostly backwards-compatible way.
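
For illustration, a minimal sketch of how I'd expect it to be used (the attribute name and index form are from the GCC 14 release notes; the function, the unterminated buffer, and the analyzer warning are my assumptions):

    /* Argument 1 is expected to point to a null-terminated string. */
    __attribute__((null_terminated_string_arg(1)))
    size_t my_strlen(const char *s);

    void example(void)
    {
        char buf[3] = { 'a', 'b', 'c' };  /* no terminating '\0' */
        my_strlen(buf);                   /* -fanalyzer could flag this */
    }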


No, I’m asking why you can’t apply this attribute to types within a function, to verify that, for example:

    __attribute__((null_terminated)) const char* x = strcpy(maybe_not_null_terminated, "some string");
is a warning. Similarly, if it’s a type attribute then you’d apply it to the argument itself instead of needing to specify it on the function plus an IDX parameter:

    void do_something(__attribute__((null_terminated)) char* f);
    
The newly introduced strub attribute works this way, so it’s unclear why a function attribute was chosen for this attribute instead of a type attribute.


Oh, I see: you want to annotate the parameter in the type position, so the annotation acts as some sort of qualifier.


Wow, so the multi-hour package conflict resolution I am going through RIGHT NOW on my 2004 laptop is completely pointless? Good to know.


You'd be better off switching to a non-Linux system on that hardware. Ideally something that uses 64-bit offsets and time_t.
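
If you want to check what a given system uses, a trivial probe (standard C plus POSIX off_t; nothing else assumed):

    #include <stdio.h>
    #include <sys/types.h>
    #include <time.h>

    int main(void)
    {
        /* 4 bytes means 32-bit (Y2038-vulnerable time_t, 2 GiB file
           offsets); 8 bytes means 64-bit. */
        printf("time_t: %zu bytes, off_t: %zu bytes\n",
               sizeof(time_t), sizeof(off_t));
        return 0;
    }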


No, they were always rather poor. It’s you who’s matured enough to notice.


Yes, rather poor, but people can always post new answers, and votes sort the answers. It might not work all that well, but there is a mechanism for improvement and for keeping things up to date.

Language models can copy the top answers from SO, ingest docs and specs, etc. And then the information is never updated? Or are they going to train it from scratch? On what? Outdated GitHub saved games?


Politicians are tools as well. In more than one sense.


That’s pretty generous.

https://qntm.org/clean


No, you don’t. Seven, plus or minus two at best.


I think humans have better general recall whilst lacking any kind of precision. After reading an entire book, I definitely can’t replicate much (if any) of the precise wording of it, but given a reasonably improbable sentence I can probably tell with certainty that it didn’t appear. LLMs are probably much more prone to believing they’ve read things that aren’t there and don’t even pass a basic sanity check, no matter how long the context window.


Only as inevitable as the dearth of interpolation/parametrized query primitives… though whether the industry has actually learnt the bitter lessons of SQL injection remains to be seen. I don’t hold out much hope.
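
For contrast, the SQL lesson in miniature: a sketch with SQLite's C API (the function and schema are made up), where user input is bound as data rather than spliced into the query text:

    #include <sqlite3.h>

    /* Hypothetical lookup: the '?' placeholder keeps user_input as a
       value, never as SQL syntax. */
    int find_user(sqlite3 *db, const char *user_input)
    {
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT id FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
        int rc = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return rc;
    }

Prompt injection has no equivalent of that bind step yet, which is exactly the worry.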


You can just bypass the injection risk entirely by hardcoding the values, as this example demonstrates:

https://news.ycombinator.com/item?id=40246089

(I'm being sarcastic, obviously. You are 100% right)

