Cybersecurity Is Broken (crankysec.com)
91 points by obscurette 52 days ago | 79 comments



Back in my aerospace days I worked on an obscure secure operating system, which, unfortunately, was built for the PDP-11 just as the PDP-11 neared end of life. This was when NSA was getting interested in computer security. NSA tried applying the same criteria to computer security they applied to safes and filing cabinets for classified documents. A red team tried to break in. If they succeeded, the vendor got a list of the problems found, and one more chance for an evaluation. On the second time around, if a break-in succeeded, the product was rejected.

Vendors screamed. Loudly. Loudly enough that the evaluation process was moved out of NSA and weakened. It was outsourced to approved commercial labs, and the vendor could keep trying over and over until they passed the test, or wore down the red team. Standards were weakened. There were vendor demands that the highest security levels (including verification down to the hardware level) not even be listed, because they made vendors look bad.

A few systems did pass the NSA tests, but they were obscure and mostly from minor vendors. Honeywell and Prime managed to get systems approved. (It was, for a long time, a joke that the Pentagon's MULTICS system had the budgets of all three services, isolated well enough that they couldn't see each other's budget, but the office of the Secretary of Defense could see all of them.)

What really killed this was that in 1980, DoD was the dominant buyer of computers, and by 1990, the industry was way beyond that.


And still, despite the weakening, hardly anybody passed even the watered down requirements. Large vendors like Microsoft complained that the bidding process was unfair because they were not even allowed to compete just because they could not meet the minimum security requirements. So, the requirements were reduced until they matched the abilities of the vendors.

For operating systems in the early 2000s, this constituted a Common Criteria EAL4 certification according to the Controlled Access Protection Profile (CAPP) [1], which is only appropriate for: "an assumed non-hostile and well-managed user community requiring protection against threats of inadvertent or casual attempts to breach the system security". EAL4 certification has since been viewed as too onerous for vendors, so it was progressively dropped to 2 stacked EAL2 certifications, since clearly 2 * EAL2 = EAL4 (I am only half-joking). Now we are at the point where the requirement is only the lowest level of certification, EAL1, which does not even demand a security analysis. The vendor is only required to Google: Name + Vulnerability (I am not joking this time [2][3]) and show that any vulnerabilities that showed up were patched.

And people wonder why everything is easily hacked. Should be pretty obvious once you see the standards we hold them to.

[1] https://www.commoncriteriaportal.org/files/ppfiles/pp_os_ca_... Page 9

[2] https://www.niap-ccevs.org/MMO/Product/st_vid11349-vr.pdf Page 20 to see the searches used to validate iOS

[3] https://download.microsoft.com/download/6/9/1/69101f35-1373-... Page 14 to see the searches used to validate Windows


Tbh, Common Criteria is basically security theatre. I've been through the process and it's very checkbox driven and not truly design driven.

There is a mutual issue of both Procurement being an onerous shitshow and vendors being lazy about validating and ensuring security.

I have some thoughts about this but that would basically be a book (or an angry presentation at RSAC, Black Hat, DefCon, and Gartner Federal)

Some of the federal PoCs I've been a part of recentlyish (past decade) have returned to the red-teaming methodology that OP mentioned, but it's very Agency dependent.


Were you involved in a Common Criteria certification at EAL5 or higher? Anything below EAL5 is just paperwork.

As stated above, EAL4 is explicitly only intended to certify a system protects against casual and inadvertent attacks. Checkbox security is largely sufficient to meet that standard.

In contrast, it takes actual effort to certify at EAL5 or higher which is why Microsoft and Apple have consistently failed every time they have attempted to do so. Even EAL5 is just the baby steps and at best only comparable to mid-level security like what Multics achieved in the 80s.

You need to target something like the Separation Kernel Protection Profile (SKPP) if you want serious security. That demanded formal specifications, formal proofs of security, and a multi-month NSA penetration test that must fail to find any vulnerabilities. That was the certification process used for the operating system used in the F-22 and F-35, Integrity-178B.


EAL5+ becomes very very very niche and hardware driven.

EAL5+ only really kicks in for cryptographic systems (think SmartCards and whatnot) or very critical RTOSes developed by Green Hills.

The RoI on formal verification just isn't there, and imo it detracts from a lot of more pressing and basic security concerns that can and need to be resolved.

Something like SKPP just doesn't make sense outside of certain niche usecases as you mentioned. Like, you can have a nice formally verified kernel, yet you still need to ingress and egress data. You might retort that you can encrypt, and then I can retort that I can MITM, .....

The cycle goes on and on and doesn't resolve the core fact - nothing can be 100% secure. And Formal Verification, while a useful tool, cannot solve all security issues.

And being friends with several 8200 and similar program alums, trust me when I say formal verification wouldn't solve some of the attacks or vectors they would use.

James Mickens's article in USENIX about realism in security from a couple years ago is a good overview [0]

[0] - https://scholar.harvard.edu/files/mickens/files/thisworldofo...


Okay, so you were not involved in an EAL5 or higher software certification. You did a checkbox-level certification and think that proves the harder certifications are all useless. Sure, if the harder certification were useless then you would have a point. If the Navy SEAL test is easy, boot camp is easy; but that logic does not go the other way. Just because you can pass boot camp does not mean you can be a Navy SEAL.

As to your point about ROI, well duh, we are literally in a post about how cybersecurity is broken because nobody cares about cybersecurity. Of course the ROI is poor if nobody cares. I also never even said that formal verification is a goal. I said that the SKPP is a means of demonstrating a product has serious security as it demands a very stringent process that, as far as we know, is theoretically and empirically verified to protect against highly capable state actors. You could theorize some other means of doing so and then verify those means are effective by empirically testing it against highly capable red teams, but you can not claim those techniques are effective prior to the tests, and the SKPP is a known effective certification mechanism for the level of security needed against state actors.

As to the fact that there is more work to be done than just a secure foundation. Again, another duh. But again, should we rely on processes that have failed to produce secure systems for decades, or should we rely on processes that have demonstrably achieved a level of security that most people believe to be impossible? All the high security processes have ever done is succeed at a small scale. That is clearly inferior to the regular processes that continuously fail at immense scale. I mean, everybody knows how to scale and hardly anybody knows how to make things secure, but clearly scaling is the hard part in this equation.

And frankly, this is all beside the point. We need highly secure systems at scale or we must disconnect safety-critical systems. Saying that we do not know how to be secure at scale does not mean it is magically okay to be insecure at scale. Engineering is about minimum requirements; you do not get to just muddle through and harm people because you want to do things beyond your capacity.


> We need highly secure systems at scale or we must disconnect safety-critical systems

I agree, and safety-critical systems are getting disconnected now, or are in the process of being disconnected.

The biggest issue is disconnecting your Shadow IT environment - basically those systems and environments running without the knowledge of your security, development, or platform teams.

Most attacks we've seen in Utilities and Healthcare environments have directly occurred on those kinds of systems.

A Formally Verified OS is helpful, but would not solve this kind of an Asset Management and Inventory problem

> ROI is poor if nobody cares

Even if I'm Google scale in budget, I need to prioritize vuln patching, compliance needs, misconfiguration prevention, architectural designs, etc.

On the scale of things, a kernel based attack is relatively lower on the scale of actionable vulnerabilities.

> should we rely on processes that have failed to produce secure systems for decades, or should we rely on processes that have demonstrably achieved a level of security that most people believe to be impossible

Let's say you are running formally verified OS that everyone is using. Maybe you've airgapped this system, yet there is a memory based attack that was recently published in the NVD. A formally verified OS would not have solved it.

There is a reason those IBM Mainframes and Green Hills RTOSes with EAL5+ certification are airgapped and access locked down.

Runtime level attacks make up maybe 40% of all vectors of attack - a large but not singular cause.


And yet afaik no Linux is certified higher than EAL4+ and no-one buys them anyway.

Efforts are better placed at isolation and hardening of a standard distro.


> Efforts are better placed at isolation and hardening of a standard distro

Amen! I love the work Chainguard is doing to push that philosophy btw.

> no-one buys them anyway

Well, there is a very niche market for this at a very specific federal agency (not the one you think) but IBM Mainframes and Green Hills are more than enough to meet that market need.


I have no idea what you are even trying to say. In this thread we are talking about how cybersecurity is broken and how certification requirements have been continuously degraded to allow insecure systems to be deployed in inappropriate contexts. This has removed one of the vital incentives for secure systems, having requirements that actually demand and verify security instead of just caving into vendor incompetence. Saying that people use systems known to have poor security because they are not required to use systems with high security supports my point.

It is not even like they were asking for the impossible. High security systems designed and verified to protect against state actors were developed and deployed. Just everybody demanded they be allowed to use cheap garbage because they value their budgets over your security.

Incidentally, we also learned from this process that it is basically impossible to retrofit security onto an insecure design. Any system with an insecure design must be thrown out and redesigned from scratch. Numerous attempts over multiple decades and billions of dollars were spent trying to retrofit security onto existing designs, and literally every single one of them has failed to this day. Attempting to "harden" existing systems to achieve meaningful security is just a fool's errand when there are plenty of examples showing you can develop a secure, general purpose base from scratch, and no examples of anybody ever successfully hardening an existing system.


> I have no idea what you are even trying to say

The person is (correctly) pointing out that Engineering/Technology is only 50% of the cause of a breach.

Security breaches are equal parts technical issues (eg. bugs, misconfigurations) and process failures (eg. requiring multiple VPs to give the go-ahead on upgrading your Artifactory server).

Purely technical solutions will not stop breaches.

The belief that bug free formally verified code for something as complex as a fully functional OS (not an embedded or RTOS) can be developed and scaled out is unrealistic and wouldn't solve plenty of much easier vectors of attack.

I'll let Trail of Bits explain for me (thank goodness they recently published a blogpost about this very topic) [0]

[0] - https://blog.trailofbits.com/2024/03/22/why-fuzzing-over-for...


I have no idea why you think that explains anything. It is just saying that fuzzing is an inferior substitute for formal methods, but much cheaper and thus a more cost-effective option. That is totally divorced from literally any point I have ever made anywhere.

Formal verification is a known mechanism for establishing highly secure systems. It is not the only way. But it is a way that is known to work. A certification process that demands formal verification and practical penetration testing by the NSA is, as far as we know, a highly effective means of identifying a high security system designed to protect against state actors. It may generate false negatives, but it is highly unlikely to generate false positives.

In contrast, extrapolation from processes used by systems that are known to be insecure and security procedures that have never once produced systems that can do things like withstand a practical penetration test by the NSA are highly speculative at best. You must not only do them, you must then generate empirical evidence of security against highly sophisticated and well-funded adversaries, such as practical penetration tests by highly competent red teams, before you can determine if your theories are correct. Anybody who has never achieved success against such adversaries has no business dictating forward looking strategy.

As to your assertion that a highly secure fully functional OS cannot be developed and scaled out. Then you must either believe that it is okay to connect nuclear power plants and medical devices to the internet with insecure systems, or that we must disconnect all safety-critical systems from the internet. We can not have it both ways. We must either solve the problem technologically or socially; the alternative is catastrophe. You can advocate unproven techniques that have failed for everybody who has ever tried them, but I prefer to advocate using techniques that are empirically known to work and building from there, up to more security and down to easier implementation.


Excluding formal verification, most of the stuff you mentioned has already been mandated and pushed for implementation for at least 7-8 years at this point.

The issue is that you still have stragglers who haven't finished implementing these best practices due to organizational or financial issues.

I'm just going to stop responding at this point because I don't think you've ever actually had to chat with the SecOps teams of a Healthcare Group or a Utility, and I seriously doubt you've ever actually worked in the security space.

And I say this as someone who has played around with INTEGRITY - most half decent EECS departments have a license for it and faculty who have worked in the space.

You seem fixated on runtime and kernel attacks, which only make up a small portion of actual attacks.

> Then you must either believe that it is okay to connect nuclear power plants and medical devices to the internet with insecure systems, or that we must disconnect all safety-critical systems from the internet

Wow. You must be the first guy who has thought about this. Give this guy a Turing Award. As if organizations whose infra falls under the safety-critical remit haven't been in the middle of a 10 year process of trying to airgap or decouple environments. Or maybe, just maybe, there are tens of billions if not trillions of embedded computing devices and shadow IT systems that are at varying levels of compliance, and that organizations might not even realize are running. And maybe most industries that aren't software and finance run on single digit profit margins and have very limited cash on hand to manage a migration. And maybe an EMR is different from VE 11.


Oh please. This entire comment chain is about how they do not demand highly certified systems. Literally the only substantial thing I mentioned is the thing they have not mandated or pushed to implement. The only secondary thing I mentioned is that they should use systems that are empirically determined to protect against well-funded attackers, as that is the actual goal. I did not supply a number to that, but I usually use 10 M$ as a bare minimum since we have seen numerous financially motivated attacks coming in around that number already. So, if a 20 person red-team working full-time for a year can not find any vulnerabilities in your system, then you meet that bar. Again, nobody does that either. So, the only two things that I said are not being done.

Then you keep coming back to how their security teams need to focus on easier wins. Yes, they do, but even if they do them all they have still failed to reach the minimum bar. The goal is not "better" it is "adequate". Following all modern best practices and doing everything correctly still leaves you vulnerable to the now commonplace attacks by financially motivated professional criminals. Being unable to protect against the adversaries who are literally attacking you is the definition of inadequate.

Technology is the limiting factor for adequate security deployments. No matter how good your processes are you are still screwed against competent adversaries. That is not to say that technology is the limiting factor for most companies. Most companies also have bad processes. No matter what technology they are given they will screw it up. But even if you do everything (within reason) right and achieve the maximum potential of what is achievable you still have an inadequate system. That is an untenable state of affairs.

As to your random personal attacks. I design and develop actual high security systems in defense applications. People do frequently accuse me of not knowing what I am talking about because what I view as standard processes are what they view as impossible dreams. I do try to tamp it down to be generally applicable even to low security systems.


^^^ this guy is an absolute legend (and his work was the bane of my existence as an Engineer, which made me switch to the business side /s).

This is the guy who created Nagle's algorithm for TCP optimization.

https://en.m.wikipedia.org/wiki/Nagle%27s_algorithm


> You see, cybersecurity is broken because of the lack of consequences. It's really that simple.

To put a slightly more explicit phrasing around the blog's message: Consequences fall on the wrong people. The ones screwing up chasing profit are not the ones feeling the pain.

The damage falls on the innocent people the companies were trying to use as resources.

This can be broadly classed as an economic externality, much like how a company can make money dumping poison into the lake but the people who suffer are the ones who drink from it.


Data hoarding unfortunately has a massive social positive externality, at least in the perception of many government agencies.

Law enforcement love that gmail addresses can generally be tied back to real people. Tax agencies love that most financial transactions are recorded for several years.

Anyone who reads this comment probably believes these positive externalities are overvalued, but it's formidably difficult to make the case to a politician.


Consequences are happening.

People just don't see them because this happens well above the IC pay grade, takes some time to percolate down, and no one wants to publicly announce that you shitcanned 5-10 people in middle management and security leadership, because that puts you in thorny employee litigation territory.

That said, I agree with the author about mismatched expectations, though I can safely say that $500k/year is VERY HIGH for a CISO. I know CISOs for publicly listed F500s who earn around 200-300k at most after 15-20 YoE.

The bigger issue is CISOs, VP Security, and Security ICs are not enabled institutionally.

If I'm honest, most security engineers suck. 90% are crappy IT Admins or Compliance Monkeys who did CISSP and maybe worked for PWC or an MDR for 1-2 years and don't know the difference between NFTables and NTFS. Most CISOs and VP Sec are former security engineers in turn.

Security Engineering NEEDS Security Minded Engineers. Now that development teams own Platform Management and Deployment, they should also be enabled to own Security, and a Security Team of 10x Engineers with a Security background should help with implementation and guidance internally. At least this is the model I've seen in tech forward public companies (some of whom HNers wouldn't even realize are tech first).

I also agree with Scarlett that data protection laws are critical and need to be enforced. That said, it's not enough (it wouldn't protect against a vulnerability disclosure or misconfigured ACLs), and several Security ICs I trust recognize that as well. That said, the tone of the author and a couple of well-intentioned security minded engineers can impact this larger effort. You catch more flies with honey, as they say.

A security minded engineer cannot present this kind of an article to their non-technical leadership, as it opens multiple questions about liability, ownership, and potential incompetence.

> Absolutely no amount of gentle pleas disguised as executive orders from the White House urging people to use memory-safe languages will solve the problem. CISA, despite all the phenomenal work they do, can't charge people who mishandle data with negligence; critical infrastructure involved or not

Amen to that! There's a reason why pushing for cybersecurity insurance might be a good push - hurting the bottom line is a good forcing function for change.

----------

Also, Engineers need to stop being shitty to QA and Platform Engineers.

Treat QA, IT, and DevOps as a first class citizen.

I don't give a rat's ass that you like using Mosh or xyz project on GitHub (not trying to pick on Mosh).

I don't care that you feel restricted by having to use MacOS laptops and SSHing into a CoLo protected behind ZPA when you'd rather use ArchLinux on your work laptop.

Every bug, misconfiguration, or non-standard platform deployment needs to be treated as a potential security liability.

Sure it might slow down the deployment of your "yet another wrapper around an LLM SaaS" but there absolutely needs to be validation.


>If I'm honest, most security engineers suck. 90% are crappy IT Admins or Compliance Monkeys who did CISSP and maybe worked for PWC or an MDR for 1-2 years and don't know the difference between NFTables and NTFS

As a security engineer, I agree. I hang out and work with really skilled people, so sometimes I'm shocked when I work with a client's IT security engineer and they barely know how to use a terminal. Sometimes they don't even have a way (or the skill) to use SSH. Not to mention that I code/script every day, and most "standard" big company security engineers just use ready made tools.

Sorry for the rant.

>I don't give a rat's ass that you like using Mosh or xyz project on GitHub (not trying to pick on Mosh).

>I don't care that you feel restricted by having to use MacOS laptops and SSHing into a CoLo protected behind ZPA when you'd rather use ArchLinux on your work laptop.

I somehow agree with your examples, but not sure if I agree with the overall idea behind your messages (as written). People have different workflows, and forcing everyone into the same mediocre one will just hurt productivity. Of course there need to be standards, but if people feel restricted by having to jump through hoops on unfamiliar operating systems and spend a lot of time and frustration fighting them... Then they're probably right. You should listen and give way more than a rat's ass to engineers' problems.


> if people feel restricted by having to jump through hoops on unfamiliar operating systems and spend a lot of time and frustration fighting them... Then they're probably right

I completely agree with you!

I think all us people in the cybersecurity space are cranky ;)

But it also brings up a good point. I feel that bad User Experience quality is a critical cause of bugs and misconfigurations. And UX isn't just "look pretty" - it's about optimized and simplified workflows.

> most "standard" big company security engineers just use ready made tools

I've worked for vendors and have funded vendors, so I might be biased, but ready-made tools can be helpful.

The issue is if you are using tools without understanding the underlying architecture or design of your platform.

If you're just a script-monkey and only concentrating on the what, security automation is going to take your job away (and is already in the pipeline in the IR world as we speak)

> Sorry for the rant

No worries. You yourself replied to my rant XD


> though I can safely say that $500k/year is VERY HIGH for a CISO. I know CISOs for publicly listed F500s who earn around 200-300k at most after 15-20 YoE.

> If I'm honest, most security engineers suck. 90% are crappy IT Admins or Compliance Monkeys who did CISSP and maybe worked for PWC or an MDR for 1-2 years and don't know the difference between NFTables and NTFS. Most CISOs and VP Sec are former security engineers in turn.

You get what you pay for. Dinosaur companies still can’t stomach paying “nerds” more than their Assistant Regional Managers and it shows. Maybe they’ll be extinct soon.


> Dinosaur companies

I'm curious what are the dinosaur companies in your mind.

Some "dinosaurs" are actually extremely technical and on top of their shit, and some companies HNers/Redditors/LWers love for their supposed engineering chops have been caught with their pants down.

Also the number I gave was for a sexy publicly listed tech company that a lot of HNers try to Leetcode grind to.

> more than their Assistant Regional Managers

In most companies outside of the High Tech and Finance sector, margins are extremely low and even a GM will earn at most $150k and maybe $20-30k stock, despite owning a 9 figure BU.


Come off of it. This guy: https://en.m.wikipedia.org/wiki/Phil_Venables_(computer_scie...

isn’t making only $500k a year in total comp. Nor is an assistant regional manager at say, Target, making only $180k/year.

Quit making things up.


> I don't care that you feel restricted

Yeah well, this is why we don't like security engineers. You absolutely should care that the policies you push for are making workers feel restricted.

For your job to even exist, engineers must be able to produce. Just remember that.


There is a middle ground between keeping users (in this case Engineering) happy and an environment secure.

Ideally, Security Ownership should be taken up by the Application/Dev team with an open understanding that heads roll if you messed up ("ownership"), and a security team and platform team exists to help consult and implement security and deployment.

I guess they call this philosophy "DevSecOps" or "Shift-Left" in the Gartner world.

That said, a lot of "security" practices are pure BS and security theatre.


Security, features, and speed of development all tend to be at odds with each other. And the major problem is: security doesn't gain money. The others do. Security is pure risk management.

So it will almost always get the backburner. Hell even if there are financial consequences: those don't matter if you don't really make money in the first place.


Remember that for an engineer's job to continue to exist there is a need to ensure that the product and systems are secure.


Looks like you disagree with TFA.


Nassim Taleb was right again.


The ideal data protection law would prevent most of the data from being collected in the first place. Cybersecurity, on the other hand, is about protecting what you have collected anyway. So, maybe cybersecurity is broken, but fixing privacy is a great first step.


This is a great summary of the economic problems perpetuating lax cybersecurity and the real political reasons we continue to suffer. The answer is clear, and there is precedent in other similar fields: we need data protection laws with teeth.


This, if done right, would also reduce surveillance capitalism by turning huge troves of personal data into liabilities rather than assets.


I wonder, if any kind of pro-security legislation were proposed, how many lobbying firms from big tech (Google, Amazon, etc) would fight it tooth and nail.

Kind of reminds me of PHK's criticisms of HTTP/2, tho (https://queue.acm.org/detail.cfm?id=2716278 ), where he makes this point:

"The reason HTTP/2.0 does not improve privacy is that the big corporate backers have built their business model on top of the lack of privacy"


> any kind of pro-security legislation were proposed

Most of these kinds of policies are done in coordination with companies. Google and Facebook/Meta are actually massive laggards on the lobbying side.

It's companies like Cisco, Microsoft, PANW, ZScaler, Crowdstrike, etc along with some up and coming startups that partake in this. Some of the stuff they propose is good, some is crap.

That said, no one's an idiot. Most of these kinds of legislation and proposals are a direct result of coordination and cooperation between defense buyers, vendors, and ICs.


> Why the fuck do you need my home address just so I can copy and paste some GIFs? Because you want to sell this data to data brokers, and you know there will be absolutely no negative consequences if you mishandle this data

One might argue that selling or giving away (or even internally abusing) customer data is every bit as bad as having it stolen. As far as I'm concerned, selling my address is a data breach and should be treated as such.

(Obviously, as the article notes, data breaches aren’t really taken seriously.)


It'd be interesting if you basically made it illegal to both process and store user-data. If you want to process a user's information you need to go through that user's storage API and then you need to persist your data back through that API. Since everything is co-located in the cloud I don't think latency would be a huge deal. Users would get a choice of storage vendors - total visibility into who and what is doing the reading/writing and can delete/revoke access at any time.
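
Very roughly, and purely as a hypothetical sketch, the API surface I'm imagining looks something like the C interface below. Every name, type, and semantic detail here is invented for illustration, not any real product's API.

    /* Hypothetical sketch only: a user-controlled storage API surface.
       The service never holds the data itself; it reads and writes through
       a handle the user (or their storage vendor) controls and can revoke. */
    #include <stddef.h>

    typedef struct user_store user_store;  /* opaque handle owned by the user's storage vendor */

    typedef struct {
        /* Read a record the user has granted this service access to; returns bytes read or -1. */
        long (*read)(user_store *s, const char *key, void *buf, size_t len);
        /* Persist derived data back into the user's store; returns 0 on success. */
        int  (*write)(user_store *s, const char *key, const void *buf, size_t len);
        /* Every access is attributable, so the user can audit and revoke it later. */
        void (*log_access)(user_store *s, const char *service, const char *purpose);
    } user_store_api;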


it doesn’t help all governments sponsor and partake in the 0-day trade which undermines efforts of their citizens private sector blue teams. In addition to paying ethical hackers sometimes 1-2% of what they pay 0-day brokers for the same vulnerability.

It’s definitely broken, and as long as the same entities demanding “improved cybersecurity” from its citizens also continue to undermine their efforts nothing will change.

It’s to wrapped up in the military industrial complex, no one’s trying to fix and stop wars when there’s money to be made.


> ransomware attacks that are only profitable because some people just decided that an inefficient distributed database was worth some money

What database tech is this referring to? I’m guessing bitcoin, but the phrasing wasn’t clear to me that it meant the payment method rather than an easily exploitable target database everyone was choosing.


If you can't run a program by telling your computer what you want to run, and what resources you trust it with, and know that it will respect those choices, you're never going to have Cybersecurity.

This is a solved problem for other domains where you have resources you want to safely utilize a portion of.... wallets for cash, circuit breakers for electricity, etc.

We don't need legislation, or banning of "C" to chase the latest hemline of Rust.

Long ago we collectively decided that MULTICS was too complex, and we really didn't need all that security. At the time, it was reasonable, but now... not so much.

We keep reinventing it, then deciding our ersatz version of capabilities is too slow, and make it faster, easier, etc... until it's broken security wise, and repeat again, and again.



"Cybersecurity is broken because of the lack of consequences."

If I may, after a few decades in: add in competence, or swap it in for consequences.

Most often my team and I test apps that have been verified by multiple parties and we still find juicy stuff. Not always, but most often.

Here's the kicker - the most important ones aren't about the tech, but the business part of it (validations, processes, flow and so on).

If I could make a very generic recommendation for most - check the logic and business first then make sure the tech is decent.

In business - make sure you include people and management.


Partially correct. Cybersecurity is broken because there are no consequences, but cybersecurity is not broken because there is no money in it. Large corporations spend literal mountains of money on cybersecurity, but cybersecurity is broken, so that money is basically wasted. Literally go ask any CISO or cybersecurity director at any Fortune 500 company: "How much would it cost to hire hackers to compromise the systems of our company with billions of dollars of revenue and take down operations?" Keep asking that until they give you a literal monetary number. I have never heard a number over 1 M$ by anybody who knows anything. None of the big 4 banks, who literally spend hundreds of millions to billions of dollars, gave a number over 100 k$. If they give you a number over 1 M$, ask if you can make an open prize at Defcon so they can prove it; they will be shaking in their little boots.

Cybersecurity technology is, as a rule, useless. And it is also worthless since there have been no meaningful consequences to date. Large companies pay huge piles of money so the CEO and Board of Directors can say they spent a lot of money so they, personally, have plausible deniability when their systems get breached. Then the lack of actual business consequences kicks in and everybody is happy after the PR blip passes over. Optics are, in fact, more important than security for large companies, which is why heavy spenders look so broken. It does not need to actually work, it just needs to look good to outsiders so they do not get a phantom PR hit (it is a phantom from their perspective since there are no actual business consequences; there may be other consequences but that is outside of their evaluation criteria).

The only actually meaningful and cost-effective "preventative" measure is doing the bare minimum of standard IT practices (i.e. keep things up to date, keep backups, etc.) to prevent amateurs from crippling your systems. Against professionals, no commercial IT solution works, so you are better off just purchasing cybersecurity insurance. You should only waste money on the standard cybersecurity garbage if you need to slough off liability. In every other way it is just plain useless; it provides no meaningful increase in security and costs a ton to boot. This is why small companies look so broken, nothing works and they do not need the optics, so there is little point in spending money on things that do not work.

With the recent wave of mature, professional cybercriminals we are finally starting to see a little bit of a shift. The 18 year old hackers who thought 100 $ was a lot of money are now in their 30s running professional extortion companies. They are starting to ask for serious money and the consequences are starting to materialize. Unfortunately, we have an entire industry of snake oil and the rest of the economy is not ready for the consequences. It is already hitting the cybersecurity insurance companies who are rapidly going underwater because policies are backwards looking. The cyberattack industry is growing at something like 300% per year, so the premiums from 5 years ago, which assumed an expected value 243x lower (five years of roughly 3x annual growth: 3^5 = 243), make no sense today, and the premiums today make no sense next year. Incidentally, this is why you should purchase as much long-term cybersecurity insurance as you can; it is massively underpriced given current trends (e.g. Maersk got a real steal with their 1 G$ payout, which is probably more than the total premiums paid to all cybersecurity insurance companies put together over their entire existence at that time).

The problem is not money. It is working solutions. Money helps make working solutions as long as it goes to people making working solutions. But, so far, optics have been preferred over security due to the lack of consequences.

If we want working solutions, then we need systems verified to protect against the now commonplace attacks by professional attackers at the 10 M$+ range. As a first-order estimate, that is a team of 20 full-time for a year. That is the minimum bar. For large nationals or multinationals, you probably need to be in the 100 M$ to 1G$ range, 60 full-time for 3 years or 600 full-time for 3 years. Only then are we reasonably safe against sophisticated financially-motivated attackers.


I think this is the wrong way to think about personal data. You're better off just living your life like everything is hacked and out there and take precautions to deal with that.

Otherwise you place your destiny in the hands of others. And you're also expecting a 100% success rate against data being stolen. We're only human; eventually someone will screw up no matter how much punishment there is.


Nope. It's broken because all policy is normalised into box ticking and insurance.


Cybersecurity can mean many things:

a noun, a verb, a quality, an attribute, or a function.

In general, I don't see the noun, verb, or function as broken (despite these being relatively new, immature fields), but I definitely see the quality or attribute as broken, because it is subject to the whims of profit and doesn't have many of the guard rails of more mature industries.

The Body of Knowledge is not firmly established, therefore there are huge asymmetries between developers and offensive and defensive practitioners, and resourcing/tooling plays a gigantic part of this.


It's time to introduce PE licensing for the title of "software engineer". Like civil engineers, software engineers should be personally, civilly and criminally liable for the systems they sign off on. Reserve other titles, like "software developer", for those who work under the engineer and do not assume liability.

Other measures, like data protection laws, will still be necessary. But introducing certification and liability like an actual profession would be a good start.

This will greatly diminish startup culture. Fine. I'd rather have a few responsible companies out there playing by the rules than a thousand wildcats for whom rules are an inconvenience.


I’ve had the same thoughts and I’m glad I’m not alone. We like the money and prestige from wearing these titles but not the responsibility that others who call themselves engineers shoulder.

I can protest to my boss about security issues and data privacy or even try refusing to proceed with a project or release but that’s a minor inconvenience to him. Easy enough to fire me and get somebody else who doesn’t care.

We complain that we are powerless but investors and executives aren’t going to give us any power willingly. That will have to come from legislators and if we want it we’ll have to take some responsibility too.


> You do what the payment card industry has been doing for decades

What? Mandate a bunch of paper-thin worthless rules that tie up security & engineering teams and don’t actually add measurable security improvements?

I’d be very interested in seeing the data that shows PCI-DSS has had any impact. I spent a previous life breaking into PCI compliant companies, and it didn’t offer the tiniest speed bump.

This is a horrible recommendation.


While it feels dirty, I blame companies more than PCI.

Just like the failures of traditional Enterprise Architecture, the Prescriptivist, universal top down method eats lots of resources without delivering much value to the company.

While governance and policy are important, most are written to CYA more than to solve the initial problems.

As an example, as a consultant I once found a serious vulnerability with a serialization library on a very large company's stack.

Because it wasn't web facing, it couldn't be prioritized as policy didn't allow increasing its weight.

They were compromised a few quarters later... but it was their policy that they wrote that caused that.

Almost universally, restrictions and barriers have almost nothing to do with PCI requirements, and everything to do with how the company implemented them.

A lot of that is due to the consulting and certification industrial complex and the focus on productized offerings. Just as chatGPT is intentionally verbose because it appears more authoritative, companies adopt governance that is way more detailed than appropriate.

Obviously any compliance will have some detailed and firm requirements, but the core concepts are replaced with blindly implemented checklists when values and principles are what should drive most decisions in specific systems.

The requirements to be PCI compliant often follow that concept far more closely than what companies actually adopt.

PCI DSS isn't that far off from generic best practices; the problem is optimization for superficial self-assessment questionnaires and audits that we know don't catch much beyond those best practices.


There is no black and white “this will fix it” in cybersecurity. It is a continual arms race. Arguing “X will stop cybercrime” is so naive it hurts.


"When literally nothing happens when some stupid service gets popped and loses your data they had no business collecting in the first place, this kind of thing will happen over and over and over again."

Money quote, and he's right. In Europe, the GDPR helps stop random data collection, but there is still no penalty for getting hacked and losing customer data. There should be, and in egregious cases upper management should be personally liable for civil suits by affected people.


Well that's clearly false. GDPR fines for a data breach (when there is enough negligence), or even for improper handling of a data breach, can be pretty severe. At least in theory; in practice the enforcement depends on the country and the fines are usually nowhere near the legal limit. But there is a penalty.


If you want to see a functioning dysfunction of cyber security, look up NEI 08-09.


The cybersecurity plan must describe how the licensee will:

Correct exploited vulnerabilities

Lol


It's a profession for negotiating machine-based contracts instead of legal ones now. Like legal services and compliance, it creates its own demand and demands infinite management. It's essentially a branch of law.


Fixing cybersecurity with laws is the same as fixing drug trafficking with laws


Bullshit. Liability and regulation can absolutely help with ensuring better practices. The fact that the SEC now requires disclosure of active breaches has forced companies overnight to begin taking cybersecurity seriously, and there are plenty of other liability related changes happening as we speak.


The comparison is valid, the laws work and also they don't. Using a big hammer can make things overall better but for some individuals much much worse.


> You see, cybersecurity is broken because of the lack of consequences. It's really that simple. When literally nothing happens when some stupid service gets popped and loses your data they had no business collecting in the first place, this kind of thing will happen over and over and over again. Why the fuck do you need my home address just so I can copy and paste some GIFs? Because you want to sell this data to data brokers, and you know there will be absolutely no negative consequences if you mishandle this data, fucking over the people who keep your business afloat. So, companies big and small fuck things up and we need to clean up the mess and face the consequences. Sounds about right.

10 years from now: AI somehow knows every single tiny detail about your life and can accurately predict any decision before you even made it. How could it have come to this? Clearly, it's just the fundamental superiority of AI compared to the human intellect. It's just the inevitable march towards the singularity. There is nothing we could have done to prevent this...


Complaining about privacy and data brokers then ... discuss this on our Discord!

¯\_(ツ)_/¯

I guess now, finally, with the introduction of advertising they have a recognisable form of income. Meaning they might be less likely to profit off data.


Author appears broken, as well.


"Memory unsafe languages" is maybe one percent of one percent of the problem.

As always, nobody actually gives a damn about "security" and uses it as a pretext to push something unrelated. (In this case, Current Year's stupid fad programming language.)


Memory safe languages are nearly irrelevant. Last time I looked, freaking injection attacks still held the top place among vulnerabilities.

Even companies that take testing seriously rarely test for security problems. This needs to change.


Injection attacks AFAIK have held the top place since forever.

It is why people keep buying WAF's.


On the other hand, WAFs are another kind of security theater. They won't stop any determined attacker. Usually you just need to change your payload to make it work. Unless you tweak the rules a lot, to the point where you could encode then in your application as well (for example "user_id" field in the POST data must consist only of decimal numbers)
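
To make that example concrete, the in-application version of such a rule is only a few lines. A minimal C sketch (the helper's name and shape are made up for illustration):

    /* Sketch of the example rule above, encoded in the application rather
       than in a WAF: accept "user_id" only if it is all decimal digits. */
    #include <ctype.h>
    #include <stddef.h>

    int user_id_is_valid(const char *user_id) {
        if (user_id == NULL || *user_id == '\0')
            return 0;                          /* missing or empty: reject */
        for (const char *p = user_id; *p; p++)
            if (!isdigit((unsigned char)*p))
                return 0;                      /* any non-digit: reject */
        return 1;                              /* all decimal digits: accept */
    }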


Counterpoint: Just because they won't stop any determined attacker, doesn't mean they don't have value.

Stopping casual attackers is one talking point, but still not the real value. In my opinion, the real value is making you look less like "low-hanging fruit" to automated scans - throwing a bunch of 403's makes you less likely to get a follow-up after an automated scan.

I actually have a side project when I get the time to try and prove it statistically using a honeypot. I would bet the overall volume of attacks is lower with WAF enabled, and inversely correlates with the 403's thrown. Just my 2 cents.


Injection attacks holding a top place means the adoption of memory-safe languages is working.


No. Injection attacks have held the top place for literally decades. They are kindergarten level: validate your inputs. Lazy or incompetent developers still fail to do so.
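
To be concrete about what that discipline looks like for SQL specifically, the bigger half is keeping user data out of the query string entirely. A minimal sketch using the SQLite C API (table and column names invented for illustration):

    /* Sketch only: a parameterized query with the SQLite C API, so user_id
       is always treated as data, never as SQL. Schema names are made up. */
    #include <sqlite3.h>
    #include <stdio.h>

    int print_user_name(sqlite3 *db, const char *user_id) {
        sqlite3_stmt *stmt = NULL;
        const char *sql = "SELECT name FROM users WHERE id = ?1;";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
            return -1;                                  /* bad SQL or db error */
        sqlite3_bind_text(stmt, 1, user_id, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
        return 0;
    }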


From what I've seen, we're going to rewrite everything in memory safe languages, miss all of the corner case logic and tests, and end up less secure than we were at the beginning.


I'm fascinated by C. I agree that using almost anything else automatically eliminates entire classes of bugs and vulnerabilities, but it's so much _fun_ to be close to the machine and avoid those bugs and vulnerabilities myself. Judging by the fact that even some greenfield projects are still written in C, I'm not alone.


I am as much of a rust shill as you'll ever meet, but I agree that there is something beautiful and alluring and simple and engaging about C that few other languages match. It's basically an advanced macro assembler for an abstract machine, so there's all of the allure of using 6502 or 68000 assembly language but with none of the portability problems, and a vast ecosystem of libraries and amazing books to back it up.


I've enjoyed writing a few projects in x86-64 assembly as well, for what it's worth. Even though I'm sure that any C compiler would generate better assembly than my handwritten one. Flat assembler is great, by the way.


Any C compiler can generate better assembly for a function. But there are often whole-program optimizations that you can make which the C compiler isn't allowed to do (because of the ABI/linker).

For example, a Forth interpreter can thread its way through "words" (its subroutines) with the top-of-stack element in a dedicated register. This simplifies many of the core words; for example, "DUP" becomes a single x86 push instruction, instead of having to load the value from the stack first. And the "NEXT" snippet which drives the threaded execution can be inlined in every core word. And so on.

You can write a Forth interpreter loop in C (I have), and it can be clever. But a C compiler can't optimize it to the hilt. Of course it may not be necessary, and the actual solution is to design your interpreted language such that it benefits from decades of C compiler optimizations, but nevertheless, there are many things that can be radically streamlined if you sympathize with the hardware platform.
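
For anyone curious what that C loop looks like, here is a toy token-threaded sketch (all names invented; a real Forth would keep the top of stack in a dedicated register, which is exactly the part plain C can't promise):

    /* Toy sketch of a threaded "Forth-ish" inner loop in C.
       A program is just an array of word pointers; NEXT is the for loop. */
    #include <stdio.h>

    typedef void (*word_fn)(void);

    static long stack[64];
    static int  sp = 0;                                   /* next free slot */

    static void w_lit3(void) { stack[sp++] = 3; }                     /* push 3 */
    static void w_dup (void) { stack[sp] = stack[sp - 1]; sp++; }     /* DUP    */
    static void w_add (void) { sp--; stack[sp - 1] += stack[sp]; }    /* +      */
    static void w_dot (void) { printf("%ld\n", stack[--sp]); }        /* .      */

    int main(void) {
        /* Threaded program: 3 DUP + .   -> prints 6 */
        word_fn program[] = { w_lit3, w_dup, w_add, w_dot, NULL };
        for (word_fn *ip = program; *ip; ip++)            /* the NEXT loop */
            (*ip)();
        return 0;
    }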


No. Put C out to pasture -- or just take it behind the barn and shoot it. Entire classes of severe bugs Just Go Away when you switch to a memory-safe language. Not all bugs, obviously, but the vast majority of the low-hanging fruit.
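
To be concrete about the class of bugs in question, here is the canonical toy example (made up, not from any real project); a memory-safe language rejects or traps it instead of handing out an exploitable write primitive:

    /* Classic C out-of-bounds write: nothing stops a long input from
       overrunning buf and stomping adjacent stack memory. */
    #include <string.h>

    void save_name(const char *attacker_controlled) {
        char buf[16];
        strcpy(buf, attacker_controlled);    /* no length check at all */
        /* A bounded copy, e.g. snprintf(buf, sizeof buf, "%s", ...),
           avoids the overflow, but the whole class disappears by default
           in a memory-safe language. */
    }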


Well if I could wave my magic wand and instantly convert all my (and other) C code to some kind of MemorySafeC code, I would. Unfortunately, there is no such magic wand, and all we can do is rewrite tens of millions of lines of C code in another language with different tradeoffs and different compatibility. It is, in other words, usually not possible.


I mostly agree with this sentiment, but please let's not pretend like we're doing it because we care about "security", and not just because we don't like some of C's 1970's legacy crap.


> "Memory unsafe languages" is maybe one percent of one percent of the problem.

Multiple distinct large scale software projects have found that 60-70% of severe CVEs are due to memory safety violations[1]. The White House has called for projects to use memory safe languages [2]. The Android Project has seen an incredibly substantial drop in security vulnerabilities concurrent with their rapid shift to using memory safe languages in new code, with the correlation being so tight and the number of vulnerabilities having been so constant before that they are forced to conclude that memory safe languages have helped[3]. So your claim that memory unsafe languages are maybe 1% of 1% of the problem is not only completely unsubstantiated, but almost certainly false given all of the available information.

And your jab presumably at Rust for being a "fad" similarly holds no water. It is the only language that has actually offered a practical means of eliminating memory safety violations at compile time, statically, without needing a runtime or garbage collector or to give up zero cost abstractions, meaning that is the only relatively memory safe language with a solid shot at working in the fields where C and C++ were ordinarily used. That really doesn't seem like a fad to me. Or stupid.

As I've said many a time, this sort of denial often seems like the cantankerous lashing out of someone who doesn't want to learn something new and can't be bothered to look past the occasionally superficially annoying antics of a community to see the actual technical merits of the software, and perhaps even can't stand to be confronted with the fact that their hard won knowledge in a needlessly difficult language might eventually be less in demand than it was before, and whose fragile elitist self-mythology about being better than everyone else because they can "write C code without making mistakes" is in danger of collapsing under the weight of evidence that it is a delusion.

[1]: https://alexgaynor.net/2020/may/27/science-on-memory-unsafet... [2]: https://www.whitehouse.gov/oncd/briefing-room/2024/02/26/pre... [3]: https://security.googleblog.com/2022/12/memory-safe-language...


>60-70% of severe CVEs are due to memory safety violations...So your claim that memory unsafe languages are maybe 1% of 1% of the problem is not only completely unsubstantiated, but almost certainly false given all of the available information

Based on your tone and profile you probably aren't interested in better information here, but I'll offer anyway.

The vast majority of cybersecurity attacks, and especially the vast majority of actual incidents, don't involve CVEs. Think recent breaches at Okta, Microsoft, Uber, even SolarWinds a few years ago.

When CVEs do come into play, they're as likely to be logic flaws as anything else. Think Sandworm, Log4Shell, that Apache Struts vulnerability that Equifax didn't patch.

And when memory safety problems do bubble up, well, there are still a bunch of issues that existing memory-safe languages don't actually address. They're real, flawed tools made by real, flawed people, not magic.

So, big picture, memory-safe languages shouldn't be a top-10 priority for very many teams at this point. Maybe someday, though.


> Based on your tone and profile you probably aren't interested in better information here, but I'll offer anyway.

You might be surprised :) I come across as emotional online because I'm genuinely invested in things, not because I'm looking to flame, which means I'm actually far more likely than the average person to change my mind if given good arguments, because I'm actually putting my views on the line. I reacted as harshly as I did to the GP mostly because they didn't actually substantiate anything, just made a broad unsubstantiated claim and followed it up with annoying dismissive rhetoric.

On the other hand, this is useful information, and I appreciate you providing something to make this discussion more interesting. Your input certainly does make the tradeoff picture more complex, and I'm certainly a fan of letting each team decide what moves are best for it, and agree that moving to memory safe languages is a long term thing (or a thing for greenfield projects), not an imminent emergency.

Nevertheless, I still think it's too important to deserve a dismissive "one percent of one percent, we should keep starting new projects in C." For one thing, CVEs might be more important than you let on, since while yes, most breaches are the result of social engineering and such, not bad code per se, we as tech people can only really control what our tech does, so we should focus on breaches that result from technical faults — and wouldn't that usually be CVEs? And if we're looking at CVEs, it seems like we're just looking at things from different perspectives: I'm looking at how to decrease the volume of critical CVEs, and seeing a handy majority of them are caused by memory safety violations that can be fixed with essentially a tooling change, which seems like a big win to me, whereas you're looking at what CVEs end up being exploited, and saying that since it's about 50/50 logic bugs, memory safety doesn't matter. To me, it seems like the ratio of which ones actually get exploited is probably a bit arbitrary, because which ones get used and which don't probably is, so we should focus on just minimizing how many we produce at all. Moreover even at 50/50 or so, it seems worthwhile to deal with memory safety, especially since again dealing with it just requires using tools that stop you from creating those bugs, whereas trying to "solve" programmers creating logic bugs is like cold fusion unless you wanna program in Coq (and even then...). As for there being a bunch of other issues memory safe languages can't fix... sure, but why not deal with what we can deal with? It sort of seems like whataboutism.



