The supply chain and the elephant in the room

A few days ago, in the wake of the ransomware attacks “related” to the Kaseya remote IT management product, I published a short post on LinkedIn in which I said the following:

Supply chain is the elephant in the room and we need to talk more about it.

Yes, let’s talk a little bit about prevention and leave detection and management for another time. As the saying goes, better safe than sorry. To develop the idea a bit further, I added that:

we should start thinking that third-party software and hardware are insecure by default and that an obligation should be imposed on software manufacturers to perform and publish, to some extent, serious, regular, in-depth pentesting for the critical applications they sell (and their updates). And even then, any third-party software or device should be considered insecure by default, unless proven otherwise.

In a comment, Andrew (David) Worley referred to SOC 2 reports, which should at least help prevent these kinds of “problems”, and mentioned a couple of initiatives I was unaware of: the Software Bill of Materials (SBoM) and the Digital Bill of Materials (DBoM).

I promise to talk about them in another post, but for now let’s move on.

On SOC 2 and other assessments

I am familiar with SOC 2 and other similar assessment reports, but in my humble opinion, unless

  • it is paid for by a third party (a client, a potential partner),
  • it has been done by me or by a colleague I know well, or
  • I am provided with the full list of evidence,

I have some reservations about those kinds of evaluations. The same goes for any other general control assessment, such as ISO 27002, with which I have quite a bit of experience.

First of all, I’ve always found it a bit troubling that an organization pays for its assessment reports when they are meant to serve as assurance for a third party.

And in this regard, this excerpt from Ross Anderson’s fabulous paper, Why Information Security is Hard – An Economic Perspective, is very relevant:

For all its faults, the Orange Book had the virtue that evaluations were carried out by the party who relied on them – the government. The European equivalent, ITSEC, introduced a pernicious innovation – that the evaluation was not paid for by the government but by the vendor seeking an evaluation on its product. This got carried over into the Common Criteria. This change in the rules provided the critical perverse incentive. It motivated the vendor to shop around for the evaluation contractor who would give his product the easiest ride, whether by asking fewer questions, charging less money, taking the least time, or all of the above. To be fair, the potential for this was realized, and schemes were set up whereby contractors could obtain approval as a CLEF (commercial licensed evaluation facility). The threat that a CLEF might have its license withdrawn was supposed to offset the commercial pressures to cut corners.

But in none of the half-dozen or so disputed cases I’ve been involved in has the Common Criteria approach proved satisfactory. Some examples are documented in my book, Security Engineering. The failure modes appear to involve fairly straightforward pandering to customers’ wishes, even (indeed especially) where these were in conflict with the interests of the users for whom the evaluation was supposedly being prepared.

Secondly, I have my reservations about such assessments because, whether by their nature or in order to be cost-effective, they remain on the surface. Let me explain: an ISO 27002 assessment or a SOC 2 report will verify that the organization has a vulnerability management process in place, that it conducts regular audits that are properly managed, or that it has IAM controls, among many other measures.

And, given the state of information security in many organizations, that is already a huge starting point. But those assessments won’t see that a handful of vulnerabilities have been waiting in the queue for months, or that there are a dozen generic user accounts that no one takes responsibility for.

And that’s a big part of the information security problem. Again, Ross Anderson’s paper comes to mind:

So information warfare looks rather like air warfare looked in the 1920s and 1930s. Attack is simply easier than defense. Defending a modern information system could also be likened to defending a large, thinly-populated territory like the nineteenth century Wild West: the men in black hats can strike anywhere, while the men in white hats have to defend everywhere.

In a nutshell: there are too many users, applications, systems, laptops, network segments, vulnerabilities, communication patterns, firewall rules, updates… There is too much of everything, too many things to control. As we know, the devil is in the details, and that sums up the problem we face in this industry.

However, I know that SOC 2 reports and ISO 27002 assessments are useful and that, up to a point (and it is imperative that everyone knows where that point lies, so that no one is floating around the ocean on a sheet of ice thinking they are on dry land), they are a good way to gauge the state of an organization’s cybersecurity.

The pentesting proposal

But let’s go back to the pentesting idea I mentioned at the beginning of the post. While it’s an idea that obviously requires some development (even though it’s fairly intuitive that installing patches without validating them is not the safest thing to do), I do firmly believe that:

  1. Software manufacturers should provide much more information about the information security controls they have in place to ensure the security of their products (and the results of those controls, always in a context of confidentiality) and, especially,
  2. customers using that software should perform thorough security testing and monitor that software before deploying it (see the sketch right after this list). In short, when it comes to critical software like that involved in the SolarWinds or Kaseya attacks (and when I say critical, I don’t mean software that manages critical processes or information, but software that has a high level of access to data and infrastructure), the controls should be much tighter and much finer-grained than they are today.
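To make “validating before deploying” a bit more tangible, here is a minimal sketch in Python (the file name and the published value are placeholders invented for the example) of the most basic gate a customer can automate: checking that the update package about to be installed matches the checksum the manufacturer has published.

    import hashlib
    import sys

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Hypothetical update package and vendor-published checksum.
        package = "vendor_update_9.5.7.bin"
        published = "replace-with-the-value-from-the-vendor-advisory"
        actual = sha256_of(package)
        if actual != published:
            print(f"Checksum mismatch for {package}: do not deploy.")
            sys.exit(1)
        print("Checksum matches the published value; move on to deeper testing.")

Of course, a checksum (or a signature) only proves that you received what the manufacturer built; the SolarWinds case showed that the manufacturer’s own build can be the problem, which is exactly why the deeper testing described below is still needed.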

In fact, I would advocate detailed and regular pentesting on the manufacturer’s side, but mainly (since I am assuming that the manufacturer’s software cannot be trusted) on the client side:

  1. continuous automated pentesting of the software,
  2. monitoring of the application in a non-production environment for a reasonable period of time, to identify any changes in behavior, primarily in its communication patterns (see the sketch after this list), and
  3. exhaustive manual pentesting of each new update or patch, obviously before its release to production (in fact, I do not recall, although I may be wrong, software pentesting being a common requirement in change management procedures, the way backups are). In other words, without getting into reverse engineering, but going as deep as you can without violating the manufacturer’s intellectual property.
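As an illustration of point 2, here is a minimal sketch of how the comparison of communication patterns could work, assuming the connections observed in the test environment have been exported to JSON by whatever network monitor or firewall is in use (the file names and their format are invented for the example):

    import json

    def load_destinations(path: str) -> set:
        # Each entry is expected to look like {"host": "...", "port": 443};
        # this format is an assumption made for the example.
        with open(path) as fh:
            return {(entry["host"], entry["port"]) for entry in json.load(fh)}

    def new_destinations(baseline_path: str, observed_path: str) -> set:
        # Destinations seen while exercising the update that were never seen
        # during the baseline period.
        return load_destinations(observed_path) - load_destinations(baseline_path)

    if __name__ == "__main__":
        unexpected = new_destinations("baseline_connections.json",
                                      "staging_connections.json")
        for host, port in sorted(unexpected):
            print(f"New outbound destination after the update: {host}:{port}")
        if not unexpected:
            print("No new outbound destinations during the test window.")

The value is not in the code, which is trivial, but in having a baseline long enough to be meaningful: a new, unexplained outbound destination appearing right after an update is exactly the kind of behavioral change worth stopping a release over.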

The issue is that all of this costs a lot of money, and not many organizations can afford it.

The risk index model

What is put forward here as a possible alternative is a model of deep, continuous validation by independent third parties, unrelated both to one another and to the manufacturer of the product under analysis. These parties would be in charge not only of the necessary security assessments and tests, but also of coordinating the resolution of vulnerabilities with the manufacturers.

The work behind these analyses would be paid for by the clients of the applications analyzed, whether governments or large companies (and probably by the development companies themselves, as direct beneficiaries in terms of their own cybersecurity). The economic model is outside the scope of this post, but I think the basic idea is clear.

Yes, I am aware that we are thus (falsely) outsourcing supply chain risk to third parties, and that this creates new problems. That is why the point would be to rely not on a single organization but on multiple parties, independent of each other, so that there are multiple points of control; this also reduces the possibility of hidden, malicious dynamics being established between manufacturers and auditing organizations.

Building on this, one idea proposed by Andrew (David) Worley would be to consolidate these multiple assessments into a numeric risk index, which would allow any third party to gauge the security posture of a software product at a glance.
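How exactly such an index would be computed is open to debate, but as a sketch of the idea (the scores, assessor names and aggregation rule below are entirely invented), a conservative approach could weight the scores of the independent assessors and penalize disagreement between them, on the basis that a wide spread suggests at least one review was too optimistic:

    from statistics import pstdev

    def risk_index(scores: dict, weights: dict = None) -> float:
        # Aggregate 0-100 risk scores from independent assessors into one index.
        # Higher means riskier; disagreement between assessors raises the index.
        if weights is None:
            weights = {name: 1.0 for name in scores}
        total_weight = sum(weights[name] for name in scores)
        weighted = sum(scores[name] * weights[name] for name in scores) / total_weight
        spread_penalty = pstdev(scores.values()) if len(scores) > 1 else 0.0
        return min(100.0, weighted + spread_penalty)

    if __name__ == "__main__":
        # Invented example: three unrelated assessment firms score the same product.
        assessments = {"assessor_a": 22.0, "assessor_b": 31.0, "assessor_c": 58.0}
        print(f"Aggregated risk index: {risk_index(assessments):.1f}")

Whatever the formula, the important property is that no single assessor can drag the index down on their own, which is the whole point of having several independent parties.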

Yes, this puts extra pressure on development companies, but the two cases mentioned in this post, SolarWinds and Kaseya, are a small sample of where we’re headed if we don’t curb the insecurity that reigns in today’s supply chain. In any case, one tends to think that software should be distributed free of backdoors, malware or serious vulnerabilities.

With such an index, customers should be able to perform affordable cybersecurity testing of the product/upgrade. I know this is all fantasy with the current maturity of secure software development, but I’m a big fan of Queen’s Bohemian Rhapsody.

In a way, this idea is similar to the checks that browsers and anti-malware products apply when you download a file. It’s like saying:

Let me check this for you, because we can’t trust the source, even if it was sent to you by your mother.
