To err is human

caseyjohnellis
2 min read · Apr 6, 2020

A question on Twitter today got me thinking about Kerckhoffs’s principle, a favorite of mine when it comes to the intersection of security and design thinking (i.e. the point at which the attacker, the user, and the system interact).

A cryptosystem should be secure even if everything about the system, except the key, is public knowledge.

The question that sparked this was “What is one piece of advice you’d give to people in security?” to which this was my answer:

What does this have to do with Kerckhoffs? A lot, actually.

Like cryptography, applications and systems run on top of mathematical systems that have very little true variability in their performance of the intended task. The humans who create those tasks, however, are a different story… I’ve been writing this bit for five minutes and have already hit backspace more times than I care to admit, which is an illustration of the point.

Humans, while unparalleled in creativity, are also error-prone. When those errors are overlaid onto a system that is designed to do precisely what it’s told, they create bugs and sometimes vulnerabilities.

I firmly believe that a decent chunk of the state of security is a product of humans (the contributors themselves, management, the organization, the market) refusing to acknowledge, for whatever reason, that humans aren’t machines. We recognize and celebrate the good fingerprints of creativity, bear with the bad, and hope that ignoring the existence of the ugly will make it go away.

This is where Kerckhoffs’s principle comes in, put more strongly by Claude Shannon as Shannon’s Maxim:

The enemy knows the system.

Shannon and Kerckhoffs were pioneers of disclosure thinking: they understood the concept of “build it like it’s broken”. This was especially true in WWII cryptography, but its relevance to the “peacetime” software we use today is becoming increasingly clear.

Bruce Schneier summed it up well:

Kerckhoffs’s principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness — and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.
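In code, the principle looks something like this. Here’s a minimal sketch using Python’s standard-library hmac module (the helper names are just for illustration): the entire design is public, and the only secret, and the only thing that ever needs rotating, is the key.

```python
# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256) is
# public knowledge; the only secret is the key, which can be rotated
# without redesigning anything. Helper names are illustrative.
import hashlib
import hmac
import secrets


def new_key() -> bytes:
    """Generate a random 256-bit key; this is the only secret in the system."""
    return secrets.token_bytes(32)


def sign(key: bytes, message: bytes) -> bytes:
    """Tag a message using a completely public algorithm."""
    return hmac.new(key, message, hashlib.sha256).digest()


def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Anyone can read this code; without the key it doesn't help them."""
    return hmac.compare_digest(sign(key, message), tag)


key = new_key()
msg = b"deploy build 42 to production"
tag = sign(key, msg)
assert verify(key, msg, tag)
assert not verify(key, b"deploy something else entirely", tag)
```

Everything above can be published. The system’s safety rests on the key alone, which is exactly the kind of failure point you can reason about, monitor, and rotate.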

If your team understands that mistakes are expected, and grace is applied as long as they are surfaced, dealt with, and the learnings are used to minimize repeats of the same failure, you’ll be less brittle.

(The reductivist in me wants to extend the idea of secret-induced brittleness, and of secrets whose failure creates a security risk where none previously existed, to all sorts of systems, including social and political ones, but we can come back to that another time.)


caseyjohnellis

founder/chairman/cto @bugcrowd and co-founder of @disclose_io. troubleshooter and troublemaker. 0xEFC513EA