Jacob Kaplan-Moss

What if We Thought About Risk Decisions Differently?

A photo of someone doing a free-hanging rappel down a waterfall

Would you believe me if I told you that this was safe? That we’ve considered the risks very carefully, and mitigated them? Or would you say that someone who rappels down waterfalls probably isn’t thinking very clearly about risk?

The “people suck at thinking about risk” framing

There’s a common belief among risk professionals that people are inherently bad at thinking about risk. Specifically, that people make poor decisions when confronted with risk of the “low probability, high consequences” variety (e.g. traveling in avalanche terrain, releasing software with known but difficult-to-exploit vulnerabilities, etc.). I’ve certainly said this many times, and I think I mostly believe it. But in the last couple of years I’ve started to question this foundational narrative, so I want to spend a few minutes thinking through the implications of starting from a different point of view.

The “people suck at thinking about risk” view certainly has plenty of weight behind it. Talk to any risk professional and they’ll have hair-raising stories about people doing intensely idiotic things, believing themselves to be safe. (I once witnessed a person who could barely swim launch themselves into dangerous whitewater without any plan to get out, and, after being pulled out by their shoulders, continue on the trip blithely unaware that they had been moments away from drowning.) Any software engineer with a modicum of experience will tell you about launching software known to be unsafe or unstable, only to have management be genuinely shocked when the inevitable eventually happens. And then there are reams of social science research and pop psychology to back this up – Nassim Nicholas Taleb (The Black Swan) has made a very lucrative career covering the “people suck at thinking about risk” beat, for example.

Experiences that challenge this framing

However, in the last couple of years, I’ve had some experiences that cut against this common narrative:

People have surprisingly keen assessments of their digital security risk

Over the last six months I’ve conducted dozens of digital security checkups (probably well over 100, but I stopped counting). My approach to risk during these calls has been to start by assuming that people’s concerns are valid, and to move directly to actions they can take to address those concerns. In other words, I don’t spend any time questioning whether their perceived risk is “correct” (unless that’s something they’ve explicitly asked for). If someone tells me they’re worried about being doxxed, I take that worry at face value and talk through what they can do to prevent and respond to a doxing.

This wasn’t a considered decision: it was purely pragmatic. I have 90 minutes with someone I’m almost certainly meeting for the first time, and I want that time to be as valuable as possible. Starting from the assumption that their risk analysis is correct, and moving immediately to giving them tools and techniques for risk reduction, just seemed like the best use of the time.

What really surprised me was discovering that nearly everyone I spoke to seemed to have a pretty solid grasp of their risk! Almost nobody came to me with movie-plot threats or paranoid conspiracy theory nonsense. Instead, people had pretty keen assessments of their risk: activists were afraid of their communications being monitored; pregnant people were nervous about crossing state lines to get health care; therapists working with trans people were nervous about their EMR systems being compromised; content creators worried about being doxxed; and so forth. With very few exceptions, people came to me already zeroed in on exactly the threats I would have likely identified as the most risky to them, without any prompting or help on my part. If most people are bad at thinking about risk, why wasn’t I seeing that?

“Risky” outdoor activities are much less risky than they seem to an outsider

My outdoor adventures have also shifted how I think about risk. In the last 3-4 years I’ve picked up some new forms of wilderness travel that carry more objective hazard: canyoneering, packrafting, and backcountry skiing. It’s common to characterize people who engage in those activities as “risk-seekers”, or as having a “high risk tolerance”, but I think that’s incorrect. Sure, a few people are — but they’re a minority. Communities around these sports have strong cultures of safety; the people who engage in these activities think about risk in a much more sophisticated and systematic way than I’ve encountered nearly anywhere else. To put the finest point on it: these communities are significantly better at thinking about risk than the software and security communities. I apply risk tools I learned outdoors to computer things far more often than I apply risk tools learned at a keyboard to my “risky” outdoor activities.

Many people — including me — are drawn to these activities not by the risk but by the challenge of risk mitigation. It’s tremendously satisfying and empowering to develop the skills that make something potentially dangerous acceptably safe. Many (most?) people who engage in these sports aren’t “risk-seekers”; rather, they’re “risk-mitigation-enjoyers”.

Return to the picture at the top of this post. If you’re unfamiliar with canyoneering, the “people suck at thinking about risk” framework would lead you to think that I’m most likely deluded about the risk. But the “trust people to assess their own risks” framework would lead you to conclude that we’ve thought carefully about doing something like this, and taken proper steps to mitigate risk. And indeed, if we spent time talking through the risks and the safety systems we’re using to mitigate them, I think you’d conclude, as I have, that the most dangerous part of the day was driving to and from the trailhead.

It’s counterintuitive, but I believe that engaging in activities with more hazards has made me safer outdoors, rather than less. That’s because these activities have forced me to think about risk much more carefully, and to learn more tools to mitigate those risks.

Implications of a “trust people to assess their own risks” narrative

So, what would it mean if, instead of assuming that “people suck at thinking about risk”, we started from a foundation of trusting someone’s risk assessment?

The major benefit is that we get to skip a whole complex risk-analysis discussion and move directly to giving people tools to mitigate risk. Practically speaking, I’ve found it very difficult to convince people to change their risk assessment; unless they’re explicitly asking for help calculating risk, telling someone they’re wrong about their risk assessment is unlikely to go over well. I once encountered a group of young men getting into position for a fairly dangerous cliff jump – a jump that kills about one person a year¹. I tried to convince them not to jump, but obviously that didn’t work. How often does telling someone they’re thinking about risk “wrong” actually work?

But what if that’s because people aren’t actually taking “too much” risk, but instead they’re taking the right amount of risk for them, and really only need better tools to manage their risk? What if instead of telling them not to jump I’d told them where the bolts were and encouraged them to get the gear to rappel next time?

This is basically a harm reduction approach to risk. Instead of trying to convince people to change their behavior wholesale, we give them tools to make their current behavior safer.

Here’s a good example from the security field: password management. This came up in most of the digital security reviews I performed; nearly everyone had questions here. There’s a “correct” answer according to conventional wisdom in the security industry: use a password manager, and use a different password for every site. But the people I spoke to had a very wide range of digital expertise and – let’s be real here – password managers aren’t exactly the easiest software to use. Nearly everyone I spoke to knew about password managers, and most had tried them, but many had had terrible experiences – getting locked out of something important – and had fallen back to some other technique.

The “people suck at thinking about risk” framing would tell us that these people are making incorrect risk/reward judgements. Yes, password managers are hard to use, but the risks they mitigate make it worth it to power through. This framing would tell us we need to help these people understand that risk better, and once they see how risky their behavior is, they’ll choose the password manager.

Man, I don’t know. That feels pretty dismissive of the actual lived experience of trying to use the tools, balanced against the practical real-world impact of an account breach (which, for most people, ends up being more of an annoyance than a life-changing event). So the approach I took was to assume that people were basically correct about their risk/reward judgement, and to try to give them suggestions and nudges to improve their current behavior. If they were using the same password everywhere, I’d explain credential-stuffing attacks, and encourage them to use a unique password on a few of their most important accounts (email, banking, medical records, etc.). If they were writing passwords down somewhere, I’d help them make sure it was a good somewhere. If they were using a password manager but struggling, I’d help debug, or suggest an easier-to-use password manager. And so on.
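As a concrete aside on the credential-stuffing point: the reason password reuse is the first thing worth nudging on is that attackers simply replay passwords from public breach dumps. A minimal sketch of checking whether a password appears in those dumps, using the public Have I Been Pwned “Pwned Passwords” range API, looks something like the Python below. Only the first five characters of the password’s SHA-1 hash ever leave your machine; the function name and User-Agent string are just illustrative, not part of any tool I actually hand people on these calls.

import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # k-anonymity lookup: only the 5-character hash prefix is sent to the API
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-sketch"},  # illustrative UA
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
    return 0

print(breach_count("password123"))  # a commonly reused password scores very high

The script itself isn’t the point; it’s that “use a unique password on your email and bank” is a nudge whose payoff you can actually demonstrate in a minute or two.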

Is this the “right” approach? At the end of each of these calls, very few of the folks I spoke to were doing the best possible thing according to conventional wisdom. By that mark, no, not great. But nearly everyone I spoke to about passwords left the call a little better protected than they were earlier that day. I think if I’d tried to browbeat people into using password managers I’d have had little success. A world where I help dozens of people make a modest improvement feels better to me than one where I’m only able to convince a small handful to make a huge improvement.

There’s a pragmatic wisdom to starting from a position of trust. Maybe people aren’t always making perfect risk decisions, but treating their concerns as legitimate is often the only way to have a productive conversation about risk at all.

This is a conversation-starter, not a strong argument

I must admit that this approach makes me deeply uncomfortable in some contexts. I can’t bring myself to “trust people’s risk assessment” about vaccines, for example. There’s objective scientific evidence here, and anti-vaxxers are just totally wrong.

So I’m not trying to make a strong argument here that “trust people’s risk assessment” is a better way of framing risk than “people suck at thinking about risk”. But I am arguing that we should keep both framings in mind, and probably choose the “trust” approach more often than we do.

What do you think? Get in touch!


  1. Punchbowl Falls on Eagle Creek near Portland. The waterfall is stronger than it looks and can hold people down; the cliff is less vertical than it looks and people sometimes hit the wall; and there are often invisible submerged logs in the pool. On a warm summer day it looks like a great, easy, super fun jump, but it’s significantly more dangerous than it looks.