Editorial note
Carefully framed: some examples are deliberately abstracted to keep the judgement useful without exposing private systems, people, weaknesses or operational detail. The piece deliberately leaves out:
- Any live risk items or unresolved issues
- Risk scores, named owners or internal governance documents
- Internal reporting lines, meeting detail or leadership discussions
1. Grounded opening
Most organisations are not short of cyber risk language. They are short of cyber risk conversations that force a decision.
Risk communication usually fails in one of two directions. The first is drama. Everything is urgent. Everything is high risk. Every issue is presented as if the organisation is one bad afternoon away from a serious incident. The second is fog. The language becomes cautious, abstract and technically dense enough that nobody can quite tell what needs to happen next, except that somebody, somewhere, should probably take security seriously.
Neither version helps much.
The dramatic version creates noise without discipline. The vague version creates distance without ownership. Both allow people to leave the room feeling that cyber risk has been discussed, while no real decision has been forced into the open.
That is why I have become more interested in the quality of cyber risk communication than in the volume of it. In live environments, risk communication is not a side skill. It is part of whether governance works at all. If the risk cannot be translated into a decision, then the organisation is not really managing it. It is circling it.
2. What the issue actually is
The weak version of the problem is easy enough to describe. Security teams struggle to explain technical issues to non-technical leaders.
That is true, but it is not the main issue.
The stronger version is that a lot of cyber risk communication still avoids the hardest part of the job: making consequence, ownership and trade-offs explicit enough that leadership has to choose something.
It is quite possible to produce regular updates, maintain a risk register, summarise concerns and refer to controls without making a single decision easier. That is the failure mode that matters most. The communication sounds responsible, but it has not improved the organisation’s ability to act.
Part of the reason this happens is understandable. Cyber risk is often uncertain, technical and entangled with infrastructure, suppliers, user behaviour and operational dependency. It does not always collapse neatly into one sentence. But that complexity is exactly why the communication needs to be stronger, not weaker.
Useful risk communication does not mean reducing everything to slogans. It means answering a more serious question: what is the organisation being asked to accept, change, fund, delay, prioritise or own?
Once that becomes the standard, a lot of familiar cyber language starts to look thinner than it first appeared.
3. Why it matters in practice
This matters because weak risk communication produces weak decisions even when the technical analysis behind it is sound.
If a risk is described in purely technical terms, leaders may understand that something is wrong without understanding what they are being asked to do about it. If it is described too dramatically, they may react to the tone rather than the substance. If it is described too cautiously, they may interpret the lack of clarity as a signal that the issue can wait.
In practice, this has consequences well beyond the security conversation itself.
Priorities drift. Remediation stalls. Ownership becomes blurred. Evidence gets thinner because follow-through is weaker. Governance begins to look more active than it really is. The risk register fills up, but the organisation’s operating position does not improve at the same rate.
That is one reason I think cyber governance is often judged unfairly. People sometimes say governance produces paperwork rather than progress. Sometimes that criticism is lazy. Sometimes it is earned. If the communication around risk does not change what people decide, what they own or how work is sequenced, then the governance layer is not doing enough useful work.
There is also a leadership point here. Senior leaders do not need every technical detail to make a good decision, but they do need the issue framed in operationally honest terms. What is exposed? What does it affect? What are the realistic options? What does delay mean? Who owns the next move? What level of confidence do we actually have?
That is not simplification for its own sake. It is decision discipline.
4. What had to be balanced
Good cyber risk communication is harder than it sounds because it has to balance things that pull in opposite directions.
You need enough clarity to force a decision, but not so much compression that the issue becomes misleading. You need enough urgency to reflect real exposure, but not so much heat that everything starts sounding equally critical. You need technical honesty, but not technical sprawl. You need leadership language, but not the kind that turns operational concerns into management theatre.
There is also a balance between certainty and usefulness. Some risks are clear. Many are not. There are times when the evidence is partial, the dependencies are awkward and the consequences depend on factors that are still moving. That uncertainty has to be communicated honestly. But uncertainty cannot become a hiding place. If the decision is “accept this for now”, “fund remediation”, “change the timeline”, “assign a clear owner” or “escalate because the current control position is not defensible”, then the communication still needs to make that visible.
Another balance is between cyber framing and operational framing. A lot of security issues are really service issues once you look at them properly. They affect continuity, support, accountability, supplier dependence or the credibility of internal control. If the communication keeps the issue trapped inside security language, leadership may miss the fact that the decision belongs to the wider operating model.
That has become more important to me over time. The stronger the link between infrastructure and governance, the more careful you have to be about how risks are described. Otherwise, decisions get treated as narrow security matters when they are really decisions about service resilience and organisational tolerance.
5. What changed or what the work clarified
What this work clarified for me is that cyber risk communication gets better the moment you stop treating it as status reporting.
The most useful shift was moving away from “here is the concern” and towards “here is the decision this concern requires”. That is a higher standard. It forces ownership into the conversation much earlier. It forces consequence to be stated more plainly. It also forces the person presenting the risk to think more carefully about what outcome they are actually seeking.
That does not mean every issue needs to be dramatised into a board-level dilemma. Quite the opposite. Often the discipline is in making the issue smaller, clearer and more specific. What exactly needs approving? What exactly needs changing? What exactly is being tolerated for now, and on whose authority?
The other useful change was paying closer attention to evidence. Not just evidence that a risk exists, but evidence that the organisation has responded proportionately. That is where communication becomes more mature. A weak conversation ends with concern being noted. A stronger one ends with an owner, a direction, a review point and a clearer basis for follow-up.
Formal governance thinking sharpened this rather than replacing it. Risk registers, control ownership and review cadence are only useful if the underlying communication is good enough to support them. Otherwise they become containers for ambiguity. The document exists, but the decision quality remains weak.
That is where I think a lot of organisations underperform. They build the structure of cyber governance before they build the standard of communication needed to make that structure useful.
6. What stayed messy
None of this makes cyber risk communication clean.
Some risks are genuinely awkward to explain because the consequences are spread across several services rather than one obvious failure point. Some remediation options are expensive, slow or operationally disruptive. Some ownership questions are uncomfortable because the issue sits between teams, suppliers or leadership layers. Some risks remain difficult to present because the honest answer is that the organisation is managing them imperfectly and has chosen to live with part of the exposure for now.
There is also the problem of audience fatigue. If leaders hear too many issues framed as urgent, the standard of urgency starts to collapse. If they hear too many issues framed too softly, they stop expecting the communication to drive anything concrete. Recovering from either pattern takes time.
And then there is the issue of confidence. It takes judgement to communicate risk clearly without overstating certainty. That is not just a communication skill. It is an operating skill. It depends on how well the environment is understood, how honest the evidence is and whether the organisation is willing to name trade-offs rather than hide behind them.
That messiness does not weaken the argument. If anything, it strengthens it. The point of good risk communication is not to make cyber work sound neat. It is to make difficult choices harder to avoid.
7. Broader lesson
The broader lesson is that cyber risk communication should be judged by its effect on decisions, not by how complete, technical or serious it sounds.
That is a useful corrective because cyber language can be very persuasive without being very operational. It can sound mature while leaving ownership vague. It can sound cautious while leaving consequences unclear. It can sound urgent while failing to distinguish between what needs immediate action and what needs controlled follow-through.
Once you judge it by decision quality, the standard changes.
Good risk communication should help leadership decide whether to accept, reduce, transfer, fund, prioritise, delay or escalate. It should make clear who owns the next move, what trade-off is being proposed and what kind of review is needed afterwards. It should also connect the security issue back to the operating reality around it: continuity, support, governance, supplier dependence or service exposure.
That is where infrastructure leadership and cyber governance meet. The risk conversation stops being a specialist warning and becomes part of how the organisation runs itself.
8. Closing
I do not think cyber risk communication fails mainly because leaders are uninterested or because technical teams cannot explain themselves. I think it fails when everyone leaves with concern but without a decision.
That is the standard I keep coming back to.
If the conversation ends without a clearer owner, a clearer choice or a clearer basis for follow-up, then it has probably produced less value than it sounded like it did. If it drives a real decision, even an uncomfortable one, then it is doing the work governance is supposed to do.
That is when cyber risk communication stops being commentary and starts being management.
About the publication
I write about infrastructure, security, governance and service delivery in complex organisations, with a focus on how decisions hold up under real operational pressure.