Editorial note
Carefully framed. Some examples are deliberately abstracted to keep the judgement useful without exposing private systems, people, weaknesses or operational detail. Deliberately left out:
- Named post-go-live issues, internal review records and detailed support outcomes.
- Supplier-specific performance detail and exact project retrospectives.
- Live service weaknesses that would reveal current estate posture.
1. Grounded opening
Projects tell their neatest story before they have to live with themselves.
During delivery, the narrative is usually clear enough. A migration. A platform replacement. A rollout. A support improvement. The project plan, the steering language and the success criteria all tend to reinforce the same version of events.
Then the service goes live, users begin behaving like users again and the estate starts answering back. At that point, the project often reveals that it was not mainly what people first thought it was. The migration turns out to have been a training problem. The rollout turns out to have been an ownership problem. The improvement turns out to have depended on documentation, adoption or support aftercare more than the delivery slides suggested.
That is why I think post-implementation reviews matter so much. They are where the service gets a vote.
2. What the issue actually is
The weak version of the argument is that post-implementation reviews help teams learn lessons.
That is true, but it is vague enough to become ceremonial. The stronger version is that post-implementation review is often the first moment the organisation sees the project in operational truth rather than project language.
That matters because project language is usually optimistic by necessity. It compresses complexity. It narrows the focus. It highlights decision points and deliverables. Live service does not behave that way. It exposes support cost, adoption friction, hidden dependency, record weakness, training assumptions and the distance between technical completion and useful completion.
A serious post-implementation review is not just asking whether the plan was followed. It is asking what the project actually changed once real use, real support and real dependency came back into the room.
3. Why it matters in practice
This matters because future decisions are only as good as the honesty of the last review. If the organisation records success too early, it carries forward the wrong lesson. The same pattern then reappears under a different project name and people act surprised when the same aftercare problems return.
It also matters because post-implementation review is one of the few places where support and governance can talk back to delivery on equal terms. The service does not care that the milestone was met if ownership remained murky afterwards. Users do not care that procurement closed cleanly if adoption still drifted. Leadership should care because those are the real conditions of service quality.
At Head of IT level, this is portfolio hygiene as much as project discipline. Good reviews stop the organisation from flattering itself with the wrong version of success.
That is why I think review belongs inside senior infrastructure leadership rather than being treated as closure paperwork.
4. What had to be balanced
The first balance is between honesty and defensiveness. Reviews fail quickly if people think they exist only to assign blame. The point is not to humiliate delivery teams. The point is to let the live service correct the project narrative before the organisation starts building new decisions on a false memory of what happened.
There is also a balance between evidence and speed. If review happens too late, important detail is lost. If it happens too early, the service has not yet said enough. Judging that timing well is part of the discipline.
Another tension sits between closure and continuity. Organisations like the clean feeling of moving on. Good reviews make that harder, because they insist that some of the real work only becomes legible after the visible work is supposedly complete. That is uncomfortable, but useful.
This is one reason I prefer reviews that ask what the project turned into operationally, not only whether the original plan was delivered faithfully.
5. What changed or what the work clarified
What this work clarified for me is that post-implementation review is one of the strongest antidotes to project self-flattery.
I am much more interested now in what the live service reveals than in whether the delivery story still sounds neat. Did the change reduce support friction? Did adoption behave as expected? Did ownership become clearer? Did documentation tell the truth afterwards? Those questions are often more revealing than the original project status ever was.
It also clarified that reviews can reclassify a project usefully. Something framed as a technical rollout may reveal itself as a service-adoption problem. Something presented as a tooling improvement may prove to have been a governance or ownership issue in disguise. That reclassification is not a failure. It is the point of reviewing properly.
The better the review, the less likely the organisation is to repeat the same mistake under a cleaner project name later.
6. What stayed messy
No review process removes the awkwardness entirely. People remember projects selectively. Support noise can be interpreted differently by different groups. Some outcomes are still ambiguous because the change interacted with wider organisational behaviour that no project could fully control.
There is also a cultural problem. Once a project has been declared successful, there is often quiet resistance to reopening it with a more honest operational lens. That is understandable. It is also how weak lessons survive.
Good review work does not make that discomfort disappear. It makes it worth dealing with.
7. Broader lesson
The broader lesson is that implementation is often the least reliable narrator of its own value.
Real learning begins when the service becomes live enough to contradict the project story. That is why post-implementation review deserves more seriousness than a final administrative checkpoint usually gets.
It is one of the few mechanisms that lets infrastructure leadership move from delivery memory to operational truth before the next decision is made.
8. Closing
I do not think a project is fully understood on the day it goes live.
It is understood later, when the service begins to reveal what the change actually solved, what it merely relocated and what it quietly left behind for operations to carry.
That is why post-implementation reviews are where you find out what the project really was.