I regularly read the blog of a programmer called Simon Willison, and he recently posted about something he refers to as Cognitive Debt. I prefer the term Control Lockout, because to me it better reflects the central danger: the loss of control.
Control Lockout happens when organisations or individuals gradually lose understanding of systems they themselves are responsible for. The distance between those who operate the system and those who build it becomes so wide that the system becomes opaque. It still functions. It may even function brilliantly. But the understanding no longer resides inside the organisation. In planning, this has already happened to a significant degree with the central record of control system that most planning authorities use as their core system, albeit that one wasn’t AI generated.
Complex software stacks, outsourced platforms, and vendor-managed systems have always created this risk. What is new is the velocity. AI coding tools can generate entire architectures in minutes. That acceleration dramatically increases the risk that systems are deployed faster than they are understood.
The danger might not be that AI code is bad. Often it’s good. The danger is that it can bypass the slow, painful process through which humans build mental models. When that learning process is skipped, control migrates away from people and toward tools.
And that is the essence of Control Lockout.
If that risk is real — and I believe it is — what can be done?
1. Reconstruct Mental Models Internally
Even if AI generates the code, teams must be able to explain the system from first principles. Architecture diagrams, walkthrough sessions, internal documentation, and “explain it back” exercises should not be optional.
2. Rotate Ownership
Knowledge must be distributed. If only one developer understands the system — or worse, only the AI does — resilience collapses. Rotating ownership and cross-reviewing code forces shared understanding.
3. Enforce Explainability
If a team cannot clearly describe why something works, how data flows, or where failure points exist, that is a governance issue — not just a technical one. Explainability should be treated as a control requirement.
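To make "explainability as a control requirement" slightly more concrete, here is a hypothetical sketch of a check a team could run in its build pipeline: it fails whenever a component ships without a written explanation of its purpose, its data flow, and its failure points. The repository layout, the EXPLANATION.md file name, and the required headings are all assumptions made for this example, not an established convention.

```python
#!/usr/bin/env python3
"""Fail the build when a service ships without a written explanation.

A sketch only: the repo layout (services/<name>/), the EXPLANATION.md
file name, and the required headings are assumptions for illustration.
"""
import sys
from pathlib import Path

REQUIRED_HEADINGS = ["## Purpose", "## Data flow", "## Failure points"]


def missing_explanations(repo_root: Path) -> list[str]:
    """Return one message per service whose explanation is absent or incomplete."""
    problems: list[str] = []
    services_dir = repo_root / "services"
    if not services_dir.is_dir():
        return [f"expected a '{services_dir}' directory and found none"]
    for service_dir in sorted(p for p in services_dir.iterdir() if p.is_dir()):
        doc = service_dir / "EXPLANATION.md"
        if not doc.exists():
            problems.append(f"{service_dir.name}: no EXPLANATION.md")
            continue
        text = doc.read_text(encoding="utf-8")
        for heading in REQUIRED_HEADINGS:
            if heading not in text:
                problems.append(f"{service_dir.name}: missing '{heading}' section")
    return problems


if __name__ == "__main__":
    issues = missing_explanations(Path("."))
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)
```

Wired into a pipeline, a check like this turns "we should write that down" into a gate that actually blocks a release, which is what treating explainability as a control, rather than a courtesy, looks like in practice.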
4. Design for Comprehension
This may be the most important principle. Systems should be structured so they can be understood. Simplicity is not aesthetic; it is strategic. Clean schemas, predictable naming, transparent logic — these are not nostalgic habits. They are safeguards against lockout.
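As a small, entirely hypothetical illustration of what "clean schemas, predictable naming, transparent logic" can mean in code, here is a sketch in Python of the kind of record a planning system might hold. The field names, statuses, and reference format are invented for the example; the point is that someone reading it cold can reconstruct what is stored and why, without asking the original author or the original AI prompt.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class ApplicationStatus(Enum):
    """Every state is spelled out rather than encoded as magic integers."""
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    DECIDED = "decided"


@dataclass(frozen=True)
class PlanningApplication:
    """One planning application as held in the core record.

    Field names say what they mean, so a reader who has never seen the
    system can work out what is stored and why.
    """
    reference: str                        # e.g. "24/01234/FUL" (hypothetical format)
    site_address: str
    received_on: date
    status: ApplicationStatus
    decision_due_by: date | None = None   # unset until a statutory deadline exists
```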
5. Periodically Test Independence
Can the process operate without the system?
Do people know the minimum information required to perform the task?
Could the team reconstruct the workflow from first principles if required?
Simon Willison recently admitted to creating AI projects that he himself did not understand, and wrote a post about it on his (excellent) blog. He also drew my attention to an academic, Margaret-Anne Storey, who has described experiencing the very same thing.