In a recent workshop, we brought together planning officers from five local authorities to examine validation challenges. The goal wasn't to present solutions but to understand the operational realities of planning officers' work and to highlight where technology can improve day-to-day operations.
The workshop included discussions about current challenges and future requirements. What emerged were clear patterns about the gap between how AI validation is often discussed and what planning officers actually need from these tools in operational practice.
Consistency matters more than speed
Validation work involves checking applications against extensive requirement lists that vary by application type, local policy, and jurisdiction-specific constraints. The challenge isn't necessarily the time each individual check takes but ensuring comprehensive coverage across different application types and maintaining consistency across team members.
AI validation that delivers partial checks quickly creates more work than it saves. If a tool checks 80% of requirements in seconds but misses the remaining 20%, officers must still perform complete manual verification. The time saved on automated checks is lost to the uncertainty about what wasn't checked.
Systems must be built on complete requirement sets, not optimised solely for processing the most common issues. Officers need confidence that comprehensive validation has occurred, and that confidence requires completeness and auditability.
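To make "completeness" concrete, here is a minimal sketch of the property in code. It's illustrative Python with names we've invented (`Requirement`, `CheckResult`, `Status`), not any particular product's API: the validator returns one explicit result per checklist item, so anything it couldn't assess is surfaced rather than silently skipped.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PASSED = "passed"
    FAILED = "failed"
    NOT_CHECKED = "not_checked"  # the tool could not assess this item

@dataclass
class Requirement:
    ref: str          # checklist item identifier, e.g. a national or local list entry
    description: str

@dataclass
class CheckResult:
    requirement: Requirement
    status: Status
    notes: str = ""

def validate(application, requirements, checkers) -> list[CheckResult]:
    """Return one explicit result per requirement, never a partial list.

    A requirement with no automated checker is reported as NOT_CHECKED
    rather than omitted, so officers can see exactly which items still
    need manual verification.
    """
    results = []
    for req in requirements:
        checker = checkers.get(req.ref)
        if checker is None:
            results.append(CheckResult(req, Status.NOT_CHECKED,
                                       "no automated check available"))
        else:
            passed = checker(application)
            results.append(CheckResult(req, Status.PASSED if passed else Status.FAILED))
    return results
```

Because `len(results)` always equals `len(requirements)`, the output itself documents coverage: the uncertainty about "what wasn't checked" becomes an explicit, reviewable list rather than a silent gap.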
Workflow integration is non-negotiable
Planning officers operate within established systems and time-constrained workflows. Tools requiring separate platforms face substantial adoption barriers, not because officers resist new technology, but because operational efficiency depends on minimising workflow interruption.
The pattern is consistent: the best standalone tool loses to adequate capabilities embedded in existing workflows. Officers work primarily in planning case management systems like Idox Uniform and Idox Cloud. These are where applications arrive, where validation occurs, and where decisions are recorded.
AI validation capabilities must exist within the systems where planning work happens. Platform partnerships like the collaboration between Planda and Idox aren't about adding credibility. They're about ensuring validation tools appear at the right moment in the workflow, in the interface officers are already using, without requiring them to go elsewhere.
Professional accountability shapes tool requirements
Planning officers hold professional accountability for validation decisions. They exercise professional judgment about whether applications meet requirements and whether issues warrant invalidity, and they're professionally and sometimes legally accountable for those judgments.
This creates specific requirements for AI validation tools. Officers need systems that support professional judgment, not systems that attempt to replace it. They remain the decision-maker; AI provides decision support.
AI validation that produces conclusions without showing reasoning creates accountability gaps. Officers can't defend decisions they don't understand or explain to applicants why an application is invalid if the AI provides no citation. Effective AI validation must provide transparency about how conclusions were reached, allow officer review and modification, and maintain officer control throughout.
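One way to express that relationship in data is to keep the AI's output and the officer's decision as separate records. This is a sketch under our own assumptions (the field names are illustrative, not any product's schema): a finding carries its reasoning and citation, and nothing becomes a decision until an officer accepts or overrides it.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    requirement_ref: str  # which checklist item the finding relates to
    conclusion: str       # e.g. "location plan missing"
    reasoning: str        # how the tool reached the conclusion
    citation: str         # the policy or guidance the officer can quote

@dataclass
class OfficerDecision:
    finding: Finding
    accepted: bool           # officer confirms or overrides the AI finding
    officer_notes: str = ""  # recorded rationale, especially for overrides

def record_decision(finding: Finding, accepted: bool, notes: str = "") -> OfficerDecision:
    """Every finding stays a recommendation until an officer signs it off:
    the tool proposes, the officer decides."""
    return OfficerDecision(finding, accepted, notes)
```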
Invalidity categorisation enables better outcomes
Different types of invalidity have different operational implications. Some issues can be resolved quickly: a missing location plan, an incorrect fee, or an outdated form. The resolution pathway is clear and typically fast.
Other invalidity issues indicate fundamental problems requiring application withdrawal and resubmission. Plans showing the wrong site or applications for development outside the applicant's ownership cannot be corrected through simple amendments.
AI validation that simply flags "invalid" without distinguishing between these categories doesn't match how officers work. In practice, officers triage invalidity by resolution complexity. AI validation should categorise invalidity by type, severity, and typical resolution pathway, enabling officers to route different issues appropriately.
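A hedged sketch of what that categorisation could look like, again with invented names (`Pathway`, `InvalidityIssue`): the point is that triage falls out of the data model rather than being reconstructed manually by the officer.

```python
from dataclasses import dataclass
from enum import Enum

class Pathway(Enum):
    QUICK_FIX = "quick_fix"        # resolvable by amendment: missing plan, wrong fee
    RESUBMISSION = "resubmission"  # fundamental: wrong site, land outside ownership

@dataclass
class InvalidityIssue:
    description: str
    severity: int    # e.g. 1 = minor, 2 = blocks validation outright
    pathway: Pathway

def triage(issues: list[InvalidityIssue]) -> tuple[list[InvalidityIssue], list[InvalidityIssue]]:
    """Split issues into 'request amendment' and 'advise withdrawal' queues,
    mirroring how officers already route invalidity in practice."""
    amend = [i for i in issues if i.pathway is Pathway.QUICK_FIX]
    withdraw = [i for i in issues if i.pathway is Pathway.RESUBMISSION]
    return amend, withdraw
```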
Documentation and auditability are essential
Officers must explain validation decisions to applicants and agents, and defend them through formal processes, including appeals. This requires specific policy citations, requirement references, and documented reasoning.
AI validation that produces findings without this documentation creates additional work. Officers must then research the requirement, locate the policy citation, and construct the explanation themselves.
Officers need audit trails showing how validation decisions were reached. Effective AI validation must provide requirement-level specificity, citing the validation checklist item and the policy document requiring that information. This ensures outputs match the documentation standards professional practice requires.
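As a final sketch, an audit entry could bind each finding to the checklist item and policy citation at the moment it is made. The structure below is an assumption of ours, not a standard; the design choice worth noting is that entries are append-only and immutable, so the trail can later support appeals without any question of after-the-fact editing.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable: entries cannot be edited after creation
class AuditEntry:
    timestamp: datetime
    checklist_item: str   # the validation checklist reference
    policy_citation: str  # the document and paragraph requiring the information
    outcome: str          # what was found
    reasoning: str        # explanation an officer can repeat to an applicant
    decided_by: str       # e.g. "ai-suggested, confirmed by officer <id>"

def log_entry(trail: list[AuditEntry], **fields) -> None:
    """Append-only: the trail only ever grows, preserving the full history
    of how each validation decision was reached."""
    trail.append(AuditEntry(timestamp=datetime.now(timezone.utc), **fields))
```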
Building with the sector
The government's £10M procurement emphasises several of these elements. The tender specifications highlight integration with existing planning systems, support for professional decision-making processes, and tools that assist rather than replace officer judgment. These requirements align closely with the operational needs that emerged from our workshop.
As the sector explores AI validation through government procurement, direct partnerships, or system integrations, these operational requirements should continue driving development priorities. Technology that doesn't fit professional practice won't be adopted, regardless of its technical sophistication.
For planning teams exploring how AI validation might address operational needs, we welcome conversations about requirements and implementation approaches. Learn more here.