Most of our blog posts at Planda focus on our products, but today we want to take a step back and talk about something bigger: how we approach AI itself.
The Planda team is growing quickly, but at its core it is built around talented engineers with a passion for PropTech, urbanism, and the vibrant UK start-up scene. We founded Planda when we realised that no one was applying AI meaningfully to the vast planning data available in the UK. From day one, we committed to building AI responsibly.
Planda began as a project by our Head of AI, Edoardo, who is completing his PhD in Artificial Intelligence at King’s College London, focusing on the behaviour of large language models (LLMs). LLMs are AI systems trained on large amounts of text data to understand, generate, and process human language. His research for Planda initially explored trends in planning data and the way officers assessed applications, with the goal of better understanding whether a given planning application was likely to be approved.
From the outset, we followed a set of guiding principles to ensure AI was used ethically. Our goal has always been to present complex data clearly and accessibly. Each version of our Planda reports reflects this: they always include links to original planning applications and explain each data point in context. By making planning data easier to understand, we help homeowners and buyers make better-informed decisions.
At first, this approach was intuitive rather than deliberate: we were building during the early days of the “gen AI boom.” It was reassuring, then, to see these same ideas echoed in the UK government’s AI Regulation White Paper and the work of the AI Safety Institute, which have since formalised many of the principles we were already applying.
Across governments and expert bodies, there is broad agreement on a core set of principles for responsible AI:

- Safety and robustness
- Transparency and explainability
- Fairness and protection against bias
- Privacy and data protection
- Human oversight
These principles are consistent with the UK government’s White Paper and with frameworks from the AI Safety Institute, which are designed to ensure that innovation protects both people and society.
Start-ups thrive on speed and creativity, but without guardrails, fast innovation can cause harm. For us, these principles are vital: following them is the only sustainable way to innovate in regulated environments.
Designing with Safety in Mind
Every AI model we develop undergoes rigorous testing. We simulate edge cases, such as missing data or unusual user inputs: anything an estate agent or homeowner might encounter. For example, where data gaps occur, Planda reports fall back to council-level statistics, so users still receive meaningful insights.
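As an illustration only, here is a minimal sketch of that kind of fallback logic; the names (`ApprovalStats`, `stats_for_report`) and the sample-size threshold are hypothetical, not our production code:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical threshold: below this many local decisions, the
# neighbourhood-level estimate is treated as too sparse to be reliable.
MIN_LOCAL_SAMPLES = 30

@dataclass
class ApprovalStats:
    approval_rate: float  # share of applications approved, 0.0 to 1.0
    sample_size: int      # number of historical decisions behind the rate
    source: str           # e.g. "neighbourhood" or "council"

def stats_for_report(neighbourhood: Optional[ApprovalStats],
                     council: ApprovalStats) -> ApprovalStats:
    """Prefer neighbourhood-level statistics; fall back to council-level
    figures when local data is missing or too sparse."""
    if neighbourhood is not None and neighbourhood.sample_size >= MIN_LOCAL_SAMPLES:
        return neighbourhood
    return council
```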
Making AI Explainable
When Planda highlights risks or considerations for homeowners, we show where that information comes from. Our AI models synthesise refusal trends and rank them; they don’t invent them. We are actively working on updates to make this even more transparent.
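To make “synthesise and rank, don’t invent” concrete, here is a toy sketch of frequency-based ranking over historical refusal reasons; the reason labels are invented for illustration:

```python
from collections import Counter

def rank_refusal_trends(refusal_reasons: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Rank refusal reasons by how often they appear in past decisions:
    the output summarises what is in the data, nothing is invented."""
    return Counter(refusal_reasons).most_common(top_n)

# Toy example with invented reason labels:
reasons = ["overdevelopment", "loss of light", "overdevelopment",
           "highway safety", "overdevelopment"]
print(rank_refusal_trends(reasons))
# [('overdevelopment', 3), ('loss of light', 1), ('highway safety', 1)]
```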
Guarding Against Bias
We routinely audit our algorithms to ensure they don’t unfairly prioritise one set of data over another. Our models are designed to reflect trends at the local level: neighbourhood, area, and council.
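One simple shape such an audit can take is a per-group calibration check: compare predicted approval likelihoods with observed outcomes for each council and flag persistent gaps. The sketch below is illustrative only; the function name, data layout, and toy numbers are assumptions, not our audit pipeline:

```python
from statistics import mean

def calibration_gaps(by_group: dict[str, tuple[list[float], list[int]]]) -> dict[str, float]:
    """For each group (e.g. a council), compare the mean predicted approval
    likelihood with the observed approval rate. A consistently large gap for
    one group suggests the model over- or under-weights that group's data."""
    return {group: mean(predictions) - mean(outcomes)
            for group, (predictions, outcomes) in by_group.items()}

# Toy example: predictions vs actual outcomes (1 = approved) for two councils.
gaps = calibration_gaps({
    "Council A": ([0.8, 0.7, 0.9], [1, 1, 1]),
    "Council B": ([0.8, 0.7, 0.9], [0, 1, 0]),
})
print(gaps)  # Council B's predictions overshoot its observed approval rate
```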
Protecting Privacy
All the data we use is handled in compliance with the GDPR. We apply data-minimisation techniques, processing only what’s necessary for a useful and safe user experience.
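In practice, data minimisation can start with a whitelist of the fields a feature genuinely needs; here is a minimal sketch along those lines (the field names are hypothetical):

```python
# Hypothetical whitelist: the only fields the report generator needs.
REQUIRED_FIELDS = {"application_ref", "decision", "decision_date", "council"}

def minimise(record: dict) -> dict:
    """Keep only whitelisted fields; anything else (for example applicant
    names or contact details) is dropped before further processing."""
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}
```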
Keeping Humans in the Loop
Planda reports combine IBex data with AI-generated insights. AI helps to process large datasets and highlight patterns, but final interpretation is left to people. Our role is to surface relevant information so users can make informed decisions about their homes.
At Planda, we see transparency as an important part of safe innovation. The UK government has encouraged organisations and regulators to publish regular updates on how they are applying AI principles in practice, and we aim to follow that example. This article is our first step: setting out what these principles mean and how we apply them today. Over time, we plan to share more public documents that explain our approach, outline the safeguards we put in place, and show how we are adapting to new risks and responsibilities. By doing so, we hope to build trust not only with our partners, but also with the wider public.
Further reading

- Implementing the UK’s AI Regulatory Principles (UK government guidance)
- A pro-innovation approach to AI regulation (policy paper)