When a recommendation needs to stand up to a disrepair claim, a regulatory inspection, or a board question, the reasoning behind it needs to be visible.
Housing professionals have legitimate concerns about AI-generated recommendations. Those concerns are not unfounded, and they deserve a direct response rather than reassurance by repetition.
The concern is essentially this: if an algorithm produces a recommendation, 'this home needs a ventilation intervention' or 'damp risk is primarily driven by underheating', and nobody can explain why it reached that conclusion, then the recommendation is difficult to act on confidently, impossible to challenge constructively, and potentially dangerous in situations where decisions carry legal or reputational weight.
That concern is well founded. And it is exactly why the distinction between AI-assisted tools matters: they do not all carry the same risk profile.
There is a meaningful difference between AI that produces outputs from pattern matching in ways that are difficult to trace, and AI that applies transparent, logic-led analysis where the reasoning can be described in plain terms.
In housing, that distinction is not academic. It is operational and legal.
When a damp and mould case escalates to a disrepair claim, the question is not just what decision was made. It is why it was made, on what basis, and whether that basis was reasonable. A case record showing only that an algorithm flagged risk and a team acted on it is a weaker position than one that shows what evidence was present, what logic was applied, and why the recommendation followed.
The same applies to regulatory inspection. The Regulator of Social Housing's consumer standards expect providers to demonstrate that their processes for identifying and responding to hazards are sound, not just that technology was deployed.
COSIE homes Root Cause Analysis is grounded in building physics, not opaque pattern recognition. It looks at real in-home data (temperature, humidity, dewpoint behaviour, and how conditions change over time) and applies a logical framework that reflects how damp and mould risk actually develops in homes.
It considers whether ventilation is performing adequately by examining whether moisture levels recover appropriately after occupancy events. It assesses whether indoor temperatures are consistently sufficient to prevent condensation risk at the surfaces most vulnerable to it. It considers moisture load patterns relative to the thermal envelope of the home. And it identifies whether one factor appears dominant, or whether risk is being driven by a combination of conditions.
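The kind of transparent, rule-based check described above can be illustrated with a short sketch. This is a hypothetical example, not COSIE's actual implementation: the function names, the surface-temperature offset, and the safety margin are all assumptions made for illustration, and the dewpoint calculation uses the standard Magnus approximation. The point it demonstrates is that each verdict comes with its reasoning attached.

```python
import math

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Dewpoint (deg C) via the standard Magnus approximation."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

def condensation_risk(room_temp_c: float, rh_pct: float,
                      surface_offset_c: float = 3.0,
                      safety_margin_c: float = 1.0) -> tuple[bool, str]:
    """Flag risk when a cold surface is likely to sit near the dewpoint.

    surface_offset_c is an illustrative assumption: the gap between air
    temperature and the coldest surface (e.g. an uninsulated external
    wall corner). Returns the verdict AND the plain-English reasoning.
    """
    dp = dew_point_c(room_temp_c, rh_pct)
    surface_est = room_temp_c - surface_offset_c
    at_risk = surface_est <= dp + safety_margin_c
    reason = (f"Dewpoint is {dp:.1f}C; coldest surfaces estimated at "
              f"{surface_est:.1f}C. "
              + ("Surfaces are within the condensation margin: risk is "
                 "likely driven by underheating or high moisture load."
                 if at_risk else
                 "Surfaces sit comfortably above the dewpoint."))
    return at_risk, reason
```

For example, at 18°C and 80% relative humidity the dewpoint is roughly 14.5°C, so a surface sitting 3°C below air temperature would be flagged, with the reasoning spelled out rather than a bare flag.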
The logic behind each conclusion can be described in plain English. It is not a black box output. It is a structured analytical process that produces a reasoned recommendation, and that reasoning is available to support the decision it led to.
For compliance leads: when a case is reviewed, the rationale for the recommended action is documented, time-stamped, and traceable. That strengthens the case record.
For building safety teams: the factors considered in an RCA output can be discussed and reviewed. If a team disagrees with a recommendation, they can understand why it was made and challenge it on the same terms. That is not possible with opaque systems.
For housing executives: the reputational risk of AI in housing is real, but it is concentrated in tools where nobody can explain what the system did or why. Transparent tools, where the logic is visible and the recommendation is supported by traceable evidence, carry a fundamentally different risk profile.
There is an operational argument here too, separate from legal and governance considerations.
An operational team is far more likely to act on a recommendation if they can understand why it is being made. 'Ventilation is likely underperforming because moisture levels are not recovering adequately between occupancy events' is a recommendation a repairs team can use. It tells them what to inspect and what to look for.
'High risk & action recommended' is less useful. Without the reasoning, the recommendation does not support better decisions. It just creates another flag to follow up.
COSIE data has been used successfully to defend disrepair claims in court. That is possible because the data and the logic behind it can be presented, explained, and tested. That is the practical standard that AI-assisted housing tools should be held to. Find out more about Root Cause Analysis Reporting here.