A selection of projects across industrial, telecom, and cultural sectors — showing what production AI actually looks like beyond the demo.
All clients anonymised. Sector, team size, and outcomes are accurate.
A precision laser technology manufacturer needed to automate visual quality inspection of components — reducing reliance on manual checking while making inspection results explorable by engineers and quality teams.
Challenge
Inspection data existed but was siloed — images and results were stored without structure, making trend analysis or model training impractical. Engineers had no way to query historical inspection outcomes or identify recurring defect patterns.
What We Built
Full MLOps lifecycle for the computer vision models: structured data pipelines, model training infrastructure, and a Svelte-based Image Explorer dashboard giving engineers interactive access to inspection results, defect heatmaps, and trend analytics. Models were deployed with monitoring hooks that flagged drift in defect rate distributions.
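To give a flavour of the monitoring hooks, the sketch below compares a recent window of per-image defect scores against a training-time baseline using a two-sample Kolmogorov-Smirnov test. It is a minimal illustration, not the deployed check: the score definition, window sizes, and significance threshold are assumptions.

```python
"""Minimal drift check on inspection defect scores (illustrative sketch).

Window sizes, threshold, and score distributions are assumptions, not the
client's actual configuration.
"""
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # assumed significance threshold for raising a drift alert


def defect_score_drift(baseline_scores: np.ndarray, recent_scores: np.ndarray) -> dict:
    """Compare recent per-image defect scores against a training-time baseline.

    A two-sample Kolmogorov-Smirnov test flags a shift in the score
    distribution; a monitoring hook would alert on drifted=True.
    """
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        "drifted": p_value < DRIFT_P_VALUE,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.beta(2, 50, size=5_000)  # scores seen during training (synthetic)
    recent = rng.beta(2, 35, size=1_000)    # a shifted production window (synthetic)
    print(defect_score_drift(baseline, recent))
```

A distribution-level test like this is deliberately model-agnostic: it watches the outputs rather than the weights, so the same hook works across retrained model versions.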
Outcomes

A national telecommunications provider handling millions of customer connections needed to cut the time technical helpdesk agents spent diagnosing network complaints — complaints that often required cross-referencing data from multiple disconnected systems.
Challenge
Average complaint handling time was 27 hours. Agents manually pulled data from 5+ systems, correlating network logs, customer records, and fault history. No unified diagnostic view existed.
What We Built
MINDS — an AI-driven network diagnostics platform. Data pre-processing engines aggregated signals from multiple source systems in real time. Backend API microservices served a unified diagnostic view to helpdesk agents, with ML models surfacing likely root causes and recommended actions ranked by historical resolution success rate.
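The ranking step can be sketched as follows: each candidate root cause carries a model confidence and the historical record of how often the associated fix resolved similar complaints, and the two are combined into a single score. The field names, example values, and scoring formula below are illustrative assumptions, not the production logic.

```python
"""Illustrative ranking of candidate root causes for a network complaint.

Field names, example figures, and the scoring formula are assumptions for
the sketch, not MINDS internals.
"""
from dataclasses import dataclass


@dataclass
class Candidate:
    cause: str               # e.g. "line attenuation above threshold"
    model_confidence: float  # classifier probability for this root cause
    past_attempts: int       # how often the associated fix was tried historically
    past_successes: int      # how often it actually resolved the complaint


def ranked_recommendations(candidates: list[Candidate]) -> list[dict]:
    """Rank candidates by model confidence weighted by a smoothed historical
    success rate; Laplace smoothing avoids divide-by-zero and keeps causes
    with very little history from dominating the ranking."""
    def score(c: Candidate) -> float:
        success_rate = (c.past_successes + 1) / (c.past_attempts + 2)
        return c.model_confidence * success_rate

    return [
        {"cause": c.cause, "score": round(score(c), 3)}
        for c in sorted(candidates, key=score, reverse=True)
    ]


if __name__ == "__main__":
    print(ranked_recommendations([
        Candidate("line attenuation above threshold", 0.62, 480, 410),
        Candidate("CPE firmware mismatch", 0.71, 35, 12),
        Candidate("upstream port congestion", 0.55, 900, 640),
    ]))
```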
Outcomes

A regional museum with a collection of 15,000+ objects had decades of inconsistent catalogue records — some detailed, many incomplete, some existing only as handwritten ledger entries. A digitisation grant created an opportunity to modernise, but the curatorial team of three had no capacity to manually rework 15,000 records.
Challenge
Standard keyword search returned poor results. Researchers couldn't find objects by material, period, or iconographic content. The museum couldn't participate in shared digital catalogues because their metadata didn't meet standard schemas (LIDO, Spectrum).
What We Built
A GenAI-assisted cataloguing pipeline: vision-language model analysis of object photographs generating structured draft records, LLM normalisation of free-text descriptions into LIDO-compliant fields, and a curatorial review interface where staff confirmed or corrected AI-generated entries. The controlled vocabulary was grounded in Getty AAT throughout.
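As an illustration of the normalisation and review handoff, the sketch below builds a constrained extraction prompt, validates the model's JSON against an expected field list, and stores the result as a draft pending curator confirmation. The field list is a simplified stand-in loosely modelled on LIDO descriptive metadata rather than the full schema, and the prompt wording, ledger entry, and example response are invented for the sketch.

```python
"""Sketch of the normalisation step: turning an LLM response for a free-text
ledger entry into a structured draft record awaiting curatorial review.

The field list is a simplified illustration loosely modelled on LIDO
descriptive metadata, not the full schema; prompt wording and the example
data are assumptions.
"""
import json

DRAFT_FIELDS = [
    "title", "object_type", "materials", "period_or_date",
    "iconography", "provenance_notes",
]

PROMPT_TEMPLATE = (
    "Extract the following fields from the ledger entry and return JSON with "
    "exactly these keys, using null where the entry gives no information: "
    "{fields}\n\nLedger entry:\n{entry}"
)


def build_prompt(ledger_entry: str) -> str:
    """Constrain the LLM to a fixed field list for one ledger entry."""
    return PROMPT_TEMPLATE.format(fields=", ".join(DRAFT_FIELDS), entry=ledger_entry)


def draft_record(llm_response: str) -> dict:
    """Validate the model's JSON against the expected fields and store it as a
    draft; curators confirm or correct it in the review interface."""
    fields = json.loads(llm_response)
    missing = [k for k in DRAFT_FIELDS if k not in fields]
    if missing:
        raise ValueError(f"LLM response missing fields: {missing}")
    return {"status": "pending_review", "source": "llm_draft", "fields": fields}


if __name__ == "__main__":
    entry = "Small bronze hand bell, 18th c.?, donor unknown."
    print(build_prompt(entry))
    example_response = json.dumps({
        "title": "Hand bell",
        "object_type": "bell",
        "materials": "bronze",
        "period_or_date": "18th century (uncertain)",
        "iconography": None,
        "provenance_notes": "Donor unknown",
    })
    print(draft_record(example_response))
```

In the pipeline described above, a vocabulary-grounding step would then map extracted terms such as materials and object types to Getty AAT concepts before the record reaches the review interface.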
Outcomes