Testing AI Systems Since 2019

Back when most companies were just starting to talk about machine learning, we were already knee-deep in the messy work of figuring out how to test these systems properly.

It started small: a handful of engineers in Berlin who couldn't find good training on AI testing anywhere. So we built our own curriculum, tested it with real projects, and refined it based on what actually worked in production environments.

Today we're still those same engineers who believe that solid testing practices matter more than flashy promises.

2019

The Beginning

Three QA engineers kept running into the same problem: traditional testing methods broke down completely when applied to AI systems. Models would pass every test in staging, then make bizarre decisions in production. We needed a different approach, so we started documenting what actually worked when testing neural networks.

2022

Building Structure

After three years of trial and error on commercial projects, we had enough material for a proper curriculum. Not theory from research papers, but practical frameworks that held up under production pressure. We started teaching small groups, mostly experienced developers who needed to pivot into AI testing roles. The feedback loop helped us refine what actually mattered versus what sounded impressive but didn't translate to real work.

2025

Current Focus

We're training professionals who need to understand how AI systems can fail in unexpected ways. Our programs run throughout 2025 and into early 2026, covering everything from basic model validation to complex autonomous system testing. The work has gotten more challenging as AI systems become more sophisticated, which honestly makes it more interesting.

Who Teaches This Stuff

Our instructors spend half their time teaching and half their time working on actual AI testing projects. This keeps the curriculum grounded in current industry practices rather than outdated methodologies.

Renata Kozlov

Lead AI Testing Architect

Spent seven years breaking neural networks before anyone paid her to teach others how to do it properly. Specializes in finding edge cases that models weren't trained to handle. Built validation frameworks currently used by three automotive AI companies, though she's not allowed to say which ones. Runs the advanced testing methodology track starting September 2025.

Magnus Thorvald

Director of Testing Methodology

Former machine learning engineer who switched to testing after watching too many models fail in production. Developed systematic approaches for evaluating training data quality and model behavior consistency. His testing protocols helped catch critical issues in recommendation systems before they reached users. Teaches foundational courses and leads curriculum development for programs running through 2026.