
Tobias Lehner needed a way to inspect 200,000 different profile variants coming off REHAU's automotive parts production lines.
Manual visual inspection caught roughly 80% of defects, and because the company manufactures polymer profiles in thousands of colors and designs, inspectors could only check quality at bundle ends or by sampling.
Lehner, Smart Technologies Engineer at REHAU Industries, said rule-based inspection wouldn't work because "all conceivable defects would have to be known in advance and programmed in."
REHAU deployed Fujitsu's AI-based vision system in 2024.
Detection went to 99.32%.
But during the first month the system flagged surface variations that human inspectors had been passing for years, and it took several weeks before Lehner realized the higher accuracy had created a different problem.
👋🏻 I'm Leonardo Ubbiali. This week we're looking at what REHAU discovered about the tradeoff between catching every defect and scrapping parts that work fine, and why that tradeoff gets harder as accuracy goes up.


REHAU's manual inspectors had been making judgment calls for years about which surface marks mattered and which didn't.
Those judgment calls weren't documented anywhere.
When REHAU evaluated automated optical inspection before adopting AI, traditional rule-based systems failed for exactly the reason Lehner gave: every conceivable defect would have to be known in advance and programmed in.
The AI approach was better because it could learn from examples.
But REHAU discovered that learning from examples meant the system didn't automatically know which defects to ignore.
The false positive problem
During the initial deployment, REHAU's AI system flagged surface variations that didn't affect function.
A small scratch on a profile that would be hidden inside a window frame got the same treatment as a crack that would cause failure.
Manual inspectors knew the difference because they understood where the part would be used.
That context wasn't in the training data.
REHAU had two choices: tune the system down and miss some real defects, or keep sensitivity high and scrap good parts.
The difference between 99% and 99.32% accuracy sounds small, but that last fraction of a point cuts both ways: more real defects caught, and more good parts flagged incorrectly.
The pattern showed up everywhere
I started looking for other companies hitting the same problem and found Medtronic dealing with it in cardiac monitors.
Medtronic deployed its AccuRhythm AI algorithms on cardiac monitors and cut false alerts sharply: 74.1% fewer false atrial fibrillation alerts and 88.2% fewer false pause alerts.
That saved clinicians roughly 186 hours per year for every 200 patients monitored.
The cardiac monitor business now generates $600 million in annual revenue, and it took off only after the false positive problem was fixed.
For Medtronic the cost was clinician time reviewing alerts that turned out to be nothing, and for REHAU it was scrap costs from pulling good parts off the line.
But the pattern was the same: push detection higher and you catch more real problems while raising more false alarms.
You have to tune the system to balance both.
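Here's the tradeoff in miniature. The numbers below are invented for illustration, not REHAU's or Medtronic's: an assumed defect rate, escape cost, scrap cost, and three sensitivity settings. The expected cost per part balances misses against false flags:

DEFECT_RATE = 0.002      # assumed: 0.2% of parts are truly defective
ESCAPE_COST = 40.00      # assumed: cost of one defect reaching a customer
SCRAP_COST = 1.50        # assumed: cost of scrapping one good part

# (share of real defects caught, share of good parts wrongly flagged)
SETTINGS = {
    "low":    (0.980, 0.002),
    "medium": (0.990, 0.005),
    "high":   (0.9932, 0.012),
}

for name, (detection, false_flag) in SETTINGS.items():
    miss_cost = DEFECT_RATE * (1 - detection) * ESCAPE_COST
    flag_cost = (1 - DEFECT_RATE) * false_flag * SCRAP_COST
    print(f"{name:>6}: misses {miss_cost:.5f} + false flags {flag_cost:.5f}"
          f" = {miss_cost + flag_cost:.5f} per part")

# With these assumptions the high setting costs the most per part. The
# point isn't that high sensitivity is wrong -- it's that the right
# setting depends on your costs, not on the accuracy number alone.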

REHAU ran the AI system in parallel with manual inspection for several weeks.
They analyzed every flag and sorted it into one of four categories: actual defect, cosmetic issue, lighting artifact, or normal material variation.
Then they trained the AI on those categories so it could distinguish between marks that matter and marks that don't.
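If you want to see what that parallel-run audit looks like as data, here's a minimal sketch. The four labels match REHAU's categories; the log entries themselves are invented:

from collections import Counter

# Invented parallel-run log: every AI flag reviewed by a human inspector
# and given one of the four labels REHAU used.
flags = [
    "actual_defect", "cosmetic", "lighting_artifact", "cosmetic",
    "normal_variation", "cosmetic", "lighting_artifact", "actual_defect",
    # ...in practice, every flag from several weeks of production
]

counts = Counter(flags)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label:<18} {n:>3}  ({n / total:.0%} of flags)")

# Labels dominated by non-defects become training examples, so the
# retrained model learns to pass those marks instead of flagging them.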
For REHAU the question wasn't whether to deploy AI inspection, it was how to tune it so false positives didn't wipe out the savings.
A supplier I spoke to last week runs medical-grade parts at 99.5% sensitivity and industrial parts at 98%.
They tested both settings for two weeks.
Maximum sensitivity on industrial parts increased scrap by 14% but only caught three additional defects.
The economics didn't work.
For safety-critical parts you accept higher scrap to avoid defects reaching customers, but for commodity parts the cost of false positives exceeds the cost of occasional returns.
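To make the supplier's math concrete, here's a back-of-envelope version. The 14% scrap increase and three extra defects come from their test; the volume and costs are my assumptions:

# The 14% scrap increase and three extra defects are from the supplier's
# two-week test; volume and costs below are assumptions for illustration.
baseline_scrap_parts = 2_000     # assumed: parts scrapped in two weeks at 98%
scrap_cost_per_part = 2.00       # assumed
return_cost_per_defect = 120.00  # assumed: warranty/return cost per escape

extra_scrap_cost = baseline_scrap_parts * 0.14 * scrap_cost_per_part
returns_avoided = 3 * return_cost_per_defect

print(f"Extra scrap cost at 99.5% sensitivity: ${extra_scrap_cost:,.2f}")
print(f"Returns avoided by 3 extra catches:    ${returns_avoided:,.2f}")
# $560 in extra scrap against $360 in avoided returns. The math only
# flips for parts where one escape costs far more than scrapping many.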
Five things you can do this quarter


The problem: You're deploying AI vision inspection but don't know how to tune sensitivity without creating a scrap problem.
What you need:
Production data from the last quarter
Current defect and scrap rates
10 minutes
The prompt (copy this):
I'm a [ROLE] at a [FACILITY TYPE] plant deploying AI vision inspection.
Current data:
Manual inspection defect detection: [%]
Monthly scrap rate: [%]
Customer return rate: [%]
Average scrap cost per part: [$]
Average return/warranty cost per part: [$]
We're considering moving to AI inspection with 99%+ detection accuracy.
How much could scrap increase if the system flags 0.5% more parts than manual inspection?
What's the breakeven where higher scrap costs exceed the savings from catching defects?
Should we run different sensitivity settings for different product types?
How long should we run parallel testing before switching over?

What you get: Analysis of how higher detection affects scrap economics, breakeven calculations for different sensitivity levels, and a parallel testing timeline.
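If you'd rather run the breakeven yourself, the core formula is one line. This is a sketch using the same placeholders as the prompt, nothing vendor-specific, and the example numbers are assumed:

def breakeven_false_flag_rate(extra_detection: float,
                              return_cost: float,
                              scrap_cost: float) -> float:
    """Extra false-flag rate (share of good parts wrongly scrapped) at
    which added scrap cost cancels the savings from extra defects caught.
    extra_detection is the extra real defects caught, as a share of total
    production. Volume cancels out of both sides, so it never appears."""
    return extra_detection * return_cost / scrap_cost

# Example (all assumed): the AI catches real defects amounting to 0.2% of
# production that manual inspection missed; a return costs $150, scrapping
# a part costs $2.
rate = breakeven_false_flag_rate(0.002, 150.00, 2.00)
print(f"Scrap costs exceed savings once false flags rise past {rate:.1%}")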


REHAU's AI system now detects 99.32% of all defects across more than 200,000 profile variants.

Fujitsu Quality Inspection: REHAU Industries Case Study
Case study documenting how REHAU deployed AI-based visual inspection across production lines manufacturing 200,000+ polymer profile variants. Covers the challenge of inspecting products at high speed where manual inspection only catches samples, how they trained the AI to distinguish between defect types, and the multi-week parallel testing process before full deployment.
Time to value: 10 minutes
Are you running AI vision inspection on any lines? What's your false positive rate looking like?
Hit reply. I read every email.
Leo




