Setting the stage
Veriff was going through turbulent times. Leadership changes, a company-wide layoff, and shifting priorities created an atmosphere of uncertainty. Our team’s challenge was clear yet daunting—our resubmission rates were driving up costs, frustrating users, and tarnishing our reputation.
The problem
High resubmission rates weren’t just expensive; they also damaged trust. But what was causing users to fail their verification attempts?
Discovery
To find answers, I started with a deep dive into our data. I analyzed session breakdowns by country, resubmission rates, and the specific customers most affected. This gave me an initial glimpse into the scale of the problem. But numbers only told part of the story.
Next, I explored qualitative sources. I pored over customer tickets, listened to support calls, and even shadowed our manual verification team. Patterns began to emerge—poor image quality and process misunderstandings were leading contributors.
A quick win
We began by improving the manual verification process with ML. This helped our team catch errors faster and more consistently, reducing mistakes that led to resubmissions. It also proved that automation could empower humans rather than replace them.
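To give a rough sense of what "ML empowering humans" can look like in practice, here is a minimal sketch of a review-assist check: the model never makes the final call, it only flags sessions worth a second look. The types, thresholds, and scoring model here are my own illustrative assumptions, not the production system.

```ts
// Hypothetical sketch of ML-assisted manual review: the model does not decide,
// it only surfaces sessions where a second human look is likely worthwhile.
// Names and thresholds are illustrative assumptions.

type Decision = "approve" | "resubmit";

interface ReviewedSession {
  sessionId: string;
  reviewerDecision: Decision;
  modelDecision: Decision;   // prediction from an assumed quality/fraud model
  modelConfidence: number;   // 0..1, how sure the model is of its prediction
}

const CONFIDENCE_THRESHOLD = 0.85; // assumed cut-off for "confident disagreement"

/** Return sessions where a confident model disagrees with the reviewer,
 *  so a senior specialist can double-check before the decision ships. */
function flagForSecondReview(sessions: ReviewedSession[]): ReviewedSession[] {
  return sessions.filter(
    (s) =>
      s.modelDecision !== s.reviewerDecision &&
      s.modelConfidence >= CONFIDENCE_THRESHOLD
  );
}
```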
Image capturing
Next, we tackled end-user image capture—a major contributor to verification failures. We designed prototypes that provided real-time feedback, helping users adjust lighting, framing, and glare before submission.
Through iterative testing, we fine-tuned these solutions, balancing usability with technical constraints. Resubmission rates began to drop as users succeeded on their first try.
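To make the real-time feedback idea concrete, here is a minimal sketch of the kind of client-side check such a prototype can run on a preview frame before upload. The heuristics, thresholds, and function names are assumptions for illustration only, not the production implementation.

```ts
// Minimal sketch of pre-submission frame checks for a browser capture flow
// (assumes a DOM environment where ImageData is available).
// Thresholds and heuristics are illustrative assumptions, not production values.

interface CaptureFeedback {
  tooDark: boolean;   // low average luminance -> ask the user for more light
  glare: boolean;     // many near-white pixels -> ask the user to tilt the document
}

function checkFrame(frame: ImageData): CaptureFeedback {
  const { data, width, height } = frame;
  const pixelCount = width * height;
  let luminanceSum = 0;
  let brightPixels = 0;

  // RGBA layout: 4 bytes per pixel.
  for (let i = 0; i < data.length; i += 4) {
    // Perceptual luminance approximation (Rec. 601 weights).
    const luma = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    luminanceSum += luma;
    if (luma > 245) brightPixels++; // near-white pixels suggest glare/reflection
  }

  const meanLuminance = luminanceSum / pixelCount;
  return {
    tooDark: meanLuminance < 60,             // assumed darkness threshold
    glare: brightPixels / pixelCount > 0.05, // assumed glare threshold (5% of frame)
  };
}
```

In a real flow, a check like this would run on each preview frame (for example, drawn to a canvas from the camera stream) and drive the on-screen hints; framing and blur detection would need more involved checks.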
Delivering results
We cut resubmissions by 23% and reduced fraud by 25% without any extra effort from users. Session drop-offs fell by 15%, and the higher-quality images made manual verification much easier. Overall, it's been a great success!