Case Study — AI Vision Product
I designed and built a live AI product solo — from zero to App Store. This is how it happened, and why the hardest problems were design problems, not engineering ones.
The Problem
I drove two hours to the river, waders on, fly box stocked, rod rigged — all the right gear. And I got completely skunked. Not because I didn't have the flies. Because I didn't have the knowledge to know which ones to tie on, or why.
On the walk back to my car I ran into a local guy packing up his truck. We got to talking. He showed me his setup — every fly, every pattern, and more importantly, the reasoning behind each one. He talked about hatches, about reading the water, about a book called Fish Food. It was a masterclass delivered in a parking lot. I was furiously trying to commit it all to memory, knowing I'd forget most of it by the time I got home.
"You know what would be cool? If there was an app for that."
He said it offhandedly. He had no idea I was a product designer. I spent the entire two-hour drive home thinking about nothing else. The technology to do this existed — vision models capable of identifying insects and fly patterns from a photo. The problem wasn't the technology. It was that nobody had designed a product around it that actually worked in the context of the problem: standing in a river, one hand on a rod, the other on a phone, in variable light, with three seconds of patience.
I decided to build it. Not as a portfolio project — as a real product. For that local guy. For every angler who's had the gear but not the knowledge.
Product Strategy
Before designing a single screen, I had to decide how to frame the AI's job. Three options — and the choice shaped everything downstream.
Option C changed the entire design philosophy. If the model reasons rather than classifies, the interface has to show that reasoning — not just the conclusion. That decision led directly to every key design choice that followed.
Core Design Challenge
Most AI product design treats uncertainty as something to hide. The model returns a result; the interface presents it as fact. This works when stakes are low. It breaks when the user is about to make a real decision based on what the interface tells them.
In fly fishing, tying on the wrong fly for the conditions is a real cost — time, opportunity, confidence. If the model isn't sure, I needed the interface to say so clearly and usefully, not silently paper over it. My answer was a three-part framework for surfacing confidence honestly.
Three Surfaces
HatchMatch covers three use cases that share a data model and interaction language — but each has structurally different output requirements.
Fly ID Result
Hatch Analysis
Flybox Scan
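The shared model might be sketched as a discriminated union: one base shape carrying identification, confidence, and reasoning, narrowed per surface. All type and field names here are illustrative assumptions, not the shipped schema.

```typescript
// Hypothetical data model sketch: every surface shares a base shape
// (label + confidence + reasoning), while each use case adds its own
// structurally different output requirements.

interface BaseResult {
  label: string;        // what the model identified
  confidence: number;   // 0..1, always surfaced to the user
  reasoning: string[];  // the "why" — shown alongside the conclusion
}

interface FlyIdResult extends BaseResult {
  kind: "flyId";
  patternFamily: string;        // e.g. "Nymph"
}

interface HatchAnalysis extends BaseResult {
  kind: "hatch";
  suggestedPatterns: string[];  // flies to tie on for these conditions
}

interface FlyboxScan {
  kind: "flybox";
  patterns: BaseResult[];       // one confidence score per pattern
}

type SurfaceResult = FlyIdResult | HatchAnalysis | FlyboxScan;
```

The `kind` discriminant is what lets one interaction language render three different surfaces from one result pipeline.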
Key Screen — Flybox Scan
Showing confidence scores at the pattern level in an inventory scan was a deliberate choice to extend the confidence framework from single-item ID to collection analysis. Each pattern in the result carries its own confidence score, making the limitations of the scan visible and honest.
A user can see immediately that the system is 80% confident about the Nymph group but only 70% confident about the Beadhead Nymph count — and make decisions accordingly. The "est." labels on summary stats reinforce this: the product is honest about what it knows and what it's approximating.
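The "est." rule described above could be sketched like this — the threshold value and function names are my assumptions, not the shipped logic, but the behavior matches the screen: counts below a confidence cutoff are labelled as estimates rather than presented as fact.

```typescript
// Illustrative sketch: per-pattern confidence flows through to the UI,
// and low-confidence counts get an "est." qualifier. Threshold assumed.

interface PatternCount {
  name: string;
  count: number;
  confidence: number; // 0..1
}

function formatCount(p: PatternCount, estThreshold = 0.9): string {
  // Below the cutoff, be honest that the count is an approximation.
  const qualifier = p.confidence < estThreshold ? "est. " : "";
  return `${qualifier}${p.count}× ${p.name} (${Math.round(p.confidence * 100)}%)`;
}

formatCount({ name: "Beadhead Nymph", count: 6, confidence: 0.7 });
// → "est. 6× Beadhead Nymph (70%)"
```

Keeping the qualifier in the formatter, rather than in each screen, means every surface that shows a count inherits the same honesty rule for free.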
"The flybox scan isn't just an inventory tool. It's a gap analysis. No other fly fishing app does this."
The Full Flow
Onboarding
Three value props. Three surfaces. Users arrive with a mental model already built.
The Loading State
A 2–4 second Vision API call is long enough to lose someone. "Hatch Matching..." keeps them inside the product, creating a sense of expectation.
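One way to keep that wait feeling alive is to rotate through staged status copy while the call is in flight. This is an illustrative sketch only — the helper name, message copy, and timings are assumptions, not the shipped code.

```typescript
// Hypothetical sketch: advance through staged status messages while an
// async vision call runs, so a 2-4 second wait reads as progress
// instead of a stall.
async function withStagedStatus<T>(
  task: Promise<T>,
  onStatus: (message: string) => void,
  stages = ["Analyzing photo...", "Hatch Matching...", "Almost there..."],
  stepMs = 1500,
): Promise<T> {
  let i = 0;
  onStatus(stages[i]); // show the first message immediately
  const timer = setInterval(() => {
    i = Math.min(i + 1, stages.length - 1); // park on the last stage, never loop
    onStatus(stages[i]);
  }, stepMs);
  try {
    return await task; // the real vision call would be awaited here
  } finally {
    clearInterval(timer); // always stop the rotation, success or failure
  }
}
```

Parking on the final message rather than looping avoids the tell of a spinner that has visibly given up.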
Home Screen
One CTA. Full-bleed imagery. Four words that are the entire product promise: "Less guessing. More confidence."
Flybox Library
Every scan saves. The app gets more valuable every time you use it.
Takeaways