Case Study — AI Vision Product
I designed and built a live AI product solo — from zero to App Store. This is how it happened, and why the hardest problems were design problems, not engineering ones.
The Problem
I drove two hours to the river, waders on, fly box stocked, rod rigged — all the right gear. And I got completely skunked. Not because I didn't have the flies. Because I didn't have the knowledge to know which ones to tie on, or why.
On the walk back to my car I ran into a local guy packing up his truck. We got to talking. He showed me his setup — every fly, every pattern, and more importantly, the reasoning behind each one. He talked about hatches, about reading the water, about a book called Fish Food. It was a masterclass delivered in a parking lot. I was furiously trying to commit it all to memory, knowing I'd forget most of it by the time I got home.
"You know what would be cool? If there was an app for that."
He said it offhandedly. He had no idea I was a product designer. I spent the entire two-hour drive home thinking about nothing else. The technology to do this existed — vision models capable of identifying insects and fly patterns from a photo. The problem wasn't the technology. It was that nobody had designed a product around it that actually worked in the context of the problem: standing in a river, one hand on a rod, the other on a phone, in variable light, with three seconds of patience.
I decided to build it. Not as a portfolio project — as a real product. For that local guy. For every angler who's had the gear but not the knowledge.
Product Strategy
Before designing a single screen, I had to decide how to frame the AI's job. Three options — and the choice shaped everything downstream.
Option C — framing the model as a reasoner rather than a classifier — changed the entire design philosophy. If the model reasons rather than classifies, the interface has to show that reasoning, not just the conclusion. That decision led directly to every key design choice that followed.
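To make the distinction concrete, here is a rough sketch of how the two framings differ in output shape. The field names are my own illustration, not HatchMatch's actual schema:

```typescript
// Hypothetical output shapes -- illustrative only, not the app's real schema.

// A classifier returns a label and a score, and nothing else.
interface ClassificationResult {
  label: string;      // e.g. "Blue Winged Olive"
  confidence: number; // 0..1
}

// A reasoner returns a conclusion *plus* the evidence behind it,
// which is what the interface then has to surface.
interface ReasonedResult {
  conclusion: string;     // e.g. "tie on a small mayfly emerger"
  confidence: number;     // 0..1
  reasoning: string[];    // observable evidence, one step per entry
  alternatives: string[]; // what to try if the top pick is wrong
}

// The interface contract follows from the shape: a reasoned result
// always has something to show beyond the conclusion itself.
function hasVisibleReasoning(r: ReasonedResult): boolean {
  return r.reasoning.length > 0;
}
```

The design consequence is structural: a classification can only be presented as fact, while a reasoned result gives the interface material to show its work.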
Core Design Challenge
Most AI product design treats uncertainty as something to hide. The model returns a result; the interface presents it as fact. This works when stakes are low. It breaks when the user is about to make a real decision based on what the interface tells them.
In fly fishing, tying on the wrong fly for the conditions is a real cost — time, opportunity, confidence. If the model isn't sure, I needed the interface to say so clearly and usefully, not silently paper over it. My answer was a three-part framework.
Three Surfaces
HatchMatch covers three use cases that share a data model and interaction language — but each has structurally different output requirements.
Fly ID Result
Hatch Analysis
Flybox Scan
Key Screen — Flybox Scan
Showing confidence scores at the pattern level in an inventory scan was a deliberate choice to extend the confidence framework from single-item ID to collection analysis. Each pattern in the result carries its own confidence score, making the limitations of the scan visible and honest.
A user can see immediately that the system is 80% confident about the Nymph group but only 70% confident about the Beadhead Nymph count — and make decisions accordingly. The "est." labels on summary stats reinforce this: the product is honest about what it knows and what it's approximating.
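One way to implement that honesty is a simple display rule: any count whose confidence falls below a threshold gets the "est." qualifier instead of being presented as exact. This is a sketch of that rule with hypothetical names and a threshold I chose for illustration, not the shipped values:

```typescript
// Hypothetical sketch: rendering a scanned count honestly.
// The names and the 0.9 threshold are assumptions, not product values.
interface PatternCount {
  pattern: string;    // e.g. "Beadhead Nymph"
  count: number;
  confidence: number; // 0..1, per-pattern, from the vision model
}

const EXACT_THRESHOLD = 0.9; // below this, the count is an estimate

function displayCount(p: PatternCount): string {
  const qualifier = p.confidence < EXACT_THRESHOLD ? "est. " : "";
  return `${qualifier}${p.count} ${p.pattern} (${Math.round(p.confidence * 100)}% confident)`;
}
```

For example, a 70%-confidence scan renders as "est. 6 Beadhead Nymph (70% confident)" — the uncertainty travels with the number instead of being stripped out before display.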
"The flybox scan isn't just an inventory tool. It's a gap analysis — you can see at a glance what you have too much of, what's running low, and how confident the system is about each count. No other fly fishing app does this."
The Full Flow
Onboarding
The three value props map directly to the product's three surfaces — so by the time users create an account they already have a working mental model. The beta disclaimer at the bottom is honest expectation-setting at exactly the right moment, not buried in settings.
The Loading State
A vision API call takes 2–4 seconds. Not long enough for a progress bar, but long enough that a generic spinner erodes confidence. The branded loading state — deep blue field, orange sparkle, verb-as-label "Hatch Matching..." — holds the user inside the product's world during the one moment they're most likely to second-guess whether the app is working.
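The 2–4 second window sits in an awkward middle zone, and the choice of loading UI follows from it. A rough sketch of that decision rule — the threshold values are my own reading of the tradeoff, not measured product numbers:

```typescript
// Hedged sketch of the loading-UI tradeoff described above.
// Thresholds are illustrative assumptions, not HatchMatch's actual values.
type LoadingUI = "none" | "branded-indeterminate" | "progress-bar";

function loadingUIFor(expectedMs: number): LoadingUI {
  if (expectedMs < 1000) return "none";                   // fast enough to skip
  if (expectedMs <= 5000) return "branded-indeterminate"; // the 2-4s zone: branded state, verb-as-label
  return "progress-bar";                                  // long enough to meter real progress
}
```

The middle branch is where a generic spinner would erode confidence; a branded indeterminate state fills the same duration while keeping the user inside the product's world.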
Home Screen
Full-bleed imagery, a single dominant CTA, and a three-tab nav for instant access to Camera, Home, and Flybox. The tagline "Less guessing. More confidence." is the entire product promise distilled to four words.
Flybox Library
Every identification saves to a searchable, filterable personal library. Entries are tagged by type (Bug, Fly, Fly Box) and date. The flybox becomes more valuable every time you use the app — a persistent, growing record of what you know about your gear.
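The search-and-filter behavior described above can be sketched in a few lines. Field names here are assumptions based on the tags the library exposes, not the app's actual data model:

```typescript
// Hypothetical sketch of the library's search/filter behavior.
// Types and field names are illustrative assumptions.
type EntryType = "Bug" | "Fly" | "Fly Box";

interface LibraryEntry {
  name: string;
  type: EntryType;
  savedAt: string; // ISO date, e.g. "2024-06-01"
}

function filterLibrary(
  entries: LibraryEntry[],
  opts: { type?: EntryType; query?: string }
): LibraryEntry[] {
  return entries.filter(e => {
    const typeOk = !opts.type || e.type === opts.type;
    const queryOk =
      !opts.query || e.name.toLowerCase().includes(opts.query.toLowerCase());
    return typeOk && queryOk;
  });
}
```

Because every identification lands in this structure automatically, the filters get more useful as the collection grows — no manual cataloging required.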
Takeaways