Case Study — AI Vision Product

HatchMatch.
The field guide that thinks.

I designed and built a live AI product solo — from zero to App Store. This is how it happened, and why the hardest problems were design problems, not engineering ones.

My Role
Sole Designer & Developer
Stack
React Native · Node.js · OpenAI Vision · Supabase
Timeline
2025 – Present
Status
Live on the App Store
HatchMatch onboarding screen
Hatch Matching loading state
Fly identification result screen

The most important screen

Insect ID hatch analysis
Flybox scan results

The right gear.
The wrong flies. A stranger's advice.

I drove two hours to the river, waders on, fly box stocked, rod rigged — all the right gear. And I got completely skunked. Not because I didn't have the flies. Because I didn't have the knowledge to know which ones to tie on, or why.

On the walk back to my car I ran into a local guy packing up his truck. We got to talking. He showed me his setup — every fly, every pattern, and more importantly, the reasoning behind each one. He talked about hatches, about reading the water, about a book called Fish Food. It was a masterclass delivered in a parking lot. I was furiously trying to commit it all to memory knowing I'd forget most of it by the time I got home.

"You know what would be cool? If there was an app for that."

He said it offhandedly. He had no idea I was a product designer. I spent the entire two-hour drive home thinking about nothing else. The technology to do this existed — vision models capable of identifying insects and fly patterns from a photo. The problem wasn't the technology. It was that nobody had designed a product around it that actually worked in the context of the problem: standing in a river, one hand on a rod, the other on a phone, in variable light, with three seconds of patience.

I decided to build it. Not as a portfolio project — as a real product. For that local guy. For every angler who's had the gear but not the knowledge.

The foundational decision:
what should the AI actually do?

Before designing a single screen, I had to decide how to frame the AI's job. Three options — and the choice shaped everything downstream.

Option A
Species database lookup
User identifies insect by name, app returns fly recommendations.
Problem: most anglers don't know the name. That's the whole point.
Option B
Pure vision classification
Train a closed model to classify known species.
Problem: regional variations and unnamed patterns make this brittle at exactly the moment it matters.
Option C — Chosen
Vision model as reasoning partner
Send the image to a multimodal model, ask it to reason about features, assess confidence, and generate recommendations from what it observes.
✦ Treat uncertainty as data, not failure.

Option C changed the entire design philosophy. If the model reasons rather than classifies, the interface has to show that reasoning — not just the conclusion. That decision led directly to every key design choice that followed.
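The reasoning-partner framing implies the model returns structured evidence, not a bare label. Here is a minimal TypeScript sketch of validating such a response before it reaches the UI. The field names and shape are illustrative, not HatchMatch's actual schema:

```typescript
// Illustrative shape for a reasoning-style ID response (hypothetical
// field names, not HatchMatch's production schema).
interface FlyIdResult {
  name: string;                 // best-guess pattern name
  confidence: number;           // 0..1, the model's self-assessed confidence
  detectedFeatures: string[];   // visual evidence the model cites
  recommendation: string;       // what to do with the ID, kept in its own layer
}

// Narrow an untyped model response into the expected shape, treating
// malformed output as a recoverable case rather than a crash.
function parseFlyIdResult(raw: unknown): FlyIdResult | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.name !== "string") return null;
  if (typeof r.confidence !== "number" || r.confidence < 0 || r.confidence > 1) {
    return null;
  }
  if (
    !Array.isArray(r.detectedFeatures) ||
    !r.detectedFeatures.every((f) => typeof f === "string")
  ) {
    return null;
  }
  if (typeof r.recommendation !== "string") return null;
  return r as unknown as FlyIdResult;
}
```

Keeping `recommendation` as a separate field in the schema is one way the "identification vs. recommendation" split can be enforced at the data layer, not just in the UI.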

Designing for uncertainty
without feeling broken.

Most AI product design treats uncertainty as something to hide. The model returns a result; the interface presents it as fact. This works when stakes are low. It breaks when the user is about to make a real decision based on what the interface tells them.

In fly fishing, tying on the wrong fly for the conditions is a real cost — time, opportunity, confidence. If the model isn't sure, I needed the interface to say so clearly and usefully, not silently paper over it. My answer was a three-part framework:

Principle 01
Confidence as a first-class signal
Surface the model's confidence level prominently in the result — not buried in a tooltip or hidden entirely. It's a feature, not a footnote.
Principle 02
Show the reasoning, not just the conclusion
Display the visual features the model detected so the user can evaluate whether the identification makes sense. Let them verify, not just trust.
Principle 03
Separate identification from recommendation
The model tells you what it sees. The product tells you what to do about it. These are different jobs and they live in different UI layers.

One product. Three distinct jobs to be done.

HatchMatch covers three use cases that share a data model and interaction language — but each has structurally different output requirements.

Surface 01
Single Fly or Insect ID
Photo to result in under five seconds. Detail-rich output: detected features, estimated size, recommended rig, pairing suggestions.
Surface 02
Hatch Analysis
Photograph a live insect on or near the water. The model identifies species and life stage, then translates directly into fly pattern recommendations for active feeding fish.
Surface 03
Flybox Inventory Scan
Photograph your entire fly box in one shot. The model identifies patterns across the whole image, groups and counts them, and returns a confidence-weighted inventory — a gap analysis before your next trip.
Fly ID screen

Fly ID Result

Hatch Analysis screen

Hatch Analysis

Flybox Scan Results

Flybox Scan

The Fly ID result.
Four decisions in one screen.

Fly Identification result screen
Decision 01
Confidence as a badge, not a number
A percentage invites anxiety. A badge — High Confidence or Low Confidence — communicates the same information in the user's register, not the model's.
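That mapping is a one-liner, but it is the point where the model's register gets translated into the user's. A sketch, with an illustrative threshold (the actual cut-off isn't stated in the case study):

```typescript
type ConfidenceBadge = "High Confidence" | "Low Confidence";

// Translate the model's raw 0..1 score into the badge the user sees.
// The 0.75 cut-off is an assumption for illustration; a real threshold
// would be tuned against field results.
function confidenceBadge(score: number): ConfidenceBadge {
  return score >= 0.75 ? "High Confidence" : "Low Confidence";
}
```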
Decision 02
Detected Features as AI explainability
The model doesn't just name the fly — it tells you which visual features it detected and explains what each one means functionally. The user can verify that the identification makes sense, and learns something in the process.
Decision 03
Recommended Rig as decision support
The product doesn't stop at identification. It translates the ID into a tactical recommendation — how to fish the fly, at what depth, with which setup. Most field guide apps stop at identification; HatchMatch completes the job.
Decision 04
"Edit Name" as a humility affordance
The pencil icon under the fly name gives the user agency to correct the model. The AI is a starting point, not an authority — and experienced anglers name their own patterns.

Confidence at collection scale.

Flybox scan results screen

Showing confidence scores at the pattern level extends the confidence framework from single-item ID to collection analysis. Each pattern in the result carries its own confidence score, making the limitations of the scan visible and honest.

A user can see immediately that the system is 80% confident about the Nymph group but only 70% confident about the Beadhead Nymph count — and make decisions accordingly. The "est." labels on summary stats reinforce this: the product is honest about what it knows and what it's approximating.
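One plausible way to roll per-pattern scores up into a group score, and to drive the "est." labels, sketched in TypeScript. The count-weighted average and the 0.9 cut-off are assumptions for illustration; the case study doesn't specify the aggregation method:

```typescript
interface ScannedPattern {
  name: string;
  count: number;
  confidence: number; // 0..1 per-pattern score from the scan
}

// A count-weighted average, so an uncertain count drags the group score
// down in proportion to how much of the box it represents. (Illustrative,
// not HatchMatch's documented aggregation.)
function groupConfidence(patterns: ScannedPattern[]): number {
  const total = patterns.reduce((sum, p) => sum + p.count, 0);
  if (total === 0) return 0;
  const weighted = patterns.reduce((sum, p) => sum + p.confidence * p.count, 0);
  return weighted / total;
}

// Summary stats below the cut-off get an "est." label instead of being
// presented as exact.
function formatCount(count: number, confidence: number): string {
  return confidence >= 0.9 ? String(count) : `est. ${count}`;
}
```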

"The flybox scan isn't just an inventory tool. It's a gap analysis — no other fly fishing app does this."

Every screen is a design decision.

Onboarding screen

Onboarding

Three value props. Three surfaces. Users arrive with a mental model already built.

Loading state

The Loading State

A 2–4 second Vision API call is long enough to lose someone. "Hatch Matching..." keeps them inside the product, turning the wait into anticipation.
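A common companion pattern for loading states like this (my assumption, not a documented HatchMatch implementation) is to hold the loader for a minimum duration, so an unusually fast response doesn't flash the screen away for a jarring instant:

```typescript
// Run an async call (e.g. the Vision request) but hold the loading state
// for at least minMs. A generic sketch of a common UI pattern, not
// necessarily how HatchMatch implements it.
async function withMinimumDelay<T>(work: Promise<T>, minMs: number): Promise<T> {
  const floor = new Promise<void>((resolve) => setTimeout(resolve, minMs));
  const [result] = await Promise.all([work, floor]);
  return result;
}
```

The same wrapper is also a natural single place to hang a timeout or retry policy later.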

Home Screen

Home Screen

One CTA. Full-bleed imagery. Four words that are the entire product promise: "Less guessing. More confidence."

Flybox screen

Flybox Library

Every scan saves. The app gets more valuable every time you use it.

What building this solo taught me.

Confidence communication is a design primitive
Every AI product has to answer what happens when the model isn't sure. Hiding uncertainty destroys trust when the user eventually notices. Surfacing it honestly, and giving the user something useful to do with it, builds a different kind of trust — one that survives imperfect results.
The job-to-be-done doesn't end at identification
A user pointing their camera at an insect isn't trying to know what it is — they're trying to catch more fish. The interface that stops at identification has done half the job. The one that continues through to recommendation, rig setup, and pairing suggestions has completed it.
Shipping alone compresses the learning curve
When there's no PM to make the call, no engineer to push back on a spec, no design review to catch blind spots — every decision is yours and every consequence is immediate. HatchMatch made me a better product designer in six months than any single team project I've worked on.
Domain expertise changes what you design
Being a fly fisherman — not just a designer studying fly fishermen — meant I understood which constraints were real and which were assumptions. The three-second patience window, the variable light, the one free hand. You can't design for a context you've never stood in.