Case Study — Enterprise AI Platform

Tenjin.
Making AI legible
at the DoD.

I was SAIC's first UX hire. No design function, no system, no process. I built the design language for an AI platform used across the Department of Defense — from a blank Figma file.

Role
Sole Designer · Founding UX Hire
Team
SAIC Innovation Factory
Stack
React · Figma · Storybook
Timeline
2021 – 2025
Status
Deployed across DoD, DOJ, and federal agency programs
Tenjin Main Dashboard

The AI worked.
Nobody could use it.

SAIC's Innovation Factory had built powerful AI capabilities — NLP pipelines, computer vision models, data fusion tools — that almost nobody outside the data science team could actually use. The technology existed. The gap was translation.

When I joined in 2021, there was no design function, no design system, no component library, no process, and no documentation. The team was shipping AI-powered tools directly to DoD operators and analysts who weren't data scientists — and the gap between what the technology could do and what users could accomplish with it was enormous.

"The problem wasn't the AI. The AI worked. The problem was turning sophisticated, interconnected machine learning capabilities into something a non-technical analyst could deploy, configure, and trust in a mission-critical environment."

Two users.
One platform.

Tenjin served two fundamentally different user types with different mental models, different vocabularies, and different relationships to the underlying technology. Designing a single platform that served both without alienating either was the central UX challenge.

User Type 01
Data Scientists & ML Engineers
The people who built the AI capabilities and understood them deeply. They needed power, configurability, and technical precision. They wanted to see the internals.
User Type 02
Domain Analysts & Mission Operators
People who needed AI to answer specific questions and drive specific decisions. They didn't know what a transformer model was — nor should they have to. They needed to know: can this help me, and how?

Only once I understood both of them completely — their vocabulary, their mental models, their relationship to the technology — could I design an interface that didn't make either one feel like a second-class user. That's not a UX insight. That's just the work.

Design the system,
not the screens.

Before designing a single product screen, I made a decision that shaped everything that followed: I would not design individual features. I would design the system that generated features.

The Innovation Factory was running multiple parallel programs — Tenjin, Advana, CDM — each with different product teams and engineering leads. Without a shared design foundation, each program would drift visually and behaviorally, creating a fragmented experience across tools that were supposed to feel like a unified platform.

I built a Figma-based design system from scratch: component library, typography scale, color system, spacing tokens, usage documentation, and a Storybook integration that gave engineers a living reference for every component. This wasn't a UI kit — it was the shared language that let a 250+ person team ship consistently at scale without requiring a designer in every room.
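To make the idea concrete, here is a minimal sketch of what spacing and color tokens look like when shipped to engineers as a typed module. The token names and values are illustrative assumptions, not Tenjin's actual scale; the point is that engineers consume named tokens instead of hard-coding pixel values or hex strings.

```typescript
// Hypothetical design tokens as a typed module.
// Names and values are illustrative, not the platform's real scale.

// Spacing on a 4px base grid — a common enterprise convention.
const spacing = {
  xs: 4,
  sm: 8,
  md: 16,
  lg: 24,
  xl: 32,
} as const;

// Semantic color tokens map intent to value, so every program
// shares meaning (e.g. "danger") rather than raw hex strings.
const color = {
  textPrimary: "#1a1a1a",
  textMuted: "#5f6368",
  accent: "#2f6fed",
  danger: "#c5221f",
} as const;

type SpacingToken = keyof typeof spacing;

// Helper engineers call instead of hard-coding pixel values.
function space(token: SpacingToken): string {
  return `${spacing[token]}px`;
}

console.log(space("md")); // "16px"
```

Because the tokens are typed, a misspelled token name fails at compile time rather than drifting silently into the UI — the same guarantee the Figma library gave designers, mirrored in code.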

"The design system became the platform's institutional memory for user-facing decisions. When new team members joined — and at 250+ people, they joined frequently — it was their onboarding guide for the product's design philosophy."

The Dashboard.
Two audiences, one entry point.

Tenjin Main Dashboard
Decision 01
Layered messaging for dual audiences
The hero line — "no-code/low-code solution to design, deploy, and manage AI applications" — speaks to both audiences simultaneously. "No-code/low-code" signals accessibility to analysts. "Design, deploy, and manage AI applications" signals capability to engineers. Neither user feels like they landed in the wrong place.
Decision 02
Three distinct entry paths
Training, IDM, and Datasets create separate lanes so each user type can navigate directly to their context. An analyst goes to IDM. An engineer goes to Training. Nobody has to wade through the other's workflow to find their own.
Decision 03
The right-rail as passive discovery
The Accelerator Store preview in the right rail isn't decoration — it's a persistent capability browser. An analyst who's never thought about what NLP could do for their workflow can see "Natural Language Processing" at a glance and start asking questions. Discovery without friction.

The Accelerator Store.
Making AI capabilities browsable.

Tenjin Accelerator Store
Decision 01
Treat AI like an app store
The central design problem: how do you make machine learning capabilities feel approachable to someone who has never installed a model? The answer was to treat AI accelerators exactly like software you'd browse — plain-English names, clear descriptions, usage-based social proof ("Used by 4.2k projects"), and explicit install states.
Decision 02
Three columns teach a mental model
The column structure — NLP, Computer Vision, Data Fusion — doesn't just organize capabilities. It teaches users how to think about AI modularity. NLP does language things. CV does image things. Data Fusion does analysis things. Presenting capabilities this way makes users more capable over time, not just in the moment.
Decision 03
Surface what requires purchase, don't hide it
Rather than hiding capabilities that require procurement, the interface surfaces them with a clear "Purchase Required" label. This gives analysts visibility into what's possible so they can advocate for it — and gives program managers a shopping list for capability expansion. Honesty as a product feature.
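The three decisions above can be sketched as a data model. This is a hypothetical reconstruction — the interface, category, and field names are my assumptions, not Tenjin's actual schema — but it shows how an explicit install state lets the store surface purchasable capabilities instead of filtering them out.

```typescript
// Hypothetical model of an Accelerator Store card.
// Names and fields are illustrative assumptions.

type Category = "NLP" | "Computer Vision" | "Data Fusion";

// Install state is explicit: capabilities requiring procurement
// are labeled, not hidden from the catalog.
type InstallState = "installed" | "available" | "purchase_required";

interface AcceleratorCard {
  name: string;           // plain-English, not an architecture name
  category: Category;
  description: string;
  usedByProjects: number; // social proof, e.g. "Used by 4.2k projects"
  state: InstallState;
}

// Every card renders; the label carries procurement status.
function ctaLabel(card: AcceleratorCard): string {
  switch (card.state) {
    case "installed":
      return "Installed";
    case "available":
      return "Install";
    case "purchase_required":
      return "Purchase Required";
  }
}

const card: AcceleratorCard = {
  name: "Entity Extraction",
  category: "NLP",
  description: "Pull names, places, and organizations from raw text.",
  usedByProjects: 4200,
  state: "purchase_required",
};

console.log(ctaLabel(card)); // "Purchase Required"
```

Modeling "Purchase Required" as a first-class state rather than a visibility filter is what makes the honesty-as-a-feature decision enforceable in code.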

The IDM Solution Page.
Making the process visible.

Intelligent Document Management solution page
Decision 01
Show the pipeline, don't hide it
The temptation in enterprise AI UX is to hide complexity — show only the output and let users trust it as magic. I chose the opposite: show exactly what happens to a document from upload through to the final report, with each processing step named and connected. Demystifying the process builds confidence in the output.
Decision 02
One diagram, two audiences
The pipeline diagram serves both users simultaneously. For analysts: it explains how the output was produced, so results don't feel arbitrary. For engineers: it maps directly to the module architecture they're building with. Same visual, two different layers of meaning.
Decision 03
The right rail closes the product loop
The NLP module list connects the solution diagram back to the Accelerator Store — each pipeline step links to an installable module. Browse capabilities → see how they combine → deploy solutions → understand outputs. A coherent loop, not a collection of disconnected screens.

Every screen is a
system decision.

Admin Dashboard

Admin Dashboard

Tool management for program administrators — toggle integrations like Koverse, Databricks, and Tableau on or off, add custom tools, and manage user access via Keycloak. The same design language scales from analyst-facing workflows to admin-facing controls.

Datasets browser

Datasets

A searchable, tag-filtered library of datasets available across the platform. Filter chips (My Datasets, News, Models) reduce scope without hiding breadth. The grid layout treats data assets the same way the Accelerator Store treats AI capabilities — browsable, labeled, and ready to act on.

Not a deliverable.
A product.

By the time Tenjin was deployed across programs, the design system had become the connective tissue for the entire Innovation Factory's product surface.

Outcome 01
Consistent cross-program identity
Tenjin, Advana, and CDM all drew from the same component library — creating a coherent visual language across programs that would otherwise have diverged entirely.
Outcome 02
Faster engineering delivery
Storybook integration meant engineers had living documentation for every component. Design-to-build cycles shortened significantly because there were fewer "what does this state look like?" questions.
Outcome 03
Team-wide onboarding guide
Usage documentation became the institutional record for why certain interaction patterns worked the way they did. On a 250+ person team with frequent new joiners, the design system was how new members learned the product's design philosophy.

What building this
taught me.

Design systems are products, not deliverables
A component library that gets handed off and never maintained isn't a design system — it's a snapshot. Building something 250 people actually use requires thinking about governance, documentation, and onboarding as core product features.
Approachable ≠ dumbed down
The Accelerator Store didn't simplify what NLP or Computer Vision was — it made those capabilities navigable. There's a meaningful difference between reducing complexity and hiding it. Users who understand what they're working with make better decisions.
The founding designer role is a product strategy role
With no prior design function to defer to, every decision about what to build, how to frame it, and what to prioritize was mine. That's not a design execution job — it's product leadership done through design artifacts.
The dual-audience tension is never fully resolved
The dashboard layering worked, but deeper in the product — particularly in configuration flows — the interface sometimes tried to serve both audiences simultaneously and served neither well. More explicit user-type routing at onboarding would have reduced the friction I observed in research sessions.