Journal
I Used AI to Audit My UX. It Can't Feel Frustration.
I’ve been building projects with Claude Code for the past few months. Many of them started without a plan. I’d describe what I wanted, prompt “build a better version,” and let it run.
After a few days on each project, the UX would collapse. Long lists, cluttered menus, low readability. Interfaces that work but make you want to close the tab. If you’ve built apps fast with AI, you know the pattern.
Yesterday, I tried a different approach.
Cowork Mode as a UX Auditor
I gave Claude Code’s Cowork mode access to the project folder, described my ideal user behavior, and asked it to audit the current UX. Find the gaps. Suggest restructures. Flag improvements.
I also asked it to open the app in the browser, walk the full flow, and test like a real user would.
The audit covered four areas:
| Task | Result |
|---|---|
| Identify UX gaps in the current flow | Flagged layout issues, redundant elements, unclear navigation |
| Suggest restructures for readability | Proposed grouping, hierarchy changes, component reordering |
| Flag improvements | Listed accessibility fixes, mobile responsiveness issues, contrast problems |
| Walk through the app as a user | Tested click paths, noted dead ends, checked state transitions |
The audit was thorough and logical. Each suggestion made sense on paper.
The result was okay. Not good enough.
Cowork gave me the most logical solution. I needed the most intuitive one.
The Emotional Layer AI Misses
Cowork can’t feel the frustration of a confused user. It can’t sense when something feels wrong before you can name why. It can’t tell you that a button placement follows the rules but pushes users away.
Those senses belong to humans. Token prediction doesn’t replicate them.
| Capability | AI (Cowork) | Human Designer |
|---|---|---|
| Structural analysis | Catches hierarchy issues fast | Slower and less systematic |
| Accessibility flags | Thorough: contrast, labels, ARIA roles | Missed unless trained for it |
| Flow testing | Walks paths without fatigue | Skips edge cases, gets bored |
| Emotional resonance | Zero. Can’t feel confusion or frustration | Where the real UX lives |
| “Something feels off” instinct | Not possible | The most valuable design sense |
| Visual harmony | Follows rules | Knows when to break them |
AI gives you the most logical answer. You give the most intuitive one. Good UX needs both.
I’ve spent years building products and shipping apps. The gap between “works right” and “feels right” is where most products win or lose. Cowork closes the first gap. The second one belongs to you.
200,000 Neurons Playing Doom
This connects to something I saw last week that seemed unrelated.
Cortical Labs grew 200,000 living human neurons on a chip and got them to play the original 1993 Doom.
The process:
- Visual input from the game converts into electrical signals
- The neurons receive those signals on the chip and process them through firing patterns
- The firing patterns generate output signals mapped to movement and shooting controls
- The game responds, and the loop repeats
```
Game Visuals
     ↓
Electrical Signals → Neuron Chip (200K human neurons)
     ↓
Firing Patterns → Movement + Shooting Controls
     ↓
Game Responds → Loop Repeats
```
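The loop above can be sketched as a toy program. Everything here is illustrative: the real system drives living neurons through electrode arrays and reads back spike activity, while these stand-in functions just shuttle integers around to show the shape of the closed loop.

```python
# Toy sketch of the closed loop: visuals -> signals -> "chip" -> controls -> game.
# All function names and encodings are hypothetical, for illustration only.

def encode_frame(frame):
    """Convert game visuals into 'electrical signals' (here: small ints)."""
    return [pixel % 4 for pixel in frame]

def neuron_chip(signals, state):
    """Stand-in for the neuron chip: fold signals into a firing-pattern state."""
    return [s + x for s, x in zip(state, signals)]

def decode_controls(state):
    """Map firing patterns onto movement and shooting controls."""
    return {"move": state[0] % 2 == 0, "shoot": state[1] % 2 == 1}

def game_step(frame, controls):
    """The game responds: produce the next frame from the controls (toy update)."""
    return [(p + (1 if controls["shoot"] else 0)) % 256 for p in frame]

# Run the loop a few times, feeding each response back in.
frame = [10, 20, 30]
state = [0, 0, 0]
for _ in range(3):
    signals = encode_frame(frame)
    state = neuron_chip(signals, state)
    controls = decode_controls(state)
    frame = game_step(frame, controls)
```

The point of the sketch is the feedback structure, not the math: each pass through the loop changes the internal state, which is the hook learning hangs on.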
The neurons play like a beginner who has never touched a computer. But they learn. They form patterns. They improve through biological processes no researcher has mapped yet.
Imagine giving AI a biological layer like that. Not token probability matching, but organic processing that adapts the way living tissue does.
I don’t know how close we are to that. But a dish of neurons learning to navigate a 3D environment and shoot demons makes the distance between silicon logic and human sense look shorter than it did last year.
Building With Both Layers
The best workflow I’ve found:
- Use AI for the structural audit because it catches what humans miss through fatigue and familiarity
- Use your own instinct for the emotional layer because no model simulates that yet
- Combine both and you get UX that is sound in logic and right in feel
| Layer | Who Handles It | Why |
|---|---|---|
| Structure, hierarchy, flow | AI (Cowork) | Systematic, tireless, catches gaps humans skip |
| Feeling, intuition, emotional weight | You | No model replicates the “this feels wrong” instinct |
| Final product | Both, in sequence | AI audits first, you refine with gut sense |
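The sequence in the table can be sketched as a two-stage pipeline. This is a minimal illustration, not a real integration: the `Finding` type, the categories, and the hardcoded findings are all hypothetical, and a real setup would call the AI tool in stage one instead of returning canned results.

```python
# Sketch of the two-layer review: AI audits structure first, a human adds feel.
# All names and example findings here are hypothetical.
from dataclasses import dataclass

@dataclass
class Finding:
    category: str   # e.g. "hierarchy", "contrast", "feel"
    detail: str
    source: str     # "ai" or "human"

def ai_audit(ui_description: str) -> list[Finding]:
    """Stage 1: structural pass. A real setup would call the AI tool here."""
    return [
        Finding("hierarchy", "menu nests three levels deep", "ai"),
        Finding("contrast", "secondary text is hard to read", "ai"),
    ]

def human_review(findings: list[Finding]) -> list[Finding]:
    """Stage 2: the emotional layer. The human appends what the audit can't feel."""
    return findings + [
        Finding("feel", "primary button placement feels pushy", "human"),
    ]

report = human_review(ai_audit("settings screen"))
```

The ordering is the design choice that matters: the tireless structural pass runs first so the human pass spends its limited attention on feel, not on hunting layout bugs.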
The Systems Before Tools philosophy applies. The system is your design process. AI is one tool inside it. A capable tool. Still a tool.
Let AI audit the logic. Keep the feeling for yourself. The best products come from running both layers.
The Feeling Stays Ours
I’ll keep using Cowork as my UX auditor. It catches structural problems, hierarchy gaps, and broken flows better than I can on a tired Tuesday afternoon.
The things it misses are mine to catch: the subtle frustration, the moment a layout pushes users away despite following the rules.
Maybe Cortical Labs’ neurons will help close that gap one day. Until then, I’m Shahab Papoon, and I build with AI while trusting my own sense on the parts it can’t reach.
Keep Reading
- I Can’t Code. I Made 6 Working Apps in 30 Days. (the full journey of building products with AI)
- Why the Best AI Agent Builders Are Not Developers (why operational thinking matters more than code)
- Systems Before Tools (the framework behind how I build)