Prototyping

Why some interactions cannot stay in Figma

Some ideas work well in frames. Others survive clickable prototypes. But a lot of interactions start to fall apart the moment you try to run them on a real device.

Timing changes. Gestures feel different. Motion that looked subtle suddenly feels distracting, or pointless. Figma is good at explaining structure, but it struggles to convey behavior.

It cannot tell you how long a delay feels, how a swipe settles, or whether an interaction feels calm, rushed, or annoying. Those things only show up once you compile and run the app.

For me, this is not about validating assumptions or proving ideas right. It is about understanding what something actually feels like when it lives in your hand, not on a canvas.


Prototyping as a way to experience the system

When I prototype in code, I am not trying to convince anyone. I am trying to experience the system as a user would.

I want to see how an interaction behaves when the app is under load, how it reacts to interruptions, and how it feels when you are tired, distracted, or in a hurry. That kind of feedback does not come from frames or timelines.

It comes from running the app on a phone and using it like you normally would.

This is where AI tools became genuinely useful to me. Not because they are impressive, but because they remove friction between an idea and something you can actually try.


The focused inbox

The focused inbox started with a simple question: what would it feel like to process one email at a time, inside an inbox full of noise?

The idea was a deliberate review mode. When you enter it, the first email takes over the screen and gives you enough context to make a decision.

You see a short summary of the message. If there are image attachments, you get a preview. If there is a PDF or text file, you see a short summary of that as well.

If the sender has never emailed you before, the app tells you immediately and lets you decide whether to block the sender or keep receiving their messages.

[iPhone frame: the focused inbox review mode]
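
In the prototype, all of that context comes from mocked data. A minimal sketch of what one mocked message might carry, assuming a hypothetical ReviewEmail type rather than the actual model:

```swift
import Foundation

// Hypothetical mocked model for the review mode: just enough context
// per email to make the archive-or-pin decision quickly.
struct ReviewEmail: Identifiable {
    let id = UUID()
    let sender: String
    let subject: String
    let summary: String               // short summary of the message
    let attachmentPreviews: [String]  // image names for attachment previews, if any
    let documentSummary: String?      // short summary of a PDF or text attachment
    let isFirstTimeSender: Bool       // drives the block-or-keep prompt
}

let mockedInbox: [ReviewEmail] = [
    ReviewEmail(
        sender: "conference@example.com",
        subject: "Your badge is ready",
        summary: "Pick-up details for your badge. Doors open at 9:00.",
        attachmentPreviews: ["badge-preview"],
        documentSummary: nil,
        isFirstTimeSender: true
    )
]
```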

The interaction pattern is intentionally simple. Swipe left to archive. Swipe right to pin the email back to the inbox.

The goal is inbox zero in a way that feels calm and easy to consume.

Most email clients already support swipe actions like archive, pin, mark as read, or reply. What they do not offer, at least to my knowledge, is enough context to make that decision quickly. You always decide without really knowing what you are archiving.

This prototype was about exploring what happens when that blind spot goes away.


Why this had to run on device

This interaction would not work in Figma.

Its value lives in pacing, resistance, and timing. How long the summary takes to appear, how the swipe settles, and how quickly you feel confident enough to move on.

These are not visual problems. They are behavioral ones.

So I built a fully functional SwiftUI prototype with mocked data that runs on device and responds to real gestures. It feels real enough to judge.
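
The swipe itself reduces to a small amount of SwiftUI. A stripped-down sketch, using the hypothetical ReviewEmail model from earlier; the threshold and spring values are placeholders, and tuning them on device was the whole point:

```swift
import SwiftUI

// A stripped-down sketch of the review card. Swipe left to archive,
// swipe right to pin; anything in between springs back.
struct ReviewCardView: View {
    let email: ReviewEmail
    var onArchive: () -> Void
    var onPin: () -> Void

    @State private var offset: CGFloat = 0
    private let threshold: CGFloat = 120 // placeholder; tuned by feel on device

    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            Text(email.sender).font(.headline)
            Text(email.summary).font(.body)
            if email.isFirstTimeSender {
                Label("First email from this sender",
                      systemImage: "person.crop.circle.badge.questionmark")
                    .font(.footnote)
            }
        }
        .padding()
        .background(.thinMaterial, in: RoundedRectangle(cornerRadius: 20))
        .offset(x: offset)
        .gesture(
            DragGesture()
                .onChanged { offset = $0.translation.width }
                .onEnded { value in
                    withAnimation(.spring(response: 0.35, dampingFraction: 0.8)) {
                        if value.translation.width < -threshold {
                            offset = -600   // slide the card out and archive
                            onArchive()
                        } else if value.translation.width > threshold {
                            offset = 600    // slide the card out and pin
                            onPin()
                        } else {
                            offset = 0      // not decisive enough; settle back
                        }
                    }
                }
        )
    }
}
```

None of those numbers mean anything on a canvas. They only start to mean something once the card is under your thumb.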

What surprised me most was that I did not write the code myself. I described the idea, the interaction rules, the data I needed, and the edge cases I cared about. An LLM generated the prototype, and I refined it by prompting again.

All of this happened in a single afternoon. Not because the tool is special, but because the intent was clear.


What AI is actually useful for

AI is not good at judgment or taste. But it is good at turning clear intent into something runnable.

That changes how conversations happen.

For a long time, designers were told that certain ideas were too complex or would take multiple sprints to explore. Now it is possible to say, "let me show you how it feels," without turning that into a commitment.

Not as a promise. As an experience.

That shift matters.


Micro-interactions still matter

I am currently working on a larger project at GitHub that I cannot write about in detail. A small part of it led to another prototype.

Copilot made its way to GitHub Mobile, and a lot of focus naturally went into capability and usefulness. What often gets lost in that process are the small moments that make a product feel present and considered.

I wanted to see what would happen if we brought some of the Copilot character animations from github.com/copilot into the GitHub Mobile chat experience.

[iPhone frame: the Copilot character in the GitHub Mobile chat]

When the chat view appears, Copilot jumps in with it. When you stop typing, it observes the input field.

Nothing loud. Nothing flashy.
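
In SwiftUI terms, the behavior is small. A rough sketch of the timing only, with an SF Symbol standing in for the Copilot character and input focus standing in for a real pause-in-typing signal:

```swift
import SwiftUI

// A rough sketch of the timing, not the real character work.
struct CopilotChatSketch: View {
    @State private var draft = ""
    @State private var characterVisible = false
    @FocusState private var inputFocused: Bool

    var body: some View {
        VStack {
            Spacer()
            Image(systemName: "sparkles") // placeholder for the Copilot character
                .font(.system(size: 40))
                .offset(y: characterVisible ? 0 : 80)
                .opacity(characterVisible ? 1 : 0)
                .rotationEffect(.degrees(inputFocused ? 8 : 0)) // lean toward the field
                .animation(.spring(response: 0.5, dampingFraction: 0.75), value: characterVisible)
                .animation(.easeInOut(duration: 0.3), value: inputFocused)

            TextField("Ask Copilot", text: $draft)
                .textFieldStyle(.roundedBorder)
                .focused($inputFocused)
                .padding()
        }
        .onAppear {
            characterVisible = true // Copilot jumps in with the view
        }
    }
}
```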

When I shared this internally, people immediately understood what it added. They did not need a walkthrough or a pitch.

These moments are not vanity. They shape how products are remembered. People forget the pull request they merged a few days ago. They remember how a tool made them feel the first time they used it.

This prototype was also largely built with the help of an LLM, allowing me to focus on intent while the tool handled most of the mechanics.


From prototype to craft

Tempra Ora is another example of how ideas can move quickly with the help of AI. It is a native stopwatch app built around time, rhythm, and focus.

The LLM generated most of the core logic and UI, probably around eighty percent. The remaining twenty percent took the most time and effort.

That last stretch is where the product becomes real.


Where human judgment still matters

Tempra Ora includes a feature called Clap Control, which lets you start or stop the timer with a clap or loud sound. Even though everything stays on device and no audio is recorded, this feature listens to the microphone.

That has implications.
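
The listening itself can stay entirely in memory. A minimal sketch of on-device level detection with AVAudioEngine; the threshold is a made-up number, and a real build would also debounce repeats and handle the microphone permission flow:

```swift
import Foundation
import AVFoundation

// A minimal sketch of the listening side. Buffers are analyzed in memory
// and discarded; nothing is recorded or written anywhere.
final class ClapDetector {
    private let engine = AVAudioEngine()
    var onClap: (() -> Void)?

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let frames = Int(buffer.frameLength)
            // Rough loudness: root-mean-square of the buffer.
            var sum: Float = 0
            for i in 0..<frames { sum += channel[i] * channel[i] }
            let level = (sum / Float(max(frames, 1))).squareRoot()
            // 0.3 is a placeholder threshold; the real tuning happens by ear.
            if level > 0.3 {
                DispatchQueue.main.async { self?.onClap?() }
            }
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}
```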

So I added a transient card that appears when the app opens and clearly communicates that Clap Control is active. Users do not have to guess or dig through settings.

[iPhone frame: the Clap Control notice card]
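
The card itself is simple. A rough sketch, with placeholder copy and timing:

```swift
import SwiftUI

// A rough sketch of the notice. It appears with the app and dismisses
// itself after a moment; the copy and timing are placeholders.
struct ClapControlNotice: View {
    @State private var visible = false

    var body: some View {
        VStack {
            if visible {
                Label("Clap Control is on. Listening for claps; nothing is recorded.",
                      systemImage: "waveform")
                    .font(.footnote)
                    .padding(12)
                    .background(.ultraThinMaterial, in: Capsule())
                    .transition(.move(edge: .top).combined(with: .opacity))
            }
            Spacer()
        }
        .task {
            withAnimation(.spring()) { visible = true }
            try? await Task.sleep(nanoseconds: 4_000_000_000) // linger for a few seconds
            withAnimation(.easeOut) { visible = false }
        }
    }
}
```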

That decision was not about capability, but respect.

The same was true for the Twilight Swirl theme. It is a gradient that rotates a full 360 degrees over exactly one minute and then resets smoothly.

[iPhone frame: the Twilight Swirl theme]

Getting it right meant tuning timing, easing, and transitions by hand. No model produced that without intervention.
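
The basic mechanic fits in a few lines; a sketch with stand-in colors is below. What took the time is exactly what this does not show: how the easing feels and how the reset reads at the top of the minute.

```swift
import SwiftUI

// The basic mechanic of Twilight Swirl: one full rotation per minute,
// wrapping seamlessly at the top of the minute. Colors are stand-ins.
struct TwilightSwirlSketch: View {
    var body: some View {
        TimelineView(.animation) { context in
            // Progress through the current minute, in 0..<1.
            let seconds = context.date.timeIntervalSinceReferenceDate
            let progress = seconds.truncatingRemainder(dividingBy: 60) / 60
            AngularGradient(
                colors: [.indigo, .purple, .orange, .indigo],
                center: .center,
                angle: .degrees(progress * 360)
            )
            .ignoresSafeArea()
        }
    }
}
```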


Closing

It took me a while to find a place for AI in my design process.

Not because the tools were bad, but because I needed to understand what they were actually good at.

I still use FigJam to think through flows and structure. But once I have a clear sense of an interaction, I want to know how it feels.

That means code, running on a device.

AI helps shorten the distance between an idea and something you can experience. It does not replace craft, but it exposes where craft still matters most.

For native interactions and motion, that distance is where the real insights live.