Prototyping with Natural Language
How vibe coding finally makes design tools learn our language instead of the other way around.
Published Jul 23, 2025
The Translation Layer
Design tools have always forced users to translate their ideas into the tool's vocabulary. HyperCard, released in 1987, was revolutionary for letting anyone create interactive experiences, yet it still required users to understand stacks, cards, and scripts.
Despite decades of advancement, the problem has persisted: users must adapt to the tool instead of the tool adapting to them. Each new generation just created its own vocabulary. InVision introduced hotspots and transitions. Axure brought dynamic panels and conditional logic. Principle added timeline-based animations. Framer Classic required CoffeeScript programming.
Each tool created its own version of what we can call the translation layer—the gap between how you think about an interaction and how you have to construct it in the tool.

Create UX prototypes with unlimited combinations of event triggers, conditions, and actions to truly explore digital experiences.
Source: axure.com
For most prototyping tools, this meant a steep learning curve and time investment that prevented wide adoption. Designers stuck with InVision's simple hotspot paradigm, now built into tools like Figma and Sketch, because adopting a new approach meant disrupting established workflows and learning to speak yet another tool's language.
Getting Lost in Translation
The translation problem becomes clearer when you see how the same design idea gets forced into different tool vocabularies.
Consider a common interaction: a card that expands to show more details.
The Design Intent: A card that expands to show more details when tapped, with other cards gently moving out of the way.
InVision Translation: Create two screens, "card collapsed" and "card expanded." Draw a hotspot area over the card and link it to the expanded screen. The smooth expansion becomes an abrupt screen replacement between static layouts.
Figma Translation: Create a "Card" component with two variants, "collapsed" and "expanded." Design both states within the component, set up auto layout stacks to handle repositioning, then add an "On click" interaction that switches between the variants using a 300ms Smart Animate transition. The fluid expansion becomes component state management and timing configuration.
Axure Translation: Set up dynamic panels for each card state. Create conditional logic "If Card A is expanded, hide panels B and C, show panel A-expanded." Define show/hide actions with movement animations. The fluid interaction becomes a series of if/then statements and visibility toggles.
Principle Translation: Design your collapsed state on the first artboard, expanded state on the second. Create timeline animations between keyframes. Animate each card's position and size properties over time. The responsive, physics-based movement becomes manually keyframed motion graphics.
Framer Classic Translation: Write CoffeeScript code to define each card as an object, then program click handlers and animation properties. Set up coordinate-based animations for each card's position and size changes over time. The spatial relationships become coordinate math and property animations.
The Cost of Translation: Each tool forces designers to translate their intent into its own vocabulary, whether that's state management, timeline animation, conditional logic, or coordinate programming. This creates unnecessary friction between idea and execution, and it limits the kind of creative discovery that happens when you can quickly test and refine ideas without technical barriers.
Learning Our Language
The translation layer problem has persisted for decades because, despite each tool's innovations, we've been approaching it backwards. Instead of making tools more intuitive, we've made them more sophisticated—requiring designers to master increasingly complex paradigms.
But what if we flipped this entirely? What if instead of learning the tool's language, the tool learned ours?
That's exactly what vibe coding does: you describe your intent in natural language prompts, and the tool generates the interactive code, so you never have to learn its vocabulary.
Let's see how this works with our card expansion example:
The Design Intent: A card that expands to show more details when tapped, with other cards gently moving out of the way.
With Vibe Coding: Using tools like Figma Make, you just describe the interaction in natural language: "When I tap a card, it should expand to show more details, and the other cards should gently move out of the way to make room." The AI understands the spatial relationships and responsive behavior you're describing and generates the appropriate code—whether that's CSS animations, React state management, or physics-based transitions—without you needing to think in those paradigms.
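To make that concrete, here is a rough sketch, in React with TypeScript, of the kind of state-plus-transition code a prompt like this might yield. It's an illustration built on assumptions (the component names, sizes, and the max-height technique are invented for this example), not Figma Make's actual output: tapping a card animates its height, and because the cards share one flex column, the siblings reflow out of the way as it grows.

```tsx
// Illustrative sketch only; names and styling are assumptions, not tool output.
import { useState } from "react";

type Card = { id: number; title: string; details: string };

export function ExpandingCardList({ cards }: { cards: Card[] }) {
  // Track which card, if any, is currently expanded.
  const [expandedId, setExpandedId] = useState<number | null>(null);

  return (
    <div style={{ display: "flex", flexDirection: "column", gap: 12 }}>
      {cards.map((card) => {
        const expanded = card.id === expandedId;
        return (
          <div
            key={card.id}
            onClick={() => setExpandedId(expanded ? null : card.id)}
            style={{
              // Transitioning max-height grows the card smoothly; since the
              // cards sit in one flex column, the siblings "gently move out
              // of the way" as the height animates.
              maxHeight: expanded ? 400 : 80,
              overflow: "hidden",
              transition: "max-height 300ms ease",
              padding: 16,
              borderRadius: 8,
              boxShadow: "0 1px 4px rgba(0, 0, 0, 0.15)",
              cursor: "pointer",
            }}
          >
            <h3>{card.title}</h3>
            {expanded && <p>{card.details}</p>}
          </div>
        );
      })}
    </div>
  );
}
```

The specifics don't matter; what matters is that the designer never had to choose between max-height transitions, component variants, or keyframes to get here.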

Introducing Figma Make: A new way to test, edit, and prompt designs
Source: figma.com/blog/introducing-figma-make
Reality Check
Are we simply replacing one translation layer with another?
There's truth to this concern, but the crucial difference is where the translation happens. When designers envision interactions, they naturally think in spatial and temporal terms, and that is exactly the language vibe coding systems are learning to understand. Instead of forcing designers to adapt to tools, tools are finally adapting to designers.
The Path Forward
Figma Make represents a genuine step toward tools that learn our language, but it's not perfect. While you get immediate feedback, effectively communicating your design intent can still require several attempts as the AI interprets your meaning.
But these feel like implementation challenges rather than fundamental flaws. The direction is right: designers should describe what they want rather than learn how to construct it.
For now, Make offers something previous tools couldn't: a way to prototype that starts with your thinking, not with the tool's constraints. That alone makes it worth exploring, even as these tools continue to learn our language.