As a start-up, your biggest asset is speed.
I’ve been using a new technique I call prompt wireframing, where I combine a few tools to get the job done.
There is this moment in a project when it’s unclear where things are going. There are designs somewhere. There’s a user story spreadsheet somewhere else. There are discussions happening, but there is no clarity about what should be built.
The designer’s way to solve this roadblock is to visualize it. Make a design. But how do you visualize something quickly? What if there are many medium-to-high-complexity UI parts that have to come together?
Of course, you can try to find a UI kit and draw the interface, but that can be slow for certain UI elements and more complex screens.
You can start sketching, but sketches lack fidelity. Whatever you draw can’t be reused: if you have a new idea, you basically have to draw a new sketch. You easily end up with 15–30 different sketches, and now you’re losing track of which sheet is for what.
I have a new technique that I’m pretty excited about: using LLMs to generate UI artifacts to then combine them into wireframes.
The way I do this is to start a new Claude project and give it context at the project level. For example, for a recent project I specified that the users of the app we were describing are Belgian, but that the UI is in English.
I use a dictation app called Wispr Flow. The reason to use this app is that I can talk faster than I can type.
First of all, you need at least a rough idea of what you’re going to build. It helps to have (some) sketches. With the sketches in front of me, I start describing each one out loud.
By holding down the Fn key on my Mac, I make sure that Wispr Flow is listening to my dictation.
I try to be literal in describing the sketch, e.g. “this sketch shows a heading, a paragraph, an input with the placeholder ‘Enter your email’, a button that says ‘Subscribe now’”, etc.
What Claude essentially generates are pieces of UI built with React, shadcn/ui for the components, Tailwind (ugh) for styling, and Lucide for the icons.
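To give a sense of the output: for the sketch described above, the artifact tends to look something like this. This is a hypothetical sketch of a typical result, not Claude’s literal output; the component names assume a standard shadcn/ui setup.

```tsx
// Hypothetical example of a Claude-generated artifact (assumed shadcn/ui paths).
import { Mail } from "lucide-react";
import { Button } from "@/components/ui/button";
import { Input } from "@/components/ui/input";

export default function SubscribeCard() {
  return (
    <div className="mx-auto max-w-sm space-y-4 rounded-lg border p-6">
      <h2 className="text-xl font-semibold">Stay in the loop</h2>
      <p className="text-sm text-muted-foreground">
        Get product updates straight to your inbox.
      </p>
      <Input type="email" placeholder="Enter your email" />
      <Button className="w-full">
        <Mail className="mr-2 h-4 w-4" />
        Subscribe now
      </Button>
    </div>
  );
}
```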
The technical details behind it don’t actually matter. I don’t use the generated code itself; I take screenshots of the rendered artifact and use those as parts of my wireframe, sometimes coming back to a screenshot to revisit the design of a screen.
(Although what you can also do is download the .tsx files and drop them into a Next.js project with some light edits. Then you’re on your way to an interactive prototype.)
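A minimal sketch of what that drop-in looks like, assuming a Next.js App Router project and a downloaded artifact renamed to `GeneratedScreen.tsx` (both names are hypothetical; the usual light edits are fixing import paths to match your project):

```tsx
// app/generated/page.tsx — hypothetical route wrapping the downloaded artifact.
// GeneratedScreen.tsx sits next to this file after adjusting its imports.
import GeneratedScreen from "./GeneratedScreen";

export default function Page() {
  return (
    <main className="flex min-h-screen items-center justify-center">
      <GeneratedScreen />
    </main>
  );
}
```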
Because each artifact lives in its own Claude conversation, you can come back to that conversation and edit it. As I discover problems with the user flow by visualizing it in Figma, I go back to the individual conversations (usually one per screen) and re-prompt to add, remove, or change something.
I composite the screenshots in Figma and draw arrows between them to make sure the user flow makes sense.
If you use this technique, you will quickly run into usage limits, so you might want to pay for Claude Pro, which is about 20 euros a month.
In a way this feels like a new discovery to me, but it also feels incredibly obvious given the tools we have today. Has anyone else been doing this? Do you have any variations on this technique? I’d love to find out.