New UI & UX in AI

James Buckhouse
Aug 1, 2024

An ever-updating collection of new approaches to UI/UX in the era of AI. If you see something in the wild… please lmk and I’ll add it to this doc and credit you (of course!) for mentioning it!

Dials, Knobs, and Sliders

Physical knobs, digital sliders, and quadrant “dials” can be used to adjust the tone of the response or other input variables for AI interactions.

This first novel UX idea comes from the generative text feature in Figma’s new Figma Slides product. You can slide the orange indicator around to blend the tone from casual to professional and from concise to expanded.
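
Figma hasn’t shared how the dial works under the hood, so here’s a minimal sketch of the idea, assuming the indicator’s position maps to two normalized axes that get blended into the prompt as plain-language style instructions (all names here are illustrative):

```ts
// A minimal sketch of a 2D "tone dial" (not Figma's actual implementation):
// the dial's x/y position, each normalized to [-1, 1], is translated into
// style instructions appended to the model prompt.

type DialPosition = { x: number; y: number }; // x: casual↔professional, y: concise↔expanded

function toneInstruction({ x, y }: DialPosition): string {
  const tone = x < 0 ? "casual and conversational" : "polished and professional";
  const length = y < 0 ? "as concise as possible" : "expanded, with extra detail";
  // Weight the wording by how far the indicator sits from center.
  const strength = Math.abs(x) > 0.5 ? "very " : "somewhat ";
  return `Rewrite the text in a ${strength}${tone} voice, keeping it ${length}.`;
}

// Example: indicator dragged toward the professional/expanded corner.
console.log(toneInstruction({ x: 0.8, y: 0.6 }));
// "Rewrite the text in a very polished and professional voice, keeping it expanded, with extra detail."
```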

This example is from a post by Twitter user Johannes Stelzer. I don’t know the details, but I loved this demo.

Here’s my own little AI project… a Chrome extension that summarizes any web page. You can adjust the output with sliders. Want a four-word story? Easy: adjust it to just four words. Want an essay? Go long.
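
The extension’s code isn’t shown here, so this is just a sketch of the core trick, assuming a range input drives a word budget that gets folded into the summarization prompt (the element IDs and the summarize() call are placeholders, not the extension’s real code):

```ts
// Sketch of the slider-to-prompt idea: a range input sets a word budget,
// which constrains the summary the model is asked for.

const slider = document.querySelector<HTMLInputElement>("#length-slider")!;

function buildPrompt(pageText: string, wordBudget: number): string {
  return `Summarize the following page in at most ${wordBudget} words.\n\n${pageText}`;
}

slider.addEventListener("input", () => {
  const wordBudget = Number(slider.value); // e.g. 4 for a four-word story, 500 for an essay
  const pageText = document.body.innerText; // what a content script would read
  const promptText = buildPrompt(pageText, wordBudget);
  // const summary = await summarize(promptText); // stands in for the actual model call
});
```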

The Node-n-Edge Graph

LangGraph Studio uses a node-and-edge graph to visualize the logic and flow of LangChain agents. Inside each agent is a collection of nodes and edges. Each node is a “micro-agent” that does one task and connects to other nodes to form a system or a flow. In this way, you can build complicated agents from basic building blocks.
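
LangGraph’s actual API looks different, so don’t read this as its interface; it’s a toy sketch of the underlying idea, where each node is a small state-to-state function and the edges decide what runs next:

```ts
// A toy node-and-edge agent graph (illustrative only; LangGraph's real API differs).
// Each node is a "micro-agent": a function from state to state. Edges pick the
// next node, so complicated flows compose from small pieces.

type State = { input: string; notes: string[]; done: boolean };
type AgentNode = (s: State) => State;

const nodes: Record<string, AgentNode> = {
  plan: (s) => ({ ...s, notes: [...s.notes, `plan for: ${s.input}`] }),
  research: (s) => ({ ...s, notes: [...s.notes, "gathered sources"] }),
  write: (s) => ({ ...s, notes: [...s.notes, "drafted answer"], done: true }),
};

// Edges: map each node to its successor; `null` ends the run.
const edges: Record<string, string | null> = {
  plan: "research",
  research: "write",
  write: null,
};

function run(start: string, state: State): State {
  let current: string | null = start;
  while (current !== null) {
    state = nodes[current](state);
    current = edges[current];
  }
  return state;
}

console.log(run("plan", { input: "summer reading list", notes: [], done: false }));
```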

The Infinite Canvas

An infinite canvas gives you an open space to build and create as you go. Figma, FigJam, TLDRAW, and Visual Electric are great examples. Here is a look at Visual Electric’s canvas.

Figma and FigJam also have “infinite” canvases. The working environment lends itself to expansive creativity. Here is Julie W. Design jamming on the infinite canvas with FigJam AI.

Here is an example of Claude’s Artifacts and TLDRAW’s infinite canvas uniting to make a near instant site builder.

Voice Input

Our digital experiences have had voice input before, but the “chat interface” mental model for AI makes voice input feel natural. It’s not the only mental model we could have for AI, but it’s the one that gained traction with OpenAI’s ChatGPT. (h/t David Lam)
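
On the web, the mechanics are already built into the browser: the Web Speech API can pipe a spoken transcript straight into a chat box. A minimal sketch (Chrome ships the API with a webkit prefix, and support varies by browser; the element IDs are illustrative):

```ts
// Voice-input-to-chat sketch using the browser's Web Speech API.

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.interimResults = false;

const chatInput = document.querySelector<HTMLInputElement>("#chat-input")!;

recognition.onresult = (event: any) => {
  // Drop the transcript straight into the chat box, as if the user had typed it.
  chatInput.value = event.results[0][0].transcript;
};

document.querySelector("#mic-button")!.addEventListener("click", () => recognition.start());
```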

And here is Dot by New.Computer. In addition to typing in your chats… there is an elegant voice input option. If you look in the back left of this image… you can just see a screen with a card view. It’s not an ordinary card view, however: you get there by “pinching out” from the main chat. It’s a wild, fluid, very cool experience that proves why Jason Yuan is one of the best of the best.

Visual Interface

AI tools can also process visual input. Here is OpenInterpreter looking through a webcam to see Killian holding up a sticky note, then connecting to wifi.
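
The browser-side mechanics of this kind of visual input are surprisingly small: grab a webcam frame with standard APIs and hand it to a vision model. A sketch, with the model call left as an assumption:

```ts
// Capture one webcam frame with standard browser APIs and encode it for a
// vision model. The model call itself is left abstract.

async function captureFrame(): Promise<string> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);

  stream.getTracks().forEach((t) => t.stop()); // release the camera
  return canvas.toDataURL("image/jpeg"); // base64 frame, ready for a vision model
}

// const frame = await captureFrame();
// await askVisionModel("What does the sticky note say?", frame); // hypothetical call
```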

The Side-by-Side

In this approach, you chat on the left and results appear on the right. This pattern works well on desktop, but needs tabs, swipes, or something else on mobile. Here are a few examples.

Here’s Layla’s travel planning AI. You chat away about your hopes and dreams for a vacation in the left window. Results, links, videos and offers appear on the right, along with your itinerary.

Side-by-side at Layla’s AI travel planning app

Here’s Claude’s side-by-side UX for when you ask Claude to help you code something: a standard chat window on the left and a results pane on the right. In this case, the results pane has a toggle at the top so you can flip between the code and what the code produces.

Claude 3.5 Sonnet with Artifacts turned on…
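
The layout itself is plain UI engineering: two panes in a grid on wide screens, collapsing to a stacked (tabbed or swiped) view on narrow ones. A minimal DOM sketch with illustrative element IDs:

```ts
// The layout mechanics of the side-by-side pattern: chat pane and results pane
// sit in a grid on wide screens and collapse to a stacked view on narrow ones
// (the mobile caveat mentioned above).

const container = document.querySelector<HTMLElement>("#workspace")!; // holds #chat and #results
const wideScreen = window.matchMedia("(min-width: 900px)");

function applyLayout(): void {
  if (wideScreen.matches) {
    container.style.display = "grid";
    container.style.gridTemplateColumns = "1fr 1fr"; // chat left, results right
  } else {
    container.style.display = "block"; // stacked; tabs or swipes choose the visible pane
  }
}

wideScreen.addEventListener("change", applyLayout);
applyLayout();
```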

Whole Body Interface

This video shows MediaPipe tracking your gestures and then creating results. Is this AI? Well… yes… there is ML in there. Mostly, though, how cool!
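
For the curious, MediaPipe ships this as a JavaScript package; here’s a rough sketch based on its tasks-vision gesture recognizer (check MediaPipe’s docs for current model files and options before relying on any of this):

```ts
// Gesture tracking with MediaPipe's tasks-vision package (a sketch, not a
// verified integration; see MediaPipe's docs for the current setup).
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

const video = document.querySelector<HTMLVideoElement>("#webcam")!;

const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const recognizer = await GestureRecognizer.createFromOptions(vision, {
  baseOptions: { modelAssetPath: "gesture_recognizer.task" }, // bundled model file
  runningMode: "VIDEO",
});

function loop(): void {
  const result = recognizer.recognizeForVideo(video, performance.now());
  const gesture = result.gestures[0]?.[0]?.categoryName; // e.g. "Thumb_Up"
  if (gesture) console.log(gesture); // trigger whatever result the gesture maps to
  requestAnimationFrame(loop);
}
loop();
```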

AI Chat Comments

What if the comment rail in a doc was a chat input for AI? Or Arbel shows us how here:

Stay tuned! I’ll add more examples as they come in… and if you see something interesting, please lmk so I can add it to the list.


James Buckhouse

Design Partner at Sequoia, Founder of Sequoia Design Lab. Past: Twitter, DreamWorks. Guest lecturer at Stanford GSB/d.school & Harvard GSD. jamesbuckhouse.com