Hunch

January 2023
AI · HCI

An AI workflow orchestration platform that makes complex AI model interactions accessible through a visual programming interface. By providing a spatial canvas for chaining models, it lets non-technical users build sophisticated AI workflows without writing code.

Hunch aims to simplify AI tool-building via a visual, no-code interface for chaining models. I briefly joined the project early on (2023) to contribute to the initial interaction design and explore the deeper potential for AI as a thinking partner, drawing on my background in systems design, tools for thought, and spatial interfaces.

[Image: Hunch interface]

Problem

As AI models become more powerful, combining them into useful workflows often remains a technical challenge requiring coding expertise. How can non-programmers chain together different AI capabilities (like text generation, image analysis, data extraction) to create custom tools or automations? Hunch’s bet was that a visual, spatial canvas could make orchestration legible enough to be usable without turning everyone into a developer.
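What the canvas makes visible is, underneath, just composition: each block transforms an input and hands its output to the next. A minimal, purely illustrative sketch (these function names are stand-ins, not Hunch's API):

```python
from typing import Callable

# A "block" is a named transformation; a workflow is an ordered chain of them.
Block = Callable[[str], str]

def run_chain(blocks: list[Block], payload: str) -> str:
    """Feed each block's output into the next, like wires on a canvas."""
    for block in blocks:
        payload = block(payload)
    return payload

# Stand-ins for model calls (e.g. text generation, then data extraction).
generate = lambda prompt: f"draft({prompt})"
extract = lambda text: f"facts({text})"

result = run_chain([generate, extract], "topic: spatial interfaces")
# result == "facts(draft(topic: spatial interfaces))"
```

The spatial interface replaces writing this composition by hand with dragging and wiring it, which is the entire accessibility bet.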

My work (early 2023)

I joined for a short stretch in Hunch’s early stage (2023), stepping in while my friend Ricardo Saavedra (who led design through early 2025) was on leave. The work was less “invent a new paradigm” and more “make the canvas readable when it gets real.”

Concretely, that meant:

| Area | What I changed | Why it mattered |
| --- | --- | --- |
| Block types | Clarified block types so flows stay scannable as they grow | Reduced cognitive friction on large canvases |
| Taxonomy | Tightened the taxonomy (what a block is and how it behaves) | Made intent and behavior legible at a glance |
| Execution feedback | Improved execution feedback (state, progress, errors, outputs) | Helped users keep a stable mental model while the system runs |

Make the canvas scannable

One of the first challenges was helping users distinguish block types at a glance. I developed a modular color system for operations and restructured the block taxonomy so roles and capabilities were clearer (especially once flows became large and nested).

[Image: Context bar redesign]

Make execution legible

The other pressure point was state: what’s running, what’s waiting, what failed, what produced output. I redesigned the context bar and refined the execution feedback loop to expose essential state changes without turning the interface into a dashboard.
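One way to frame that feedback loop is as a small per-block state machine: the interface only needs to communicate a handful of states and transitions clearly. This is my framing as a sketch, not Hunch's actual implementation:

```python
from enum import Enum

class BlockState(Enum):
    IDLE = "idle"        # placed on the canvas, not yet run
    RUNNING = "running"  # currently executing
    DONE = "done"        # produced output
    ERROR = "error"      # failed; needs attention

# Legal transitions — the few state changes the UI must surface.
TRANSITIONS = {
    BlockState.IDLE: {BlockState.RUNNING},
    BlockState.RUNNING: {BlockState.DONE, BlockState.ERROR},
    BlockState.DONE: {BlockState.RUNNING},   # re-run
    BlockState.ERROR: {BlockState.RUNNING},  # retry
}

def advance(current: BlockState, target: BlockState) -> BlockState:
    """Move a block to a new state, rejecting transitions the UI can't explain."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

Keeping the state space this small is what lets execution feedback stay legible without the canvas turning into a dashboard.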

Tensions

We also explored a product-direction question that kept resurfacing: should Hunch be optimized for one-off automations, reusable “apps,” or something more emergent?

What Hunch chose to prioritize — modular tools, goal-oriented flows, user-defined macros — makes a lot of sense from a productization angle. But it leaves out a deeper layer of intelligence: systems that evolve. Feedback loops are how complex behavior emerges, how learning happens, how unexpected insights surface. Without them, you’re locked into linearity.

This divergence became formative for me. I was advocating for more open-ended collaboration between humans and AI, while the roadmap understandably focused on clearer, more market-ready outcomes like composable tooling. Both are valid paths — but they lead to different kinds of systems. Wrestling with that split directly informed the AI UX Paradigms talk I gave shortly after.

The path I hoped for

If Hunch had embraced feedback mechanisms from the beginning, it could have supported something closer to co-evolution between user and workflow: every execution becoming part of a growing pattern, not just an isolated run.

[Image: Feedback loop exploration]

It would have opened the door to:

| Capability | What it supports |
| --- | --- |
| Persistent internal states | Workflows that evolve across executions instead of resetting |
| Looping logic | Pattern refinement rather than one-shot runs |
| Exploratory modes | Outcomes that steer future behavior |
| Fractal workflows | Tools that generate tools, not just results |
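To make the first of those capabilities concrete, here is a hypothetical sketch (purely illustrative, not part of Hunch) of a workflow that carries state across executions instead of resetting, so each run becomes context for the next:

```python
class EvolvingWorkflow:
    """A workflow whose past runs shape future ones."""

    def __init__(self) -> None:
        self.history: list[str] = []  # persists across executions

    def run(self, prompt: str) -> str:
        # Recent outputs become context for the next run,
        # so each execution refines a growing pattern
        # rather than starting from a blank slate.
        context = " | ".join(self.history[-3:])
        output = f"answer({prompt}; context={context})"
        self.history.append(output)
        return output

wf = EvolvingWorkflow()
first = wf.run("q1")   # runs with empty context
second = wf.run("q2")  # sees the first run's output as context
```

The contrast with the one-shot chain is the point: here the loop closes, and execution history is a first-class input rather than a discarded byproduct.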

This wasn’t part of the product scope, but it anchored my thinking throughout.

Outcome (and how I use it)

My contributions (and Ricardo’s ongoing work) helped establish foundational UX patterns for the visual canvas: clarity, legibility, feedback. Hunch succeeds at making model chaining accessible through a no-code spatial interface.

Interestingly, I now use it primarily for the more open-ended exploration I was initially reaching for. The vast canvas, combined with free access to a wide range of models, makes it an excellent environment for developing thoughts visually, branching inquiries, and tracing the history of an idea spatially — uses perhaps not fully intended, but highlighting the power of the spatial paradigm itself.

While the deeper systemic ideas (feedback, adaptation) weren’t prioritized in the initial product, Hunch’s success in democratizing model access and its flexible canvas demonstrate steps towards more fluid human-AI interaction.

The real influence — if any — was in challenging the team to consider what it means to build not just pipelines, but thinking environments.