How modular synthesizers and music software inspired Miro's Flows
This week, Miro's new AI feature, Flows, is going to public beta as part of the broader AI Canvas launch I've been leading for the last year. I am very proud of what the team has built together. I believe it represents a novel way of interacting with AI that will unlock a lot of creative force. On the verge of our broader launch, I want to share a bit about where the idea came from.
Like many ideas, it came from cross-pollination with an outside discipline. To tell the story, I need to go deep on one of my very favorite topics: machine music and modular synthesis.
Traditional Music Software
For the last 60 years or so, there have been two ways to think about making music with a computer.
The first translates traditional composition to a computer screen. This software lineage runs from handwritten scores, to tape machines, to early tools like the Fairlight CMI, the first versions of Cubase, and "trackers" like Protracker, to modern digital audio workstations like Ableton Live, Logic, GarageBand, and FL Studio. This is how most recorded music (electronic or otherwise) gets made these days. You record real audio, or write "digital sheet music" (MIDI) performed by digital instruments. You stack up layers of sounds, recorded and composed linearly, arranged on a timeline, mixed together, and eventually "bounce" out an audio file that ends up on Spotify or other distribution platforms. A human author specifies what happens and when, and the composition is fully deterministic.
There's been plenty of innovation within this model, and its development is itself a fascinating history. It's gone through its own revolutions, even in just the past few years. Ableton Live rebuilt the traditional linear workflow around loops, tempo matching, and timestretching, among other advances. An entire ecosystem of "plugins" (digital instruments and effects designed to integrate with these tools) expanded sonic possibilities. Countless UI innovations emerged: piano rolls, graphical step sequencers, chord-based sequencers and control surfaces, and many more. I spent years at Splice building a content ecosystem designed to make it easier to layer in loops and samples.
Despite the tech evolution, it's still composition in the classical sense. Producers add layers of deterministically authored parts, performed by digital instruments or real recorded humans, even if the process is faster and more fluid than notation on staff paper. Even the latest wave of AI-powered music startups are constrained to this path, in that their result is ultimately a static audio file. You can't do much to control it, except to prompt it again to make something new.
But there's another branch of the tree. Starting in the mid-1960s, a different way of making music became possible. Like many developments in music history, it was a product of new technology.
Discovering Generative Music
About 11 years ago, I went to see the "David Bowie Is" exhibition in Chicago. There was a section about Bowie and Brian Eno's experiments with generating lyrics through cut-up slips of paper and custom software, and a brief plaque describing Eno's experiments with "machine music", an idea I had never encountered before. That was my entry point into discovering Eno's work, and through him, the whole field of generative and ambient music.
"Machine music" was unlocked by the invention of the synthesizer. If you're not a modular synth head, you may think of a synthesizer as something with a keyboard and a bunch of knobs on it, and probably have some kind of 80s-casette-tape tinged idea of its sound, maybe someone with big hair playing it. And that is a big part of what's great about synthesizers! But a stricter definition would be: a synthesizer is a machine that makes sounds according to rules.
And once that machine exists, the rules can be whatever you want. This insight, discovered by some of the synthesizer's earliest inventors (Buchla, Moog, Pearlman, many others), led to the emergence of a new type of music, where you design creative rules for music machines to follow. Rather than composing a specific piece and drawing in notes, you're designing a system. You define a set of rules and relationships that generate music spontaneously. The composition is the system that plays it.
As a result, your role as a composer is different. Instead of writing individual notes, you're thinking about structures, behaviors, patterns, and logic, and strategically evolving them over time. You don't write a specific drum pattern. Instead, you create a clock at a certain speed for the piece, and say that a kick drum should "happen" on every 4th pulse, a snare on every 8th pulse. Maybe we add a hi-hat on every single pulse but only 50% of the time, and actually maybe that probability should slowly ramp up to 80%, but only while the synth is playing. Whether the synth is playing is itself a weighted coin flip that happens every 64th beat. And the weighting of that coin flip fluctuates up and down on a very slow, 3-minute-long oscillating cycle that shifts the pitch up and down as well…
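To make that concrete, here's a toy sketch in Python of rules like these. It's my illustration, not any real tool's engine, and the step counts and rates are arbitrary assumptions: a clock drives deterministic kick and snare triggers, a probabilistic hi-hat whose odds ramp over the piece, and a periodic weighted coin flip, shaped by a slow LFO, that decides whether the synth plays.

```python
import math
import random

STEPS = 256                        # total clock pulses to simulate
HAT_START, HAT_END = 0.50, 0.80    # hi-hat probability ramps from 50% to 80%

synth_playing = False
for step in range(STEPS):
    kick = step % 4 == 0           # kick "happens" on every 4th pulse
    snare = step % 8 == 0          # snare on every 8th pulse

    # A very slow LFO (one cycle per 192 steps here) weights the coin flip.
    lfo = 0.5 + 0.5 * math.sin(2 * math.pi * step / 192)

    # Every 64th beat, re-flip the weighted coin: is the synth playing?
    if step % 64 == 0:
        synth_playing = random.random() < lfo

    # Hi-hat on every pulse, but only probabilistically, only while the
    # synth is playing, with the probability ramping up over the piece.
    hat_prob = HAT_START + (HAT_END - HAT_START) * step / STEPS
    hat = synth_playing and random.random() < hat_prob

    print(f"{step:3d} {'K' if kick else '.'}{'S' if snare else '.'}{'h' if hat else '.'}")
```

Run it a few times and every "performance" comes out different, which is exactly the point.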
This allows you to work at a higher level of abstraction, thinking about moods, shapes, and textures, translating them into cascading logic that creates music on the fly. Of course, many talented composers think this way, even when writing things the old-fashioned way. But in a modular environment it is the primary workflow, with direct control over these abstractions and knobs to adjust them. It encourages a different type of thinking and consequently a different kind of music.
One important consequence, and a novel creative possibility, is that this encourages you to give up direct control over the composition. By introducing controlled randomness into the system, every performance becomes unique. This is critical to create variation, and it helps generative compositions sound organic and consistently interesting. It's not pure chaos; you regulate the randomness with constraints and quantizers. This creative push and pull with unpredictability is a fundamental creative loop when patching a modular synth.
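A pitch quantizer is the simplest example of that regulation. Here's a toy version (a hypothetical sketch, not any specific module): it takes unconstrained random "voltages" and snaps each one to the nearest note of a chosen scale, so the randomness stays musical instead of chaotic.

```python
import random

# Semitone offsets of C minor pentatonic within one octave (an arbitrary
# scale choice for this sketch).
SCALE = [0, 3, 5, 7, 10]

def quantize(semitones: float, scale=SCALE) -> int:
    """Snap a continuous pitch value to the nearest scale degree
    (simplified: it ignores wrap-around to the next octave's root)."""
    octave, pitch = divmod(semitones, 12)
    nearest = min(scale, key=lambda degree: abs(degree - pitch))
    return int(octave) * 12 + nearest

# Raw randomness in, scale-safe notes out.
raw = [random.uniform(0, 24) for _ in range(8)]
print([quantize(v) for v in raw])
```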
There is a creative joy to this. You can shift your role from author to explorer. You're in a feedback loop with the system, listening to what it produces, tweaking parameters, and curating the most interesting moments. In a co-creative dialogue, you design, the system responds, and somewhere in that exchange, ideas emerge that neither of you could have produced alone.
The Interface Problem
This is about as abstract as creative work can get. Music interfaces have always had to deal with abstraction. You can't see sound, so everything is fundamentally a visualization of data. Waveforms, piano rolls, spectrograms, and VU meters are all analogies for something invisible. Using space, color, size, or positioning, they communicate via metaphor something you can't otherwise see.
But in modular environments the challenge runs even deeper, because you are trying to manipulate the data itself more directly. The deep truth of modular synthesis and generative composition is that ultimately everything is data. How dense a pattern is, how loud a sound is, how aggressive a timbre feels, whether an instrument is playing at all: all of it can be represented as numbers. Or in the delightfully analog world of Eurorack synthesizers, everything is control voltage. Anything can be a signal, anything can control anything else. Practitioners return to this insight again and again: all recorded music is voltage over time.
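In code, the same idea fits in a few lines. In this sketch (mine, purely for illustration), a signal is just a function of time, and "patching" means feeding one signal's output into another's parameter. Control data and audio data are the same stuff.

```python
import math

SAMPLE_RATE = 44_100

def sine(freq_hz: float):
    """A signal is just a function of time returning a value in [-1, 1]."""
    return lambda t: math.sin(2 * math.pi * freq_hz * t)

audio = sine(220.0)   # audio-rate signal: a 220 Hz tone
lfo = sine(0.25)      # control-rate signal: one cycle every 4 seconds

def patched(t: float) -> float:
    # "Patch" the LFO into the amplitude input of the audio signal.
    return (0.5 + 0.5 * lfo(t)) * audio(t)

# One second of "voltage over time."
samples = [patched(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
```

Swap which signal drives which parameter and you have a different instrument; that interchangeability is the whole game.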
When you're designing systems at this level of abstraction, where signals feed back into each other, where control data and audio data flow through complex interconnected networks, you desperately need tools that help you see what's happening. In the world of analog modular, these connections are literally manifested in patch cables. You take an output, plug it into an input, and the physical wire carries the signal. Change the cable and you change the flow. Unpatched systems make no sound at all, since there's no cable to carry the signal to the output.
Just as the traditional world of composition was translated to software in the 80s and 90s, so too was modular synthesis. But with a different workflow, different goals, and different visualization challenges, this branch of music making followed a different evolutionary path of interface design. Propellerhead's Reason, VCV Rack, and NI's Reaktor have leaned into skeuomorphism and literally represented patch cables.
Open-source Pure Data ("PD") and its commercial sibling Max/MSP went more abstract, taking "everything is data" even further and opening up cross-modulation with video and other types of data. (Pure Data is so "pure" that its patches can be compiled to C.)
More recently, Bitwig's Grid attempted a more user-friendly take that bridged to traditional timeline-based DAWs. I'll spare you further digression on the deeper-cut picks (Kyma, Usine, Flowstone, Nord Modular G2 Editor…) but suffice it to say that there are dozens of us! Dozens!
Each tool makes its own creative choices, and carries its own strengths and limitations. But they all share the same core concept of patching connections to route data around a complex system. I like to think of this as convergent evolution, much like sharks and dolphins independently evolving fins to suit an aquatic environment. While makers of these tools certainly inspired each other, design choices also emerged through decades of practitioners discovering what actually works when you're orchestrating abstract systems. The patch cable makes invisible data flow visible, and the nodes encapsulate useful functions. The tool is a canvas to connect ideas and watch them interact.
What does this have to do with Miro?
If you came in more interested in Miro than in an extended digression on the roots of experimental techno, I owe you a segue.
Like most SaaS product teams in 2023 post-ChatGPT, we added a conversational agent to our product. But Miro's unique product surface created unique challenges. Our data is visual, not inherently text-based. Outside data flows into the product via embeds. Many widgets and formats — especially tables — are views onto much deeper data, sometimes contained on another board. And on top of it all, because AI interactions work best when they can use outside knowledge and context, we needed new ways to visualize external data connections as well.
Miro users are shifting from being direct authors of every idea — writing every sticky note, crafting every diagram by hand — to becoming curators. With AI in the mix, they're suddenly dealing with abstract data flow, reacting to generated content, and selecting the best outputs in a discovery loop. More and more, our interface is visualizing the flow of information between AI processes. We have inputs being transformed into outputs, context being read from the canvas, artifacts being created. Suddenly we are confronting the same need to visualize data flow that node-based music tools responded to decades ago.
The Moment
Last summer, we were working on what seemed like a fairly narrow problem. We'd already built the first version of Sidekicks, Miro's conversational AI, and we were trying to figure out how to put AI interactions into templates. Templates are crucial for feature discovery in Miro because they help us put our features in the context of real-world scenarios and use cases. We needed a way to "templatize" AI to deliver specific AI interactions that could be repeated again and again, without the user having to come up with complicated prompts on their own.
We'd created a button that could trigger an AI conversation with a starter prompt, but it had serious limitations. Useful AI interactions need specific context: maybe the sticky notes in a particular frame, or votes on ideas, or a diagram as input. And they need to produce something specific: a doc, a table, an artifact you can share with colleagues.
In our first attempt, we built configuration menus. You'd open the button settings, choose a source frame from a dropdown, and select a target location for the output. It worked, technically, but it felt completely wrong. You were constantly jumping between the menu and the canvas, selecting things abstractly that were sitting right there visually.
And then one night I sat up in bed (literally!) and realized: I want to configure this by drawing it. And from years of experience playing around with synths, I knew exactly what to do.
The inputs and outputs shouldn't be dropdown selections. They should be visible connections, "cables" on the canvas. I wanted to see the flow, to make the invisible visible. I wanted to plug inputs into outputs, turn knobs and dials. I opened my laptop, threw together a sketch of the idea in a Miro board so I wouldn't forget, and went back to sleep. I woke up buzzing with excitement, went into the office, and fired off a demo video of the idea. You can check it out for yourself right here. I called it "Board Brain" at the time. Many thanks to the Miro PMM minds that gave it a better name :)
Nine months later, we launched it on stage at Miro's Canvas '25 in New York City. On my walk to the Domino Sugar Factory, I walked right past the old Glasslands space (RIP) where I had performed years before, with a janky SM-57, a cheap audio interface, and a beat-up laptop running a Pure Data patch for generative visuals. Now the same ideas, planted a decade before, powered a major Miro launch.
Four reflections
1. It Takes a Team
The initial insight was just that — an initial insight. From the moment we started using the proof of concept, it was clear things needed to change. What Flows became is the product of a team iterating relentlessly on the idea. I owe a tremendous debt to the team that took the baton, in particular the brilliant design and product leads Ahmed Genaidy and Damir Disdarevic. Working in tight iteration loops with Ahmed, Damir, and our all-star engineering team has been a highlight of my career.
The visualization of connections evolved. We moved away from instruction blocks towards inline prompting and simpler flows. A top-to-bottom overhaul of the UX started as a hackathon project from passionate engineers over the summer. How users build flows, how they're displayed, how they execute: all of it has been refined and evolved. The consistent theme was simplification. We're not building for soldering-iron-wielding synth nerds, after all.
Even after many rounds, what exists today is just the beginning. We're learning at high speed from what our first customers are building, the limitations they're hitting, and the possibilities we never could have imagined when we started.
The team has pushed the idea far beyond the initial seed, into a complex, enterprise-ready workflow orchestration tool. We're seeing people build boards and Flows that go way beyond what I imagined in my first pitch.
Creativity benefits from collaboration. The best ideas come from cross-pollination and iteration. The feature that helps teams work this way was itself built this way, through a process of co-creation and collaboration. This sort of recursion is something I've gotten used to on the AI team at Miro. As we transform our ways of working, the tool evolves, and as the tool evolves, so does our way of working. We are in a co-creative loop with the team, our customers, and with the tech itself.
Miro isn't alone in arriving at this paradigm. Tools like n8n, Flora, Hunch, Weavy, and others have independently developed node-based approaches to AI workflows, and lines are blurring with earlier generation automation tools like Zapier and IFTTT. I find that validating rather than threatening. I believe we're not just part of a trend — we've independently converged on a design solution because it's structurally right for the problem. Just as node-based interfaces emerged across different music tools because the problem demanded it, the same is happening now for AI orchestration.
2. What Makes Miro Different
If this is becoming a category, what's distinct about how Miro does it?
First, Miro is where ideas already live. We don't just pull data from external systems. Our home turf is messy early-stage thinking that's native to the canvas. Brainstorm sticky notes, rough diagrams, a napkin sketch, user quotes in stickies: these are our raw material.
Second, Miro is collaborative by default. Creativity is social. You're building on teammates' contributions, riffing and yes-anding, synthesizing different perspectives. We know from years of building Miro that this is how good ideas develop. Bringing AI into a collaborative environment makes a big difference in the quality of the results.
Third, there's an immediacy and tangibility to Miro that goes beyond typical AI workflow tools. Our whiteboard lineage started in the physical world! Many of our design choices optimize for data tactility and manipulability. In modular synthesizers, the magic isn't just in the wires, but also the knobs. You can reach over, tweak a parameter while the system is running, and hear the change instantly.
In Miro, your inputs and outputs are right there on the canvas. You can edit the sticky notes, rearrange the diagram to create a different hierarchy, sort or group a table, change the dates in a timeline. You're not shipping data off to an external system and waiting for results. The abstract workflow and the manipulable content exist on the same surface.
3. Unpredictability, Collaboration, and Creative Surrender
In modular setups, sources of randomness are some of the most important elements. White noise, unsynced LFOs, electrodes stuck to a plant, a live data feed from the S&P 500 — all of this and more has been used to inject life into generative compositions. This creates spontaneity and perceived humanity in what would otherwise be mechanical systems. Determinism sounds sterile; unpredictability helps the music breathe. Even if you're not musical, you have an incredible ear for repetition, and disrupting it is key to creating interest.
You don't fully control what AI generates, no matter how specific your prompting. In a collaborative environment, your teammates add their own kind of unexpected (but ultimately invaluable) chaos: ideas you didn't expect, directions you wouldn't have taken.
One of the core design rules of my modular setup—ModularGrid link for the nerds—is no direct sequencing. There are plenty of modules to program in melodies, drum steps, and so on, but I don't use them, by design. I find that "writing brain" is different from "explorer brain", and I've specifically designed the system to spend as much time in the latter mode as possible.
I believe there's a similar dynamic in creative collaboration. "Command and control" bulldozes past the quiet ideas that emerge in a curated garden but never in an architected skyscraper.
4. Why Customers Respond
One of the most interesting aspects of this product to me is how immediate and enthusiastic the customer response has been. This initiative acquired hurricane force within Miro, in part because everyone we showed it to had a sort of lightbulb moment with AI. When I reflect on this, I think about how the chat interface (while immensely flexible) has some serious drawbacks for AI comprehension, and there is power in providing an alternative.
AI is a black box. It's hard to understand what it actually does and why it produces what it produces. When you interact with a chat, there is a natural tendency to personify it, and the major AI labs actively encourage this! But it's not a human. In the end, LLMs are information processors. Good results come from thoughtfully constructed context and intentional, structured output. Perhaps it's not so surprising that code is one of the early killer use cases: code is structured language, so some of the constraints for good results are baked in. But for messier knowledge work, it's less effective, since the data is inherently less structured. It's hard to nudge people towards the behaviors that drive good output in the infinite possibility space of a blinking cursor in a chat box.
Flows is a little different. When someone looks at a Flow, they're not just seeing a product interface, they're seeing a visualization of information moving through a system. Context goes in here, transformation happens in the nodes, output lands there. It's literally a mental map, a diagram of the system, that is both functional and visualized on the canvas. And here we come full circle back to modular patching. It's all data, all control signal, all voltage. And for the first time, it has knobs and dials, not just a squishy chat box.
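If you squint, a Flow in this sense is just a tiny dataflow graph. Here's a toy sketch of that mental model (my illustration only, not Miro's actual implementation): context goes in at the source nodes, a transformation happens in the middle, and the output lands at the end. The `Node` names and the stand-in "AI call" are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    name: str
    transform: Callable[[list[str]], str]       # inputs in, artifact out
    inputs: list["Node"] = field(default_factory=list)

    def run(self) -> str:
        # Pull context along the "wires" from upstream nodes, then transform.
        upstream = [node.run() for node in self.inputs]
        return self.transform(upstream)

# Context goes in here...
stickies = Node("stickies", lambda _: "raw brainstorm notes")
votes = Node("votes", lambda _: "top-voted ideas")

# ...transformation happens in the node (a real system would call a model)...
summarize = Node(
    "summarize",
    lambda ins: f"summary of: {', '.join(ins)}",
    inputs=[stickies, votes],
)

# ...and the output lands there.
print(summarize.run())
```

The wires in the diagram and the edges in the graph are the same thing, which is why the picture doubles as an explanation of the system.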
That legibility is powerful because it helps people understand AI in a way they didn't before. The wires make the invisible visible, and comprehension follows.
Wrapping up
It's a well-known truth that creative processes benefit from cross-pollination and collaboration. We think about that explicitly all the time at Miro. It's why we exist, and we try to design this DNA into every interaction. But I'm still a little surprised and moved that when we reached for a new way to interact with AI, this obscure hobby that I hold so much love for provided such a rich source of ideas.
I don't think this is the only answer for AI UI. We have Sidekicks (conversational AI) in Miro as well, and that's for a reason. For certain kinds of thinking, you want to talk to someone! It's an intuitive interaction paradigm, and it's infinitely flexible. My takeaway from developing Flows is that conversation is one of several options, and there's huge value in designing alternatives for tasks with different needs.
I know that many people are anxious about AI right now, for good reasons. (I count myself among them.) But the best version of this technology, and the one I hope to contribute to in some way, is the version that amplifies human creativity. Key to this is giving us better interfaces — not to sit back and accept slop, but to lean forward and be able to tweak, steer, shape, collaborate. Artistic disciplines and creative software have a lot to teach us, as they've been dealing with the design challenge of creating satisfying results in messy, technical collaboration with unpredictable agents for a long time.
As we go further into Flows, Sidekicks, and what we're planning for next year, I'm excited to keep pushing on this — patching cables, tweaking knobs, and seeing what new ideas we discover in the infinite possibility space.