Today is my birthday, and I’ve decided to open a time capsule.

Eighteen years ago, we started building Myousica — a platform for collaborative music creation in the browser. Record from your microphone, upload tracks, remix other people’s music, build songs together with strangers across the internet. We launched in September 2008 after nine months of development.

It was a startup. It ran for about five months before being paused, and the source code was eventually released on GitHub under the name Mewsic. I wrote about the technical details in a three-part series: the Rails platform, the Flash multitrack editor, and the audio pipeline. Those posts cover the engineering. This one is about the bigger picture.

The right idea at the wrong time

The core concept was solid: let anyone make music in a web browser, collaboratively. No software to install. Open your browser, pick a song, add your guitar track, share the result. A musician in Rome could start a beat, someone in Tokyo could add bass, a singer in São Paulo could lay down vocals on top. All in the browser.

The problem was that in 2008, browsers couldn’t do any of this natively.

To capture audio from a microphone, you needed Flash — an ActionScript front-end running in the Flash Player plugin. To stream that audio to a server, you needed RTMP, with a Java media server (Red5) on the receiving end just to accept the stream and write it to disk as FLV files. To turn those FLV files into playable MP3s, you needed a pipeline of ffmpeg, sox, and background workers on the server side. To display a waveform, you rendered it as a PNG — the Canvas API wasn’t mature enough. To play back multiple tracks in sync, you built a custom playback engine in ActionScript with frame-accurate timing.

The entire architecture existed to compensate for what the browser couldn’t do. Four separate services, ~2,000 commits, half a dozen external tools — all to achieve something that the Web Audio API would later make possible in a few hundred lines of JavaScript.
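For a sense of scale: the core of a multitrack mixdown, summing per-track sample buffers and clamping the result, is a few lines of plain JavaScript today. This is a toy sketch on ordinary arrays; a real engine would feed Float32Array buffers to the Web Audio API rather than mix by hand.

```javascript
// Toy sketch: sum several mono tracks into one buffer, clamping to [-1, 1].
// A real engine would use Float32Array buffers and the Web Audio API
// (AudioContext, decodeAudioData); this only shows the core arithmetic.
function mixDown(tracks) {
  const length = Math.max(...tracks.map((t) => t.length));
  const out = new Array(length).fill(0);
  for (const track of tracks) {
    for (let i = 0; i < track.length; i++) {
      out[i] += track[i];
    }
  }
  // Hard-clip the summed signal so it stays in valid sample range.
  return out.map((s) => Math.min(1, Math.max(-1, s)));
}
```

In 2008 this arithmetic had to live inside a Flash playback engine; today it is a pure function you could drop into any page.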

Accidental microservices

Here’s a fun detail: our four-service architecture — Rails app, Flash multitrack, Red5 media server, audio processing uploader — predates the term “microservices.” James Lewis presented the concept at 33rd Degree in Kraków in 2012, and Martin Fowler popularized it in 2014. We didn’t call our architecture anything. We just needed separate services because one Rails app couldn’t handle audio transcoding, real-time RTMP streaming, and a multitrack editor at the same time.

But looking back, that’s what it was: independent services communicating via HTTP callbacks, stateless token-based authentication between them, shared nothing except the filesystem for audio spools. The uploader didn’t know about users or songs — it just processed audio files and called back to the main app when done. Red5 didn’t know about anything — it just recorded RTMP streams to disk. Each service had one job.

We just didn’t have a name for the pattern yet. To be fair, it was one extra service — not exactly a distributed system manifesto. But it’s amusing that what we considered “just common sense” would become a whole architecture movement a few years later.

What exists today

Open BandLab in your browser right now. You’ll find a full multitrack editor with recording, virtual instruments, effects, real-time collaboration, sharing. Free. Over sixty million users. Founded in 2015.

Soundtrap launched in 2012, was acquired by Spotify in 2017, and sold back to its founders in 2023. Browser-based collaborative music studio. Multiple people editing the same project in real time.

Splice launched in 2013. Cloud-based collaboration with version control for music projects — like Git for DAW sessions — plus a massive royalty-free sample marketplace.

They all do what Myousica did. Record in the browser. Layer tracks. Collaborate with other musicians. Build songs together. The difference is that they launched when the technology was ready: the Web Audio API for native audio processing, WebRTC for real-time streaming, the MediaRecorder API for microphone access, Web Workers for multithreading, and the kind of bandwidth that doesn’t make you choose between streaming audio and loading a webpage.

We built the same thing eight years earlier, and we had to build half the browser to do it.

What remains

The code is on GitHub. Five repositories, from the Rails app to the ActionScript multitrack to the audio pipeline. Not as a product — as a time capsule. A record of what it took to do browser-based collaborative audio in 2008, before any of the APIs existed that would make it reasonable.

I’m proud of what we built. Vaclav Vancura designed an extraordinary multitrack editor in ActionScript — 7,000 lines of beautifully architected code. Andrea Franz and Giovanni Intini built the foundations of both the main app and the uploader. Fabio Grande designed the visual identity — the UI, the logo, the whole look and feel. And the five of us, across ~2,000 commits, shipped a collaborative music platform that actually worked. You could open a browser, record a track, and jam with someone on the other side of the planet. In 2008.

Was Myousica a commercial success? No. Was the idea right? Sixty million BandLab users say so.

We were just too early.

