The Muse Max for Live device is a powerful AI MIDI and Synth Generator compatible with Ableton Live. If you're new, I highly recommend reading this guide, particularly the prompting tips, to help you get the most out of Muse.
Installation and Activation
After starting a subscription trial or purchasing the standalone device, you'll find a download link with installation instructions on your account page.

After downloading, simply drag and drop the .amxd device onto a MIDI track in Live, then enter the activation code that appears on device initialization at https://muse.art/activate to register your computer.
Note: Max for Live requires Ableton Live Suite, or Ableton Live Standard with the Max for Live add-on.
Interface Breakdown

Basic input box for prompt entry. (Note: A few keyboard shortcuts are not possible in the device due to Ableton Live intercepting them, but copy/cut/paste works fine)
Toggles to control MIDI generation components. Any combination of Melody, Chord, and Bassline can be used, OR Freeform can be used. These lead to two distinct AI generation pipelines, as described in the Advanced Text-to-MIDI Prompting section. At least one generation component toggle must be selected for MIDI generation to proceed.
Controls MIDI and/or Synth generation. At least one must be selected. Synth is not compatible with Split Output.
Used to send the currently loaded prompt and settings to the server. Click again while generating to cancel the active request.
- Key: Chromatic key selection
- Scale: Musical scale selection
- Length: Duration of generation in bars (2-16). Lengths beyond 8 bars use 2x credits
- Humanize: Parameter influencing note onsets, velocities, and chord styles
- Sounds: Filter for Synth sound type
- Split Output Toggle: Controls whether MIDI is combined into a single clip or split into multiple clips when using Melody/Chord/Bassline
- Output Selectors: Select output MIDI track from available tracks in the Live set
- HQ Mode: Lets the planning model spend more time thinking about how to fulfill your request. Slower, but leads to better results. Costs 2x credits
- Estimated generation time when not generating
- Realtime status updates on generation progress while generating
- Chord names provided after generation completes
General Usage
A prompt is required for every generation, and MIDI and/or Synth generation type must be selected. A MIDI generation component (Melody/Chord/Bass or Freeform) must also be selected if MIDI is enabled.
MIDI generation is the core of the Muse M4L device. After sending a prompt, you'll see an empty MIDI clip appear on the selected track in the arrangement view of your Live set. The clip generates at the earliest available position that can fit the length. It is recommended not to move this clip until after the MIDI generation is finished. You'll see the notes stream to the clip as they are generated, and most generations will be complete within 30 seconds. After generation finishes, you can treat it like any other MIDI clip.
The synth functionality lets you create brand-new presets for Ableton's stock Wavetable synth from a prompt. The Wavetable synth has some limitations and works best for analog keys and pads, but it's a fun way to explore sounds you might not otherwise find. If Synth is enabled, the Split Output toggle must be turned off. After sending a synth request, on macOS you'll see a blank Wavetable instrument loaded onto the selected track; Windows users must manually add the Wavetable instrument before generating. As synth generation proceeds (usually 10-20 seconds), you'll see the generated preset parameters update directly on the instrument.
It is possible to have multiple Muse M4L devices open in one set, but we recommend having only one in use at a time and using the output selectors to route generations to other tracks.
Each request costs credits. You can check your current credit usage on your account page at https://muse.art/account. By default, Muse Pro subscribers receive a monthly allotment of 200 credits.
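Putting the multipliers mentioned above together (2x credits for lengths beyond 8 bars, 2x credits for HQ Mode), you can estimate the cost of a request with a quick sketch. Note this is an illustration, not official billing logic: the 1-credit base cost and the assumption that the two multipliers stack are my own, so check your account page for actual usage.

```python
def estimate_credits(length_bars: int, hq_mode: bool, base_cost: int = 1) -> int:
    """Rough per-request credit estimate (illustrative assumptions only).

    Assumed: a standard request costs `base_cost` credits, lengths beyond
    8 bars double the cost, HQ Mode doubles the cost, and the two
    multipliers stack.
    """
    cost = base_cost
    if length_bars > 8:
        cost *= 2  # long generations (9-16 bars) cost double
    if hq_mode:
        cost *= 2  # HQ Mode costs double
    return cost


# A 16-bar HQ request would cost 4x the base under these assumptions.
print(estimate_credits(16, hq_mode=True))
```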
Basic Text-to-MIDI Prompting
Muse is built on a combination of frontier LLMs and custom fine-tuned LLMs, which means the prompting style you should use is closer to conversational (like ChatGPT) rather than caption-based (like Midjourney).
For example, a strong prompt for a diffusion model might be something like: "photography, dynamic portrait of a woman, vibrant orange and deep blue motion blur effects, movement and powerful energy, vivid yellow background that enhances the perception of speed, bokeh effect, ultra realistic, 16K, rich detailing --ar 9:16 --style raw". These models are trained on image and text caption pairs, so you are essentially imagining a caption for the image you want to see.
Meanwhile, a strong prompt for Muse focuses on providing the model with context so it can decide how to best serve the song. This type of prompting is most effective when you already have a track in progress, or if you have a clear vision for what you want to make.
Example: "I'm building out a microhouse track and looking for a melody to keep things interesting in the B section. It will be played on an analog synth similar in tone to a moog saw, and I've already decided on the foundational chord progression for the track - A, Bm, G, D. While the melody is playing there will be some ambient pads and a drum build up. I'm looking for something wandering, mysterious, repetitive, and arpeggiated, heavily inspired by Four Tet."
Alternatively, you can prompt Muse into exploring its own creativity by giving it an ambiguous message. This type of prompting is a lot of fun and a great way to start songs. You could provide a single word, an emotion or memory, or a poem or song lyric, and get back a MIDI progression reflecting the model's interpretation. This is a more experimental approach - less consistent, but great for coming up with new ideas that spark your own creativity.
Advanced Text-to-MIDI Prompting
To get even more out of your prompts, it is helpful to have some background on how Muse MIDI generation works under the hood. The first thing to know is that the components you select have a huge impact on how the AI generation happens.
Generation Components (Melody, Chord, Bass | Freeform)
The Melody / Chord / Bass (MCB) toggles follow one generation pipeline, and the Freeform follows another.
For the MCB route, generation happens as follows:
- Chord planning based on your prompt, including selection of chord styles (more on these below). Note: this step always happens in the MCB route, even if chords are not enabled.
- Chord generation begins, if enabled.
- The chord plan is used for Melody and/or Bassline planning, if enabled. This ensures that all of the components are cohesive and follow the same overall chord progression.
- Melody and/or Bassline generation begins.
For the Freeform route, the model creates a single plan and generates everything in one step. Freeform generation is currently set to have chord blocks disabled, meaning it generates each note individually.
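The ordering of the two routes can be pictured with a short sketch. Everything below is hypothetical pseudocode written in Python for illustration - none of these functions or data structures are part of the Muse device; they only capture the step ordering described above.

```python
# Hypothetical sketch of the two generation routes. These functions are
# illustrative placeholders, not the Muse device's actual implementation.

def plan_chords(prompt):
    # Placeholder: a real planner would choose a progression and
    # per-chord playing styles from the prompt.
    return {"progression": ["Am", "F", "C", "G"], "styles": ["block"] * 4}

def generate_from_plan(part, plan):
    # Placeholder for per-component MIDI generation.
    return f"{part} following {plan['progression']}"

def generate_mcb(prompt, melody=True, chords=True, bass=True):
    # Step 1: chord planning ALWAYS runs in the MCB route,
    # even when the Chord toggle is off.
    plan = plan_chords(prompt)
    outputs = {}
    # Step 2: each enabled component is generated against the same
    # chord plan, which keeps melody, chords, and bass cohesive.
    for part, enabled in [("melody", melody), ("chords", chords), ("bass", bass)]:
        if enabled:
            outputs[part] = generate_from_plan(part, plan)
    return outputs

def generate_freeform(prompt):
    # Single plan, single pass; notes are produced individually
    # (chord blocks disabled).
    return f"freeform interpretation of: {prompt}"
```

Even with melody only, the MCB sketch still builds a chord plan first, which is why that route stays harmonically consistent across components.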
How does this affect your generations?
- MCB route uses more advanced chord generation based on chord styles
- MCB route has dedicated planning for each component, making it more consistent
- MCB route allows for Split Output, letting you generate MIDI for three tracks at once
- Freeform generation is quick and generalizable
- Freeform generation lets the model be more expressive, which can make results less consistent but tends to lead to more frequent moments of brilliance
If you want chords in your generation, you should definitely use the MCB route. If you're looking for something unorthodox, Freeform will give you more out-of-the-box ideas.
Chord Styles
The chord generation model is trained to look for key chord playing styles, like 'block', 'roll', 'arp', or 'bounce'. By default, the planning model will try to understand the intent of your prompt, and select chord playing styles for each chord in the progression. The Humanize knob has a big effect on which chord playing styles are used - lower humanize means 'block' chords are more likely, and higher humanize invites more unorthodox styles.
You can control which chord styles are used by simply including them in your prompt.
"G minor progression with block chords"
"G minor progression with beautiful rolling chords"
"G minor progression with rhythmic bouncing chords"
The following chord styles are currently available, with more coming soon:
- block: standard sustained chord
- roll: smooth, rolling arpeggios
- strum: sustained chord with staggered onset
- arp: repeating on-the-grid arpeggios
- push: main chord followed by anticipatory hit
- bounce: rhythmic alternation of chord voicings
- pulse: repeated full chord hits with velocity variation
- flutter: rapid alternation between a subset of the chord's notes
- spray: random note selection with controlled density
- swell: rapid chord hits with velocity-controlled crescendo/diminuendo
Try experimenting with different chord styles in your prompt. Once you get the hang of it, it becomes a powerful way to come up with new ideas.
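To make the styles concrete, here's a small sketch of how a few of them might render a C major triad as (onset_in_beats, pitch, velocity) note events. The timings and velocities are invented for illustration; the real device chooses these itself based on your prompt and the Humanize setting.

```python
# Illustrative only: how a few chord styles might realize a C major triad
# (MIDI pitches 60, 64, 67) as (onset_beats, pitch, velocity) events.

TRIAD = [60, 64, 67]

def block(chord, velocity=96):
    # 'block': all notes sound together on the downbeat.
    return [(0.0, pitch, velocity) for pitch in chord]

def strum(chord, velocity=96, stagger=0.05):
    # 'strum': sustained chord with slightly staggered onsets.
    return [(i * stagger, pitch, velocity) for i, pitch in enumerate(chord)]

def arp(chord, velocity=96, step=0.5, repeats=2):
    # 'arp': repeating on-the-grid arpeggio, one note per grid step.
    return [
        ((r * len(chord) + i) * step, pitch, velocity)
        for r in range(repeats)
        for i, pitch in enumerate(chord)
    ]
```

Viewed this way, the styles are mostly about how a single chord's notes are distributed in time, which is why swapping style keywords in a prompt can change the feel of a progression without changing its harmony.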
Workflow Suggestions
Muse is designed to read your intent from any prompt, but the more explicit you make your prompt, the less it has to guess. Muse works best at the extremes - I recommend either providing a prompt with ample context on your track (the instrument the part will be played on, other musical elements in the track, etc.), or providing something abstract and artistic to let the model showcase its own creativity.
Iterative prompting is a great way to home in on a great composition. If you send a prompt and find something in a generation that you don't like, try telling the model not to do that in your prompt and generating again. For example, perhaps I prompt "4 bar chords E minor for a moody piano intro" and get something like this:

There's nothing wrong with it, but the chords are all blocky, and I'm looking for something with more flow.
If I keep the same prompt but adjust it slightly to "4 bar chords E minor for a moody piano intro, flowing" and raise the Humanize value from 0 to 15, I get this:

Chaining chord names between generation requests is a great way to use Muse to build up a track. After a generation through the MCB route finishes, you'll see the chords that were used listed in the Status Panel:

If you're a musician, you can use this to help you play along on your instrument. Or, you can select and copy these, and paste them to another prompt to create a new generation that complements your original. This is an extremely powerful approach - it shifts Muse from a song starter/idea generator to a full composition assistant you can use to build out an entire track.