Magenta Studio: free machine learning tools for Ableton Live

Instead of just accepting all this recent machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live. Magenta provides a pretty graspable way to get started within a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies.

Download Magenta Studio for free

Requires Ableton Live 10.1 and Max for Live

Magenta Studio made its first appearance in public at Ableton’s Loop conference in LA in November. There, in a talk entitled ‘The Computer as Collaborator,’ the Magenta team joined artists in exploring what machine learning means for creativity. Artists YACHT and lucky dragons reflected on what humanness and listening mean to music, and how they’ve experimented with these ideas – including YACHT going as far as generating lyrics and melodies. Jesse Engel and Adam Roberts brought perspective from the Google Brain research team, melding engineering and music.

After some more polishing, Magenta Studio has been ready for primetime since its full release earlier this year. If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. And if you’re a developer, you can dig far deeper into the tools and modify them for your own purposes – and even with just a little comfort with the command line, you can train your own models. (More on that in a bit.)

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer, which also has its own Ableton Live device from a couple of years back.

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, since they’re a bit unique – and you can again play around with them in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that loops over the same operation again and again as it works through a sequence. We say it’s “learning” in the sense that there are some parallels to very low-level conceptions of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly over a particular data set means it can predict sequences more and more effectively.
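
To make that concrete, here’s a toy sketch of a recurrent step in Python – a generic “vanilla” RNN cell with made-up sizes and random weights, not Magenta’s actual architecture – just to show how one repeated operation carries context through a sequence.

```python
import numpy as np

# The same weights are applied at every position in the sequence, and the
# hidden state carries context forward. Training would adjust these
# weights; here they're random, purely for illustration.
rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b = np.zeros(hidden_size)

def rnn_step(x, h):
    """One step: combine the current input with the previous state."""
    return np.tanh(W_xh @ x + W_hh @ h + b)

h = np.zeros(hidden_size)
for x in rng.normal(size=(16, input_size)):  # a fake 16-step sequence
    h = rnn_step(x, h)  # h now summarizes everything seen so far
```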

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.

One reason it’s cool that Magenta and Magenta Studio are open source: you’re totally free to dig in and train models on your own data sets, as sketched below. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)
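
As a deliberately tiny illustration of that dataset dependence – nothing like Magenta’s neural models, just a hypothetical first-order counter of which note tends to follow which – the same code produces audibly different melodies depending on the corpus you feed it:

```python
import random
from collections import Counter, defaultdict

def train(corpus):
    """'Train' by counting note-to-note transitions in a corpus."""
    transitions = defaultdict(Counter)
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            transitions[a][b] += 1
    return transitions

def generate(transitions, start, length):
    """Sample a melody by following the counted transitions."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions[melody[-1]]
        if not options:
            break
        notes, counts = zip(*options.items())
        melody.append(random.choices(notes, weights=counts)[0])
    return melody

# Two made-up corpora (MIDI note numbers); swap them and the same code
# generates very different output.
corpus_a = [[62, 64, 66, 69, 66, 64, 62], [62, 66, 69, 71, 69, 66]]
corpus_b = [[60, 62, 64, 62, 60, 62, 60], [62, 64, 62, 60, 62, 64, 62]]
print(generate(train(corpus_a), 62, 8))
print(generate(train(corpus_b), 62, 8))
```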

What’s in Magenta Studio?

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to the way different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.
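
If you want to poke at MusicVAE from code, Magenta’s published notebooks drive it along roughly these lines – treat the config name and checkpoint path here as assumptions to check against the current Magenta docs:

```python
# A sketch, not a definitive recipe: load a pre-trained 2-bar melody
# model and sample some new clips from it.
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

config = configs.CONFIG_MAP['cat-mel_2bar_big']  # assumed config name
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='/path/to/checkpoint')  # placeholder

# Four new 2-bar melodies from scratch (compare the Generate device).
new_melodies = model.sample(n=4, length=32, temperature=0.9)
```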

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.


Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature,” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence the different name – but it gives you some control over how predictable or unpredictable the results will be (if you accept that the relationship may not be entirely linear). You can also choose the number of variations, and the length in bars.
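
For the curious, here’s roughly what a temperature control usually does when sampling from a model like this – a standard technique, sketched here as an assumption rather than Magenta’s exact implementation: divide the model’s raw scores by the temperature before turning them into probabilities.

```python
import numpy as np

def sample_note(logits, temperature):
    """Sample one note index from temperature-scaled scores."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2, -1.0]  # hypothetical scores for four notes

# Low temperature sharpens the distribution toward the likeliest note;
# high temperature flattens it toward uniform – and, matching the caveat
# above, the audible effect is real but not linear.
print(sample_note(logits, temperature=0.1))  # almost always note 0
print(sample_note(logits, temperature=2.0))  # far more adventurous
```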

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is largely built around expectations about what a drum kit is, and around melodies played on a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity between that standardization and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate

This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue

This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.
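
In miniature, and with a hypothetical stand-in for the real model, a continuation loop looks something like this: keep asking the model for a distribution over possible next notes, sample one, append it, repeat.

```python
import numpy as np

def next_note_logits(sequence):
    """Hypothetical placeholder – a real model would condition on the
    whole sequence; this just returns deterministic fake scores."""
    return np.random.default_rng(len(sequence)).normal(size=128)

def continue_clip(primer, steps, temperature=1.0):
    """Extend a primer melody one sampled note at a time."""
    sequence = list(primer)
    for _ in range(steps):
        logits = next_note_logits(sequence) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        sequence.append(int(np.random.choice(len(probs), p=probs)))
    return sequence

print(continue_clip([60, 62, 64, 65], steps=12))
```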


Interpolate

Instead of one clip, use two clips and merge/morph between them.
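
The idea, sketched here with hypothetical encode()/decode() stand-ins for a trained VAE’s encoder and decoder: walk a straight line between the two clips’ latent codes and decode each point back into notes.

```python
import numpy as np

def interpolate_latents(z_a, z_b, num_steps):
    """Evenly spaced points on the line between two latent vectors."""
    return [z_a + t * (z_b - z_a) for t in np.linspace(0.0, 1.0, num_steps)]

# z_a = encode(clip_a); z_b = encode(clip_b)   # hypothetical encoder
z_a, z_b = np.zeros(256), np.ones(256)  # fake 256-dimensional codes
morph = interpolate_latents(z_a, z_b, num_steps=5)
# clips = [decode(z) for z in morph]           # hypothetical decoder;
# the middle steps blend characteristics of both input clips
```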

Groove

Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.
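
For contrast, here’s roughly what those crude percentage-based templates amount to – every note jittered independently, with no model of feel, no push and pull across the bar, no relationship between timing and velocity – which is exactly what Groove’s learned model of real drummers improves on:

```python
import random

def crude_humanize(notes, timing_range=0.05, velocity_pct=0.1):
    """Old-school humanize: independent random jitter per note.
    notes are (start_in_beats, velocity 1-127) pairs."""
    out = []
    for start, velocity in notes:
        start += random.uniform(-timing_range, timing_range)
        velocity = int(velocity * random.uniform(1 - velocity_pct,
                                                 1 + velocity_pct))
        out.append((max(0.0, start), max(1, min(127, velocity))))
    return out

print(crude_humanize([(0.0, 100), (0.5, 80), (1.0, 100), (1.5, 80)]))
```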

Drumify


Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s…sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.


More to the point with something like Magenta: do you really get musically useful results? Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though as with Continue, the issue is that these tools don’t so much solve a problem as give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, among others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music deals with weight, expression, timbre. Yes, theoretically you can treat each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were also working with less quantifiable elements, like the meaning and sound of the text, positions in the liturgy, and multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.


The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with pure research efforts of the past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.


As Jesse Engel of Magenta tells us: “We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way – things like more impressive MIDI generation, state-of-the-art transcription, and new controller paradigms.

“Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.”


So okay, music makers – have at it:

Download Magenta Studio for free

Requires Ableton Live 10.1 and Max for Live



Text: Peter Kirn
A version of this article appeared on Create Digital Music