Meet the AI breaking classic songs apart to create new tunes


No matter what kind of music you listen to, the art of the remix is an integral part of popular music today. From its earliest roots in musique concrète and the dub artists of 1960s Jamaica to Cardi B’s latest remix, reworking and rearranging songs to create new tracks has long been a way for musicians to discover exciting new sounds.

In the early days of electronic music production, remixing meant physically manipulating tape, a process mastered by pioneering sound engineers like Delia Derbyshire, King Tubby, and Lee ‘Scratch’ Perry. And the process remained largely unchanged until the advent of digital music.

Now the remix is on the verge of another big transformation – and AI company Audioshake is leading the charge. We spoke to Audioshake co-founder Jessica Powell about how the company uses a sophisticated algorithm to help music creators pull apart songs from the past to create new content, and about potential future applications of the technology in soundtracking TikTok videos, advertising, and making virtual live music concerts sound great.

Small stems grow mighty tracks

Speaking to TechRadar in between appearances at a conference in Italy, Powell explained to us how Audioshake’s technology works.

“We use AI to break songs down into their component parts, which are known to producers as stems – and stems are relevant because you can already do a lot of things with them, like in movies and commercials,” she explained.

Working with these stems allows producers to manipulate individual elements of a song or soundtrack – for example, lowering the volume of the vocals when an on-screen character begins to speak. Stems are also used in everything from creating karaoke tracks, which cut out the lead vocal entirely so you can front your favorite band for three minutes, to remixing an Ed Sheeran song over a reggaeton beat.

And, as Powell explains, stems are used even more widely today. Spatial audio technologies like Dolby Atmos take individual parts of a track and place them in a 3D sphere – and when you listen through the right speakers or a great soundbar, you feel like the music is coming at you from all angles.

[Image: a diagram showing a musical track divided into its separate stems (Image credit: Audioshake)]

So if stems are used so widely in the music industry and beyond, why is Audioshake even needed? Well, record companies don’t always have access to a song’s original stems – and before the 1960s, most popular music was recorded using mono or two-track techniques. That means the individual parts of those songs – the vocals, the guitars, the drums – couldn’t be separated.

This is where Audioshake comes in. Take any song, upload it to the company’s platform, and its algorithm analyzes the track and splits it into the stems you specify – all you have to do is select which instruments it should listen for.

We tried it out ourselves with David Bowie’s Life on Mars?. After selecting the instruments we wanted the algorithm to listen for (in this case vocals, guitar, bass, and drums), it took around 30 seconds to analyze the song and break it down into its constituent parts.
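
Audioshake’s own tool sits behind a rights-holder login, but the same basic workflow can be sketched with Deezer’s open-source Spleeter library, which also splits a mixed track into stems. A minimal example, assuming Spleeter is installed and a local song.mp3 exists (note that its four-stem model returns vocals, drums, bass, and a catch-all ‘other’ rather than a dedicated guitar stem):

```python
from spleeter.separator import Separator

# Load Spleeter's pre-trained four-stem model:
# vocals, drums, bass, and 'other' (everything else).
separator = Separator('spleeter:4stems')

# Analyze the mixed track and write one WAV file per stem
# into output/song/ (vocals.wav, drums.wav, bass.wav, other.wav).
separator.separate_to_file('song.mp3', 'output/')
```

This isn’t Audioshake’s own code – the company hasn’t published its model – but it illustrates the same upload-analyze-split loop Powell describes.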

From here you can hear each instrument separately: the drums, the droning bass notes, the iconic wailing guitar solo, Rick Wakeman’s flamboyant piano playing, or just Bowie’s vocal track. And the speed at which Audioshake is able to do this is breathtaking.

“If you’re a record company or a music publisher, you can kind of create an instrumental on the fly,” says Powell. “You don’t have to go into a DAW (Digital Audio Workstation) like Ableton or Pro Tools to reassemble the song to create the instrumental – it’s right there on demand.”

So how does it work? Well, the algorithm has been trained to recognize and isolate the different parts of a song. It’s surprisingly accurate, especially considering that the algorithm isn’t technically aware of the difference between, say, a cello and a low-frequency synth.
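
Audioshake hasn’t published its architecture, but a common approach in source separation is to have a neural network predict a soft mask over the mixture’s spectrogram for each stem. Here’s a toy illustration of that idea, with a random array standing in for the model’s output (the file names, parameters, and mask here are assumptions for illustration, not Audioshake’s method):

```python
import numpy as np
import librosa
import soundfile as sf

# Load the mixed track and move to the time-frequency domain.
mix, sr = librosa.load("song.wav", sr=44100, mono=True)
stft = librosa.stft(mix, n_fft=2048, hop_length=512)
mag, phase = np.abs(stft), np.angle(stft)

# A trained model would output one mask per stem, with values in
# [0, 1] saying how much of each time-frequency bin belongs to it.
# Random values stand in for that prediction here.
vocal_mask = np.random.rand(*mag.shape)

# Apply the mask, reuse the mixture's phase, and invert to audio.
vocal_stft = mag * vocal_mask * np.exp(1j * phase)
vocals = librosa.istft(vocal_stft, hop_length=512)
sf.write("vocals_estimate.wav", vocals, sr)
```

With a real model, the mask concentrates on the bins where the target instrument dominates – which is how a system can pull a vocal out of a mono recording without ever ‘knowing’ what a voice is.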

There are areas that trip it up, however. Heavy autotune – Powell uses the example of “artists like T-Pain” – will be identified as a sound effect rather than a vocal stem. The algorithm can’t yet learn from user feedback, so this is something the developers will have to address, but the fact that these stems can be separated at all is seriously impressive.

The right to record

Unfortunately, Audioshake’s technology isn’t currently available to the humble bedroom producer. Right now, the company’s clients are mostly rights holders like record labels and publishers – and while that may be disappointing for anyone who’d like to pull apart an Abba classic ahead of the group’s upcoming virtual residency in London, the technology is being used in some really interesting ways.

One such client is Hipgnosis, a song management company that sees songs as investment opportunities as much as works of art, and owns the rights to a huge catalog of iconic tracks by artists ranging from Fleetwood Mac to Shakira.

Take Van Gogh’s Sunflowers. We’re not just going to pull out a sunflower if you don’t want us to.

Jessica Powell, Audioshake co-founder

Using Audioshake, Hipgnosis creates stems for these older songs and then hands them to its stable of songwriters “to try to reimagine these songs for the future and present them to a new generation,” as Powell puts it, adding: “You can imagine some of these stems in the hands of the right person, who could do some really cool things with them.”

Owning the rights to these songs is what makes this possible – and opening the technology up to the public could be a legal quagmire, with people exploiting artistic creations that aren’t theirs. It’s not just a legal question, though; for Audioshake, it’s also about ethics, and Powell makes it clear that the technology has to work for artists, not against them.

She says the company “really wanted to make sure that we respected the artist’s wishes. If they want to open up their songs and find these new ways to monetize them, we want to be there to help them do that. And if they don’t agree with that, we’re not going to be the ones to help someone pick apart their work without permission.”

“Take Van Gogh’s Sunflowers,” she adds. “We’re not just going to pull out a sunflower if you don’t want us to.”

The sound of the future

Traditional pop remixes are just the beginning, though. There are plenty of potential applications for Audioshake that could open up in the future – and TikTok could be one of the most lucrative.

Giving TikTok creators the ability to work with stems and remix tracks in entertaining ways could be an invaluable tool for a social media platform built on short audio and video clips.

It could also improve the sound quality of live music. When an artist live-streams a concert on a platform like Instagram, unless they can use a feed straight from the mixing desk, the listener is going to hear a whole load of crowd noise and distortion.

“Watch something on Instagram Live and you don’t even stick around – you’d almost rather watch the music video, because the sound is so bad,” says Powell. By using Audioshake (and accepting a little delay), you could reduce crowd noise, rein in the bass, and boost the vocals for a clearer audio experience.
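
Once a stream has been split into stems, that kind of clean-up is essentially a weighted remix. A minimal sketch with the soundfile library, assuming the stems have already been separated into equal-length WAV files (the file names and gain values are illustrative, and treating the catch-all ‘other’ stem as the home of crowd noise is an assumption):

```python
import soundfile as sf

# Load the separated stems (assumed to be equal-length WAV files).
vocals, sr = sf.read("stems/vocals.wav")
bass, _ = sf.read("stems/bass.wav")
drums, _ = sf.read("stems/drums.wav")
other, _ = sf.read("stems/other.wav")  # crowd noise mostly lands here

# Rebalance: lift the vocals, tame the boomy bass,
# and pull down the catch-all stem carrying the crowd.
mix = 1.3 * vocals + 0.6 * bass + 0.9 * drums + 0.5 * other

# Normalize if the new mix would clip, then write it out.
peak = max(abs(mix).max(), 1.0)
sf.write("cleaned_stream.wav", mix / peak, sr)
```

In a live pipeline the same per-stem gains would be applied buffer by buffer rather than to whole files – hence the “little delay” mentioned above.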

[Image: a studio music producer (Image credit: Shutterstock / Standret)]

Looking even further ahead, the technology could be used to produce adaptive music – that is, music that changes in response to what you’re doing.

“It’s more futuristic, but imagine walking down the street listening to Drake,” says Powell. “And then you start running and the song transforms – it’s still Drake’s song, but now it’s almost like a different genre, and that comes from working with the parts of the song, like turning up the intensity of the drumbeat as you exercise.”
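
None of this is a shipping product, but the mechanics Powell describes are straightforward once you have stems. A hypothetical sketch – the pace sensor, gain curve, and stem names are all assumptions for illustration:

```python
import numpy as np

def drum_gain(pace_spm: float) -> float:
    """Map running cadence (steps per minute) to a drum-stem gain,
    ramping from 1.0x at a walk (90 spm) to 2.0x at a sprint (180 spm)."""
    clamped = min(max(pace_spm, 90.0), 180.0)
    return 1.0 + (clamped - 90.0) / 90.0

def adaptive_mix(stems: dict[str, np.ndarray], pace_spm: float) -> np.ndarray:
    """Re-mix one audio buffer, scaling only the drums by the listener's pace."""
    gains = {"vocals": 1.0, "bass": 1.0, "other": 1.0,
             "drums": drum_gain(pace_spm)}
    return sum(gains[name] * stems[name] for name in gains)
```

A real player would smooth the gain between buffers so the drums swell rather than jump, but the core idea is just per-stem volume automation driven by a sensor.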

Adaptive music might sound a little way off, but audio can already be manipulated according to your surroundings. Just look at adaptive noise-cancelling headphones like the Sony WH-1000XM4, which can increase the level of noise cancellation when you enter a noisy environment – and other headphone models have similar features that automatically adjust the volume of your music according to your environment. The XM4’s Speak-to-Chat feature is another example, with the headphones listening out for the sound of your voice.

Headphone applications could go even further. With the Apple AirPods 3 rumored to feature biometric sensors that could measure everything from your breathing rate to how accurately you can recreate a yoga pose, adaptive music could even be used to boost your workouts when your headphones detect a drop in effort – and stem-separation technologies like Audioshake could make it easier for artists to monetize their music in this way.

While adaptive music is unlikely to reach our ears for a few years, the idea of opening up songs to make them more interactive and personal is just as exciting as the prospect of the next generation of musicians tapping into the songs of the past to create new sounds. Hopefully, someday, humble bedroom producers will be able to mine those songs too – like picking flowers from a Van Gogh vase.
