Artificial Intelligence In Music Industry

Musicianship, or the collection of skills that constitute musical intelligence, can be described as consisting of three essential elements: listening, performing and composing. Listening to and creating music is a human activity that has always involved thinking and feeling, and it has always made use of tools, which today include computer technology.

Artificial Intelligence (AI), or more specifically Machine Learning (systems that simulate aspects of natural thinking), is already being used successfully to compose music that is indistinguishable from human-created music. It is impossible to prove, however, that AI-generated music is as relevant, useful, impactful or otherwise qualitatively equivalent to music created by living, reasoning and emoting beings.

The generation of new music composed by machines is the culmination of a process of invention of technologies that record, store and reproduce music as well as listen to and learn it. The greatest impact of AI on the music industry to date is its use to enhance the user listening experience on music streaming apps using data analysis.

The future of AI in the music industry promises to be revolutionary. It will continue to dominate distribution and likely, eventually, marketing and A&R. As for music creation, AI will become an ever more valuable collaborator to human artists and may one day become capable of standing in for them fully.

The sheer scale of today’s music industry requires AI to replace humans in matching the available music to consumers, and its role will only continue to expand going forward: “According to a McKinsey report, 70% of companies will have adopted at least one AI technology by 2030.” (1) There are 60,000 tracks uploaded to Spotify daily – that is one track roughly every second and a half. At this rate, 22 million new tracks will be added this year to the 70 million already hosted on the platform. (2)

The company estimates that there are about 8 million creators behind these recordings; adjusting for the 2.2 million podcasts, there are roughly 7 million musicians among them. Daniel Ek, the CEO of Spotify, said recently: “I believe that by 2025, we could have as many as 50 million creators on our platform, whose art is enjoyed by a billion users around the world.” (3) If Ek is right and all other growth trends continue at current rates, there could be 375,000 new tracks a day, or 137 million a year, coming online by 2025. (4)
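The figures above follow from straightforward arithmetic. As a quick sanity check, taking the article’s reported rates (60,000 tracks per day now, 375,000 per day projected) as given:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Current reported upload rate on Spotify
tracks_per_day_now = 60_000
interval = SECONDS_PER_DAY / tracks_per_day_now   # 1.44 seconds between uploads
per_year_now = tracks_per_day_now * 365           # 21,900,000 (~22 million new tracks/year)

# Projected rate if Ek's 50-million-creator forecast holds
tracks_per_day_2025 = 375_000
per_year_2025 = tracks_per_day_2025 * 365         # 136,875,000 (~137 million/year)

print(interval, per_year_now, per_year_2025)
```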

These daunting dimensions expose how humans cannot handle the ever-expanding musical choices without the help of AI technology. But how does AI know what listeners want to hear?

AI relies on individual listening histories to recommend new musical options that are more targeted and personalized than record companies or human curators could ever provide. In this application, AI will still need to be refined, both to keep pace with the large and rapidly growing volume of material to be sorted and to avoid the limitations of algorithms that, by design, continually narrow the spectrum of offerings.
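To illustrate how listening histories can drive recommendations, here is a minimal sketch of user-based collaborative filtering. The listener names and play counts are invented for the example, and real streaming services use far richer signals and models; this only shows the core idea of matching a listener to their most similar peer:

```python
from math import sqrt

# Invented play-count histories: listener -> {track: play count}
histories = {
    "ana":  {"track_a": 12, "track_b": 3, "track_c": 0},
    "ben":  {"track_a": 10, "track_b": 4, "track_c": 1},
    "cara": {"track_a": 0,  "track_b": 1, "track_c": 9},
}

def cosine(u, v):
    """Cosine similarity between two play-count dicts."""
    tracks = set(u) | set(v)
    dot = sum(u.get(t, 0) * v.get(t, 0) for t in tracks)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, histories):
    """Suggest the unheard track best liked by the most similar listener."""
    me = histories[user]
    peer = max((u for u in histories if u != user),
               key=lambda u: cosine(me, histories[u]))
    unheard = {t: p for t, p in histories[peer].items() if me.get(t, 0) == 0}
    return max(unheard, key=unheard.get) if unheard else None

print(recommend("ana", histories))  # ben's history is closest; suggests "track_c"
```

The “rabbit hole” problem discussed next falls out of this design: recommendations are drawn only from listeners already similar to you, so the spectrum of offerings narrows unless diversity is deliberately injected.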

The negative impact of Facebook and YouTube algorithms leading people down ideological “rabbit holes” has recently garnered attention. The parallel to be drawn with AI for music selection is that, in the future, it needs to accommodate the evolution of tastes driven by the multitude of factors affecting an individual, including, for example, aging, additional education and travel.

While AI has most recently been used for marketing and listener-targeting on music streaming apps, it also has been used to create music.

The development of artificial music intelligence began with Alan Turing, of code-breaking fame, who was the first to program a computer to produce musical notes and combine them into tunes in 1951.

The machine produced several melodies, including “God Save the King” and “Baa, Baa Black Sheep.” (5) Over the next several decades, academics built on Turing’s work, progressively enabling machines to compose music.

Commercial interests arguably took over the lead in research on music AI from academia in 1988 with the creation of The Sony Computer Science Laboratories (Sony CSL).

In 2002, Sony CSL released “Continuator,” a new algorithm that could play interactively and live with a human musician and learn the human’s music to the point of being able to keep “performing” and composing live when the musician stopped contributing. (6)
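The Continuator learned statistical patterns from the notes a musician had just played and used them to generate a continuation in the same style. As a rough illustration of that idea only (not Sony CSL’s actual implementation), here is a toy first-order Markov model, with a made-up input phrase:

```python
import random
from collections import defaultdict

def learn(notes):
    """Build a first-order Markov model: note -> list of notes that followed it."""
    model = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        model[a].append(b)
    return model

def continue_phrase(model, seed, length, rng=None):
    """Continue from `seed`, sampling a learned successor at each step."""
    rng = rng or random.Random(0)
    phrase = [seed]
    for _ in range(length):
        successors = model.get(phrase[-1])
        if not successors:          # no learned continuation: stop "performing"
            break
        phrase.append(rng.choice(successors))
    return phrase

# A made-up phrase "played" by a human musician
played = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = learn(played)
print(continue_phrase(model, "C", 6))
```

Every transition in the generated phrase was observed in the human’s playing, which is why such a system can keep “performing” plausibly in the musician’s style after the musician stops.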

Sony CSL went on to develop the music AI software it named “Flow Machines” to compose new pop music. It released its first track, “Daddy’s Car,” in the style of The Beatles in 2016. (7) The music was composed by AI but arranged and produced by Benoit Carré aka Skygge who said of his experience, “I couldn’t have created this music without AI, but no one could have created this music except me.” (8)

Another example of AI-created music came when IBM’s supercomputer Watson collaborated with musician and producer Alex Da Kid to write the song “Not Easy.” Watson set the mood of the track by characterizing the then-current era, analyzing a vast set of data taken from social media and other online sources. Built on this learning of what people wanted, the AI-assisted song “Not Easy” spent six weeks on Billboard’s Top 40 chart, peaking at number 34 in January 2017. (9)

AI has thus made big advances in assisting artists in creating new music, personalizing existing music and creating entirely new music independently and automatically.

This area of innovation inevitably raises many concerns, such as fears about machines replacing humans and thereby eliminating jobs, and complications regarding authorship and legal rights. Ed Newton-Rex, Jukedeck’s CEO, commented that “a couple of years ago, AI wasn’t at the stage where it could write a piece of music good enough for anyone. Now it’s good enough for some use cases.” (10) Currently, AI can create music for more mundane purposes, the equivalent of muzak, independently of humans.

According to the World Economic Forum, “… 75 million jobs may be displaced by a shift in the division of labour between humans and machines, while 133 million new roles may emerge that are more adapted to the new division of labour between humans, machines and algorithms.” (11)

These macro projections support the notion that AI will be complementary, augmenting human experience rather than replacing the need for contributions from people. Consistent with this, many participants in the music industry believe that artists today would jump at the opportunity to collaborate with AI. Cliff Fluet, a partner at Lewis Silkin, a London-based law firm representing AI startups, remarked that “Every artist [he’s] told about this technology sees it as a whole new box of tricks to play with. Would a young Brian Wilson or Paul McCartney be using this technology? Absolutely!” (12)

AI is already an important contributor to the development and distribution of music and will continue to play increasing roles in the music industry, as in all industries, but it is not yet close to replacing humans in creating musical hits or art. In the future, as it continues to garner data points, it will improve and perhaps teach us about how to compose, but can it ever replicate collaborative human emotional experiences? That remains to be seen.

Written by Jaden Yablon



(5)–(7), (9): Jeremy Freeman, “Artificial Intelligence and Music – What the Future Holds?”, Feb 22, 2020



