Dr. Timothy Smith

Yes, We Need Song Writers and Musicians

Christmas Trimmings

Photo Source: Pexels

(This post was originally published on December 23rd, 2017.)

“Music has always been a matter of Energy to me, a question of Fuel. Sentimental people call it Inspiration, but what they really mean is Fuel. I have always needed Fuel. I am a serious consumer. On some nights I still believe that a car with the gas needle on empty can run about fifty more miles if you have the right music very loud on the radio.”

― Hunter S. Thompson

Writing songs and playing music illuminates and powers human cultures across the world and has throughout history. The lyrics, melodies, and rhythms of music tell stories that cover every emotion. Songs move us and bring joy and inspiration. Music, a remarkable human creation, resists automation, but that does not stop people from trying to find a way for machines to write and play music for us.

According to Statista, the statistics portal, global music industry revenue totaled nearly $48 billion in 2016, and Sony Corporation alone accounted for 22% of it. It certainly follows that Sony would want to be at the forefront of new music technologies. After 20 years of research, Sony Computer Science Laboratories in Paris continues to pursue the goal of using artificial intelligence to create music, either autonomously or in collaboration with human composers. By training their artificial intelligence in a particular musical style, the researchers can have it write new songs in that style. The research project, funded in part by the European Research Council, goes by the name Flow Machines, and on its website, flow-machines.com, it features a song called “Daddy’s Car.” The researchers trained their AI on Beatles music, with lyrics provided by the professional composer Benoît Carré. Listening to “Daddy’s Car,” one hears elements of Beatles rhythms and melodies, but the song sounds empty. The researchers fed in thousands of sheets of music for the AI to learn from, and they claim it can generate novel music in styles ranging from the Baroque composer J.S. Bach to greats such as Duke Ellington and Irving Berlin, but the results do not capture the spirit those musicians invoke. (If you're interested, you can listen to the song here.)
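To see why pattern-matching alone can fall short, consider what “learning a style” means at its simplest. The sketch below is a toy illustration in Python, not Sony’s actual method: it records which note tends to follow each two-note context in a few training melodies (a simple Markov chain) and then strings a “new” melody together from those observed transitions. The note names and melodies are invented for the example.

```python
import random
from collections import defaultdict

def train_markov(melodies, order=2):
    """Map each `order`-note context to the notes observed to follow it."""
    table = defaultdict(list)
    for melody in melodies:
        for i in range(len(melody) - order):
            context = tuple(melody[i:i + order])
            table[context].append(melody[i + order])
    return table

def generate(table, seed, length, order=2, rng=None):
    """Extend a seed phrase by sampling notes that followed
    the current context in the training data."""
    rng = rng or random.Random(0)
    melody = list(seed)
    while len(melody) < length:
        choices = table.get(tuple(melody[-order:]))
        if not choices:  # dead end: this context never appeared in training
            break
        melody.append(rng.choice(choices))
    return melody

# A toy "corpus" of two melodies written as note names
corpus = [["C", "E", "G", "E", "C"],
          ["C", "E", "G", "A", "G", "E", "C"]]
table = train_markov(corpus, order=2)
new_melody = generate(table, seed=["C", "E"], length=8)
```

Everything such a model produces is a recombination of fragments it has already seen, which is one reason the output can sound like a style without saying anything new.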

In an ambitious attempt to use artificial intelligence to write both the music and the lyrics for a new song, Microsoft Norway worked with Capgemini Norway, an information technology company, to try to write a new Christmas song in two weeks based on a computer’s analysis of 50 classic Christmas songs. The artificial intelligence searched for patterns in those songs to predict what would make a new Christmas classic. The resulting lyrics, in a song titled “Joyful Time in the City,” contain some recognizable elements of Christmas hits along with some unusual additions. The lyrics came from the AI and were refined by Thomas Holm and Tommy “Manboy” Akerholdt. The lyrics follow:

Joyful time in the city

With a frosty go glistening

In the single beginning

Here comes Santa Claus

Underneath the angel dreams

We smiled at the villages in our hearts

So we smile in our old ways

Joyful time in the city

Silver bells in the blind-greasy

Players of your sinners say

They bend for the God born king

Bye bye lullaby

Underneath the angel dreams

So we smile…

Soon it will be once upon a time

I used to be looking your way

And so we smile in our old ways

Joyful time in the city

The line “Silver bells in the blind-greasy” contains one apparent Christmas reference and one funny, unintelligible phrase. The song, linked below, receives a heartfelt rendition in a video featuring the Norwegian singer Thomas Holm. Overall, the attempt to use artificial intelligence to write a new Christmas song from the patterns of past hits produces more a mix of borrowed bits than a cohesive song with a story.

Perhaps the notion that the patterns in songs provide sufficient fuel to create new songs does not hold up. In an interesting study titled “The Nature and Perception of Fluctuations in Human Musical Rhythms,” published in the journal PLOS ONE, a group of researchers from the Max Planck Institute in Germany and the Broad Institute in Cambridge, MA show that listeners do not prefer rhythmic perfection. When musicians play, they drift a little ahead of the beat at some points in a piece and a little behind it at others. Listeners generally do not like mechanically precise rhythms; rather, musicians playing together slightly ahead of or behind the beat produce a more enjoyable experience over the full length of the piece. This more complex picture of rhythm suggests a deeper emotional expression that musicians bring to performances beyond what sheet music can capture. Similarly, for voices, the concept of “vocal generosity” refers to the general acceptance of vocal imperfections. In “The Vocal Generosity Effect: How Bad Can Your Singing Be,” the authors show that audiences accept vocal imperfections up to a point and extend more generosity to voices than to instruments for being slightly out of tune.
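The idea of drifting ahead of and behind the beat can be made concrete with a small sketch. The Python below is a hypothetical “humanizer,” not the analysis from the paper: instead of placing notes on a mechanically exact grid, it lets a small timing offset drift slowly from note to note, so the simulated player wanders around the beat rather than jittering at random. The parameter values are arbitrary assumptions chosen for illustration.

```python
import random

def humanize(onsets_ms, max_drift_ms=15.0, smoothing=0.8, rng=None):
    """Shift exact note onsets by a slowly drifting offset, so the
    simulated player wanders slightly ahead of and behind the beat."""
    rng = rng or random.Random(1)
    drift = 0.0
    humanized = []
    for t in onsets_ms:
        # New drift = mostly the previous drift plus a small random nudge,
        # keeping the total offset within about max_drift_ms of the grid.
        drift = smoothing * drift + rng.uniform(-1, 1) * (1 - smoothing) * max_drift_ms
        humanized.append(t + drift)
    return humanized

beat_ms = 500  # quarter notes at 120 beats per minute
exact = [i * beat_ms for i in range(8)]
played = humanize(exact)
```

Because each note’s offset depends on the previous one, the deviations are correlated over time, a crude stand-in for the long-range timing fluctuations the study found in human performances.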

Music embodies one of the most vibrant expressions of human art and emotion. It surrounds us and drives a massive global industry. Artificial intelligence continues to transform industry and society with advances in finance, security, and automation, so it follows that using artificial intelligence to automate songwriting and musical production appeals to researchers, music producers, and musicians who like to experiment with new technologies. Although research continues and artificial intelligence has produced examples of songs composed in the style of specific musical eras, the songs remain unimpressive. It stands to reason that songwriting and performance consist of more than the identification of patterns and technically precise execution. Beyond good storytelling, the acceptance of, and even preference for, imperfection in rhythm and voice reveals another layer of what makes great music and highlights the need for real songwriters and musicians.

Dr. Timothy Smith

Dr. Smith’s career in scientific and information research spans bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers and has spent the past seventeen years developing advanced analytics, machine learning, and knowledge management tools to enable research and support high-level decision making. Tim completed his Ph.D. in Toxicology at Cornell University and earned a Bachelor of Science in Chemistry at the University of Washington.

You can buy his book on Amazon in paperback and in Kindle format here.

How to Profit and Protect Yourself from Artificial Intelligence