Algorithmic Music Influenced by Tweets

Every now and then I make some addition to my algorithmic music composer. This time I decided to let users hear music based on their tweets.

It is nothing extraordinary, but I wanted to get some experience with basic natural language processing, so I decided to analyze the user's latest 200 tweets and extract the following features, which in turn influence the music that the user gets:

  • Sentiment analysis – I pass each tweet to the Stanford deep learning sentiment analyzer (part of CoreNLP), which influences the musical scale: a positive average sentiment across all tweets gives a major scale, a negative one gives a minor scale, and a neutral one leads to either a lydian or a dorian scale. To elaborate on “average sentiment” – the sentiment of each tweet is a number between 0 (very negative) and 4 (very positive), and the average is simply the sum of the sentiments divided by the number of tweets.
  • Tweet length – if your tweets are on average shorter than 40 characters, the scale is pentatonic; otherwise it is heptatonic.
  • Tweet frequency – the average interval between your tweets determines the tempo of the music. The more often you tweet (and the shorter the interval between tweets), the faster the music. Your tweet tempo is your tweets’ music tempo. (A sketch of how these three features map to musical parameters follows the list.)
  • Variation – this is the fuzziest metric, and the algorithm I use is the most naive one. I extract all words from all tweets, apply a lemmatizer (that is, transform each word to its base form, e.g. eats -> eat, thieves -> thief; again part of CoreNLP), and remove the stop words (and, or, then, and other linking words). Then I calculate a topic threshold – the number of times a keyword must appear in order to be considered a topic of your tweets – and count the topics: the more topics you have, the more variation there is in the melody. “Variation in the melody” is not a standard musical metric as far as I know; I define it as the average distance between consecutive notes in the main part. So the more topics you have, the more up-and-down your music should be. (A sketch of the topic counting also follows the list.)
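To make the mapping more concrete, here is a minimal Java sketch of how the first three features could translate into musical parameters. Only the 0–4 sentiment scale, the 40-character cut-off and the scale choices come from the description above; the neutrality thresholds and the tempo formula are made-up placeholders, not the actual values used.

```java
// A minimal sketch of the feature-to-music mapping. The neutrality thresholds
// and the tempo formula are illustrative placeholders.
public class TweetMusicParameters {

    enum Scale { MAJOR, MINOR, LYDIAN, DORIAN }

    // Positive average sentiment -> major, negative -> minor, neutral -> lydian/dorian.
    static Scale scaleFor(double averageSentiment) {
        if (averageSentiment > 2.5) return Scale.MAJOR;
        if (averageSentiment < 1.5) return Scale.MINOR;
        return Math.random() < 0.5 ? Scale.LYDIAN : Scale.DORIAN;
    }

    // Short tweets -> pentatonic (5 notes per scale), otherwise heptatonic (7).
    static int notesPerScale(double averageTweetLength) {
        return averageTweetLength < 40 ? 5 : 7;
    }

    // The shorter the average interval between tweets, the faster the music.
    static int tempoFor(double averageSecondsBetweenTweets) {
        double hoursBetween = averageSecondsBetweenTweets / 3600.0;
        int bpm = (int) Math.round(200 - hoursBetween);
        return Math.max(60, Math.min(180, bpm)); // clamp to a sane BPM range
    }
}
```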
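And here is a rough sketch of the topic counting behind the variation metric, using CoreNLP's lemma annotator (the pipeline is assumed to be configured with the tokenize, ssplit, pos and lemma annotators). The stop-word list and the threshold formula are placeholders for illustration.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class TopicCounter {

    // Placeholder stop-word list; a real one would be much longer.
    private static final Set<String> STOP_WORDS = Set.of(
            "and", "or", "then", "the", "a", "an", "to", "of", "in", "is", "it");

    private final StanfordCoreNLP pipeline; // reused, already loaded

    public TopicCounter(StanfordCoreNLP pipeline) {
        this.pipeline = pipeline;
    }

    public int countTopics(List<String> tweets) {
        Map<String, Integer> counts = new HashMap<>();
        for (String tweet : tweets) {
            Annotation doc = new Annotation(tweet);
            pipeline.annotate(doc);
            for (CoreLabel token : doc.get(CoreAnnotations.TokensAnnotation.class)) {
                // Base form of the word, e.g. "eats" -> "eat", "thieves" -> "thief".
                String lemma = token.get(CoreAnnotations.LemmaAnnotation.class).toLowerCase();
                if (STOP_WORDS.contains(lemma) || !lemma.matches("[a-z]+")) {
                    continue; // skip stop words, punctuation, numbers, URLs
                }
                counts.merge(lemma, 1, Integer::sum);
            }
        }
        // Placeholder threshold: a keyword counts as a topic if it appears in
        // at least ~2% of the tweets, and never fewer than twice.
        int threshold = Math.max(2, tweets.size() / 50);
        return (int) counts.values().stream().filter(c -> c >= threshold).count();
    }
}
```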

Now onto the technical challenges. Sentiment analysis is a heavy process: CoreNLP’s pipeline class is not thread-safe and has to load a model each time. In addition, fetching 200 tweets is not that fast either, so I couldn’t make getting the corresponding music real-time – I had to use some sort of queue. I decided to keep it simple and use my database (MySQL) as a queue: whenever a request for tweets-to-music comes in, I insert the user id into a “twitter_music_requests” table. Then, every X seconds, a scheduled job runs (with a fixed delay, meaning two runs cannot overlap), picks the latest request with the flag “processed=false”, and processes it. That way only one thread processes requests at a time, which means I can reuse the pipeline object instead of loading the CoreNLP model every time; the method that does the processing is synchronized to enforce that constraint. When the whole process is done, the scheduled job marks the request as processed and sends the user an email with a link to their musical piece.
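A minimal sketch of such a job, assuming a Spring-style @Scheduled method with a fixed delay and JdbcTemplate for the queue table; the class name, the delay value and the column names are illustrative, and the analysis and e-mail steps are only outlined in comments.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import edu.stanford.nlp.pipeline.StanfordCoreNLP;

@Component
public class TwitterMusicJob {

    private final JdbcTemplate jdbc;
    // Loaded once and reused across runs; this is safe because only one
    // thread ever executes processNext() at a time.
    private final StanfordCoreNLP pipeline;

    public TwitterMusicJob(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, parse, sentiment");
        this.pipeline = new StanfordCoreNLP(props);
    }

    // fixedDelay (as opposed to fixedRate) schedules the next run only after
    // the previous one has finished, so two runs cannot overlap.
    @Scheduled(fixedDelay = 30_000)
    public synchronized void processNext() {
        List<Map<String, Object>> rows = jdbc.queryForList(
                "SELECT id, user_id FROM twitter_music_requests "
                + "WHERE processed = false ORDER BY id DESC LIMIT 1");
        if (rows.isEmpty()) {
            return;
        }
        long requestId = ((Number) rows.get(0).get("id")).longValue();
        String userId = rows.get(0).get("user_id").toString();

        // 1. Fetch the latest 200 tweets of userId (Twitter API call, omitted).
        // 2. Analyze them (sentiment, length, frequency, topics) with the reused pipeline.
        // 3. Select a matching pre-generated piece and build a link to it.
        // 4. Mark the request as processed and e-mail the link to the user.
        jdbc.update("UPDATE twitter_music_requests SET processed = true WHERE id = ?",
                requestId);
    }
}
```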

As the music generation process is rather CPU-intensive (the conversion from MIDI to MP3, actually; the rest is pretty fast), I’ve decided never to generate pieces on the fly, but instead to pre-generate them and play the latest generated ones. The Twitter music is no exception: even though it runs in the background, I still use that approach to speed up the process – the piece is not actually newly generated, but is picked from the existing database, strictly based on the criteria listed above.
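For illustration, looking up an existing piece by those criteria could be as simple as a single query; the pieces table, its columns and the tempo tolerance below are hypothetical.

```java
// Hypothetical lookup of a pre-generated piece that matches the analysis results.
Long findMatchingPiece(JdbcTemplate jdbc, String scale, int notesPerScale, int tempo) {
    String sql = "SELECT id FROM pieces "
               + "WHERE scale = ? AND notes_per_scale = ? "
               + "AND tempo BETWEEN ? AND ? "
               + "ORDER BY id DESC LIMIT 1";
    // A small tempo tolerance avoids requiring an exact BPM match; a real
    // implementation would also relax the criteria if nothing matches.
    return jdbc.queryForObject(sql, Long.class, scale, notesPerScale, tempo - 10, tempo + 10);
}
```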

The end result can be found here, and I hope you find it interesting. It was a fun experiment for me, and although it seems rather pointless, it’s… cool.
