Facebook AI Can Turn One Style of Music into Another

⯀ Facebook's AI research team is the first to create an unsupervised learning method for recreating high-fidelity music with a neural network. The system can translate input tracks from one musical style to another.


Creating music seems universal to our species, appearing in every culture. Whether singing, whistling, clapping, or, after some training, playing an instrument, humans make music readily. Nor is the ability entirely unique to us: several vocal-mimicking species can reproduce a tune after hearing it. Now Facebook's AI Research division (FAIR) has developed a neural network capable of converting one piece of music into another with a totally different style.


The research is published in a white paper, 'A Universal Music Translation Network', available online.

If the input piece is played by a jazz band, the system can output the same song in a completely different style, such as that of a hard rock band, reports TNW. In another example, the AI takes a symphony orchestra playing Bach and translates it into the same piece played on a piano in the style of Beethoven.

Perhaps the piece that best exemplifies the potential of such technology is one in which the input is a simple, familiar whistled tune. Facebook's AI translates that basic input into a symphonic piece.

How much of our future music will spring from such humble beginning utterances? It may remind you of how Michael Jackson used to generate much of his music.

The results are very impressive. Even Facebook’s AI researchers have not been modest about their achievement:

Our results present abilities that are, as far as we know, unheard of. Asked to convert one musical instrument to another, our network is on par or slightly worse than professional musicians. Many times, people find it hard to tell which is the original audio file and which is the output of the conversion that mimics a completely different instrument.

You can listen to a series of original inputs (songs played by a real band) and the AI’s outputs (the same song then translated into a different style) in the video below.

The researchers report:

Our work could open the way to other high-level tasks, such as transcription of music and automatic composition of music. For the first task, the universal encoder may be suitable, since it captures the required information in a way that, just like score sheets, is instrument-independent. For the second task, we have initial results that we find interesting. By reducing the size of the latent space, the decoders become more "creative" and produce outputs that are natural yet novel, in the sense that the exact association with the original input is lost.
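The quoted description hints at the system's overall shape: a single shared ("universal") encoder maps any input into a style-agnostic latent space, and a separate decoder per target style reconstructs the audio. The toy sketch below illustrates only that routing idea; the dimensions, weights, and function names are invented for illustration, and this is not FAIR's actual WaveNet-based model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's actual sizes).
INPUT_DIM, LATENT_DIM = 64, 8   # small latent bottleneck
N_STYLES = 3                    # e.g. piano, strings, whistling

# One shared encoder, used for every input regardless of style.
W_enc = rng.standard_normal((LATENT_DIM, INPUT_DIM)) * 0.1

# One decoder per output style.
W_dec = [rng.standard_normal((INPUT_DIM, LATENT_DIM)) * 0.1
         for _ in range(N_STYLES)]

def encode(x):
    # Universal encoder: projects the input into the shared latent space.
    return np.tanh(W_enc @ x)

def decode(z, style):
    # Style-specific decoder: reconstructs from the latent code.
    return W_dec[style] @ z

def translate(x, target_style):
    # "Translation" = encode once, then decode with the target style.
    return decode(encode(x), target_style)

x = rng.standard_normal(INPUT_DIM)   # a toy input frame
out = translate(x, target_style=1)
print(out.shape)                      # (64,)
```

Note how shrinking LATENT_DIM forces the decoders to fill in more detail themselves, which is the mechanism the researchers point to when they describe smaller latent spaces producing more "creative" outputs.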




The generated output for each track:

  • 1. String Quartet, Haydn (0:11)
  • 2. Cantata Opera, Bach (0:47)
  • 3. Classical Guitar (0:58)
  • 4. Symphony, Mozart (1:09)
  • 5. Music of Africa (1:27)
  • 6. Whistling (human) (1:45)
  • 7. String Quartet, Haydn (2:15)
  • 8. Trumpet & Orchestra (2:50)
  • 9. Guitar (3:07)
  • 10. Electric Guitar, Charlie Christian (3:19)
  • 11. Piano, Beethoven (3:31)
  • 12. Electric Guitar, Metallica (4:01)
  • 13. Always on My Mind (MIDI), Elvis Presley (4:13)
  • 14. We Found Love (MIDI), Rihanna (4:40)
  • 15. Cantata Opera, Bach (4:51)
  • 16. Piano, Beethoven (5:21)
  • 17. Harpsichord, Bach (5:45)
  • 18. Piano, Beethoven (5:57)
  • 19. Harpsichord, Bach (6:11)
  • 20. Cantata Opera, Bach (6:28)
  • 21. String Quartet, Haydn (6:41)

SOURCE  Fast Company


By  33rd Square




