
Google is launching a project to see if AI can create original artworks

Magenta is an experimental project to see if artificial intelligence is capable of composing original songs and art.

Adam Roberts, demonstrating one experiment from Google's Project Magenta.

It’s old news that Google has its arms elbow-deep in artificial intelligence research, but this latest tidbit is a tad more interesting. Google is apparently working on a project to see if AI can be independently creative.

Project Magenta was unveiled at Moogfest in North Carolina this past weekend by Douglas Eck, a researcher at Google Brain (the company’s AI arm). The goal is to “train” AI in the basic nuances of visual art and music, and see whether it can create something original on its own, as opposed to just replicating what it has learned.

Project Magenta uses Google’s open-source machine learning code, TensorFlow, to give the AI art lessons. In the same spirit, Quartz reports that Magenta will also make its tools available to the general public on GitHub once it officially launches on June 1. The first tool to be released is a program that helps other researchers import music data from MIDI files into TensorFlow, in order to train their own AI. While Eck admitted that artificial intelligence is still far from achieving true creativity, the idea is to help researchers across the globe investigate the possibilities today and in the future.
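To get a rough feel for what “importing music data for training” involves, here is a minimal, purely illustrative Python sketch: it maps MIDI-style note events (pitch number plus duration) onto a bounded set of integer class IDs, the kind of sequence a model could be trained on. The function name and pitch range are assumptions for illustration only, not Magenta’s actual tool or API.

```python
# Illustrative only: not Magenta's real MIDI importer.
# MIDI note numbers run 0-127; clamping to a fixed range (here C3-C6)
# keeps the model's output vocabulary small and bounded.

def notes_to_sequence(notes, min_pitch=48, max_pitch=84):
    """Map (pitch, duration) note events to (class_id, duration) pairs,
    where class_id is the pitch's offset within the allowed range."""
    seq = []
    for pitch, duration in notes:
        clamped = max(min_pitch, min(max_pitch, pitch))
        seq.append((clamped - min_pitch, duration))
    return seq

# A three-note melody: C4, E4, G4 as (MIDI pitch, duration in beats).
melody = [(60, 0.5), (64, 0.5), (67, 1.0)]
print(notes_to_sequence(melody))
```

Real MIDI files also carry tempo, velocity, and multiple tracks; a production importer would have to flatten all of that into a training representation.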

To put into context what that might look like, the team demonstrated a digital synthesizer program it was tinkering with as part of the project. Magenta team member Adam Roberts played a few notes to the AI, and it played back what it considered a more “complete” melody built on his three notes. Granted, the result is a little repetitive, but this is still a very early stage and the possibilities are definitely there.



While Eck says he doesn’t see computer programs replacing your favourite pop stars anytime soon, there are definitely applications for the technology. For instance, if a user’s fitness tracker detects a high heart rate indicative of stress when not accompanied by exercise, AI built into the phone could generate soothing music, based on its own “training” as well as on the user’s listening patterns.

Magenta is a spin-off from Google’s last major AI breakthrough, DeepDream, in which AI was trained to see patterns in photos, sometimes patterns that weren’t even there. The result was a trippy mishmash of the input photo, with inserted colours, swirl patterns, and even object shapes. Like this:



After completing the first milestone with music, Eck says his team will move on to tackle creating original images and even videos. Here’s to more nightmare fuel!

 

Find your daily dose of news & explainers in your WhatsApp. Stay updated, Stay informed-  Follow DNA on WhatsApp.