PixelPlayer can mute the piano if you just want to listen to the violin.
That’s the outcome of a new AI project out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL): a deep-learning system that can watch a video of a musical performance, isolate the sounds of specific instruments, and make them louder or softer.
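The core trick in systems like this is usually soft spectrogram masking: the model predicts a per-instrument mask over the mixture's spectrogram, and remixing is just scaling each mask before recombining. Here's a minimal sketch of that remixing step, with random arrays standing in for the model's predictions (the shapes, the `remix` helper, and the two-instrument setup are all illustrative assumptions, not PixelPlayer's actual architecture):

```python
import numpy as np

# Hypothetical per-instrument soft masks, as a separation model might predict.
freq_bins, frames = 4, 6
rng = np.random.default_rng(0)
mix = rng.random((freq_bins, frames))          # magnitude spectrogram of the mixture
piano_mask = rng.random((freq_bins, frames))   # soft mask for the piano
violin_mask = 1.0 - piano_mask                 # remaining energy assigned to the violin

def remix(mix, masks, gains):
    """Rebuild a spectrogram with a per-instrument volume gain applied to each mask."""
    out = np.zeros_like(mix)
    for name, mask in masks.items():
        out += gains.get(name, 1.0) * mask * mix
    return out

masks = {"piano": piano_mask, "violin": violin_mask}

# Mute the piano, keep the violin at full volume.
violin_only = remix(mix, masks, {"piano": 0.0, "violin": 1.0})
```

Because the two masks sum to one here, leaving both gains at 1.0 reconstructs the original mixture exactly; zeroing one gain keeps only the other instrument's share of each time-frequency bin.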
“Trained on over 60 hours of videos”
Is that a lot? It doesn’t sound like that much :o
“You could even imagine producers taking specific instrument parts and previewing what they would sound like with other instruments (e.g. an electric guitar swapped in for an acoustic one).”
More likely swapping entire symphony orchestras for kazoos… but it’s nice to have lofty expectations