TensorFlow audio models in Essentia

Essentia TensorFlow wrapper

We are happy to announce recent updates to the Essentia audio and music analysis library that introduce TensorFlow audio models!

Our blog posts describe the new algorithms and pre-trained models for deep learning inference that you can use in C++ and Python applications.

The algorithms we developed wrap TensorFlow inside Essentia and are designed for flexibility of use, easy extensibility, and real-time inference. They make it possible to run virtually any TensorFlow model within our audio analysis framework.
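As a rough illustration of how the wrapper can be used from Python, here is a minimal sketch with the generic TensorflowPredict algorithm. The model file, input/output node names, and tensor shape below are placeholders that depend on your own frozen graph.

```python
import numpy as np
from essentia import Pool
from essentia.standard import TensorflowPredict

# Store the input tensor in a Pool under the name of the graph's input node.
# Essentia works with 4-dimensional tensors: (batch, channels, time, features).
pool = Pool()
pool.set('model/Placeholder', np.zeros((1, 1, 187, 96), dtype='float32'))

# Configure the wrapper with a frozen TensorFlow graph and its node names
# (the file and node names here are hypothetical).
model = TensorflowPredict(graphFilename='model.pb',
                          inputs=['model/Placeholder'],
                          outputs=['model/Sigmoid'])

# Run inference; the outputs come back in a Pool under their node names.
out_pool = model(pool)
predictions = out_pool['model/Sigmoid']
```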

The wrapper comes with a collection of music auto-tagging models and transfer learning classifiers that can be used out of the box.
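For example, the bundled models can be run through dedicated helper algorithms. The sketch below assumes a local copy of an MSD-trained MusiCNN auto-tagging model; the model and audio file names are placeholders.

```python
from essentia.standard import MonoLoader, TensorflowPredictMusiCNN

# MusiCNN-based models expect 16 kHz mono audio.
audio = MonoLoader(filename='song.mp3', sampleRate=16000)()

# The helper computes the mel-spectrogram input internally and returns
# tag activations, one row per analyzed patch.
model = TensorflowPredictMusiCNN(graphFilename='msd-musicnn-1.pb')
activations = model(audio)
```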

Some of our models can work in real time, opening many possibilities for audio developers, and we provide a demo showing how to do that.

For example, here is the MusiCNN model performing music auto-tagging on a live audio stream.
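As a simplified illustration of the idea (not the actual demo code), one could capture short chunks of live audio and feed them to the model in a loop. The sketch below assumes the third-party sounddevice package for audio capture and a local copy of a MusiCNN model file.

```python
import numpy as np
import sounddevice as sd  # assumption: not part of Essentia, used here only for audio capture
from essentia.standard import TensorflowPredictMusiCNN

SAMPLE_RATE = 16000   # MusiCNN-based models expect 16 kHz mono audio
CHUNK_SECONDS = 3     # roughly one analysis patch per chunk

model = TensorflowPredictMusiCNN(graphFilename='msd-musicnn-1.pb')

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, dtype='float32') as stream:
    while True:
        # Read a few seconds of live audio and run auto-tagging on it.
        chunk, _ = stream.read(SAMPLE_RATE * CHUNK_SECONDS)
        activations = model(chunk[:, 0])
        top = np.argsort(activations.mean(axis=0))[::-1][:5]
        print('top tag indices:', top)
```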

You can use all of the new functionality and models for deep learning inference to develop your entire audio analysis pipeline in C++ or Python. For quick prototyping, we provide Python wheels for Linux (pip install essentia-tensorflow, requires pip version ≥19.3).
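After installing the wheel, a quick sanity check from Python could look like this (a minimal sketch):

```python
import essentia
import essentia.standard as es

# The TensorFlow-based algorithms are available when the
# essentia-tensorflow wheel is installed.
print(essentia.__version__)
print('TensorflowPredict' in dir(es))
```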

See our ICASSP 2020 paper for more details.
