Final Report

This project addresses the growing demand for vocabularies that facilitate interoperability between music-related data sources, arising as a result of the profusion of digital audio content. In particular, we focus on content-based audio feature data, which can be utilised for various commercial and research purposes such as music recommendation, automatic genre classification, or query-by-humming services. Researchers in audio and music information retrieval increasingly use common sets of features to characterise audio material, and large data sets are released for commercial and scientific use. The development of data sets and research tools, however, is not governed by shared open vocabularies and common data structures. Instead, they typically rely on ad hoc file formats and information management solutions, often chosen without considering sustainability and research reproducibility. Current formats for describing content-based audio features, including standard as well as generic structured data formats, are limited in their extensibility, modularity and interoperability. This limits their ability to support reproducible research, sustainable research tools, and the creation of shared data sets.
We developed an Audio Feature Ontology framework built around a shared vocabulary that leverages open linked data formats, provides a flat vocabulary to facilitate task- and tool-specific ontology development, and serves as a descriptive framework for audio feature extraction. The framework encourages a modular approach to information sharing in music informatics workflows by defining a workflow-based model for feature extraction. The Audio Feature Vocabulary was created in order to provide a comprehensive glossary of terms for tool- and task-specific annotations. The project outputs have been disseminated via a web site, mailing lists, and two presentations at the Extended Semantic Web Conference (May 2013).
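To make the workflow-based, linked-data style of description concrete, the sketch below models a toy feature extraction chain as RDF-style triples and serialises it as N-Triples. All terms and the `afo:` namespace here are hypothetical placeholders for illustration, not the ontology's actual vocabulary.

```python
# Illustrative sketch only: a feature extraction workflow expressed as
# (subject, predicate, object) triples, in the modular linked-data spirit
# the framework encourages. The namespace and terms (afo:firstOperation,
# afo:next, afo:MFCC, ...) are assumptions made up for this example.

PREFIXES = {
    "afo": "http://example.org/audio-feature-ontology#",  # placeholder namespace
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

def expand(qname: str) -> str:
    """Expand a prefixed name like 'afo:mfcc' into a bracketed full IRI."""
    prefix, local = qname.split(":", 1)
    return f"<{PREFIXES[prefix]}{local}>"

# A toy workflow: audio signal -> windowing -> FFT -> MFCC feature.
triples = [
    ("afo:myExtraction", "rdf:type", "afo:FeatureExtraction"),
    ("afo:myExtraction", "afo:firstOperation", "afo:windowing"),
    ("afo:windowing", "afo:next", "afo:fft"),
    ("afo:fft", "afo:next", "afo:mfcc"),
    ("afo:mfcc", "rdf:type", "afo:MFCC"),
]

def to_ntriples(triples) -> str:
    """Serialise the triples as N-Triples, one statement per line."""
    return "\n".join(f"{expand(s)} {expand(p)} {expand(o)} ." for s, p, o in triples)

print(to_ntriples(triples))
```

Because each processing step is a resource linked to the next, a tool- or task-specific ontology can extend the chain (or refine a step) without altering the shared core vocabulary, which is the modularity argument made above.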