
June 2013

MediaMined video

I'm woefully late in pointing this out, but there is now a video done by Matt Hines and Jay Leboeuf explaining MediaMined.


Probing neural mechanisms of music perception, cognition, and performance using multivariate decoding

Authors: 
Rebecca S. Schaefer, Shinichi Furuya, Leigh M. Smith, Blair Bohannan Kaneshiro and Petri Toiviainen

Psychomusicology: Music, Mind and Brain, 22(2):168–174, 2012

Abstract: 

Recent neuroscience research has shown increasing use of multivariate decoding methods and machine learning. These methods, by uncovering the source and nature of informative variance in large data sets, invert the classical direction of inference that attempts to explain brain activity from mental state variables or stimulus features. However, these techniques are not yet commonly used among music researchers. In this position article, we introduce some key features of machine learning methods and review their use in the field of cognitive and behavioral neuroscience of music. We argue for the great potential of these methods in decoding multiple data types, specifically audio waveforms, electroencephalography, functional MRI, and motion capture data. By finding the most informative aspects of stimulus and performance data, hypotheses can be generated pertaining to how the brain processes incoming musical information and generates behavioral output, respectively. Importantly, these methods are also applicable to different neural and physiological data types such as magnetoencephalography, near-infrared spectroscopy, positron emission tomography, and electromyography.
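The decoding approach the abstract describes can be illustrated with a short, self-contained sketch (not taken from the paper): train a classifier to predict a stimulus condition from multichannel trial data, and treat above-chance cross-validated accuracy as evidence that the recordings carry information about the stimulus. The example below uses Python with scikit-learn on synthetic data standing in for epoched EEG trials; all names and numbers are illustrative.

# A minimal illustrative sketch (not from the paper) of multivariate decoding:
# predict a stimulus condition from multichannel trial data, using synthetic
# data in place of, e.g., epoched EEG recordings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 200, 32, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)            # two stimulus conditions
X[y == 1, :8, 40:60] += 0.5                 # condition-dependent signal in a few channels

# Flatten each trial into a feature vector (channels x time points).
X_flat = X.reshape(n_trials, -1)

# Decode the condition from the data; cross-validated accuracy above chance
# indicates the recordings contain information about the stimulus.
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

With real data, X would hold the epoched recordings and y the experimental conditions; the same pattern applies to MEG, fMRI, or motion-capture features.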

Automated classification of music genre, sound objects, and speech by machine learning.

Authors: 
Leigh M. Smith, Stephen T. Pope, Jay Leboeuf and Steve Tjoa

Proceedings of the 12th International Conference on Music Perception and Cognition, page 943, Thessaloniki, Greece, July 2012. ICMPC/ESCOM. (abstract).

Abstract: 

A software system, MediaMined, is described for the efficient analysis and classification of auditory signals. This system has been applied to the tasks of musical instrument identification, classifying musical genre, distinguishing between music and speech, and detection of the gender of human speakers. For each of these tasks, the same algorithm is applied, consisting of low-level signal analysis, statistical processing and perceptual modeling for feature extraction, and then supervised learning of sound classes. Given a ground truth dataset of audio examples, textual descriptive classification labels are then produced. Such labels are suitable for use in automating content interpretation (auditioning) and content retrieval, mixing and signal processing. A multidimensional feature vector is calculated from statistical and perceptual processing of low level signal analysis in the spectral and temporal domains. Machine learning techniques such as support vector machines are applied to produce classification labels given a selected taxonomy. The system is evaluated on large annotated ground truth datasets (n > 30000) and demonstrates success rates (F-measures) greater than 70% correct retrieval, depending on the task. Issues arising from labeling and balancing training sets are discussed. The performance of classification of audio using machine learning methods demonstrates the relative contribution of bottom-up signal derived features and data oriented classification processes to human cognition. Such demonstrations then sharpen the question as to the contribution of top-down, expectation based processes in human auditory cognition.
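The pipeline outlined in the abstract (low-level spectral and temporal analysis, statistical summarization into a fixed-length feature vector, then supervised learning with a support vector machine) can be sketched in a few lines of Python. The sketch below is not the MediaMined implementation; librosa and scikit-learn are assumed as stand-ins, and the feature set and parameters are illustrative only.

# A generic sketch of the described pipeline: frame-level spectral and
# temporal features, statistical summarization into a fixed-length vector,
# and an SVM trained on labeled examples. Not the MediaMined implementation.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def feature_vector(path):
    """Summarize a recording as statistics over frame-level features."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # spectral envelope
    zcr = librosa.feature.zero_crossing_rate(y)               # temporal texture
    cent = librosa.feature.spectral_centroid(y=y, sr=sr)      # brightness
    frames = np.vstack([mfcc, zcr, cent])
    # Mean and standard deviation over time give a fixed-length vector.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])

def train_classifier(paths, labels):
    """Supervised learning of sound classes from a labeled ground-truth set."""
    X = np.array([feature_vector(p) for p in paths])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X_tr, y_tr)
    print("F-measure:", f1_score(y_te, clf.predict(X_te), average="macro"))
    return clf

Summarizing frame-level features by their mean and standard deviation is the simplest form of the statistical processing the abstract mentions; a production system would use a richer, perceptually informed feature set and a large, carefully balanced training set.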
