The first day of the conference (Monday, October 26) will be devoted to tutorials, with two running in parallel in the morning and two in the afternoon.

Morning Tutorials:
  Tutorial AM 1: MIR at the Scale of the Web
  Tutorial AM 2: Mining the Social Web for Music-Related Data: A Hands-on Tutorial

Afternoon Tutorials:
  Tutorial PM 1: Using Visualizations for Music Discovery
  Tutorial PM 2: Share and Share Alike, You Can Say Anything about Music in the Web of Data

Tutorial AM 1 (10:00-13:00): MIR at the Scale of the Web

by Malcolm Slaney (Yahoo! Research) and Michael Casey (Dartmouth College and Goldsmiths, University of London)

Contact: malcolm [at] ieee.org


In the last few years, researchers have gained access to music databases containing millions of songs. This massive change in the amount of available data is changing the face of MIR. In many domains, most notably speech recognition, people have observed that the best way to improve an algorithm's performance is to add more data. Starting with hidden Markov models (HMMs) and support vector machines, researchers have applied ever greater amounts of data to their problems and been rewarded with new levels of performance. What algorithms and ideas are necessary to work with such large databases? How do we define the scope of a problem, and how do we apply modern clusters of processors to it? What does it take to collect, manage and deliver solutions involving millions of songs and terabytes of data? Millions of songs fit into a small number of terabytes, which costs just a few hundred dollars of disk space, and this tutorial will give attendees an overview of, and pointers to, the tools that make it easier to scale their work to Internet-sized collections of music. Because the field is still developing, we will survey a range of techniques in use today: the theoretical and practical problems of large data, applications where large amounts of data are important, the types of algorithms that remain practical with such large datasets, and implementation techniques that make those algorithms practical. The tutorial will be illustrated with many real-world examples and results.
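
To give a flavour of the scaling theme, the sketch below (an illustration added for this page, not material from the tutorial itself) shows the map-reduce pattern in miniature using only Python's standard library: a map step that boils each song down to a small (key, value) summary, and a reduce step that aggregates the summaries per key. The file paths and the per-song statistic are hypothetical stand-ins; on a real cluster the same two functions would be distributed across machines rather than local cores.

    # Minimal map-reduce sketch for summarising a large music collection.
    # The paths and the per-song "feature" are hypothetical placeholders.
    from multiprocessing import Pool

    def mapper(path):
        # Map step: boil one song down to a (key, value) summary.
        key = path.split("/")[1]          # e.g. a genre or shard name
        value = len(path) * 0.01          # stand-in for a real audio feature
        return (key, value)

    def reducer(pairs):
        # Reduce step: aggregate the per-key values into averages.
        totals = {}
        for key, value in pairs:
            count, total = totals.get(key, (0, 0.0))
            totals[key] = (count + 1, total + value)
        return {key: total / count for key, (count, total) in totals.items()}

    if __name__ == "__main__":
        paths = ["songs/rock/%06d.mp3" % i for i in range(100000)]
        with Pool() as pool:              # fan the map step out over all cores
            pairs = pool.map(mapper, paths, chunksize=1000)
        print(reducer(pairs))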

Biographies of the Presenters

Dr. Malcolm Slaney is a principal scientist at Yahoo! Research, where he works on music- and image-retrieval algorithms for databases with billions of items. He is a senior member of the IEEE and an Associate Editor of the IEEE Transactions on Audio, Speech, and Language Processing. He has given successful tutorials at ICASSP in 1996 and 2009 on "Applications of Psychoacoustics to Signal Processing", and on "Multimedia Information Retrieval" at SIGIR and ICASSP. He is a Consulting Professor at Stanford CCRMA, where he has led the Hearing Seminar for the last 18 years.
Michael Casey is Professor of Music and director of the graduate program in Digital Music at Dartmouth College, USA, and Professor of Computer Science at Goldsmiths, University of London, UK. He received his Ph.D. from the MIT Media Laboratory in 1998 in the field of statistical audio. His recent activities include forming the OMRAS2 (Online Music Recognition and Searching) group at Goldsmiths, for which he served as Principal Investigator, and co-authoring AudioDB, an open-source multimedia search engine that scales to billions of items.

Tutorial AM 2 (10:00-13:00): Mining the Social Web for Music-Related Data: A Hands-on Tutorial

by Claudio Baccigalupo (Spanish Council for Scientific Research) and Ben Fields (Goldsmiths, University of London)

The Tutorial Website: http://ismir2009.benfields.net


The social web is a useful resource for those conducting research in music informatics, yet there exists no standard way to integrate web-based data with the more common signal-based methods of music informatics. In this tutorial we go through the entire process of retrieving and leveraging data from the social web for MIR tasks, using hands-on examples intended to introduce the larger ISMIR community to web-mining techniques.
The intended audience consists of people who are familiar with other MIR techniques (principally content-based) and who can benefit from knowledge available on the web to improve their algorithms and evaluation processes. The tutorial presents a series of short code snippets that rapidly retrieve musical information from the web in the form of genre-labelled audio excerpts, tags, lyrics, social experiences, acoustic analyses, or similarity measures for millions of songs. More information about the tutorial can be found at http://ismir2009.benfields.net.
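
In the spirit of those snippets, here is a hedged example (added for this page, not taken from the tutorial) of one such retrieval task: fetching an artist's top tags from the Last.fm web API using Python's standard library. YOUR_API_KEY is a placeholder, and the JSON layout unpacked below follows Last.fm's published documentation, so check the current API docs before relying on it.

    # Fetch an artist's top tags from the Last.fm web API (sketch).
    import json
    import urllib.request
    from urllib.parse import urlencode

    API_ROOT = "http://ws.audioscrobbler.com/2.0/"

    def top_tags(artist, api_key):
        params = urlencode({"method": "artist.gettoptags",
                            "artist": artist,
                            "api_key": api_key,   # placeholder key required
                            "format": "json"})
        with urllib.request.urlopen(API_ROOT + "?" + params) as response:
            data = json.load(response)
        # Each entry carries a tag name and a relative count.
        return [(tag["name"], tag["count"]) for tag in data["toptags"]["tag"]]

    print(top_tags("Radiohead", "YOUR_API_KEY")[:5])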

Biographies of the Presenters

Claudio Baccigalupo is a PhD candidate at the Artificial Intelligence Research Institute (IIIA-CSIC), with his thesis defence expected in November 2009. He holds a five-year degree in Computer Technology, awarded with top marks and distinction. His research focuses on recommender systems in a musical context: he has investigated how to extract musical knowledge from the analysis of playlists and how to customise radio channels for groups of listeners.
Benjamin Fields is a PhD candidate with the Intelligent Sound and Music Systems (ISMS) research group at the Department of Computing, Goldsmiths, University of London, with his dissertation submission anticipated in late spring 2010. His current research centers on understanding and exploiting the semantic gap between the social relationships of artists and the acoustic similarity of the works those artists produce.

Tutorial PM 1 (14:30-17:30): Using Visualizations for Music Discovery

by Justin Donaldson (Indiana University), and Paul Lamere (The Echo Nest)

The Tutorial Website: http://musicviz.googlepages.com/home


As the world of online music grows, tools that help people find new and interesting music in extremely large collections become increasingly important. In this tutorial we look at one such tool: information visualization. We survey the state of the art in visualization for music discovery in commercial and research systems. Using numerous examples, we explore different algorithms and techniques that can be used to visualize large and complex music spaces, focusing on the advantages and disadvantages of the various techniques. We investigate user factors that affect the usefulness of a visualization, and we suggest possible areas of exploration for future research.
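
As a small illustration of the kind of technique such surveys cover (an example added for this page, not taken from the tutorial), the sketch below projects a high-dimensional song-feature matrix onto two dimensions with PCA and plots each song as a point. The random matrix stands in for real audio features such as MFCC means, and any other projection method could be substituted.

    # Project songs from a 20-D feature space onto a 2-D map (PCA sketch).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    features = rng.normal(size=(500, 20))   # 500 songs x 20 fake features

    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:2].T            # top two principal components

    plt.scatter(coords[:, 0], coords[:, 1], s=8, alpha=0.6)
    plt.title("Music collection projected onto 2-D (PCA sketch)")
    plt.xlabel("component 1")
    plt.ylabel("component 2")
    plt.show()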

Biographies of the Presenters

Justin Donaldson is a PhD candidate at the Indiana University School of Informatics, as well as a regular research intern at Strands, Inc. Justin is interested in the analysis and visualization of social sources of data, such as playlists, blogs, and bookmarks.
Paul Lamere is the Director of Developer Community at The Echo Nest, a research-focused music intelligence startup that provides music information services to developers and partners through a data mining and machine listening platform. Paul is especially interested in hybrid music recommenders and using visualizations to aid music discovery.

Tutorial PM 2 (14:30-17:30): Share and Share Alike, You Can Say Anything about Music in the Web of Data

by Kurt Jacobson (Queen Mary, University of London), Yves Raimond (BBC), Gyorgy Fazekas (Queen Mary, University of London), and Michael Smethurst (BBC)

The Tutorial Website: http://ismir2009.dbtune.org/


Linked Data provides a powerful framework for the expression and re-use of structured data, and recent efforts have brought this framework to bear on the field of music informatics. This tutorial will provide an introduction to Linked Data concepts and to how and why they should be used in the context of music-related studies. Using practical examples, we will explore what data sets are already available and how they can be used to answer questions about music. We will also explore how signal-processing tools and results can be described as structured data. Finally, we will demonstrate tools and best practices for researchers who wish to publish their own data sets on the Semantic Web in a Linked Data fashion.
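
As a hedged illustration of asking a Linked Data endpoint a musical question (an example added for this page, not the tutorial's own material), the sketch below uses the third-party SPARQLWrapper package to ask DBpedia which bands it files under post-punk. The endpoint, class, and property names follow DBpedia's ontology and will differ on other data sets, such as those published by DBTune.org.

    # Query a public SPARQL endpoint for post-punk bands (sketch).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?name WHERE {
            ?band a dbo:Band ;
                  dbo:genre <http://dbpedia.org/resource/Post-punk> ;
                  rdfs:label ?name .
            FILTER (lang(?name) = "en")
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["name"]["value"])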

Biographies of the Presenters

Kurt Jacobson is a PhD candidate at the Centre for Digital Music. As assistant administrator of DBTune.org, he has worked to create Semantic Web services for music, including a service publishing structured data about music artists on Myspace and musicological data about classical music composers. He is working on modeling and exploring connections in music using structured data from heterogeneous sources, including historical musicology, social networks, and audio analysis.
Yves Raimond is a Software Engineer at BBC Audio & Music interactive, having completed a PhD at the Centre for Digital Music, Queen Mary, University of London. He is one of the editors of the Music Ontology specification, and the creator and head administrator of the DBTune.org service, which publishes a wide variety of structured music-related data. He now works on http://www.bbc.co.uk/programmes, publishing a wide range of structured data about BBC programmes, and he also maintains the DBTune blog.
Gyorgy Fazekas is a PhD candidate at the Centre for Digital Music. His main research interests include the development of semantic audio technologies and their application to creative music production. He is working on ontology-based information management for audio applications.
Michael Smethurst is an Information Architect at BBC Audio & Music. He is currently working on BBC Programmes, BBC Music and BBC Events, publishing and interlinking data in a number of overlapping domains. He writes on the BBC Radio Labs blog about Linked Data and web publishing.