ABSTRACT
Optical Music Recognition (OMR) promises to make large collections of sheet music searchable by their musical content, opening up novel ways of accessing the vast amount of written music that has never been recorded. For a long time, OMR did not live up to that promise: its performance was simply not good enough, especially on handwritten music or under non-ideal image conditions. Recently, however, OMR has seen a number of improvements, mainly driven by advances in machine learning. In this work, we take two systems that represent the current state of the art, one based on the traditional OMR pipeline and one end-to-end system, and illustrate their applicability to retrieval settings in proof-of-concept experiments. We also provide an example of a musicological study that can be replicated with OMR outputs at much lower cost. Taken together, these results indicate that in some settings, current OMR can already serve as a general tool for enriching digital libraries.
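To give a concrete sense of the retrieval setting the abstract describes, the following is a minimal, hypothetical sketch of content-based search over melodies extracted by an OMR system. It is not the paper's actual method: all names and data are illustrative assumptions. Pitches are MIDI numbers; matching on interval sequences makes the query transposition-invariant, and edit distance tolerates the kind of local errors a noisy OMR transcription introduces.

```python
# Illustrative sketch (not the paper's method): ranking OMR-extracted
# melodies by similarity to a symbolic query.

def intervals(pitches):
    """Successive pitch intervals, making a melody transposition-invariant."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Levenshtein distance between two interval sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def rank_scores(query, corpus):
    """Rank (title, pitch-sequence) pairs by interval edit distance to the query."""
    q = intervals(query)
    return sorted(corpus, key=lambda item: edit_distance(q, intervals(item[1])))

# Toy corpus of melodies, as if produced by an OMR system.
corpus = [
    ("Ode to Joy", [64, 64, 65, 67, 67, 65, 64, 62]),
    ("Twinkle",    [60, 60, 67, 67, 69, 69, 67]),
]
# Query transposed up a tone, with one wrong final note simulating an OMR error.
query = [66, 66, 67, 69, 69, 67, 66, 65]
print(rank_scores(query, corpus)[0][0])  # → Ode to Joy
```

Because the query is compared by intervals rather than absolute pitches, a transposed or slightly garbled OMR transcription still retrieves the right score, which is the property the proof-of-concept retrieval experiments rely on.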