ABSTRACT
Music holds a significant cultural role in social identity and in encouraging socialization. Technology, by collapsing physical and cultural distance, has led to many changes in musical themes and to the complete loss of some forms. Yet it also allows for the preservation and distribution of music from societies without a tradition of written sheet music. This paper presents early work on a tool for musicians and ethnomusicologists to transcribe sheet music from monophonic voiced pieces for preservation and distribution. The system detects pitch frequencies using the FFT, detects note durations, tempo, and time signature with complementary methods, and generates sheet music. The final system runs on mobile platforms, allowing the user to make recordings and produce sheet music in situ at a performance.
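The pitch-detection step described above can be illustrated with a minimal sketch: take the FFT of a windowed frame of a monophonic signal and read the fundamental off the peak magnitude bin. This is an illustrative assumption about the approach, not the paper's exact implementation; the function name `detect_pitch` and the frame parameters are hypothetical.

```python
import numpy as np

def detect_pitch(frame, sample_rate):
    """Estimate the fundamental frequency (Hz) of a monophonic frame
    by locating the peak magnitude bin of its FFT."""
    windowed = frame * np.hanning(len(frame))      # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))       # magnitude spectrum
    peak_bin = int(np.argmax(spectrum))            # strongest partial (fundamental, for a pure tone)
    return peak_bin * sample_rate / len(frame)     # convert bin index to Hz

# Synthesize one frame of a 440 Hz tone (A4) and recover its pitch.
sr = 44100
t = np.arange(4096) / sr
frame = np.sin(2 * np.pi * 440.0 * t)
pitch = detect_pitch(frame, sr)
```

Note that the resolution of this sketch is limited to one FFT bin (here about 10.8 Hz for a 4096-sample frame at 44.1 kHz); a practical transcription system would refine the estimate, e.g. by interpolating around the peak or combining the FFT with autocorrelation.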
MOMOS-MT: Mobile Monophonic System for Music Transcription: Sheet Music Generation on Mobile Devices