Most digital sound synthesis methods and audio processing techniques are based on the spectral representation of sound signals. This seminar starts with a theoretical and practical study of spectral representation, spectral analysis, and spectral modification of sound signals. Digital sound synthesis and sound processing techniques are then presented as specific spectral models or alterations, from which their capabilities, properties, and limitations are deduced. The techniques explored in this context include the phase vocoder, additive synthesis, source-filter synthesis, distortion synthesis and processing, waveguide synthesis, and reverberation. Available computer music software and ad hoc patches are used as examples and illustrations. Although the emphasis is placed on basic principles rather than details of implementation, a full command of Max/MSP is required for the assignments.
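For readers unfamiliar with the spectral viewpoint, the following is a minimal Python/NumPy sketch of additive synthesis, one of the techniques listed above: a harmonic tone is built as a sum of sinusoidal partials. It is illustrative only and is not part of the seminar, whose assignments are done in Max/MSP.

<code python>
# Minimal additive-synthesis sketch (illustrative only, not seminar material):
# a harmonic tone is approximated by summing sinusoidal partials, reflecting
# the spectral representation on which the techniques above are based.
import numpy as np

def additive_tone(f0=220.0, partials=8, dur=1.0, sr=44100):
    """Sum `partials` harmonics of f0 with a 1/k amplitude roll-off."""
    t = np.arange(int(dur * sr)) / sr
    signal = np.zeros_like(t)
    for k in range(1, partials + 1):
        signal += (1.0 / k) * np.sin(2 * np.pi * k * f0 * t)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

tone = additive_tone()  # one second of a 220 Hz harmonic tone
</code>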
  
===== MUMT 610: Interactive Music Systems =====

**CRN: 5347**

**Kojiro Umezaki and Bruce Pennycook**

This seminar explores topics central to the research areas of machine listening and machine composition from technical and aesthetic perspectives.

Techniques considered include rule-based (symbolic) processing and network-based (sub-symbolic) processing. These techniques will be applied to musical representations based on pre-transcribed data (e.g., MIDI) and to direct processing of audio data, all with a focus on viable real-time implementations.

Machine listening topics will include beat tracking, meter induction, key induction, score following, segmentation, and pattern processing. Machine composition topics will focus on fundamental algorithmic techniques and aesthetic issues.
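As a rough illustration of one of the machine-listening tasks above, the following Python sketch estimates the key of a passage by correlating a 12-bin pitch-class histogram with the Krumhansl-Kessler key profiles, a standard baseline for key induction. The code and the toy input are illustrative assumptions only, not the seminar's prescribed method.

<code python>
# Toy key-induction sketch: correlate a pitch-class histogram against the
# 24 rotated Krumhansl-Kessler key profiles and report the best match.
import numpy as np

# Krumhansl-Kessler major and minor profiles, indexed from the tonic.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(pc_histogram):
    """Return the best-matching key name for a 12-bin pitch-class histogram."""
    best_r, best_key = -2.0, None
    for tonic in range(12):
        for profile, mode in ((MAJOR, 'major'), (MINOR, 'minor')):
            r = np.corrcoef(np.roll(profile, tonic), pc_histogram)[0, 1]
            if r > best_r:
                best_r, best_key = r, f"{NAMES[tonic]} {mode}"
    return best_key

# Toy input: note durations concentrated on C-major scale degrees.
hist = np.array([4, 0, 2, 0, 3, 2, 0, 4, 0, 2, 0, 1], dtype=float)
print(estimate_key(hist))  # expected: C major
</code>
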
Five special presentations will be given by Bruce Pennycook on the following topics:

  * Principles and aesthetics of interactive music
  * Impact of interactive music on performance practice
  * Impact of interactive music on compositional methods
  * Systems design (composer/performer perspective)
  * New directions and possibilities: music and audio visualization

There will be one examination based on the principal readings, one paper, an individual presentation, and a final project. The final project will be a musical example based on ideas explored in the paper, typically implemented in C/C++ or Max/MSP.
  
===== MUMT 616: Timbre as a Form-Bearing Element in Music =====
===== MUMT 621: Music Information Acquisition, Preservation, and Retrieval =====
  
**Ichiro Fujinaga**
  
This seminar will investigate current research activities in the area of music information acquisition, preservation, and retrieval. The goal is to discover ways to efficiently find, store, and retrieve musical information. Although the field is relatively new, it encompasses various music disciplines, including music analysis, music education, music history, music theory, music psychology, and audio signal processing.
This seminar presents current research trends in time-frequency representations and parametric modeling in the context of music and audio applications. A specific focus is placed on the analysis of sounds using parametric methods. Students should have prior knowledge of sound analysis and resynthesis techniques and of digital signal processing.
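As a minimal illustration of the most basic time-frequency representation named above, the short-time Fourier transform, here is a small Python/NumPy sketch; the seminar addresses considerably more advanced parametric models, and this code is not part of its materials.

<code python>
# Minimal STFT magnitude spectrogram: slide a Hann window over the signal
# and take the FFT of each frame.
import numpy as np

def stft_magnitude(x, win_size=1024, hop=256):
    """Return the magnitude spectrogram of a 1-D signal (Hann window)."""
    window = np.hanning(win_size)
    n_frames = 1 + (len(x) - win_size) // hop
    frames = [np.fft.rfft(window * x[i * hop : i * hop + win_size])
              for i in range(n_frames)]
    return np.abs(np.array(frames)).T  # shape: (frequency bins, time frames)

# Example: a 440 Hz sine at 44.1 kHz; its energy concentrates near
# bin 440 / (44100 / 1024), i.e. around bin 10.
sr = 44100
t = np.arange(sr) / sr
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))
</code>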
  
===== MUGS 695: Special Topic Seminar: Digital Musical Instruments - Technology, Performance and Composition =====

**Profs. Sean Ferguson and Marcelo Wanderley**

Until the beginning of the 20th century, the design of musical instruments relied upon mechanical systems and the acoustical properties of tubes, strings, and membranes. With the advent of electricity, luthiers were able to experiment with the new possibilities offered by electrical and electronic means. A whole different set of possibilities became available to instrument designers, including new ways to generate sound and to design control surfaces of any arbitrary shape.

This course will focus on systems that use the computer as the sound-generating device, a choice that offers the flexibility of a general-purpose architecture able to implement different synthesis techniques. An instrument that uses computer-generated sound is known as a digital musical instrument; it consists of a control surface that drives, in real time, the parameters of a synthesis algorithm implemented in the computer. The synthesis parameters are controlled using input devices, or gestural controllers, that may track any type of movement or gesture, thereby allowing far more control possibilities than those offered by the standard piano-like interface.
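As a toy illustration of the control-surface-to-synthesizer architecture just described, the following Python sketch maps two hypothetical normalized controller values (say, position and pressure) to the frequency and amplitude of a sine oscillator. The parameter names and mapping curves are assumptions chosen for illustration, not part of the course.

<code python>
# Toy mapping layer for a digital musical instrument: gestural data in,
# synthesis parameters out, then one short audio block is rendered.
import numpy as np

def map_controls(position, pressure, f_lo=110.0, f_hi=880.0):
    """Map normalized (0..1) gestural values to synthesis parameters."""
    freq = f_lo * (f_hi / f_lo) ** position  # exponential (musical) pitch mapping
    amp = pressure ** 2                      # simple loudness curve
    return freq, amp

def render_block(freq, amp, dur=0.05, sr=44100):
    """Render one short sine block for the current parameter values."""
    t = np.arange(int(dur * sr)) / sr
    return amp * np.sin(2 * np.pi * freq * t)

block = render_block(*map_controls(position=0.5, pressure=0.8))
</code>
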
As a consequence, new digital musical instruments do not necessarily bear any resemblance to existing acoustic instruments. In this context, a number of questions arise: How does one play or compose for these new instruments? Can digital musical instruments become as viable as those on which we are accustomed to performing? What is the role of virtuosity in such contexts? Will a repertoire ever be built for these instruments? What is the balance between technological obsolescence and technical mastery?

This course will deal with the various issues relating to new digital musical instruments in the following three areas: a) Music Technology, b) Performance, and c) Composition. Course work will consist of collaborative projects involving students in Composition, Performance, and Music Technology.
  
===== MUMT 609: Music, Media and Technology Project =====