====== Acoustic Research, Modeling, and Sound Synthesis ======
**Dr. Gary P. Scavone**
  
The shape and design of most acoustic music instruments, refined and advanced by craftsmen through empirical, "trial and error" methods, have changed little over the past century. The study of the acoustic phenomena underlying the operation of these instruments, however, is a relatively young science. My research is focused on, but not limited to, woodwind and string music instruments and includes:
  
    * measurements and analyses to gain a theoretical understanding of the fundamental acoustic or vibrational behavior of music instruments and other sounding objects;
    * the development of computer-based mathematical models that implement these acoustic or mechanical principles as accurately as possible and which can subsequently be used to study variations in instrument design;
    * the creation of efficient, real-time synthesis algorithms capable of producing convincing instrument sounds;
    * the design of appropriate human-computer interfaces for use in controlling and interacting with real-time synthesis models.
Recent acoustic analyses have focused on [[http://www.music.mcgill.ca/~gary/vti/|vocal-tract influence in woodwind instrument performance]] and fluid-structure interactions in wind instrument systems. My acoustic modeling work is concerned with the characterization of the various interdependent components of a music instrument system, such as the mouthpiece, air column, and toneholes of a clarinet. This approach is commonly referred to as "physical modeling". A discrete-time technique called "digital waveguide synthesis" is often used to efficiently and accurately implement these acoustic models. Recent synthesis developments have been focused on aspects of woodwind instrument toneholes, conical air columns, vocal tract influences, and reed/mouthpiece interactions. Several human-computer interfaces have been developed in the course of this research for the purposes of experimenting and performing with the real-time synthesis models.
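As a rough illustration of the digital waveguide idea, the sketch below implements the classic Karplus-Strong plucked-string algorithm, the simplest waveguide-style model: a delay line whose length sets the pitch (modeling wave travel along the string) and a two-point averaging filter modeling frequency-dependent losses. This is a minimal, self-contained example, not STK code; the function name and parameters are invented for this illustration.

```cpp
#include <cstdlib>
#include <vector>

// Minimal Karplus-Strong plucked string (a basic digital waveguide model).
// The delay line represents the round-trip travel of a wave along the
// string; its length in samples determines the fundamental frequency.
std::vector<double> plucked_string(double sampleRate, double frequency,
                                   int numSamples) {
    int delayLength = static_cast<int>(sampleRate / frequency);
    std::vector<double> delayLine(delayLength);

    // Excite the "string" with white noise (the pluck).
    for (double &s : delayLine)
        s = 2.0 * std::rand() / RAND_MAX - 1.0;

    std::vector<double> output(numSamples);
    int pos = 0;
    for (int n = 0; n < numSamples; ++n) {
        int next = (pos + 1) % delayLength;
        output[n] = delayLine[pos];
        // Two-point average: a one-zero lowpass that damps high
        // frequencies faster, as losses do in a real string.
        delayLine[pos] = 0.5 * (delayLine[pos] + delayLine[next]);
        pos = next;
    }
    return output;
}
```

Because the loop filter removes a little energy on every round trip, the output decays naturally like a plucked string; more elaborate waveguide models replace the noise excitation and simple filter with physically derived excitation mechanisms (reed, bow, jet) and loss/dispersion filters.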
  
Acoustic and perceptual experiments also play a role in this research. In many instances, acoustic theory must be validated by experimental measurements. Perceptual studies can aid in the development of efficient and convincing synthesis models by helping identify acoustic features of a system which have less perceptual importance for human listeners.
  
To support the design and implementation of real-time synthesis models, a software synthesis environment called the [[http://ccrma.stanford.edu/software/stk/|Synthesis ToolKit in C++ (STK)]] has been developed in collaboration with Perry Cook at Princeton University. STK is a set of open source audio signal processing and algorithmic synthesis classes written in C++. The ToolKit was designed to facilitate rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, real-time control, ease of use, and educational example code.
**Dr. Stephen McAdams**
  
My research goal is to understand how listeners mentally organize a complex musical scene into sources, events, sequences, and musical structures. In my laboratory, we use techniques of digital signal processing, psychophysics, computational modeling of auditory processing, cognitive psychology, and music analysis.
  
The origin of music is in sound-producing objects. We seek to understand how listeners perceive the events produced by such objects in terms of the mechanical nature of the objects and the ways objects interact to set them in vibration (impact, friction, blowing): a new field that I have dubbed "psychomechanics" since we try to quantify the relation between the properties of mechanical objects and perception of the events they produce.
  
One of the most mysterious of musical properties of sound events, very closely related to source properties, is their timbre. Timbre is a whole set of dimensions of musical perception such as brightness, roughness, attack quality, richness, inharmonicity, and so on. We try to understand how this palette of attributes is organized perceptually, how it depends on both the acoustic properties of sound events and on the context in which they occur, and how the timbres of events and sequences are committed to memory.

In music, many sound sources are often playing at the same time, which means that the listener must organize the musical scene into events and sequences that carry musical information about the behaviour of those sound sources (a musical instrument playing a melody, for example). However, composers can play with sound in ways that make a listener hear several sources as one (blending sounds), or with sound synthesis to make a single sound split into several (sound segregation).
  
Music happens in time and the ephemeral world of temporal experience is another concern in my laboratory. We are interested in how cognitive processes such as attention, memory, recognition, and structural processing, as well as more emotional and aesthetic experience of music, take place in time and are related to musical structure. We have developed and employed various techniques for measuring and analyzing continuous responses during music listening in live concert settings to probe the cognitive dynamics of musical experience.
  
We are also interested in how timbre can be used as an integral part of musical discourse through orchestration. Studies involving analysis of scores and writings about orchestration, audio analysis, and perceptual analysis will contribute to the grounding of elements of a theory of orchestration in perception and cognition.
====== Gestural Control of Sound Synthesis ======
  
research_topics.txt · Last modified: 2023/09/19 19:57 by 127.0.0.1