This research implements interactive music processes that combine sound synthesis and symbolic
treatment within a single environment.
The algorithms are driven by classical instrumental performance through hybrid systems called
hyperinstruments, in which the sensing of performance gestures gives rise to open yet goal-oriented
generative musical forms.
The interactions are composed in Max/MSP, designing contexts and relationships between
real-time instrumental timbre analysis (sometimes augmented with inertial motion tracking) and a
gesture-based conception of form shaping. Physical classical instruments are treated as interfaces,
which calls for the development of unconventional mapping strategies on account of the
multi-dimensional and interconnected quality of timbre.
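Because timbre is multi-dimensional and its dimensions interact, a one-to-one assignment of controls is rarely adequate. A minimal sketch of the idea, in Python rather than Max/MSP and with illustrative descriptors and weights of my own choosing (not those of the actual work): each synthesis parameter is derived from a weighted combination of *all* timbre descriptors, giving a many-to-many mapping.

```python
import numpy as np

def timbre_descriptors(frame, sr=44100):
    """Extract a small timbre descriptor vector from one audio frame.

    Returns [spectral centroid in Hz, RMS energy]. These two descriptors
    stand in for the richer, interdependent timbre space described in the
    text; they are illustrative only.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    rms = float(np.sqrt(np.mean(frame ** 2)))
    return np.array([centroid, rms])

def many_to_many_map(descriptors, weights, offsets):
    """Map an n-dimensional descriptor vector to m synthesis parameters.

    Every output parameter depends on all descriptors (a weighted sum),
    reflecting the interconnected nature of timbre rather than a
    one-to-one control assignment.
    """
    return weights @ descriptors + offsets
```

A usage example: analysing a 440 Hz sine frame and mapping its two descriptors onto two hypothetical synthesis parameters via a 2x2 weight matrix. In a Max/MSP patch the same roles would be played by analysis objects feeding a matrix of scaled connections.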
Performance and sound gestures are viewed as salient energies, phrasings and articulations that
carry information about human intentions, and can thereby change the musical behaviour of a
composition inside a coded dramaturgy. The interactive networks are designed to integrate
traditional music practices and “languages” with computational systems designed to be
self-regulating, through the mediation of timbre space and descriptions of performance gesture.
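One way to picture a self-regulating network inside a coded dramaturgy is as a small state machine whose transitions are driven by smoothed gesture energy rather than by single events. The sketch below is a hypothetical illustration in Python; the state names, thresholds, and smoothing are my own assumptions, not the actual compositional logic.

```python
class Dramaturgy:
    """Minimal sketch of a 'coded dramaturgy': the running gesture energy
    of the performer moves the system between compositional behaviours.
    States and thresholds are illustrative, not those of the actual work.
    """

    STATES = ["dormant", "dialogue", "autonomous"]

    def __init__(self, rise=0.6, fall=0.2):
        self.state = "dormant"
        self.energy = 0.0     # smoothed gesture energy, in 0..1
        self.rise = rise      # threshold to escalate behaviour
        self.fall = fall      # threshold to relax (hysteresis avoids jitter)

    def update(self, gesture_energy, smoothing=0.9):
        # Leaky integration keeps the system self-regulating: it responds
        # to sustained phrasing and articulation, not to isolated events.
        self.energy = smoothing * self.energy + (1 - smoothing) * gesture_energy
        i = self.STATES.index(self.state)
        if self.energy > self.rise and i < len(self.STATES) - 1:
            self.state = self.STATES[i + 1]
        elif self.energy < self.fall and i > 0:
            self.state = self.STATES[i - 1]
        return self.state
```

The hysteresis between `rise` and `fall` is the self-regulating element: sustained performer energy escalates the system's behaviour, and sustained quiet lets it settle back, without oscillating at the boundary.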
In keeping with its classical definition, the technology relates less to mechanical practice than to
a rhetorical approach: for this reason the software often includes interactive scores, and must be
performed in accordance with a set of external verbal (and video) explanations, whose technical
detail should nevertheless not impair an intuitive approach to music making.