Artech 2008: proceedings of the 4th International Conference on Digital Arts
ARTECH 2008 is the fourth international conference held in Portugal and Galicia on the topic of Digital Arts. It aims to promote contacts between Iberian and international contributors concerned with the conception, production and dissemination of Digital and Electronic Art. ARTECH brings the scientific, technological and artistic communities together, promoting interest in digital culture and its intersection with art and technology as an important research field, a common space for discussion, an exchange of experiences, a forum for emerging digital artists and a way of understanding and appreciating new forms of cultural expression. Hosted by the Portuguese Catholic University's School of Arts (UCP-EA) in the city of Porto, ARTECH 2008 aligns with the main commitment of the Research Center for Science and Technology of the Arts (CITAR): to promote knowledge in the field of the Arts through research and development within UCP-EA and together with the local and international community. The main areas proposed for the conference related to sound, image, video, music, multimedia and other new-media topics, in the context of emerging practices of artistic creation. Although non-exclusive, the main topics of the conference are usually: Art and Science; Audio-Visual and Multimedia Design; Creativity Theory; Electronic Music; Generative and Algorithmic Art; Interactive Systems for Artistic Applications; Media Art History; Mobile Multimedia; Net Art and Digital Culture; New Experiences with New Media and New Applications; Tangible and Gesture Interfaces; Technology in Art Education; Virtual Reality and Augmented Reality. The contribution from the international community was extremely gratifying, resulting in the submission of 79 original works (long papers, short papers and installation proposals) from 22 countries.
Our Scientific Committee reviewed these submissions thoroughly, resulting in an acceptance rate of 73% for a diverse and promising body of work presented in this book of proceedings. This compilation of articles provides an overview of the state of the art as well as a glimpse of new tendencies in the field of Digital Arts, with special emphasis on the topics: Sound and Music Computing; Technology-Mediated Dance; Collaborative Art Performance; Digital Narratives; Media Art and Creativity Theory; Interactive Art; Audiovisual and Multimedia Design.
Interaction design for live performance
PhD Thesis
Multimedia item accompanying this thesis to be consulted at Robinson Library.
The role of interactive technology in live performance has increased substantially in recent years. Practices and experiences of existing forms of live performance have been transformed, and new genres of technology-mediated live performance have emerged in response to novel technological opportunities. Consequently, designing for live performance is set to become an increasingly important concern for interaction design researchers and practitioners. However, designing interactive technology for live performance is a challenging activity, as the experiences of both performers and their audiences are shaped and influenced by a number of delicate and interconnected issues, which relate to different forms and individual practices of live performance in varied and often conflicting ways. The research presented in this thesis explores how interaction designers might be better supported in engaging with this intricate and multifaceted design space. This is achieved using a practice-led methodology, which involves the researcher's participation in both the investigation of, and design response to, issues of live performance as they are embodied in the lived and felt experiences of individual live performers' practices during three interaction design case studies. This research contributes to the field of interaction design for live performance in three core areas: understandings of the relationships between key issues of live performance and individual performers' lived and felt experiences are developed; approaches to support interaction designers in engaging individual live performers' lived and felt experiences in design are proposed; and innovative interfaces and interaction techniques for live performance are designed.
It is anticipated that these research outcomes will prove directly applicable or inspiring to the practices of interaction designers wishing to address live performance and will contribute to the ongoing academic discourse around the experience of, and design for, live performance.
Engineering and Physical Sciences Research Council
Proceedings of the 1st International Conference on Live Coding
Open Access, peer-reviewed papers on live coding published at the 1st International Conference on Live Coding (ICLC) in Leeds.
Proceedings of the 7th Sound and Music Computing Conference
Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010
Apps, Agents, and Improvisation: Ensemble Interaction with Touch-Screen Digital Musical Instruments
This thesis concerns the making and performing of music with new digital musical instruments (DMIs) designed for ensemble performance. While computer music has advanced to the point where a huge variety of digital instruments are common in educational, recreational, and professional music-making, these instruments rarely seek to enhance the ensemble context in which they are used. Interaction models that map individual gestures to sound have been previously studied, but the interactions of ensembles within these models are not well understood. In this research, new ensemble-focussed instruments have been designed and deployed in an ongoing artistic practice. These instruments have also been evaluated to find out whether, and if so how, they affect the ensembles and the music that is made with them.
Throughout this thesis, six ensemble-focussed DMIs are introduced for mobile touch-screen computers. A series of improvised rehearsals and performances leads to the identification of a vocabulary of continuous performative touch-gestures and a system for tracking these collaborative performances in real time using tools from machine learning. The tracking system is posed as an intelligent agent that can continually analyse the gestural states of performers and trigger a response in the performers' user interfaces at appropriate moments. The hypothesis is that the agent interaction and UI response can enhance improvised performances, allowing performers to better explore creative interactions with each other, produce better music, and have a more enjoyable experience.
Two formal studies are described in which participants rate their perceptions of improvised performances with a variety of designs for agent-app interaction. The first, with three expert performers, informed refinements to a set of apps. The most successful interface was redesigned and investigated further in a second study with 16 non-expert participants. In the final interface, each performer freely improvised with a limited number of notes; at moments of peak gestural change, the agent presented users with the opportunity to try different notes. This interface is shown to produce performances that are longer and that demonstrate improved perceptions of musical structure, group interaction, enjoyment and overall quality.
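The "peak gestural change" trigger described above can be sketched in a few lines. This is purely an illustrative assumption, not the thesis's actual implementation: the gesture class names, the change metric, and the threshold are all invented for the example.

```python
# Illustrative sketch (assumed design, not the thesis's system): an agent
# watches a stream of per-performer gesture-class estimates and flags
# moments of peak ensemble-wide gestural change, when the UI could offer
# performers new notes to try.

def gestural_change(prev_states, curr_states):
    """Fraction of performers whose gesture class changed this step."""
    changed = sum(1 for p, c in zip(prev_states, curr_states) if p != c)
    return changed / len(curr_states)

def detect_peaks(state_stream, threshold=0.5):
    """Return time-step indices where gestural change crosses the threshold."""
    peaks = []
    for i in range(1, len(state_stream)):
        if gestural_change(state_stream[i - 1], state_stream[i]) >= threshold:
            peaks.append(i)
    return peaks

# Each inner list is one time step of gesture classes for a trio.
stream = [
    ["tap", "swipe", "tap"],
    ["tap", "swipe", "tap"],    # no change
    ["swirl", "tap", "tap"],    # 2 of 3 performers changed -> peak
    ["swirl", "tap", "hold"],   # 1 of 3 changed -> below threshold
]
print(detect_peaks(stream))  # [2]
```

In a real system the gesture classes would come from a touch-gesture classifier and the agent would run continuously, but the thresholded-change idea is the same.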
Overall, this research examined ensemble DMI performance in unprecedented scope and detail, with more than 150 interaction sessions recorded. Informed by the results of lab and field studies using quantitative and qualitative methods, four generations of ensemble-focussed interface have been developed and refined. The results of the most recent studies assure us that the intelligent agent interaction does enhance improvised performances.
Paralinguistic vocal control of interactive media: how untapped elements of voice might enhance the role of non-speech voice input in the user's experience of multimedia.
Much interactive media development, especially commercial development, implies the dominance of the visual modality, with sound as a limited supporting channel. The development of multimedia technologies such as augmented reality and virtual reality has further revealed a distinct partiality to visual media. Sound, however, and particularly voice, have many aspects which have yet to be adequately investigated. Exploration of these aspects may show that sound can, in some respects, be superior to graphics in creating immersive and expressive interactive experiences. With this in mind, this thesis investigates the use of non-speech voice characteristics as a complementary input mechanism for controlling multimedia applications. It presents a number of projects that employ the paralinguistic elements of voice as input to interactive media, including both screen-based and physical systems. These projects are used as a means of exploring the factors that seem likely to affect users' preferences and interaction patterns during non-speech voice control. This exploration forms the basis for an examination of potential roles for paralinguistic voice input. The research includes the conceptual and practical development of the projects and a set of evaluative studies. The work submitted for Ph.D. comprises practical projects (50 percent) and a written dissertation (50 percent). The thesis aims to advance understanding of how voice can be used both on its own and in combination with other input mechanisms in controlling multimedia applications. It offers a step forward in the attempts to integrate the paralinguistic components of voice as a complementary input mode to speech input applications, in order to create a synergistic combination that might let the strengths of each mode overcome the weaknesses of the other.
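As a purely illustrative sketch of the kind of paralinguistic input this abstract describes, the following maps short-term vocal loudness (RMS energy of an audio frame) onto a normalised control parameter. The function names and the floor/ceiling values are assumptions invented for the example, not taken from the thesis.

```python
# Hypothetical sketch: one paralinguistic voice feature (loudness) turned
# into a 0..1 control value that could drive a screen-based or physical
# system. Floor/ceiling values are arbitrary assumptions for illustration.
import math

def rms(frame):
    """Root-mean-square energy of one audio frame (list of samples in -1..1)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def loudness_to_control(frame, floor=1e-4, ceiling=0.5):
    """Clamp RMS into [floor, ceiling], then rescale to a 0..1 control value."""
    level = min(max(rms(frame), floor), ceiling)
    return (level - floor) / (ceiling - floor)

# A near-silent frame maps near 0; a loud sine frame maps near 1.
quiet = [0.001 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]
loud = [0.7 * math.sin(2 * math.pi * 220 * n / 44100) for n in range(512)]
print(loudness_to_control(quiet), loudness_to_control(loud))
```

Pitch, timbre or breathiness could be mapped the same way; the common pattern is extracting a continuous feature per frame and rescaling it into a control range.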