
    The ixiQuarks: merging code and GUI in one creative space

    This paper reports on ixiQuarks, an environment of instruments and effects built on top of the audio programming language SuperCollider. The rationale of these instruments is to explore alternative ways of designing musical interaction in screen-based software, and to investigate how semiotics in interface design affects the musical output. The ixiQuarks are part of the external libraries available to SuperCollider through the Quarks system. They are software instruments based on a non-realist design ideology that rejects the simulation of acoustic instruments or music hardware and focuses on experimentation at the level of musical interaction. In this environment we try to merge the graphical with the textual in the same instruments, allowing the user to reprogram and change parts of them at runtime. After a short introduction to SuperCollider and the Quarks system, we describe the ixiQuarks and the philosophical basis of their design. We conclude by looking at how they can be seen as epistemic tools that influence the musician in a complex hermeneutic circle of interpretation and signification.
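    The idea of an instrument whose behaviour can be reprogrammed while it runs can be sketched in a few lines. This is a toy illustration in Python, not the actual SuperCollider code; all names and mappings are hypothetical.

```python
# Toy sketch of merging code and GUI in one instrument: the instrument's
# behaviour is a plain function that the performer can swap out at runtime,
# in the spirit of ixiQuarks' runtime reprogramming. Names are illustrative.

class Instrument:
    def __init__(self, process):
        self.process = process          # current mapping from gesture to pitch

    def play(self, gesture):
        return self.process(gesture)

inst = Instrument(lambda g: 220 + g * 10)      # initial, linear behaviour
first = inst.play(4)                           # 220 + 40 = 260

# "Live-code" the instrument: replace its mapping with an equal-tempered one.
inst.process = lambda g: 220 * 2 ** (g / 12)
second = inst.play(12)                         # one octave up: 440.0
```

The point of the sketch is only that the graphical instrument and the code defining its behaviour share one mutable object, so editing the code changes the running instrument.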

    An interactive music system based on the technology of the reacTable

    The purpose of this dissertation is to investigate and document a research project undertaken in the designing, constructing and performing of an interactive music system. The project involved building a multi-user electro-acoustic music instrument with a tangible user interface, based on the technology of the reacTable. The main concept of the instrument was to integrate the ideas of (1) interpreting gestural movement into music, (2) multi-touch/multi-user technology, and (3) the exploration of timbre in computer music. The dissertation discusses the definition, basics and essentials of interactive music systems and examines the past history and key features of the three main concepts mentioned above. The original instrument is described in detail, including the design and construction of the table-shaped physical build, along with an in-depth look at the computer software employed (ReacTIVision, Max MSP and Reason). The fundamentals and workings of the instrument (sensing/processing/response, control and feedback, and mapping) are described at length, examining how tangible objects are used to generate and control parameters of music, while its instrumental limitations are also noted. How the three main concepts relate to, and are expressed within, the instrument is also discussed. An original piece of music with an accompanying video, entitled Piece for homemade reacTable, composed and performed on the instrument, has been created in support of this dissertation. It acts as a basic demonstration of how the interactive music system works, showcasing all the main concepts and how they are put into practice to create and perform new electronic music.
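    The mapping stage described above can be illustrated with a minimal sketch. This is not the dissertation's actual Max MSP patch: the parameter names and ranges below are assumptions chosen only to show how a tracked object's position and rotation (as reported, for example, by ReacTIVision's fiducial tracking) might be mapped to synthesis parameters.

```python
import math

# Hypothetical mapping from a tangible object's tracked state to sound
# parameters. x and y are normalised table coordinates in [0, 1]; angle is
# the object's rotation in radians. Ranges are illustrative, not from the
# original instrument.

def map_object_to_params(x, y, angle):
    """Map a tracked object's state to (frequency_hz, amplitude, pan)."""
    # Horizontal position sweeps pitch over four octaves (110-1760 Hz).
    frequency_hz = 110.0 * (2.0 ** (4.0 * x))
    # Vertical position controls loudness; y = 0 is the loud edge.
    amplitude = max(0.0, min(1.0, 1.0 - y))
    # Rotation is folded into a stereo pan position in [-1, 1].
    pan = math.sin(angle)
    return frequency_hz, amplitude, pan
```

In a real tangible instrument such a function would be called once per tracking frame for every object on the surface, and the results sent on to the sound engine.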

    Expanding the Human Bandwidth Through Subvocalization and Other Methods

    This paper examines human bandwidth and how it applies to human-human interaction and human-computer interaction. It discusses what human bandwidth means and what must be done to try to expand it. Current methods of expanding bandwidth are discussed, including detection of subvocal activity, facial expression detection, eye tracking, emotion detection in digital music, pen-based musical input systems, and augmented reality. After explaining these methods, the paper focuses on using some of the technologies together to give an idea of what the future of interaction with computers might look like. The proposed ideas include emotion-based music, various uses for augmented reality, and composing music with the mind.

    Interaction Design for Digital Musical Instruments

    This thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model. Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon: (1) the accepted design conventions of the hardware in use; (2) established musical systems, acoustic or digital; and (3) the physical configuration of the hardware devices and the grouping of controls that such a configuration suggests. This thesis proposes an alternative way to approach the design of digital musical instrument behaviour: examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware. This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
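    The idea of separating a sensor's data stream from the interaction strategy applied to it can be sketched briefly. This Python toy is not the thesis's system; all class and function names are hypothetical, and it shows only the decoupling, not a full instrument.

```python
# Toy sketch of decoupling a generic sensor stream from a generic
# interaction strategy, in the spirit of the approach described above.
# All names are hypothetical.

class Sensor:
    """A generic continuous controller emitting values in [0, 1],
    regardless of what physical hardware it lives in."""

    def __init__(self, name):
        self.name = name
        self.value = 0.0

    def update(self, value):
        self.value = max(0.0, min(1.0, value))

def threshold_strategy(sensor, on_trigger, threshold=0.5):
    """A generic strategy: turn any continuous stream into a discrete
    event when the value crosses a threshold. It knows nothing about
    the sensor's hardware body."""
    if sensor.value >= threshold:
        on_trigger(sensor.name)

events = []
fader = Sensor("fader")
fader.update(0.8)
threshold_strategy(fader, events.append)   # fires: 0.8 >= 0.5
```

Because the strategy only sees a normalised stream, the same code could be bound to a fader, a touch position, or any other continuous sensor.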

    Audio-Based Visualization of Expressive Body Movements in Music Performance: An Evaluation of Methodology in Three Electroacoustic Compositions

    An increase in collaboration amongst visual artists, performance artists, musicians, and programmers has given rise to the exploration of multimedia performance arts. A methodology for audio-based visualization has been created that integrates the information of sound with the visualization of physical expressions, with the goal of magnifying the expressiveness of the performance. The emphasis is placed on exalting the music by using the audio to affect and enhance the video processing, while the video does not affect the audio at all; in this sense the music is considered autonomous of the video. The audio-based visualization can provide the audience with a deeper appreciation of the music. Unique implementations of the methodology have been created for three compositions. A qualitative analysis of each implementation is employed to evaluate both the technological and aesthetic merits of each composition.
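    The one-way audio-to-video coupling described above can be sketched minimally. This is an assumption-laden illustration, not the authors' implementation: the audio feature (RMS level) and the video parameter (brightness) are chosen only as plausible examples.

```python
# Sketch of a one-way audio-to-video mapping: an audio feature extracted
# from a frame of samples drives a video-processing parameter, while the
# video never feeds back into the audio. Feature and parameter choices
# are hypothetical.

def rms(frame):
    """Root-mean-square level of a frame of audio samples."""
    return (sum(s * s for s in frame) / len(frame)) ** 0.5

def video_brightness(audio_frame, base=0.2, depth=0.8):
    """Map the frame's RMS level onto a brightness value in
    [base, base + depth]; silence leaves the base brightness."""
    return base + depth * min(1.0, rms(audio_frame))
```

In practice such a mapping would run once per video frame, with smoothing, so the visuals follow the music's dynamics without ever influencing it.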

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require using menus and tool palettes (techniques originally designed for the mouse, not pen and touch). We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and distinguish which hand, is touching, to support phrasing together direct-touch interactions on large displays. From design and development of the environment and both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
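    The core mechanic of phrasing a modal operation bimanually can be sketched in a few lines. This Python toy is an assumption, not the dissertation's environment: a non-dominant-hand touch holds a mode for the duration of the contact, and dominant-hand pen strokes are interpreted in whatever mode is currently held, so no menu or palette is needed.

```python
# Toy sketch of bimanual phrasing: holding a touch "mode button" with the
# non-dominant hand sets the mode; releasing it ends the phrase and restores
# the default. Pen input from the dominant hand is interpreted in the mode
# held at that moment. All names are hypothetical.

class BimanualCanvas:
    def __init__(self):
        self.mode = "draw"          # default mode when nothing is held
        self.log = []               # (mode, stroke) pairs, for illustration

    def touch_down(self, mode):
        """Non-dominant hand presses and holds a touch mode button."""
        self.mode = mode

    def touch_up(self):
        """Releasing the touch ends the phrase."""
        self.mode = "draw"

    def pen_stroke(self, stroke):
        """Dominant-hand pen input, interpreted in the held mode."""
        self.log.append((self.mode, stroke))

canvas = BimanualCanvas()
canvas.pen_stroke("line")       # no touch held: interpreted as drawing
canvas.touch_down("erase")      # hold erase mode with a touch
canvas.pen_stroke("line")       # the same pen gesture now erases
canvas.touch_up()               # phrase ends; back to drawing
```

The mode change is thus scoped to the physical duration of the touch, which is what lets a sequence of operations be "phrased" as one chunk.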

    Human Factors Considerations in System Design

    Human factors considerations in systems design were examined. Human factors in automated command and control, in the efficiency of the human-computer interface, and in system effectiveness are outlined. The following topics are discussed: human factors aspects of control room design; design of interactive systems; human-computer dialogue, interaction tasks and techniques; guidelines on ergonomic aspects of control rooms and highly automated environments; system engineering for control by humans; conceptual models of information processing; and information display and interaction in real-time environments.

    The role of speech technology in biometrics, forensics and man-machine interface

    Optimism is growing day by day that in the near future our society will witness man-machine interfaces (MMI) based on voice technology. Computer manufacturers are building voice-recognition sub-systems into their new product lines. Although speech-technology-based MMI techniques have been widely used before, they require gathering and applying deep knowledge of spoken language and its performance during electronic machine-based interaction. Biometric recognition refers to a system that is able to identify individuals based on their behavioural and biological characteristics. Following the success of fingerprints in forensic science and law-enforcement applications, and with growing concerns relating to border control, banking access fraud, machine access control and IT security, there has been great interest in the use of fingerprints and other biological traits for automatic recognition. It is not surprising to see that biometric systems are playing an important role in all areas of our society. Biometric applications include smartphone security, mobile payment, international border control, national citizen registers and reserve facilities. The use of MMI through speech technology, which includes automated speech/speaker recognition and natural language processing, has a significant impact on all existing businesses based on personal computer applications. With the help of powerful and affordable microprocessors and artificial-intelligence algorithms, human beings can talk to the machine to drive and control all computer-based applications. Today's applications show a small preview of a rich future for MMI based on voice technology, which will ultimately replace the keyboard and mouse with the microphone for easy access and make the machine more intelligent.