
    Mid-Air tangible interaction enabled by computer controlled magnetic levitation

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 83-86). This thesis presents the concept of mid-air tangible interaction and a system called ZeroN that was developed to enable it. Through this research, I extend tabletop tangible interaction modalities, which have been confined to 2D surfaces, into the 3D space above the surface. Users are invited to place and move a levitated object in mid-air, which is analogous to placing objects on 2D surfaces. For example, users can place a physical object representing the sun above other physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. To achieve these interaction scenarios, we developed ZeroN, a new tangible interface element that can be levitated and moved freely by computer in three-dimensional space. In doing so, ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. Our technological development includes a magnetic and mechanical control system that can levitate and actuate a permanent magnet in 3D space, combined with an optical tracking and display system that projects images onto the levitating object. In this thesis, I present interaction techniques and applications developed in the context of this system. Finally, I discuss initial observations and implications, and outline future development and challenges. by Jinha Lee. S.M.
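
    The abstract does not spell out the control law, but stably levitating a permanent magnet requires fast closed-loop feedback, since no arrangement of static magnetic fields can hold it in place (Earnshaw's theorem). The following is a minimal, hypothetical sketch of the kind of 1D PID height loop such a system might run; the gains, the gravity-compensating bias current, and the hardware interface are placeholders, not the controller actually used in ZeroN.

        class PIDLevitator:
            """Hold a levitated magnet at a target height with a 1D PID loop."""

            def __init__(self, kp, ki, kd, bias_current):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.bias = bias_current  # feed-forward current countering gravity
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, target_z, measured_z, dt):
                # measured_z would come from a position sensor such as the
                # optical tracker mentioned in the abstract.
                error = target_z - measured_z
                self.integral += error * dt
                derivative = (error - self.prev_error) / dt
                self.prev_error = error
                # Coil current = gravity feed-forward plus PID correction.
                return (self.bias + self.kp * error
                        + self.ki * self.integral + self.kd * derivative)

    In practice such a loop runs at hundreds of updates per second, and extending it to free motion in 3D requires coordinating multiple coils or, as the abstract suggests, a combined magnetic and mechanical actuation stage.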

    Detailing patient specific modelling to aid clinical decision-making

    The anatomy of the craniofacial skeleton has been described through the aid of dissection, identifying hard and soft tissue structures. Although macro- and microscopic investigation of internal facial tissues has provided invaluable information on the constitution of those tissues, it is important to inspect and model facial tissues in the living individual. Detailing the form and function of facial tissues will be invaluable in clinical diagnosis and in planning corrective surgical interventions, such as the management of facial palsies and craniofacial disharmony/anomalies. Recent advances in lower-cost, non-invasive imaging and in computing power (surface scanning, Cone Beam Computerized Tomography (CBCT), and Magnetic Resonance Imaging (MRI)) have made it possible to capture and process surface and internal structures at high resolution. Three-dimensional facial surface capture has enabled characterization of facial features, all of which influence subtleties in facial movement and surgical planning. This chapter describes the factors that influence facial morphology in terms of gender and age differences; facial movement, both at the surface and in underlying structures; modeling based on average structures; orientation of facial muscle fibers; biomechanics of movement as proof of principle; and surgical intervention.

    Design and recognition of microgestures for always-available input

    Gestural user interfaces for computing devices most commonly require the user to have at least one hand free to interact with the device, for example, moving a mouse, touching a screen, or performing mid-air gestures. Consequently, users find it difficult to operate computing devices while holding or manipulating everyday objects. This prevents users from interacting with the digital world during a significant portion of their everyday activities, such as using tools in the kitchen or workshop, carrying items, or working out with sports equipment. This thesis pushes the boundaries towards the bigger goal of enabling always-available input. Microgestures have been recognized for their potential to facilitate direct and subtle interactions. However, it remains an open question how to interact with computing devices through gestures when both of the user's hands are occupied holding everyday objects. We take a holistic approach and focus on three core contributions: i) To understand end-users' preferences, we present an empirical analysis of users' choice of microgestures when holding objects of diverse geometries. Instead of designing a gesture set for a specific object or geometry, and in order to identify gestures that generalize, this thesis leverages the taxonomy of grasp types established in prior research. ii) We tackle the critical problem of avoiding false activation by introducing a novel gestural input concept built on a single-finger movement that stands out from the everyday finger motions involved in holding and manipulating objects. Through a data-driven approach, we also systematically validate the concept's robustness against different everyday actions. iii) While full sensor coverage of the user's hand would allow detailed sensing of hand-object interaction, minimal instrumentation is desirable for real-world use. This thesis addresses the problem of identifying sparse sensor layouts. We present the first rapid computational method, along with a GUI-based design tool that enables iterative design based on the designer's high-level requirements. Furthermore, we demonstrate that minimal form-factor devices, like smart rings, can be used to effectively detect microgestures in hands-free and busy scenarios. Overall, the presented findings will serve as both conceptual and technical foundations for enabling interaction with computing devices wherever and whenever users need them.
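
    The thesis's rapid computational method is not detailed in the abstract; one plausible framing of the sparse-layout problem it targets is subset selection over candidate sensor channels. The sketch below is a hypothetical greedy forward selection using scikit-learn, not the method from the thesis: it repeatedly adds whichever channel most improves cross-validated gesture classification until a channel budget is reached.

        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def select_sensor_channels(X, y, budget):
            """Greedily pick sensor channels for gesture classification.
            X: (n_samples, n_channels) NumPy feature matrix; y: gesture labels."""
            chosen = []
            remaining = list(range(X.shape[1]))
            while remaining and len(chosen) < budget:
                def gain(ch):
                    clf = LogisticRegression(max_iter=1000)
                    return cross_val_score(clf, X[:, chosen + [ch]], y, cv=3).mean()
                best = max(remaining, key=gain)  # channel with highest accuracy gain
                chosen.append(best)
                remaining.remove(best)
            return chosen

    Greedy selection is only one heuristic; a GUI-based tool like the one described would additionally let the designer constrain which hand locations are acceptable before the search runs.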

    Cortical mechanisms of seeing and hearing speech

    In face-to-face communication, speech is perceived through both the eyes and the ears: the talker's articulatory gestures are seen while the speech sounds are heard. Whilst acoustic speech can often be understood without visual information, viewing articulatory gestures aids hearing substantially in noisy conditions. On the other hand, speech can be understood to some extent by solely viewing articulatory gestures (i.e., by speechreading). In this thesis, electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) were utilized to disclose cortical mechanisms of seeing and hearing speech. One of the major challenges of modern cognitive neuroscience is to find out how the brain integrates inputs from different senses. In this thesis, integration of seen and heard speech was investigated using EEG and MEG. Multisensory interactions were found in the sensory-specific cortices at early latencies and in multisensory regions at late latencies. Viewing another person's actions activates regions belonging to the human mirror neuron system (MNS), which are also activated when subjects themselves perform actions. Possibly, the human MNS enables simulation of another person's actions, which might also be important for speech recognition. In this thesis, it was demonstrated with MEG that seeing speech modulates activity in the mouth region of the primary somatosensory cortex (SI), suggesting that the SI cortex is also involved in simulating another person's articulatory gestures during speechreading. The question of whether there are speech-specific mechanisms in the human brain has been under scientific debate for decades. In this thesis, evidence for a speech-specific neural substrate in the left posterior superior temporal sulcus (STS) was obtained using fMRI. Activity in this region was greater when subjects heard acoustic sine-wave speech stimuli as speech than when they heard the same stimuli as non-speech.

    Airborne Infrared Target Tracking with the Nintendo Wii Remote Sensor

    Intelligence, surveillance, and reconnaissance unmanned aircraft systems (UAS) are the most common variety of UAS in use today and provide invaluable capabilities to both the military and civil services. Keeping the sensors centered on a point of interest for an extended period of time is a demanding task requiring the full attention and cooperation of the UAS pilot and sensor operator. There is great interest in developing technologies which allow an operator to designate a target and then let the aircraft automatically maneuver and track it without further operator intervention. Presently, the barriers to entry for developing these technologies are high: expertise in aircraft dynamics and control as well as in real-time motion video analysis is required, and the cost of the systems needed to flight-test these technologies is prohibitive. However, if the research intent is purely to develop a vehicle maneuvering controller, then it is possible to obviate the video analysis problem entirely. This research presents a solution to the target tracking problem which reliably provides automatic target detection and tracking with low expense and computational overhead by making use of the infrared sensor from a Nintendo Wii Remote controller.
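
    The enabling detail, not elaborated above, is that the Wii Remote's IR camera performs blob detection in hardware, reporting up to four bright-spot centroids on a 1024x768 grid. A hedged sketch of how such coordinates could be turned into steering errors for a tracking controller follows; how the blob list is actually obtained over Bluetooth, and how the errors map to control surfaces, is specific to the thesis's setup and not shown here.

        IR_WIDTH, IR_HEIGHT = 1024, 768  # reporting resolution of the Wii Remote IR camera

        def tracking_error(blobs):
            """Convert the first reported IR blob (x, y) into normalized offsets
            in [-1, 1] from the image center; None when no beacon is in view."""
            if not blobs:
                return None
            x, y = blobs[0]
            ex = (x - IR_WIDTH / 2) / (IR_WIDTH / 2)    # positive: target right of center
            ey = (y - IR_HEIGHT / 2) / (IR_HEIGHT / 2)  # sign depends on sensor mounting
            return ex, ey

    Because detection happens on the sensor itself, the flight computer handles only a few coordinate pairs per frame instead of full-motion video, which is what keeps the cost and computational overhead low.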

    Music in Virtual Space: Theories and Techniques for Sound Spatialization and Virtual Reality-Based Stage Performance

    This research explores virtual reality as a medium for live concert performance. I have realized compositions in which the individual performing on stage uses a VR head-mounted display, complemented by other performance controllers, to explore a composed virtual space. Movements and objects within the space are used to influence and control sound spatialization and diffusion, musical form, and sonic content. Audience members observe this in real time, watching the performer's journey through the virtual space on a screen while listening to spatialized audio on loudspeakers variable in number and position. The major artistic challenge I will explore through this activity is the relationship between virtual space and musical form. I will also explore and document the technical challenges of this activity, resulting in a shareable software tool called the Multi-source Ambisonic Spatialization Interface (MASI). MASI creates a bridge between VR technologies and associated software, ambisonic spatialization techniques, sound synthesis, and audio playback and effects, and establishes a unique workflow for working with sound in virtual space.
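
    MASI's internals are not given in the abstract, but the ambisonic techniques it builds on have a compact core. As an illustrative sketch rather than MASI's actual code, first-order B-format encoding weights a mono sample into four channels according to the source's direction relative to the listener:

        import math

        def encode_foa(sample, azimuth, elevation):
            """Encode one mono sample into first-order ambisonic B-format.
            Angles in radians; W carries the conventional 1/sqrt(2) weight."""
            w = sample / math.sqrt(2)                             # omnidirectional component
            x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
            y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
            z = sample * math.sin(elevation)                      # up-down
            return w, x, y, z

    A separate decoding stage then mixes W/X/Y/Z to whatever loudspeaker array is present, which is what allows a setup like the one described to keep loudspeakers variable in number and position.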

    An empirical study of embodied music listening, and its applications in mediation technology
