134 research outputs found

    METHOD TO EMULATE THE L-BAND DIGITAL AERONAUTICAL COMMUNICATION SYSTEM FOR SESAR EVALUATION AND VERIFICATION

    The VHF voice communication system currently used for air traffic control is experiencing increasing capacity problems. The “L-band Digital Aeronautical Communications System” is an upcoming technology providing an aeronautical datalink outside the VHF band. The objective of this paper is to develop a method to emulate its communication performance. We developed a formal model of the system and implemented it on the basis of the dummynet network emulation tool. This implementation was deployed in a networking appliance and measured in a test-bed network. The results indicate that the emulator provides the performance predicted by simulations and is suitable for evaluating and verifying protocols and applications envisioned to utilize this datalink.
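    The kind of link emulation described above (dummynet itself is a kernel-level tool configured via ipfw) can be illustrated with a minimal Python sketch; the function name and parameters are illustrative, not the paper's implementation:

```python
import random

def emulate_link(packets, delay_ms, jitter_ms, loss_rate, rng=None):
    """Model a dummynet-style pipe: drop each packet with probability
    `loss_rate`; surviving packets receive the base delay plus uniform
    jitter. Returns (packet, arrival_delay_ms) pairs."""
    rng = rng or random.Random()
    delivered = []
    for pkt in packets:
        if rng.random() < loss_rate:
            continue  # packet lost in emulation
        delay = delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        delivered.append((pkt, delay))
    return delivered
```

    A real dummynet pipe would additionally model bandwidth and queueing; this sketch captures only the delay/jitter/loss behaviour the abstract mentions.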

    ON THE INTERPLAY BETWEEN BRAIN-COMPUTER INTERFACES AND MACHINE LEARNING ALGORITHMS: A SYSTEMS PERSPECTIVE

    Today, computer algorithms use traditional human-computer interfaces (e.g., keyboard, mouse, gestures, etc.) to interact with and extend human capabilities across all knowledge domains, allowing them to make complex decisions underpinned by massive datasets and machine learning. Machine learning has seen remarkable success in the past decade in obtaining deep insights and recognizing unknown patterns in complex data sets, in part by emulating how the brain performs certain computations. As we increase our understanding of the human brain, brain-computer interfaces can benefit from the power of machine learning, both as an underlying model of how the brain performs computations and as a tool for processing high-dimensional brain recordings. Machine learning has thus come full circle and is being applied back to understanding the brain and the electrical traces of brain activity recorded over the scalp (EEG). At the same time, domains such as natural language processing, machine translation, and scene understanding remain beyond the reach of fully automated machine learning algorithms and require human participation to be solved. In this work, we investigate the interplay between brain-computer interfaces and machine learning through the lens of end-user usability. Specifically, we propose systems and algorithms to enable synergistic and user-friendly integration between computers (machine learning) and the human brain (brain-computer interfaces). In this context, we provide our research contributions in two interrelated aspects: (i) applying machine learning to solve challenges with EEG-based BCIs, and (ii) enabling human-assisted machine learning with EEG-based human input and implicit feedback. Ph.D.
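    A common first step in EEG-based BCI pipelines of the kind mentioned above is extracting band-power features from a signal window. The following sketch uses a naive DFT from the standard library; it illustrates the general technique only and is not the dissertation's pipeline:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` (sampled at `fs` Hz)
    in the frequency band [f_lo, f_hi], via a naive DFT."""
    n = len(signal)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            # DFT coefficient for bin k
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2 / n
            count += 1
    return power / count if count else 0.0
```

    In practice an FFT (e.g., `numpy.fft`) and windowing would be used; features such as alpha-band (8-12 Hz) power would then feed a classifier.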

    Not All Gestures Are Created Equal: Gesture and Visual Feedback in Interaction Spaces.

    As multi-touch mobile computing devices and open-air gesture sensing technology become increasingly commoditized and affordable, they are also becoming more widely adopted. It has therefore become necessary to create new interaction designs specifically for gesture-based interfaces to meet the growing needs of users. However, a deeper understanding of the interplay between gesture and visual and sonic output is needed to make meaningful advances in design. This thesis addresses this crucial step by investigating the interrelation between gesture-based input and visual representation and feedback in gesture-driven creative computing. This thesis underscores that not all gestures are created equal, and that multiple factors affect their performance. For example, a drag gesture in a visual programming scenario performs differently than in a target acquisition task. The work presented here (i) examines the role of visual representation and mapping in gesture input, (ii) quantifies user performance differences in gesture input to examine the effect of multiple factors on gesture interactions, and (iii) develops tools and platforms for exploring visual representations of gestures. A range of gesture spaces and scenarios, from continuous sound control with open-air gestures to mobile visual programming with discrete gesture-driven commands, was assessed. Findings from this thesis reveal a rich space of complex interrelations between gesture input and visual feedback and representations. The contributions of this thesis also include the development of an augmented musical keyboard with 3-D continuous gesture input and projected visualization, as well as a touch-driven visual programming environment for interactively constructing dynamic interfaces.
    These designs were evaluated in a series of user studies in which gesture-to-sound mapping was found to have a significant effect on user performance, along with other factors such as the selection of visual representation and device size. A number of counter-intuitive findings point to potentially complex interactions between factors such as device size, task, and scenario, which exposes the need for further research. For example, the size of the device was found to have contradictory effects in two different scenarios. Furthermore, this work presents a multi-touch gestural environment to support the prototyping of gesture interactions. Ph.D., Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113456/1/yangqi_1.pd
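    Target acquisition performance of the kind compared above is conventionally quantified with Fitts' law. A minimal sketch, with illustrative (not fitted) coefficients:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds. The intercept `a` and
    slope `b` (s/bit) are assumed values for illustration; real
    coefficients come from regression on user-study data."""
    return a + b * fitts_id(distance, width)
```

    Comparing fitted `a` and `b` values across conditions (e.g., device sizes, drag vs. tap) is one standard way to quantify the performance differences the abstract describes.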

    Specifying and Verifying Requirements for Transmission of Medical Data in Public Networks

    The purpose of this study is to identify and verify the necessary Quality of Service metrics in public networks for an existing remote patient monitoring system called Mobile Viewers, so that any abnormalities can be detected beforehand and the application quality can be seen from the end user's point of view. The name Mobile Viewers refers to three different client applications: Web Viewers, Pocket Viewers and Cellular Viewers. The literature part of this thesis reviews former research dedicated to network performance measurements in 3G, 2.5G and Wireless LAN networks. Based on the review, the most suitable measurement methods, tools, metrics and environments are selected for use in this study. In the first part of the thesis work, passive live measurement tests are executed within UMTS, GPRS, LAN and Wireless LAN networks in order to find out the delay, jitter and packet loss metrics for the individual Mobile Viewers. As a result, GPRS presents the highest delay, jitter and packet loss values, leading to poor application quality. The second part of the thesis focuses on identifying the quality requirements for Mobile Viewers. Initially, a network emulator tool is employed to emulate the necessary delay, jitter and packet loss metrics in order to test the application quality under different network conditions. Additional subjective user-defined tests are executed to assess the quality of each viewer client. Finally, the limit delay, packet loss and jitter values at which the application quality starts to degrade are presented. Additional future work may be carried out by observing the Mobile Viewers' performance with newer technologies, for instance HSDPA. Furthermore, the conclusions derived from the analysis of the measurements and the proposed requirements for Mobile Viewers should be validated by additional experiments with different client devices, measurement tools and longer measurement periods.
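    The three metrics measured above can be computed from a packet trace. A minimal sketch, using the RFC 3550-style smoothed interarrival jitter estimator (the thesis does not specify its exact estimator, so this is an assumption):

```python
def packet_loss_rate(seq_numbers, expected_count):
    """Fraction of packets missing from a trace of received
    sequence numbers, given how many were sent."""
    return 1.0 - len(set(seq_numbers)) / expected_count

def interarrival_jitter(transit_times):
    """RFC 3550-style smoothed jitter from per-packet transit
    times (receive timestamp minus send timestamp), with the
    standard 1/16 smoothing gain."""
    jitter = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0
    return jitter
```

    Mean transit time gives the delay metric; together the three values characterize a network condition that an emulator can then reproduce.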

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
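    The fuzzy appraisal idea can be illustrated with triangular membership functions and one toy rule. The inputs, thresholds, and the "frustration" rule below are invented for illustration; they are not the paper's actual FLAME-based model:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at the
    peak b, falling back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def frustration(deaths_per_minute, progress):
    """Toy fuzzy rule: frustration is high when the in-game death
    rate is high AND progress is low (min acts as the fuzzy AND)."""
    high_deaths = triangular(deaths_per_minute, 1.0, 3.0, 5.0)
    low_progress = triangular(progress, -0.5, 0.0, 0.5)
    return min(high_deaths, low_progress)
```

    A full model would evaluate many such rules over in-game events and aggregate them into an emotion estimate, which is what lets the approach avoid physiological sensors.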

    Beyond key velocity: Continuous sensing for expressive control on the Hammond Organ and Digital keyboards

    In this thesis we seek to explore the potential for continuous key position to be used as an expressive control in keyboard musical instruments, and how preexisting skills can be adapted to leverage this additional control. Interaction between performer and sound generation on a keyboard instrument is often restricted to a number of discrete events on the keys themselves (note onsets and offsets), while complementary continuous control is provided via additional interfaces, such as pedals, modulation wheels and knobs. The rich vocabulary of gestures that skilled performers can achieve on the keyboard is therefore often simplified to a single, discrete velocity measurement. A limited number of acoustical and electromechanical keyboard instruments do, however, offer continuous key control, so that the role of the key is not limited to delivering discrete events: its instantaneous position is, to a certain extent, an element of expressive control. Recent evolutions in sensing technologies make it possible to leverage continuous key position as an expressive element in the sound generation of digital keyboard musical instruments. We start by exploring the expression available on the keys of the Hammond organ, where nine contacts close at different points of the key throw for each key onset, and we find that the velocity and the percussiveness of the touch affect the way the contacts close and bounce, producing audible differences in the onset transient of each note. We develop an embedded hardware and software environment for low-latency sound generation controlled by continuous key position, which we use to create two digital keyboard instruments. The first of these emulates the sound of a Hammond organ and can be controlled with continuous key position, allowing arbitrary mappings between the key position and the nine virtual contacts of the digital sound generator.
    A study with 10 musicians shows that, when exploring the instrument on their own, players can appreciate the differences between settings and tend to develop a personal preference for one of them. In the second instrument, continuous key position is the fundamental means of expression: percussiveness, key position and multi-key gestures control the parameters of a physical model of a flute. In a study with 6 professional musicians playing this instrument, we gather insights on the adaptation process, the limitations of the interface and the transferability of traditional keyboard playing techniques.
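    The mapping from continuous key position to the nine virtual contacts described above can be sketched as a set of threshold crossings. The evenly spaced thresholds are an assumption for illustration; the thesis's point is precisely that this mapping can be made arbitrary:

```python
def closed_contacts(key_position, thresholds=None):
    """Return the indices of the virtual contacts closed at a given
    normalized key position (0.0 = at rest, 1.0 = fully depressed).
    By default the nine thresholds are evenly spaced at 0.1 .. 0.9;
    a real instrument would calibrate them per key or per setting."""
    if thresholds is None:
        thresholds = [(i + 1) / 10.0 for i in range(9)]
    return [i for i, th in enumerate(thresholds) if key_position >= th]
```

    Sampling `key_position` continuously and feeding the closure events (plus bounce modelling) into the sound generator reproduces the contact-timing differences that make Hammond onset transients touch-dependent.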

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing.
    It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often not aware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that determines the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for locating cable faults, to touch and grasp sensing. TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions.
    Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
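    The capture/identify/interpret workflow described in this abstract can be sketched with a nearest-centroid classifier over four capacitive readings (HandSense uses four sensors, but the template values, labels, and actions below are entirely made up for illustration):

```python
def identify_grasp(reading, templates):
    """Identify step: nearest-centroid classification of a 4-sensor
    capacitive reading against per-grasp template vectors. This is
    an illustrative stand-in, not HandSense's actual classifier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: dist(reading, templates[label]))

# Hypothetical grasp templates (capture step would supply readings):
TEMPLATES = {
    "one_handed": (0.9, 0.1, 0.8, 0.1),
    "two_handed": (0.7, 0.7, 0.7, 0.7),
    "in_pocket":  (0.2, 0.2, 0.2, 0.2),
}

# Interpret step: map the identified grasp to an action.
ACTIONS = {
    "one_handed": "portrait UI",
    "two_handed": "camera UI",
    "in_pocket":  "ignore input",
}

def interpret(reading):
    return ACTIONS[identify_grasp(reading, TEMPLATES)]
```

    As the dissertation argues, a robust interpret step would combine the identified grasp with other context information rather than mapping grasp to action one-to-one as done here.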

    Emulation of haptic feedback for manual interfaces

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1996. Includes bibliographical references (p. 329-339). By Karon E. MacLean.

    Music conducting pedagogy and technology : a document analysis on best practices

    This document analysis was designed to investigate the pedagogical practices of music conducting teachers in conjunction with technologists' research on the use of various technologies as teaching tools. I sought to discern how conducting teachers and pedagogues are applying recent technological advancements in their teaching strategies. I also sought to understand what paths research is taking regarding the use of software, hardware, and computer systems applied to the teaching of music conducting technique. This dissertation was guided by four main research questions: (1) How has technology been used to aid in the teaching of conducting? (2) What is the role of technology in the context of conducting pedagogy? (3) Given that conducting is a performative act, how can it be developed through technological means? (4) What technological possibilities exist in the teaching of music conducting technique? Data were collected through music conducting syllabi, conducting textbooks, and research articles. Documents were selected through purposive sampling procedures. Analysis of the documents through the constant comparative approach identified emerging themes and differences across the three types of documents. Based on a synthesis of this information, I discuss implications for conducting pedagogy and make suggestions for conducting educators. Includes bibliographical references.
