114 research outputs found

    Stabilising touch interactions in cockpits, aerospace, and vibrating environments

    © Springer International Publishing AG, part of Springer Nature 2018. Incorporating touch screen interaction into cockpit flight systems is increasingly gaining traction given its several potential advantages for design and for usability to pilots. However, perturbations to user input are prevalent in such environments due to vibrations, turbulence and high accelerations. This poses particular challenges for interacting with displays in the cockpit, for example, accidental activation during turbulence, or high levels of distraction from the primary task of aircraft control while accomplishing selection tasks. Predictive displays, on the other hand, have emerged as a solution that minimises the effort as well as the cognitive, visual and physical workload associated with using in-vehicle displays under perturbations induced by road and driving conditions. This technology employs 3D gesture tracking, and potentially eye-gaze as well as other sensory data, to substantially facilitate the acquisition (pointing and selection) of an interface component by predicting the item the user intends to select, early in the movement towards the screen. A key aspect is the use of principled Bayesian modelling to incorporate and treat the present perturbation; it is thus a software-based solution that has shown promising results when applied to automotive applications. This paper explores the potential of applying this technology to aerospace and vibrating environments in general, and presents design recommendations for such an approach to enhance interaction accuracy as well as safety
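    The Bayesian intent-prediction idea the abstract describes can be sketched minimally as a posterior over discrete on-screen targets, updated from noisy pointing observations. This is an illustration only: the target positions, Gaussian noise model, and observation values below are invented for the example, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: maintain P(target | observations) over a small set of
# on-screen targets, given finger positions perturbed by vibration. The
# coordinates and noise level are assumptions made up for this example.

targets = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])  # normalised screen coords
sigma = 0.15  # assumed std of vibration-induced pointing noise

def update_posterior(prior, observation):
    """One Bayesian update: P(target | obs) ∝ P(obs | target) * P(target)."""
    d2 = np.sum((targets - observation) ** 2, axis=1)
    likelihood = np.exp(-d2 / (2 * sigma ** 2))  # isotropic Gaussian likelihood
    posterior = prior * likelihood
    return posterior / posterior.sum()

posterior = np.ones(len(targets)) / len(targets)  # uniform prior over targets
for obs in [[0.45, 0.55], [0.48, 0.52], [0.50, 0.49]]:  # noisy pointing samples
    posterior = update_posterior(posterior, np.array(obs))

intended = posterior.argmax()  # most probable intended target, early in the movement
```

    Each new sample sharpens the posterior, which is what allows selection to be predicted before the finger reaches the display.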

    Understanding expressive action

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Also available online at the MIT Theses Online homepage. Includes bibliographical references (p. 117-120). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. We strain our eyes, cramp our necks, and destroy our hands trying to interact with computers on their terms. At the extreme, we strap on devices and weigh ourselves down with cables trying to re-create a sense of place inside the machine, while cutting ourselves off from the world and people around us. The alternative is to make the real environment responsive to our actions. It is not enough for environments to respond simply to the presence of people or objects: they must also be aware of the subtleties of changing situations. If all the spaces we inhabit are to be responsive, they must not require encumbering devices to be worn, and they must be adaptive to changes in the environment and changes of context. This dissertation examines a body of sophisticated perceptual mechanisms developed in response to these needs, as well as a selection of human-computer interface sketches designed to push the technology forward and explore the possibilities of this novel interface idiom. Specifically, the formulation of a fully recursive framework for computer vision called DYNA, which improves the performance of human motion tracking, is examined in depth. The improvement in tracking performance is accomplished by combining a three-dimensional, physics-based model of the human body with modifications to the pixel classification algorithms that enable them to take advantage of this high-level knowledge. The result is a novel vision framework that has no completely bottom-up processes, and is therefore significantly faster and more stable than other approaches. by Christopher R. Wren. Ph.D.

    Towards perceptual intelligence : statistical modeling of human individual and interactive behaviors

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2000. Includes bibliographical references (p. 279-297). This thesis presents a computational framework for the automatic recognition and prediction of different kinds of human behaviors from video cameras and other sensors, via perceptually intelligent systems that automatically sense and correctly classify human behaviors by means of Machine Perception and Machine Learning techniques. In the thesis I develop the statistical machine learning algorithms (dynamic graphical models) necessary for detecting and recognizing individual and interactive behaviors. In the case of interactions, two Hidden Markov Models (HMMs) are coupled in a novel architecture called Coupled Hidden Markov Models (CHMMs) that explicitly captures the interactions between them. The algorithms for learning the parameters from data, as well as for doing inference with those models, are developed and described. Four systems that experimentally evaluate the proposed paradigm are presented: (1) LAFTER, an automatic face detection and tracking system with facial expression recognition; (2) a Tai-Chi gesture recognition system; (3) a pedestrian surveillance system that recognizes typical human-to-human interactions; and (4) a SmartCar for driver maneuver recognition. These systems capture human behaviors of different nature and increasing complexity: first, isolated, single-user facial expressions; then two-hand gestures and human-to-human interactions; and finally complex behaviors where human performance is mediated by a machine, more specifically a car. The metric used for quantifying the quality of the behavior models is their accuracy: how well they are able to recognize the behaviors on testing data. Statistical machine learning usually suffers from a lack of data for estimating all the parameters in the models. To alleviate this problem, synthetically generated data are used to bootstrap the models, creating 'prior models' that are further trained using much less real data than would otherwise be required; the Bayesian nature of the approach lets us do so. The predictive power of these models lets us categorize human actions very soon after the beginning of the action. Because of the generic nature of the typical behaviors of each of the implemented systems, there is reason to believe that this approach to modeling human behavior would generalize to other dynamic human-machine systems. This would allow us to recognize people's intended actions automatically, and thus build control systems that dynamically adapt to suit the human's purposes better. by Nuria M. Oliver. Ph.D.
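    The coupling idea behind CHMMs is that each chain's next state depends on the current states of both chains, not just its own. The toy sketch below propagates a joint state distribution forward under that assumption; the two-state chains and randomly drawn transition tensors are invented for illustration and are not the thesis's models.

```python
import numpy as np

# Toy illustration of CHMM-style coupling: two 2-state chains whose
# transitions each condition on BOTH chains' current states.
n = 2  # states per chain
rng = np.random.default_rng(0)

# T1[a, b, a'] = P(chain-1 next state a' | chain-1 in a, chain-2 in b);
# T2[b, a, b'] is the analogue for chain 2. Each last axis sums to 1.
T1 = rng.dirichlet(np.ones(n), size=(n, n))
T2 = rng.dirichlet(np.ones(n), size=(n, n))

def coupled_step(p_joint):
    """Propagate the joint distribution over (chain-1, chain-2) one step."""
    p_next = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            for a2 in range(n):
                for b2 in range(n):
                    p_next[a2, b2] += p_joint[a, b] * T1[a, b, a2] * T2[b, a, b2]
    return p_next

p = np.full((n, n), 1.0 / (n * n))  # uniform joint prior
for _ in range(5):
    p = coupled_step(p)  # probability mass is conserved at every step
```

    An ordinary HMM would use T1[a, a'] alone; conditioning on the other chain's state is what lets the architecture capture interactions such as one pedestrian reacting to another.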

    Meaning in Animal Communication: Varieties of meaning and their roles in explaining communication

    Why explain the communicative behaviours of animals by invoking the information/meaning 'transmitted' by signals? Why not explain communication in purely causal/functional terms? This thesis addresses active controversy regarding the nature and role of concepts of information, content and meaning in the scientific explanation of animal communication. I defend the methodology of explaining animal communication by invoking the 'meaning' of signals, and respond to worries raised by sceptics of this methodology in the scientific and philosophical literature. This task involves: showing which facts about communication a non-informational methodology leaves unexplained; constructing a well-defined theory of content (or 'natural meaning') for most animal signals; and getting clearer on what cognitive capacities, if any, attributing natural meaning to signals implies for senders and receivers. The thesis also weighs in on comparative debates about human-nonhuman continuity, arguing that there are, in fact, different notions of meaning applicable to human communication that have different consequences for how continuous key aspects of human communication are with other species

    Proceedings of the 7th Sound and Music Computing Conference

    Proceedings of SMC2010, the 7th Sound and Music Computing Conference, July 21-24, 2010

    Proceedings of KogWis 2012. 11th Biannual Conference of the German Cognitive Science Society

    The German cognitive science conference is an interdisciplinary event where researchers from different disciplines -- mainly from artificial intelligence, cognitive psychology, linguistics, neuroscience, philosophy of mind, and anthropology -- and application areas -- such as education, clinical psychology, and human-machine interaction -- bring together different theoretical and methodological perspectives to study the mind. The 11th Biannual Conference of the German Cognitive Science Society took place from September 30 to October 3, 2012 at Otto-Friedrich-Universität in Bamberg. The proceedings cover all contributions to this conference, that is, five invited talks, seven invited symposia and two symposia, a satellite symposium, a doctoral symposium, three tutorials, 46 abstracts of talks and 23 poster abstracts

    Why We Fear Genetic Informants: Using Genetic Genealogy to Catch Serial Killers

    Consumer genetics has exploded, driven by the second-most popular hobby in the United States: genealogy. This hobby has been co-opted by law enforcement to solve cold cases, by linking crime-scene DNA with the DNA of a suspect's relative, which is contained in a direct-to-consumer (DTC) genetic database. The relative's genetic data acts as a silent witness, or genetic informant, wordlessly guiding law enforcement to a handful of potential suspects. At least thirty murderers and rapists have been arrested in this way, a process which I describe in careful detail in this article. Legal scholars have sounded many alarms, and have called for immediate bans on this methodology, which is referred to as long-range familial searching (LRFS) or forensic genetic genealogy (FGG). The opponents' concerns are many, but generally boil down to fears that FGG will invade the privacy and autonomy of presumptively innocent individuals. These concerns, I argue, are considerably overblown. Indeed, many aspects of the methodology implicate nothing new, legally or ethically, and might even better protect privacy while exonerating the innocent. Law enforcement's use of FGG to solve cold cases is a bogeyman. The real threat to genetic privacy comes from shoddy consumer consent procedures, poor data security standards, and user agreements that permit rampant secondary uses of data. So why do so many legal scholars fear a world where law enforcement uses this methodology? I submit that our fear of so-called genetic informants stems from the sticky and long-standing traps of genetic essentialism and genetic determinism, where we incorrectly attribute intentional action to our genes and fear a world where humans are controlled by our biology. Rather than banning the use of genetic genealogy to catch serial killers and rapists, I call for improved DTC consent processes, and more transparent privacy and security measures. This will better protect genetic privacy in line with consumer expectations, while still permitting the use of LRFS to deliver justice to victims and punish those who commit society's most heinous acts

    Believability Assessment and Modelling in Video Games

    Artificial Intelligence remains one of the most sought-after subjects in computer science to this day. One of its subfields, and the focus of this thesis, is its application to video games in the form of believable agents. This means implementing agents that behave like us rather than simply attempting to win, whether that means cooperating or competing as we do. Success in building more human-like characters can enhance immersion and enjoyment in games, potentially increasing their gameplay value and, ultimately, bringing benefits to both industry and academia. However, believability is a hard concept to define. It depends on how and what one considers to be ``believable'', which is often very subjective. This means that developing believable agents remains a sought-after, albeit difficult, challenge. There are many approaches to development, ranging from finite state machines or imitation learning to emotional models, with no single solution to creating a human-like agent. The problem persists when attempting to assess these solutions as well. Assessing the believability of agents, characters and simulated actors is also a core challenge for human-like behaviour. While numerous approaches are suggested in the literature, there is no dominant solution for evaluation either. In addition, assessment rarely receives as much attention as development or modelling do; mostly, it arises as a necessity of evaluating agents, with little focus on how the assessment process itself could affect the outcome of the evaluation. This thesis takes a different approach to developing believability and its assessment, exploring assessment first. In previous years, several researchers have tried to find ways of assessing human-like behaviour in games through adaptations of the Turing Test applied to their agents.
    Given the limited diversity of the parameters explored in believability assessment, and a focus on programming the bots, this thesis starts by exploring different parameters for evaluating believability in video games. The objective of this work is to analyse the different ways believability can be assessed for humans and non-player characters (NPCs), by comparing how results and scores for both are affected when the parameters change. This thesis also explores the concept of believability and its need in video games in general. Another aspect of assessment explored in this thesis is believability's overall representation. Past research shows methodologies limited to discrete, low-granularity representations of believable behaviour. This work focuses, for the first time, on viewing believability as a time-continuous phenomenon, and explores the suitability of two different affect annotation schemes for its assessment. These techniques are also compared to previously used discrete methodologies, to understand how moment-to-moment assessment can contribute to them. In addition, this thesis studies the degree to which we can predict character believability in a continuous fashion. This is achieved by training random forest models to predict believability based on annotations of the context extracted from a game. The thesis then tackles development. For this work, different solutions are combined into one, and in a different order: time-continuous data based on people's assessment of believability is modelled and integrated into a game agent to affect its behaviour. This results in a final comparison between two agents, one using a believability-biased model and one without, showing that biasing agents' behaviour with assessment data can increase their overall believability
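    The random-forest modelling step the abstract mentions can be sketched in miniature: an ensemble of bootstrap-trained regression stumps, averaged to predict a continuous believability score from context features. Everything below is invented for illustration, including the feature meanings and the synthetic "believability" target; it is not the thesis's data or model.

```python
import numpy as np

# Toy random-forest-style regressor: depth-1 trees ("stumps"), each fit on a
# bootstrap sample, averaged at prediction time. Stand-in context features
# and a synthetic believability target are generated below.
rng = np.random.default_rng(0)
X = rng.random((300, 3))              # invented context features per game frame
y = 0.7 * X[:, 0] + 0.3 * X[:, 1]     # synthetic continuous "believability"

def fit_stump(X, y):
    """Best single (feature, threshold) split minimising squared error."""
    best = None
    for f in range(X.shape[1]):
        for t in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            err = left.var() * len(left) + right.var() * len(right)
            if best is None or err < best[0]:
                best = (err, f, t, left.mean(), right.mean())
    return best[1:]  # (feature, threshold, left mean, right mean)

def fit_forest(X, y, n_trees=50):
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))  # bootstrap resample
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict(stumps, x):
    """Average the stump outputs for one feature vector."""
    return np.mean([l if x[f] <= t else r for f, t, l, r in stumps])

forest = fit_forest(X, y)
est = predict(forest, np.array([0.9, 0.9, 0.1]))  # estimate for one context
```

    A real random forest grows deeper trees and randomises feature choice per split, but the bootstrap-and-average structure is the same, and predictions remain continuous, matching the moment-to-moment annotations described above.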

    Proceedings of the 17th Annual Conference of the European Association for Machine Translation

    Proceedings of the 17th Annual Conference of the European Association for Machine Translation (EAMT)