222 research outputs found

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    This paper presents a review of research on eye gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome of this review is the recognition of a need to develop standardized methodologies for the performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed. Comment: 25 pages, 13 figures, accepted for publication in IEEE Access in July 201
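
As an example of the kind of metric such a standardized evaluation methodology would need to specify, the sketch below computes angular gaze accuracy in degrees of visual angle for a desktop setup. The function name and the fixed-viewing-distance simplification are assumptions made here for illustration, not part of the paper's proposed framework.

```python
import numpy as np

def angular_error_deg(est_xy, true_xy, viewing_distance_mm):
    """Angular gaze error (degrees of visual angle) between estimated and
    ground-truth on-screen gaze points, both given in millimetres.

    Assumes the eye sits roughly centred in front of the screen at
    `viewing_distance_mm`, a simplification often used when reporting
    desktop gaze-tracker accuracy.
    """
    est_xy = np.asarray(est_xy, dtype=float)
    true_xy = np.asarray(true_xy, dtype=float)
    # On-screen Euclidean error per sample (N x 2 arrays).
    err_mm = np.linalg.norm(est_xy - true_xy, axis=-1)
    # Angle subtended at the eye by that error.
    return np.degrees(np.arctan2(err_mm, viewing_distance_mm))

# Example: mean accuracy over a simulated 9-point grid viewed from 600 mm.
true = np.full((9, 2), 100.0)                          # targets at (100 mm, 100 mm)
est = true + np.random.normal(scale=5.0, size=(9, 2))  # simulated tracker output
print(f"mean angular error: {angular_error_deg(est, true, 600.0).mean():.2f} deg")
```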

    Panoramic Augmented Reality for Persistence of Information in Counterinsurgency Environments (PARPICE)

    Modern Counter-Insurgency (COIN) and Irregular Warfare (IW) are increasingly complex. Contributing to this complexity is the need to develop and maintain a mental map of relevant environmental and historical factors and their interactions, generated from disparate sources of information that must be organized, processed, and integrated. Compounding this challenge is the fact that mental pictures cannot easily be passed from one soldier to the next. This is a problem when the tactical situation dictates frequent changes in unit Areas of Operations (AOs), and particularly when units rotate on a regular basis. When units hand over an AO, the incoming unit must quickly rebuild a mental picture and narrative of its operating environment, and historical organizational knowledge is lost that could otherwise increase combat effectiveness and reduce casualties. This thesis discusses a prototype architecture for a system that enables a vehicle crew commander to spatially input, organize, and view fused tactical information by placing 3D interactive symbols directly into the live on-site scene from the vehicle's perspective. A panoramic camera, dashboard monitor, and head tracker give the commander a complete view of the vehicle's surroundings for improved situational awareness, and a 360-degree LiDAR scanner supplies depth information for accurate geo-location of annotations. The system is intended to generate greater situational understanding of the complex environment present in COIN operations, allowing greater performance and survivability of the vehicle crew. If fielded, such a system could also serve as the basis for numerous other capabilities for the combat vehicle crew. http://archive.org/details/panoramicaugment109455057 JIEDDO; HQDA G-8 CAA. US Army (USA) author. Approved for public release; distribution is unlimited.
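
To illustrate the role the LiDAR depth plays in geo-locating an annotation, here is a hedged sketch of turning a direction selected in the panoramic view plus a measured range into a world-frame point. The frames, angle conventions, and function names are assumptions for illustration, not the thesis' actual architecture.

```python
import numpy as np

def annotate_from_view(azimuth_deg, elevation_deg, lidar_range_m,
                       vehicle_pos_enu, vehicle_yaw_deg):
    """Convert a direction picked in the panoramic view plus a LiDAR range
    into a world-frame (ENU) annotation point.

    Assumptions (not from the thesis): azimuth is measured clockwise from
    the vehicle's forward axis, elevation upward from the horizontal, the
    vehicle pose is an ENU position plus yaw, and the panoramic camera and
    LiDAR share the vehicle origin (no lever-arm calibration).
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    # Unit ray in the vehicle frame (x forward, y left, z up).
    ray_vehicle = np.array([np.cos(el) * np.cos(az),
                            -np.cos(el) * np.sin(az),
                            np.sin(el)])
    # Rotate into the world frame using yaw only (roll/pitch omitted for brevity).
    yaw = np.radians(vehicle_yaw_deg)
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
    return np.asarray(vehicle_pos_enu) + lidar_range_m * (R @ ray_vehicle)

# A symbol placed 35 m away, 20 degrees right of the vehicle's nose.
print(annotate_from_view(20.0, 0.0, 35.0, [512.0, 1038.0, 4.0], 90.0))
```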

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on discussion of phenomena and determination of design principles.

    Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS 1994), volume 1

    The AIAA/NASA Conference on Intelligent Robotics in Field, Factory, Service, and Space (CIRFFSS '94) was originally proposed because of the strong belief that America's problems of global economic competitiveness and job creation and preservation can partly be solved by the use of intelligent robotics, which are also required for human space exploration missions. Individual sessions addressed nuclear industry, agile manufacturing, security/building monitoring, on-orbit applications, vision and sensing technologies, situated control and low-level control, robotic systems architecture, environmental restoration and waste management, robotic remanufacturing, and healthcare applications.

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features allowing them to be used in everyday life and clinical applications, where gold-standard material is both too expensive and time-consuming to be used. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Augmented Reality Interfaces for Procedural Tasks

    Procedural tasks involve people performing established sequences of activities while interacting with objects in the physical environment to accomplish particular goals. These tasks span almost all aspects of human life and vary greatly in their complexity. For some simple tasks, little cognitive assistance is required beyond an initial learning session in which a person follows one-time compact directions, or even intuition, to master a sequence of activities. In the case of complex tasks, procedural assistance may be continually required, even for the most experienced users. Approaches for rendering this assistance employ a wide range of written, audible, and computer-based technologies. This dissertation explores an approach in which procedural task assistance is rendered using augmented reality. Augmented reality integrates virtual content with a user's natural view of the environment, combining real and virtual objects interactively, and aligning them with each other. Our thesis is that an augmented reality interface can allow individuals to perform procedural tasks more quickly while exerting less effort and making fewer errors than other forms of assistance. This thesis is supported by several significant contributions yielded during the exploration of the following research themes:
What aspects of AR are applicable and beneficial to the procedural task problem? In answering this question, we developed two prototype AR interfaces that improve procedural task accomplishment. The first prototype was designed to assist mechanics carrying out maintenance procedures under field conditions. An evaluation involving professional mechanics showed our prototype reduced the time required to locate procedural tasks and resulted in fewer head movements while transitioning between tasks. Following up on this work, we constructed another prototype that focuses on providing assistance in the underexplored psychomotor phases of procedural tasks. This prototype presents dynamic and prescriptive forms of instruction and was evaluated using a demanding and realistic alignment task. This evaluation revealed that the AR prototype allowed participants to complete the alignment more quickly and accurately than when using an enhanced version of currently employed documentation systems.
How does the user interact with an AR application assisting with procedural tasks? The application of AR to the procedural task problem poses unique user interaction challenges. To meet these challenges, we present and evaluate a novel class of user interfaces that leverage naturally occurring and otherwise unused affordances in the native environment to provide a tangible user interface for augmented reality applications. This class of techniques, which we call Opportunistic Controls, combines hand gestures, overlaid virtual widgets, and passive haptics to form an interface that was proven effective and intuitive during quantitative evaluation. Our evaluation of these techniques includes a qualitative exploration of various preferences and heuristics for Opportunistic Control-based designs.
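
As background for how virtual content is "aligned" with real objects in such interfaces, the following is a minimal, generic sketch of projecting an object-anchored annotation into the camera image. It is not the dissertation's tracking or rendering pipeline; all names, frames, and numbers are assumptions.

```python
import numpy as np

def project_annotation(point_obj, R_cam_obj, t_cam_obj, K):
    """Project a 3D anchor point, defined in a tracked object's frame, into
    the camera image so a virtual label can be drawn aligned with the real
    object.

    `R_cam_obj` and `t_cam_obj` describe the tracked object's pose in the
    camera frame (e.g. from a fiducial or model-based tracker); `K` is the
    3x3 camera intrinsic matrix.
    """
    p_cam = R_cam_obj @ np.asarray(point_obj, dtype=float) + t_cam_obj
    if p_cam[2] <= 0:
        return None  # behind the camera, nothing to draw
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # pixel coordinates (u, v)

# Label anchored 5 cm above the object's origin, identity pose 0.6 m ahead.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
uv = project_annotation([0.0, 0.05, 0.0], np.eye(3), np.array([0.0, 0.0, 0.6]), K)
print(uv)
```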

    Human factors issues in telerobotic decommissioning of legacy nuclear facilities

    This thesis investigates the problems of enabling human workers to control remote robots, to achieve decommissioning of contaminated nuclear facilities, which are hazardous for human workers to enter. The mainstream robotics literature predominantly reports novel mechanisms and novel control algorithms. In contrast, this thesis proposes experimental methodologies for objectively evaluating the performance of both a robot and its remote human operator, when challenged with carrying out industrially relevant remote manipulation tasks. Initial experiments use a variety of metrics to evaluate the performance of human test-subjects. Results show that: conventional telemanipulation is extremely slow and difficult; metrics for usability of such technology can be conflicting and hard to interpret; aptitude for telemanipulation varies significantly between individuals; however, such aptitude may be rendered predictable by using simple spatial awareness tests. Additional experiments suggest that autonomous robotics methods (e.g. vision-guided grasping) can significantly assist the operator. A novel approach to telemanipulation is proposed, in which an "orbital camera" enables the human operator to select arbitrary views of the scene, with the robot's motions transformed into the orbital view coordinate frame. This approach is useful for overcoming the severe depth perception problems of conventional fixed camera views. Finally, a novel computer vision algorithm is proposed for target tracking. Such an algorithm could be used to enable an unmanned aerial vehicle (UAV) to fixate on part of the workspace, e.g. a manipulated object, to provide the proposed orbital camera view.
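
The thesis proposes transforming the operator's motion commands into the orbital view's coordinate frame. As an illustration only, here is a minimal sketch of that frame transform; the control pipeline, naming, and angle conventions are assumptions rather than the thesis' actual implementation.

```python
import numpy as np

def view_to_base_velocity(v_view, R_base_view):
    """Map an operator velocity command given in the orbital camera's view
    frame into the robot's base frame.

    `R_base_view` expresses the view-frame axes in the base frame, so a
    "push right on the screen" command moves the end-effector to the right
    as seen from the current orbital viewpoint, wherever that camera is.
    """
    return np.asarray(R_base_view) @ np.asarray(v_view, dtype=float)

def orbital_view_rotation(azimuth_deg, elevation_deg):
    """Rotation of a virtual camera orbiting the workspace origin, built
    from two operator-chosen angles (assumed convention)."""
    az, el = np.radians([azimuth_deg, elevation_deg])
    Rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0,         0.0,        1.0]])
    Ry = np.array([[np.cos(el), 0.0, np.sin(el)],
                   [0.0,        1.0, 0.0],
                   [-np.sin(el), 0.0, np.cos(el)]])
    return Rz @ Ry

# "Move right" on the screen while viewing the scene from 90 degrees around.
R = orbital_view_rotation(90.0, 0.0)
print(view_to_base_velocity([0.05, 0.0, 0.0], R))
```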

    Personal imaging

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts & Sciences, 1997. Includes bibliographical references (p. 217-223). In this thesis, I propose a new synergy between humans and computers, called "Humanistic Intelligence" (HI), and provide a precise definition of this new form of human-computer interaction. I then present a means and apparatus for reducing this principle to practice. The bulk of this thesis concentrates on a specific embodiment of this invention, called Personal Imaging, most notably a system which I show attains new levels of creativity in photography, defines a new genre of documentary video, and goes beyond digital photography/video to define a new renaissance in imaging, based on simple principles of projective geometry combined with linearity and superposition properties of light. I first present a mathematical theory of imaging which allows the apparatus to measure, to within a single unknown constant, the quantity of light arriving from each direction at a fixed point in space, using a collection of images taken from a sensor array having a possibly unknown nonlinearity. Within the context of personal imaging, this theory is a contribution in and of itself (in the sense that it was an unsolved problem previously), but when also combined with the proposed apparatus, it allows one to construct environment maps by simply looking around. I then present a new form of connected humanistic intelligence in which individuals can communicate, across boundaries of time and space, using shared environment maps, and the resulting computer-mediated reality that arises out of long-term adaptation in a personal imaging environment. Finally, I present a new philosophical framework for cultural criticism which arises out of a new concept called 'humanistic property'. This new philosophical framework has two axes, a 'reflectionist' axis and a 'diffusionist' axis. In particular, I apply the new framework to personal imaging, thus completing a body of work that lies at the intersection of art, science, and technology. By Steve Mann. Ph.D.
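
The imaging theory summarized above recovers, up to an unknown constant, the quantity of light arriving from each direction from differently exposed pictures taken with a possibly nonlinear sensor. The sketch below shows only the final combination step and assumes the inverse response is already known (the thesis itself estimates the unknown response from the images); the function names, weighting, and toy response are illustrative assumptions.

```python
import numpy as np

def combine_exposures(images, exposures, inverse_response, eps=1e-6):
    """Combine differently exposed images of the same scene into one
    estimate of the light quantity q per pixel, up to an unknown scale.

    Each image contributes q_i = f^{-1}(pixel) / exposure_i, averaged with
    weights that favour mid-range, well-exposed pixels.
    """
    q_acc = np.zeros_like(np.asarray(images[0], dtype=float))
    w_acc = np.zeros_like(q_acc)
    for img, k in zip(images, exposures):
        img = np.asarray(img, dtype=float)
        w = np.exp(-((img - 0.5) ** 2) / 0.08)   # crude "well exposed" weight
        q_acc += w * inverse_response(img) / k
        w_acc += w
    return q_acc / (w_acc + eps)

# Toy example with a gamma-like response f(q) = q**(1/2.2), pixel values in [0, 1].
inv_resp = lambda pix: np.clip(pix, 0.0, 1.0) ** 2.2
q_true = np.linspace(0.01, 0.5, 5)
imgs = [np.clip(q_true * k, 0.0, 1.0) ** (1 / 2.2) for k in (1.0, 2.0, 4.0)]
print(combine_exposures(imgs, [1.0, 2.0, 4.0], inv_resp))  # approximately q_true
```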

    Earables: Wearable Computing on the Ears

    Headphones have become widely adopted by consumers because they provide private audio channels, for example for listening to music, watching the latest films while commuting, or making hands-free phone calls. Thanks to this clear primary use case, headphones have already achieved greater adoption than other wearables such as smartglasses. In recent years, a new class of wearables has emerged, referred to as "earables". These devices are designed to be worn in or around the ears and contain various sensors that extend the functionality of headphones. The spatial proximity of earables to important anatomical structures of the human body provides an excellent platform for sensing a wide variety of properties, processes, and activities. Although some progress has already been made in earables research, their potential is not yet fully exploited. The aim of this dissertation is therefore to provide new insights into the possibilities of earables by exploring advanced sensing approaches that enable the detection of previously inaccessible phenomena. By introducing novel hardware and algorithms, this dissertation aims to push the boundaries of what is achievable with earables and ultimately establish them as a versatile sensing platform for augmenting human capabilities. To establish a sound foundation, the dissertation synthesizes the state of the art in ear-based sensing and presents a uniquely comprehensive taxonomy based on 271 relevant publications. By linking low-level sensing principles with higher-level phenomena, it then summarizes work from different areas, including (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification.
Building on existing research on physiological monitoring and health with earables, the dissertation presents advanced algorithms, statistical analyses, and empirical studies to demonstrate the feasibility of measuring respiratory rate and detecting episodes of increased coughing frequency using in-ear accelerometers and gyroscopes. These novel sensing capabilities underline the potential of earables to promote healthier lifestyles and enable proactive healthcare. Furthermore, the dissertation introduces an innovative eye-tracking approach called "earEOG", intended to facilitate activity recognition. By systematically evaluating electrode potentials measured around the ears with a modified headphone, it opens up a new way of measuring gaze direction that is less obtrusive and more comfortable than previous approaches. In addition, a regression model is introduced to predict absolute changes in gaze angle from earEOG. This development opens up new possibilities for research that integrates seamlessly into everyday life and provides deeper insights into human behavior.
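
As an illustration of the kind of signal processing behind the respiratory-rate feasibility result mentioned above, here is a minimal baseline sketch (band-pass filtering plus a dominant spectral peak). The band, filter, and parameters are assumptions, not the dissertation's algorithms.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def respiration_rate_bpm(accel_mag, fs, band=(0.1, 0.5)):
    """Estimate breaths per minute from the magnitude of an in-ear
    accelerometer signal.

    Illustrative baseline only: band-pass filter the signal, then take the
    dominant spectral peak in a typical adult breathing band.
    """
    x = np.asarray(accel_mag, dtype=float)
    x = x - x.mean()
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return 60.0 * peak_hz

# Synthetic one-minute recording at 50 Hz with a 0.25 Hz (15 bpm) component.
fs = 50
t = np.arange(0, 60, 1 / fs)
sig = 1.0 + 0.02 * np.sin(2 * np.pi * 0.25 * t) + 0.005 * np.random.randn(len(t))
print(respiration_rate_bpm(sig, fs))
```
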
Furthermore, this work shows how the unique form factor of earables can be combined with sensing to detect novel phenomena. To improve the interaction possibilities of earables, the dissertation introduces a discreet input technique called "EarRumble", which relies on voluntary control of the tensor tympani muscle in the middle ear. The dissertation provides insights into the prevalence, usability, and comfort of EarRumble, together with practical applications in two real-world scenarios. The EarRumble approach extends the ear from a purely receptive organ to one that can not only receive signals but also produce output signals. In essence, the ear is used as an additional interactive medium that enables hands-free and eyes-free communication between human and machine. EarRumble presents an interaction technique that users described as "magical and almost telepathic" and reveals considerable untapped potential in the earables domain.
Building on the preceding results from the various application areas and research findings, the dissertation culminates in an open hardware and software platform for earables called "OpenEarable". OpenEarable comprises a range of advanced sensing capabilities suitable for various ear-based research applications while remaining easy to manufacture. This lowers the barriers to entry for ear-based sensing research, and OpenEarable thus helps to realize the full potential of earables. In addition, the dissertation contributes fundamental design guidelines and reference architectures for earables. Through this research, the dissertation closes the gap between fundamental research on ear-based sensors and their practical use in real-world scenarios. In summary, the dissertation delivers new usage scenarios, algorithms, hardware prototypes, statistical analyses, empirical studies, and design guidelines to advance the field of earable computing. Moreover, it expands the traditional scope of headphones by turning these audio-focused devices into a platform that offers a wide range of advanced sensing capabilities for capturing properties, processes, and activities. This reorientation enables earables to establish themselves as a significant wearable category, and the vision of earables as a versatile sensing platform for augmenting human capabilities thus becomes increasingly real.
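
One simplified way to picture how voluntary tensor tympani activity might be sensed from inside an occluded ear canal is to look for brief bursts of low-frequency energy. The detector below is purely hypothetical and is not the dissertation's EarRumble pipeline; the band, window length, and threshold are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_rumble(in_ear_signal, fs, band=(10.0, 50.0), win_s=0.2, k=4.0):
    """Flag windows in which low-frequency energy of an in-ear signal rises
    well above its baseline (hypothetical, illustrative detector only)."""
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    x = sosfiltfilt(sos, np.asarray(in_ear_signal, dtype=float))
    win = int(win_s * fs)
    n_win = len(x) // win
    energy = np.array([np.mean(x[i * win:(i + 1) * win] ** 2) for i in range(n_win)])
    threshold = np.median(energy) * k   # baseline-relative threshold
    return energy > threshold           # one boolean flag per window

# Synthetic 4 s recording at 1 kHz with a burst of 25 Hz "rumble" at 2.0-2.5 s.
fs = 1000
t = np.arange(0, 4, 1 / fs)
x = 0.01 * np.random.randn(len(t))
burst = (t >= 2.0) & (t < 2.5)
x[burst] += 0.2 * np.sin(2 * np.pi * 25 * t[burst])
print(np.nonzero(detect_rumble(x, fs))[0])  # indices of flagged windows
```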