212 research outputs found

    Ambient Intelligence for Next-Generation AR

    Next-generation augmented reality (AR) promises a high degree of context-awareness: a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only is these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors are frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experience are affected by properties of the real environment, motivates the use of ambient IoT devices, i.e. wireless sensors and actuators placed in the surrounding environment, for the measurement and optimization of environment properties. In this book chapter we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR. Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.

    Impact of Ear Occlusion on In-Ear Sounds Generated by Intra-oral Behaviors

    We conducted a case study with one volunteer and a recording setup to detect sounds induced by the actions jaw clenching, tooth grinding, reading, eating, and drinking. The setup consisted of two in-ear microphones: the left ear was semi-occluded with a commercially available earpiece, and the right ear was fully occluded with a mouldable silicone earpiece. Investigations in the time and frequency domains demonstrated that for behaviors such as eating, tooth grinding, and reading, sounds could be recorded with both sensors. For jaw clenching, however, occluding the ear with the mouldable piece was necessary to enable detection. This can be attributed to the mouldable earpiece sealing the ear canal and isolating it from the environment, resulting in a detectable change in pressure. In conclusion, our work suggests that detecting behaviors such as eating, grinding, and reading is possible with a semi-occluded ear, whereas behaviors such as clenching require complete occlusion of the ear to be easily detectable. The latter approach may nevertheless limit real-world applicability because it impairs hearing.

    Combining omnidirectional vision with polarization vision for robot navigation

    Polarization is the phenomenon that describes how the oscillation orientations of light waves are restricted in direction. Polarized light has multiple uses in the animal kingdom, ranging from foraging, defense and communication to orientation and navigation. Chapter (1) briefly covers some important aspects of polarization and explains our research problem. We aim to use a polarimetric-catadioptric sensor, since many applications in computer vision and robotics can benefit from such a combination, especially robot orientation (attitude estimation) and navigation. Chapter (2) covers the state of the art of vision-based attitude estimation. As unpolarized sunlight enters the Earth's atmosphere, it is Rayleigh-scattered by air and becomes partially linearly polarized. This skylight polarization provides a significant clue for understanding the environment: its state conveys the information needed to obtain the sun orientation, and robot navigation, sensor planning, and many other applications may benefit from this cue. Chapter (3) covers the state of the art in capturing skylight polarization patterns using omnidirectional sensors (e.g. fisheye and catadioptric sensors), explains the characteristics of skylight polarization, and gives a new theoretical derivation of the skylight angle-of-polarization pattern. Our aim is to obtain an omnidirectional 360° view combined with polarization characteristics. Hence, this work is based on catadioptric sensors, which are composed of reflective surfaces and lenses. Usually the reflective surface is metallic, so the incident skylight polarization state, which is mostly partially linear, becomes elliptical after reflection. Given the measured reflected polarization state, we want to recover the incident polarization state.
Chapter (4) proposes a new method to measure the light polarization parameters using a catadioptric sensor; we show that the incident Stokes vector can be recovered from three of the four components of the reflected Stokes vector. Once the incident polarization patterns are available, the solar zenith and azimuth angles can be estimated directly from them. Chapter (5) discusses polarization-based robot orientation and navigation and proposes new algorithms to estimate these solar angles; to the best of our knowledge, this work is the first to estimate the sun zenith angle from the incident polarization patterns. We also propose to estimate a vehicle's orientation from these patterns. Finally, the work is concluded and possible future research directions are discussed in chapter (6). More examples of skylight polarization patterns, their calibration, and the proposed applications are given in appendix (B). Our work may pave the way from conventional polarization vision to omnidirectional polarization vision. This includes bio-inspired robot orientation and navigation applications, as well as possible outdoor localization: given the skylight polarization patterns and the associated solar angles at a known date and time, the vehicle's geographical location can be inferred.
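The degree and angle of linear polarization used throughout the abstract follow directly from the Stokes parameters. The following is a minimal sketch of that standard relation, assuming the first three Stokes components (S0, S1, S2) are available; it is not the thesis's catadioptric measurement method itself.

```python
import numpy as np

def polarization_from_stokes(s0, s1, s2):
    """Degree and angle of linear polarization from the first
    three Stokes components (S0, S1, S2)."""
    dolp = np.hypot(s1, s2) / s0      # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)    # angle of polarization (radians)
    return dolp, aop

# Example: fully horizontally polarized light, S = (1, 1, 0).
dolp, aop = polarization_from_stokes(1.0, 1.0, 0.0)
# dolp = 1.0, aop = 0.0
```

From a sky map of these two quantities, the symmetry of the Rayleigh pattern about the solar meridian is what allows the solar azimuth (and, per the thesis, the zenith angle) to be estimated.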

    Smart HMI for an autonomous vehicle

    This work presents the architecture designed to implement an HMI (Human Machine Interface) in an autonomous vehicle developed at the University of Alcalá. The system uses the ROS (Robot Operating System) ecosystem for communication between the different sub-modules developed on the vehicle. In addition, a tool to capture driver gaze-focalization data using a camera is presented, based on OpenFace, an open-source tool for face analysis. Two methods are proposed: one linear and one based on the NARMAX algorithm. Different tests were carried out to demonstrate their accuracy, and both methods were evaluated on the challenging DADA2000 dataset, which is composed of traffic accidents.
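The linear variant of a gaze mapping like the one described can be sketched as an ordinary least-squares fit from per-frame face features to 2-D gaze targets. The data below are synthetic and all names are illustrative assumptions; this is not the thesis's actual pipeline or feature set.

```python
import numpy as np

# Hypothetical data: rows are per-frame face features (e.g. head pose
# and eye-landmark coordinates from a face-analysis tool), targets are
# 2-D gaze focalization points.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 6))
true_W = rng.normal(size=(6, 2))
gaze = features @ true_W + 0.01 * rng.normal(size=(200, 2))

# Fit the linear map W by least squares: gaze ≈ features @ W
W, *_ = np.linalg.lstsq(features, gaze, rcond=None)

pred = features @ W
rmse = np.sqrt(np.mean((pred - gaze) ** 2))
```

A NARMAX model generalizes this by regressing on nonlinear and lagged terms of the same inputs, which is why it can outperform the purely linear fit on dynamic driving scenes.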

    Autonomous Visual Servo Robotic Capture of Non-cooperative Target

    This doctoral research develops and experimentally validates a vision-based control scheme for the autonomous capture of a non-cooperative target by robotic manipulators, for active space debris removal and on-orbit servicing. It focuses on the final capture stage by robotic manipulators, after the orbital rendezvous and proximity maneuver are completed. Two challenges are identified and investigated in this stage: dynamic estimation of the non-cooperative target and autonomous visual servo robotic control. First, an integrated algorithm combining photogrammetry and an extended Kalman filter is proposed for the dynamic estimation of the non-cooperative target, whose motion is unknown in advance. To improve the stability and precision of the algorithm, the extended Kalman filter is enhanced by dynamically correcting the distribution of its process noise. Second, the concept of incremental kinematic control is proposed to avoid the multiple solutions that arise when solving the inverse kinematics of robotic manipulators. The proposed target motion estimation and visual servo control algorithms are validated experimentally on a custom-built visual servo manipulator-target system; the electronic hardware for the robotic manipulator and the computer software for the visual servo were custom designed and developed. The experimental results demonstrate the effectiveness and advantages of the proposed vision-based robotic control for the autonomous capture of a non-cooperative target. Furthermore, a preliminary study is conducted for a future extension of the robotic control that accounts for flexible joints.
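A Kalman filter whose process noise is corrected from the innovation, as the abstract describes, can be sketched generically. The 1-D constant-velocity model, the innovation-based scaling heuristic, and all names below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; Q is rescaled from the innovation so
    that large prediction errors inflate the process noise."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    # Adaptive process noise: inflate Q when the normalized
    # innovation is large (illustrative heuristic).
    nis = float(y @ np.linalg.inv(S) @ y)   # normalized innovation squared
    Q_next = Q * (1.0 + 0.1 * max(0.0, nis - 1.0))
    # Update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, Q_next

# Track a 1-D constant-velocity target from noiseless position readings.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
for k in range(1, 21):
    x, P, Q = ekf_step(x, P, np.array([float(k)]), F, H, Q, R)
# x now estimates position ≈ 20 and velocity ≈ 1
```

For a genuinely nonlinear target model, F and H would be replaced by the Jacobians of the motion and measurement functions evaluated at the current estimate.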

    Synaptic Learning for Neuromorphic Vision - Processing Address Events with Spiking Neural Networks

    The brain outperforms conventional computer architectures in terms of energy efficiency, robustness and adaptability. These aspects also matter for new technologies, so it is worth investigating which biological processes enable the brain to compute and how they can be realized in silicon. Drawing inspiration from how the brain computes requires a paradigm shift compared with conventional computer architectures. The brain consists of nerve cells, called neurons, that are connected to one another via synapses and form self-organized networks. Neurons and synapses are complex dynamical systems governed by biochemical and electrical reactions; as a consequence, they can base their computations only on local information. In addition, neurons communicate with one another through short electrical pulses, so-called spikes, that travel across synapses. Computational neuroscientists model these computations with spiking neural networks. Implemented on dedicated neuromorphic hardware, spiking neural networks can, like the brain, perform fast, energy-efficient computations. Until recently, the benefits of this technology were limited by the lack of functional methods for programming spiking neural networks. Learning is a programming paradigm for spiking neural networks in which neurons organize themselves into functional networks. As in the brain, learning in neuromorphic hardware is based on synaptic plasticity. Synaptic plasticity rules characterize weight updates in terms of information locally available at the synapse; learning therefore happens continuously and online while sensory input is streamed into the network.
Conventional deep neural networks are usually trained by gradient descent. However, the constraints imposed by biological learning dynamics prevent the use of conventional backpropagation to compute the gradients. For example, continuous updates preclude the synchronous alternation between forward and backward phases. Moreover, memory constraints prevent the history of neural activity from being stored inside the neuron, ruling out methods such as backpropagation-through-time. Novel solutions to these problems were proposed by computational neuroscientists within the time frame of this thesis. In this thesis, spiking neural networks are developed to solve visuomotor neurorobotics tasks. Indeed, biological neural networks originally evolved to control the body; robotics thus provides the artificial body for the artificial brain. On the one hand, this work contributes to current efforts to understand the brain by providing challenging closed-loop benchmarks, similar to what the biological brain faces. On the other hand, new ways of solving traditional robotics problems based on brain-inspired paradigms are presented. The research proceeds in two steps. First, promising synaptic plasticity rules are identified and benchmarked on real-world event-based vision tasks. Second, novel methods for mapping visual representations to motor commands are presented. Neuromorphic visual sensors are an important step towards brain-inspired paradigms. Unlike conventional cameras, these sensors emit address events that correspond to local changes in light intensity.
The event-based paradigm enables energy-efficient, fast vision processing, but requires deriving new asynchronous algorithms. Spiking neural networks are a subset of asynchronous algorithms inspired by the brain and suited to neuromorphic hardware. In close collaboration with computational neuroscientists, successful methods for learning spatio-temporal abstractions from the address-event representation are reported. It is shown that top-down synaptic plasticity rules, derived to optimize an objective function, outperform bottom-up rules based solely on observations of the brain. With this insight, a new synaptic plasticity rule called "Deep Continuous Local Learning" is introduced, which currently achieves the state of the art on event-based vision benchmarks. This rule was jointly derived, implemented and evaluated during a stay at the University of California, Irvine. In the second part of this thesis, the visuomotor loop is closed by mapping the learned visual representations to motor commands. Three approaches to obtaining a visuomotor mapping are discussed: manual coupling, reward coupling, and minimization of the prediction error. It is shown how these approaches, implemented as synaptic plasticity rules, can be used to learn simple policies and movements. This work paves the way for integrating brain-inspired computational paradigms into robotics, and it is even predicted that advances in neuromorphic technologies and plasticity rules will enable the development of high-performance, low-power learning robots.
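The basic unit of the networks the thesis describes, the leaky integrate-and-fire neuron driven by input spikes, can be sketched in a few lines. This is a textbook discrete-time LIF model with illustrative parameter values, not the thesis's "Deep Continuous Local Learning" rule.

```python
import numpy as np

def simulate_lif(spikes_in, w, tau=20.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron driven by binary input spike
    trains (shape [timesteps, n_inputs]) through weights w."""
    decay = np.exp(-dt / tau)
    v, out = 0.0, []
    for s in spikes_in:
        v = decay * v + float(s @ w)   # leak + weighted input current
        fired = v >= v_th
        out.append(fired)
        if fired:
            v = 0.0                    # reset after an output spike
    return np.array(out)

# Two inputs: one silent, one firing every step through weight 0.6;
# the neuron needs two accumulated inputs to cross threshold.
spikes = np.zeros((10, 2))
spikes[:, 1] = 1
out = simulate_lif(spikes, np.array([0.0, 0.6]))
# out fires on every second timestep (5 spikes in 10 steps)
```

A local plasticity rule in the sense of the abstract would update each entry of `w` using only the pre-synaptic spike train, the post-synaptic state `v`/`out`, and possibly a locally available error signal.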

    Quantitative analysis of take-off forces in birds

    The increasing interest in Unmanned Air Vehicles (UAVs) and their many uses, combined with the need for easy transport and stealth, led to the concepts of Micro Air Vehicles (MAVs) and Nano Air Vehicles (NAVs). These vehicles draw inspiration from insects and birds, which must produce lift and propulsion simultaneously. Given the current interest in, and limited knowledge of, insect and bird flight, this study set out to interpret the forces involved at the moment of a bird's take-off, through an experiment combining a high-acquisition-rate force sensor and a high-speed camera with known facts from earlier studies. A bibliographic review was first carried out to establish what had already been studied and what remained open, so as to link the factors involved in a bird's propulsion at the moment of take-off. This work is dedicated to the take-off phase of flight of a Columba livia: experiments measured the initial force the bird produces to initiate flight and its trajectory near the initial perch. The main conclusion is that the bird can produce movements that enhance its total momentum: it stretches its neck forward while moving its head down, then stretches the neck further while moving the head up, impelling itself into the air, so that the mechanical forces against the perch play the main role in the bird's initial momentum. Columba livia can generate a mechanical force against the perch of about 4 times its weight, and above 8 times its weight during the second downstroke.
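The "4 times its weight" figure is simply the peak perch reaction force normalized by body weight. A minimal sketch of that computation, on a hypothetical force trace for a ~0.4 kg pigeon (the trace shape and mass are illustrative, not the study's data):

```python
import numpy as np

def peak_force_in_body_weights(force_trace_n, mass_kg, g=9.81):
    """Peak perch reaction force (N) expressed as multiples of body weight."""
    return float(np.max(force_trace_n) / (mass_kg * g))

# Hypothetical 100 ms trace peaking near 4x body weight for a 0.4 kg bird.
t = np.linspace(0.0, 0.1, 200)
trace = 0.4 * 9.81 * (1.0 + 3.0 * np.sin(np.pi * t / 0.1))
ratio = peak_force_in_body_weights(trace, 0.4)
# ratio ≈ 4.0
```

In the actual experiment, `force_trace_n` would be the calibrated output of the fast-acquisition force sensor on the perch, synchronized with the high-speed video.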

    Automation and Robotics: Latest Achievements, Challenges and Prospects

    This Special Issue presents the latest achievements, challenges and prospects for drives, actuators, sensors, controls and robot navigation with reverse validation, and their applications in industrial automation and robotics. Automation, supported by robotics, can effectively speed up and improve production. The industrialization of complex mechatronic components, especially robots, requires a large number of special processes already in the pre-production stage, provided by modelling and simulation. From the very beginning, this area of research has included drives, process technology, actuators, sensors, control systems and all the connections in mechatronic systems. Automation and robotics form broad-spectrum, tightly interconnected areas of research. To reduce costs and production preparation time in the pre-production stage, it is necessary to solve complex tasks through simulation, using standard software products and new technologies that allow, for example, machine vision and other imaging tools to examine new physical contexts, dependencies and connections.

    Data-driven Mechanical Design and Control Method of Dexterous Upper-Limb Prosthesis

    With an increasing number of people, 320,000 per year, suffering from impaired upper-limb function due to medical conditions such as stroke and blunt trauma, the demand for highly functional upper-limb prostheses is growing; however, prosthesis rejection rates are high due to factors such as lack of functionality, high cost, weight, and lack of sensory feedback. Modern robotics has led to the development of more affordable and dexterous upper-limb prostheses, mostly with anthropomorphic designs. However, owing to the highly sophisticated ergonomics of anthropomorphic hands, most are economically prohibitive and suffer from control complexity due to the increased cognitive load on the user. This thesis therefore aims to design a prosthesis that relies on emulating the kinematics and contact forces involved in grasping tasks with healthy human hands, rather than on biomimicry, to reduce mechanical complexity and exploit technologically advanced engineering components. This is accomplished by 1) experimentally characterizing human grasp kinematics and kinetics as a basis for data-driven prosthesis design, and then, using the grasp data, 2) developing a data-driven design and control method for an upper-limb prosthesis that reproduces the kinematics and kinetics required for healthy human grasps without adopting an anthropomorphic design. This thesis demonstrates an approach to narrowing the gap between the functionality of the human hand and robotic upper-limb prostheses by introducing a method to optimize the design and control of an upper-limb prosthesis. First, grasp data are collected from human subjects with a motion- and force-capture glove. The collected data are then used to minimize control complexity by reducing the dimensionality of the device while fulfilling the kinematic and kinetic requirements of daily grasping tasks.
Using these techniques, a task-oriented upper-limb prosthesis is prototyped and tested in simulation and in a physical environment.
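Dimensionality reduction of glove-recorded grasp data is commonly done with PCA, projecting joint-angle recordings onto a few "grasp synergies". The following is a minimal sketch with synthetic data under that assumption; the thesis's exact reduction method and dataset are not specified here.

```python
import numpy as np

# Hypothetical grasp dataset: rows are recorded hand postures, columns
# are joint angles from a motion-capture glove; two underlying synergies.
rng = np.random.default_rng(1)
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 15))             # 15 joint angles
postures = latent @ mixing + 0.01 * rng.normal(size=(500, 15))

# PCA via SVD: keep the leading components ("grasp synergies").
centered = postures - postures.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.95)) + 1  # components for 95% variance
reduced = centered @ vt[:k].T                  # low-dimensional control space
```

Driving the prosthesis in the `k`-dimensional `reduced` space, rather than joint by joint, is one way to cut the user's cognitive load while preserving the kinematics of daily grasps.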