219 research outputs found

    Analysis and enhancement of interpersonal coordination using inertial measurement unit solutions

    Get PDF
    Today's mobile communication technologies have increased verbal and text-based communication with other humans, social robots and intelligent virtual assistants. On the other hand, these technologies reduce face-to-face communication. This is a critical social issue because decreasing direct interaction may make it harder to read social and environmental cues, thereby impeding the development of overall social skills. Recently, scientists have studied the importance of nonverbal interpersonal activities for social skills by measuring human behavioral and neurophysiological patterns. These interdisciplinary approaches are in line with the European Union research project "Socializing sensorimotor contingencies" (socSMCs), which aims to improve the capability of social robots and to deal appropriately with autism spectrum disorder (ASD). In this context, modelling and benchmarking the social behavior of healthy humans establishes a foundation for research on the emergence and enhancement of interpersonal coordination. Within the project, two experimental categories were defined depending on the interactants' distance: distal and proximal settings, between which the structure of the engaged cognitive systems changes and the level of the emerging socSMCs shifts. As part of the project, this dissertation adopted this spatial framework and developed interaction settings for both categories. In addition, single-sensor solutions were developed to reduce the cost and effort of measuring human behavior, recognizing social behaviors, and enhancing interpersonal coordination. First, algorithms using a head-worn inertial measurement unit (H-IMU) were developed to measure human kinematics as a baseline for social behaviors. The results confirmed that the H-IMU can measure individual gait parameters from head kinematics alone. Secondly, as a distal sensorimotor contingency, interpersonal rapport was considered with respect to a dynamic structure of three interacting components: positivity, mutual attentiveness, and coordination. The H-IMUs monitored social behavioral events based on the kinematics of head orientation and oscillation during walking and talking, which can contribute to estimating the level of rapport. Finally, in a new collaborative task with the proposed IMU-based tablet application, the results verified the effects of different auditory-motor feedback conditions on the enhancement of interpersonal coordination in a proximal setting. This dissertation has a strongly interdisciplinary character: technological development in the areas of sensor and software engineering had to be applied to, and resolve, issues directly tied to predefined behavioral-science questions in two different settings (distal and proximal). This frame served as a reference, and as a major challenge, in the development of the methods and settings described here. Thanks to the widespread availability of wearable devices with IMUs, the proposed IMU-based solutions are also promising for various future applications.
    European Commission/HORIZON2020-FETPROACT-2014/641321/E
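    The first contribution above is the claim that individual gait parameters can be recovered from head kinematics alone. As a rough, hypothetical illustration of that idea (not the algorithms developed in the dissertation), the sketch below estimates step cadence from the vertical acceleration of a head-worn IMU; the filter band, peak threshold, and function name are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_cadence(acc_vertical, fs=100.0):
    """Estimate step cadence (steps/min) from head-worn vertical acceleration.

    Illustrative sketch: a band-pass filter isolates the typical gait band
    (about 0.5-3 Hz), and steps are counted as peaks in the filtered signal.
    """
    # Band-pass filter around typical step frequencies
    b, a = butter(2, [0.5, 3.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, acc_vertical)

    # Peaks separated by at least 0.3 s are counted as individual steps
    peaks, _ = find_peaks(filtered, distance=int(0.3 * fs),
                          height=0.5 * np.std(filtered))
    duration_min = len(acc_vertical) / fs / 60.0
    return len(peaks) / duration_min if duration_min > 0 else 0.0
```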

    Crowd-based cognitive perception of the physical world: Towards the internet of senses

    Get PDF
    This paper introduces a possible architecture and discusses the research directions for the realization of the Cognitive Perceptual Internet (CPI), which is enabled by the convergence of wired and wireless communications, traditional sensor networks, mobile crowd-sensing, and machine learning techniques. The CPI concept stems from the fact that mobile devices, such as smartphones and wearables, are becoming an outstanding means for zero-effort world-sensing and digitalization thanks to their pervasive diffusion and the increasing number of embedded sensors. Data collected by such devices provide unprecedented insights into the physical world that can be inferred through cognitive processes, thus giving rise to a digital sixth sense. In this paper, we describe how the Internet can behave like a sensing brain, thus evolving into the Internet of Senses, with network-based cognitive perception and action capabilities built upon mobile crowd-sensing mechanisms. The new concept of hyper-map is envisioned as an efficient geo-referenced repository of knowledge about the physical world. Such knowledge is acquired and augmented through heterogeneous sensors, multi-user cooperation and distributed learning mechanisms. Furthermore, we indicate the possibility to accommodate proactive sensors, in addition to common reactive sensors such as cameras, antennas, thermometers and inertial measurement units, by exploiting massive antenna arrays at millimeter-waves to enhance mobile terminals' perception capabilities as well as the range of new applications. Finally, we distill some insights about the challenges arising in the realization of the CPI, corroborated by preliminary results, and we depict a futuristic scenario where the proposed Internet of Senses becomes a reality
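    The hyper-map is described as a geo-referenced repository of knowledge about the physical world, fed by heterogeneous sensors and multi-user cooperation. A minimal sketch of such a structure is given below, assuming a simple grid-cell binning of crowd-sensed observations; the class and field names are illustrative choices, not the paper's design.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    sensor_type: str   # e.g. "temperature", "rssi", "noise_level"
    value: float
    timestamp: float

class HyperMap:
    """Toy geo-referenced repository in the spirit of the hyper-map concept.

    Observations from many devices are binned into fixed-size grid cells
    (about 0.001 degrees here); a query returns a per-sensor aggregate for
    the cell containing the requested coordinates. This is an illustrative
    data structure, not the architecture proposed in the paper.
    """
    def __init__(self, cell_deg=0.001):
        self.cell_deg = cell_deg
        self.cells = defaultdict(list)

    def _key(self, lat, lon):
        # Map coordinates to a discrete grid cell
        return (round(lat / self.cell_deg), round(lon / self.cell_deg))

    def report(self, lat, lon, obs: Observation):
        # A device contributes one crowd-sensed observation at its position
        self.cells[self._key(lat, lon)].append(obs)

    def query(self, lat, lon, sensor_type):
        # Aggregate all observations of one sensor type within the cell
        values = [o.value for o in self.cells[self._key(lat, lon)]
                  if o.sensor_type == sensor_type]
        return mean(values) if values else None
```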

    Insect inspired visual motion sensing and flying robots

    Get PDF
    Flying insects are masters of visual motion sensing, using dedicated motion-processing circuits at low energy and computational cost. Building on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic-flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed via a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes driving the eye's angular position in the robot and the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy
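    The "steering-by-gazing" strategy couples two loops: the eye rotates to keep the target centered, and the body heading is slaved to where the eye points. The fragment below is a simplified, hypothetical rendering of that coupling on a single axis; the gains, names and interface are assumptions for illustration, not the OSCAR II controller itself.

```python
def steering_by_gazing_step(target_error_on_retina, eye_angle, dt,
                            k_eye=4.0, k_body=1.5):
    """One control step of a simplified 'steering-by-gazing' scheme.

    Illustrative sketch: the eye rotates to null the target's retinal
    position error, and the body yaw-rate command is proportional to the
    eye-in-body angle, so the robot turns toward where it is looking.
    No knowledge of absolute orientation in the global frame is needed.
    """
    # Gaze loop: cancel the target's retinal position error
    eye_rate = k_eye * target_error_on_retina
    eye_angle_new = eye_angle + eye_rate * dt

    # Heading loop: steer the body so the eye re-centres in the body frame
    body_yaw_rate_cmd = k_body * eye_angle_new
    return eye_angle_new, body_yaw_rate_cmd
```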

    A Sound Approach Toward a Mobility Aid for Blind and Low-Vision Individuals

    Get PDF
    Reduced independent mobility of blind and low-vision individuals (BLVIs) causes considerable societal cost, burden on relatives, and reduced quality of life for the individuals, including increased anxiety, depression symptoms, need of assistance, risk of falls, and mortality. Despite the numerous electronic travel aids proposed since at least the 1940s, along with ever-advancing technology, the mobility issues persist. A substantial reason for this is likely the several severe shortcomings of the field, with regard to both aid design and evaluation.
    In this work, these shortcomings are addressed with a generic design model called Desire of Use (DoU), which describes the desire of a given user to use an aid for a given activity. It is then applied to mobility of BLVIs (DoU-MoB), to systematically illuminate and structure all aspects that such an aid needs to deal with in order to become an adequate aid for the objective. These aspects can then guide both user-centered design and the choice of test methods and measures.
    One such measure is demonstrated in the Desire of Use Questionnaire for Mobility of Blind and Low-Vision Individuals (DoUQ-MoB), an aid-agnostic and comprehensive patient-reported outcome measure. The question construction originates from the DoU-MoB to ensure an encompassing focus on the mobility of BLVIs, something that has been missing in the field. Since it is aid-agnostic, it facilitates aid comparison, which it also actively promotes. To support the reliability of the DoUQ-MoB, it follows the best known practices of questionnaire design and has been validated once with eight orientation and mobility professionals and six BLVIs. Based on this, the questionnaire has also been revised once.
    To allow for relevant and reproducible methodology, another tool presented herein is a portable virtual reality (VR) system called the Parrot-VR. It uses a hybrid control scheme: absolute rotation, by tracking the user's head in reality, affords intuitive turning; relative movement, where simple button presses on a controller move the virtual avatar forward and backward, allows large-scale traversal without walking physically. VR provides excellent reproducibility, making various aggregate movement analyses feasible, and it is inherently safe. Meanwhile, the portability of the system facilitates testing near the participants, substantially increasing the number of potential blind and low-vision recruits for user tests.
    The thesis also gives a short account of the state of long-term testing in the field; it is short mainly because there is not much to report. It then provides an initial investigation into possible outcome measures for such tests, taking instruments used by Swedish orientation and mobility professionals as a starting point. Two of these were also piloted in an initial single-session trial with 19 BLVIs and could plausibly be used for long-term tests after further evaluation.
    Finally, a discussion is presented regarding the Audomni project, the development of a primary mobility aid for BLVIs. Audomni is a visuo-auditory sensory supplementation device, which aims to take visual information and translate it to sound. A wide field-of-view, 3D-depth camera records the environment, which is transformed to audio through the sonification algorithms of Audomni and presented in a pair of open-ear headphones that do not block out environmental sounds. The design of Audomni leverages the DoU-MoB to ensure user-centric development and evaluation, aiming for an aid with such form and function that it grants the users better mobility while they still want to use it.
    Audomni has been evaluated in user tests twice: once in pilot tests with two BLVIs, and once in VR with a heterogeneous set of 19 BLVIs, utilizing the Parrot-VR and the DoUQ-MoB. 76% of respondents (13/17) answered that it was very or extremely likely that they would want to use Audomni along with their current aid. This might be the first result in the field demonstrating a majority of blind and low-vision participants reporting that they actually want to use a new electronic travel aid. This shows promise that eventual long-term tests will demonstrate increased mobility of blind and low-vision users, the overarching project aim. Such results would ultimately mean that Audomni can become an aid that alleviates societal cost, reduces burden on relatives, and improves users' quality of life and independence
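    Audomni's core step is turning depth imagery into sound. Purely as a generic illustration of depth-to-audio sonification (the actual Audomni algorithms are not described in this abstract), the sketch below maps the closest obstacle in one image column to the pitch and loudness of a short tone; the frequency range, tone length and the near-is-louder mapping are assumptions made for the example.

```python
import numpy as np

def sonify_depth_column(depths_m, fs=44100, duration=0.2,
                        f_low=300.0, f_high=1200.0, max_range=8.0):
    """Illustrative depth-to-sound mapping for one image column.

    Assumption: nearer obstacles should be more salient, so the column's
    closest depth is mapped to pitch (near = high) and amplitude
    (near = loud). Returns a mono tone; a full sonifier would pan
    columns across the stereo field.
    """
    d = np.clip(np.nanmin(depths_m), 0.0, max_range)
    proximity = 1.0 - d / max_range              # 1 = very close, 0 = far
    freq = f_low + proximity * (f_high - f_low)  # near -> higher pitch
    amp = 0.1 + 0.9 * proximity                  # near -> louder
    t = np.linspace(0.0, duration, int(fs * duration), endpoint=False)
    return amp * np.sin(2.0 * np.pi * freq * t)
```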

    Aerial Vehicles

    Get PDF
    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space

    Localization performance evaluation of extended kalman filter in wireless sensors network

    Get PDF
    This paper evaluates the positioning and tracking performance of the Extended Kalman Filter (EKF) in a wireless sensor network. The EKF extends the linear Kalman Filter (KF) by local linearization, which enables it to work efficiently in non-linear systems. The EKF is based on an iterative process of estimating the current state from the previously estimated state, linearizing the observation model around the mean of the current state estimate. The EKF has low computational complexity and requires little memory compared to other Bayesian algorithms, which makes it very suitable for low-powered mobile devices. This paper evaluates the localization and tracking performance of the EKF for (i) the Position (P) model, (ii) the Position-Velocity (PV) model and (iii) the Position-Velocity-Acceleration (PVA) model. The EKF processes distance measurements from Cricket sensors that are acquired through the time difference of arrival between ultrasound and radio frequency (RF) signals. Furthermore, localization performance under a varying number of beacons/sensors is also evaluated. © 2014 Published by Elsevier B.V.
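    For readers unfamiliar with the filter, the following sketch shows one predict/update cycle of an EKF with the Position-Velocity (PV) model and range-only measurements to known beacons, the same class of problem evaluated in the paper; the noise parameters and function interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ekf_pv_step(x, P, ranges, beacons, dt, q=0.1, r=0.05):
    """One predict/update cycle of an EKF with a Position-Velocity (PV) model.

    State x = [x, y, vx, vy]; `ranges` are measured distances to known
    `beacons` (N x 2 array), e.g. from ultrasound/RF time difference of
    arrival. Sketch only: process noise q and measurement noise r are
    assumed scalars.
    """
    # Predict: constant-velocity motion model
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: one range measurement per beacon,
    # linearized about the predicted position
    for z, b in zip(ranges, beacons):
        dx, dy = x[0] - b[0], x[1] - b[1]
        pred = np.hypot(dx, dy)                      # predicted range
        H = np.array([[dx / pred, dy / pred, 0.0, 0.0]])
        S = H @ P @ H.T + r                          # innovation covariance
        K = P @ H.T / S                              # Kalman gain
        x = x + (K * (z - pred)).ravel()
        P = (np.eye(4) - K @ H) @ P
    return x, P
```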

    A Low-Cost Wireless Body Area Network for Human Activity Recognition in Healthy Life and Medical Applications

    Get PDF
    Motivated by the need, also related to the ongoing COVID-19 pandemic, for innovative solutions in digital health and digital medicine, Wireless Body Area Networks (WBANs) are increasingly emerging as a central system for implementing solutions for well-being and healthcare. By elaborating the data collected by a WBAN, advanced classification models can accurately extract health-related parameters, thus allowing, for example, the implementation of applications for fitness tracking, monitoring of vital signs, diagnosis and analysis of the evolution of diseases, and, in general, monitoring of human activities and behaviours. Unfortunately, commercially available WBANs present some technological and economic drawbacks regarding, respectively, data fusion and labelling, and the cost of the adopted devices. To overcome these issues, in this paper we present the architecture of a low-cost WBAN built upon accessible off-the-shelf wearable devices and an Android application. We then report its technical evaluation concerning resource consumption. Finally, we demonstrate its versatility and accuracy in both medical and well-being application scenarios.
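    The paper's premise is that classification models applied to WBAN data can extract health-related parameters such as the activity being performed. As a generic example of that kind of pipeline (not the system described in the paper), the sketch below computes simple statistical features over accelerometer windows and trains an off-the-shelf classifier; the window format, feature set and labels are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc_xyz):
    """Basic features from one (n_samples, 3) accelerometer window."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    return np.concatenate([acc_xyz.mean(axis=0), acc_xyz.std(axis=0),
                           [mag.mean(), mag.std(), mag.max() - mag.min()]])

def train_activity_model(windows, labels):
    """Train a simple activity classifier from labelled windows.

    `windows` is a list of (n_samples, 3) arrays collected by the WBAN;
    `labels` holds the corresponding activities, e.g. "walking", "sitting".
    """
    X = np.vstack([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```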

    Nanotechnology for Humans and Humanoids A vision of the use of nanotechnology in future robotics

    Get PDF
    Humanoids will soon co-exist with humans, helping us at home and at work, assisting elderly people, replacing us in dangerous environments, and in some measure adding to our personal communication devices the capability to actuate motion. For humanoids to be compatible with our everyday tools and our lifestyle, however, it is mandatory to reproduce (at least partially) the body-mind nexus that makes humans so superior to machines. This requires a totally new approach to humanoid technologies, combining new responsive and soft materials, bio-inspired sensors, high-efficiency power sources and cognition/intelligence of low computational cost: in other words, an unprecedented merging of nanotechnology, cognition and mechatronics

    Spatial representation and visual impairment - Developmental trends and new technological tools for assessment and rehabilitation

    Get PDF
    It is well known that perception is mediated by the five sensory modalities (sight, hearing, touch, smell and taste), which allow us to explore the world and build a coherent spatio-temporal representation of the surrounding environment. Typically, our brain collects and integrates coherent information from all the senses to build a reliable spatial representation of the world. In this sense, perception does not emerge from the individual activity of distinct sensory modalities operating as separate modules, but rather from multisensory integration processes. The interaction occurs whenever inputs from the senses are coherent in time and space (Eimer, 2004). Therefore, spatial perception emerges from the contribution of unisensory and multisensory information, with a predominant role of visual information for space processing during the first years of life. Although a growing body of research indicates that visual experience is essential to develop spatial abilities, to date very little is known about the mechanisms underpinning spatial development when the visual input is impoverished (low vision) or missing (blindness). The main aim of the thesis is to increase knowledge about the impact of visual deprivation on spatial development and consolidation and to evaluate the effects of novel technological systems for quantitatively improving perceptual and cognitive spatial abilities in case of visual impairment.
    Chapter 1 summarizes the main research findings on the role of vision and multisensory experience in spatial development. Overall, these findings indicate that visual experience facilitates the acquisition of allocentric spatial capabilities, namely perceiving space according to a perspective different from our body. It might therefore be stated that the sense of sight allows a more comprehensive representation of spatial information, since it is based on environmental landmarks that are independent of body perspective.
    Chapter 2 presents original studies carried out by me as a Ph.D. student to investigate the developmental mechanisms underpinning spatial development and to compare the spatial performance of individuals with affected and typical visual experience, respectively visually impaired and sighted. Overall, these studies suggest that vision facilitates the spatial representation of the environment by conveying the most reliable spatial reference, i.e., allocentric coordinates. However, when visual feedback is permanently or temporarily absent, as in congenitally blind or blindfolded individuals, respectively, compensatory mechanisms might support the refinement of haptic and auditory spatial coding abilities. The studies presented in this chapter validate novel experimental paradigms to assess the role of haptic and auditory experience in spatial representation based on external (i.e., allocentric) frames of reference.
    Chapter 3 describes the validation process of new technological systems based on unisensory and multisensory stimulation, designed to rehabilitate spatial capabilities in case of visual impairment. Overall, the technological validation of new devices will provide the opportunity to develop an interactive platform to rehabilitate spatial impairments following visual deprivation.
    Finally, Chapter 4 summarizes the findings reported in the previous chapters, focusing on the consequences of visual impairment on the development of unisensory and multisensory spatial experience in visually impaired children and adults compared to sighted peers. It also highlights the potential of the novel experimental tools to assess spatial competencies in response to unisensory and multisensory events and to train residual sensory modalities within a multisensory rehabilitation framework
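    The abstract argues that spatial perception arises from multisensory integration, with vision usually providing the most reliable spatial reference. A standard textbook way to formalize why the most reliable modality dominates is reliability-weighted (maximum-likelihood) cue combination, sketched below as a generic illustration rather than a model used in the thesis.

```python
def integrate_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Generic textbook model: each cue contributes in proportion to its
    inverse variance, so the most reliable modality (often vision for
    spatial tasks) dominates the fused estimate. Not a method from the
    thesis; shown only to illustrate the integration principle.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance

# Example: visual and auditory estimates of an object's azimuth (degrees).
# Vision has the smaller variance, so the fused estimate lies closer to it.
fused, var = integrate_cues([10.0, 16.0], [1.0, 4.0])   # fused == 11.2
```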