191 research outputs found

    Automotive gestures recognition based on capacitive sensing

    Get PDF
    Master's dissertation (integrated master's) in Industrial Electronics and Computers Engineering. Driven by technological advancements, vehicles have steadily increased in sophistication, especially in the way drivers and passengers interact with them. For example, the driver-controlled systems of the BMW 7 Series contain over 700 functions. While this makes it easier to navigate streets, talk on the phone, and more, it can also lead to visual distraction: when paying attention to a task unrelated to driving, the brain focuses on that activity. According to studies, such distraction is the third leading cause of accidents, surpassed only by speeding and drunk driving. Driver distraction is stressed as the main concern of regulators, in particular the National Highway Traffic Safety Administration (NHTSA), which is developing recommended limits for the amount of time a driver needs to spend glancing away from the road to operate in-car features. Diverting attention from driving can be fatal; therefore, automakers have been challenged to design safer and more comfortable human-machine interfaces (HMIs) without forgoing the latest technological achievements. This dissertation aims to mitigate driver distraction by developing a gesture recognition system that gives the user a more comfortable and intuitive experience while driving. The developed system outlines the algorithms to recognize gestures using capacitive technology. This work has been financially supported by the Portugal Incentive System for Research and Technological Development, within the scope of the co-promotion projects number 036265/2013 (HMIExcel 2013-2015), number 002814/2015 (iFACTORY 2015-2018), and number 002797/2015 (INNOVCAR 2015-2018).

    Foot Gesture Recognition Using High-Compression Radar Signature Image and Deep Learning

    Get PDF
    Recently, Doppler radar-based foot gesture recognition has attracted attention as a hands-free interaction tool. Recognizing a variety of foot gestures with Doppler radar remains very challenging, and so far no studies have dealt deeply with recognition of various foot gestures based on Doppler radar and a deep learning model. In this paper, we propose a method of foot gesture recognition using a new high-compression radar signature image and deep learning. A new high-compression radar signature is created by extracting dominant features via Singular Value Decomposition (SVD), and four different foot gestures, including kicking, swinging, sliding, and tapping, are recognized by means of a deep learning AlexNet model. Instead of using the original radar signature, the proposed method improves the memory efficiency required for deep learning training by using the high-compression radar signature. Original and reconstructed radar images with high compression values of 90%, 95%, and 99% were applied to the deep learning AlexNet model. In experiments, the movements of all four foot gestures, as well as of a rolling baseball, were recognized with an accuracy of approximately 98.64%. Owing to radar's inherent robustness to the surrounding environment, this foot gesture recognition sensor using Doppler radar and deep learning should be widely useful in future automotive and smart home industry fields. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
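The SVD-based compression step described above can be sketched as a truncated SVD: keep only the largest singular values of the signature image and reconstruct a low-rank approximation. This is a minimal sketch under assumptions; the function name, the `keep_ratio` parameter, and the toy image are illustrative, not the paper's actual code.

```python
import numpy as np

def compress_signature(image, keep_ratio=0.05):
    # Truncated SVD: keep only the largest singular values, giving a
    # low-rank approximation of the radar signature image.
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    k = max(1, int(len(s) * keep_ratio))  # e.g. 5% of components ~ 95% compression
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy "radar signature": dominant low-rank structure plus weak noise.
rng = np.random.default_rng(0)
base = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 5, 64)))
img = base + 0.01 * rng.standard_normal((64, 64))
rec = compress_signature(img, keep_ratio=0.05)
err = np.linalg.norm(img - rec) / np.linalg.norm(img)  # small relative error
```

Because only the top-k factors (two thin matrices and k singular values) need to be stored, the memory footprint shrinks roughly in proportion to k, which is the stated motivation for training on the compressed signatures.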

    MultiIoT: Towards Large-scale Multisensory Learning for the Internet of Things

    Full text link
    The Internet of Things (IoT), the network integrating billions of smart physical devices embedded with sensors, software, and communication technologies for connecting and exchanging data with other devices and systems, is a critical and rapidly expanding component of our modern world. The IoT ecosystem provides a rich source of real-world modalities, such as motion, thermal, geolocation, imaging, depth, video, and audio sensors, for prediction tasks involving the pose, gaze, activities, and gestures of humans, as well as the touch, contact, pose, and 3D properties of physical objects. Machine learning presents a rich opportunity to automatically process IoT data at scale, enabling efficient inference for impact in understanding human wellbeing, controlling physical devices, and interconnecting smart cities. To develop machine learning technologies for IoT, this paper proposes MultiIoT, the most expansive IoT benchmark to date, encompassing over 1.15 million samples from 12 modalities and 8 tasks. MultiIoT introduces unique challenges involving (1) learning from many sensory modalities, (2) fine-grained interactions across long temporal ranges, and (3) extreme heterogeneity due to the unique structure and noise topologies of real-world sensors. We also release a set of strong modeling baselines, spanning modality- and task-specific methods to multisensory and multitask models, to encourage future research in multisensory representation learning for IoT.

    Capacitive Sensing and Communication for Ubiquitous Interaction and Environmental Perception

    Get PDF
    During the last decade, the functionality of electronic devices within living environments has constantly increased. Besides the personal computer, tablet PCs, smart household appliances, and smartwatches now enrich the technology landscape. The trend towards an ever-growing number of computing systems has resulted in many highly heterogeneous human-machine interfaces: users are forced to adapt to technology instead of having the technology adapt to them. Gathering context information about the user is a key factor in improving the interaction experience. Emerging wearable devices show the benefits of sophisticated sensors that make interaction more efficient, natural, and enjoyable. However, many technologies still lack these desirable properties, motivating me to work towards new ways of sensing a user's actions and thus enriching the context. In my dissertation I follow a human-centric approach that ranges from sensing hand movements to recognizing whole-body interactions with objects. This goal can be approached with a vast variety of novel and existing sensing approaches. I focused on perceiving the environment with quasi-electrostatic fields by making use of capacitive coupling between devices and objects. Following this approach, it is possible to implement interfaces that recognize gestures, body movements, and manipulations of the environment at typical distances of up to 50 cm. These sensors usually have a limited resolution and can be sensitive to other conductive objects or electrical devices that affect electric fields. The technique allows for designing very energy-efficient and high-speed sensors that can be deployed unobtrusively underneath any kind of non-conductive surface. Compared to other sensing techniques, exploiting capacitive coupling also has a low impact on a user's perceived privacy. In this work, I also aim at enhancing the interaction experience with new perceptual capabilities based on capacitive coupling.
    I follow a bottom-up methodology and begin by presenting two low-level approaches to environmental perception. In order to perceive a user in detail, I present a rapid prototyping toolkit for capacitive proximity sensing, which shows significant advancements in terms of temporal and spatial resolution. To address some of its limitations, namely the inability to determine the identity of objects and their fine-grained manipulations, I contribute a generic method for communication based on capacitive coupling. The method allows for designing highly interactive systems that can exchange information through the air and the human body. I furthermore show how human body parts can be recognized from capacitive proximity sensors; the method extracts multiple object parameters and tracks body parts in real time. I conclude my thesis with contributions in the domain of context-aware devices and explicit gesture-recognition systems.
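A loading-mode capacitive proximity reading can be turned into a rough range estimate with a simple inverse-distance model. This is a hypothetical sketch only: the function, the calibration constant, and the 1/d assumption are illustrative simplifications, not the dissertation's actual processing.

```python
def estimate_distance(raw_count, baseline, k=2.0e-3):
    # Loading-mode model: a conductive object at distance d adds roughly
    # delta_C ~ k / d to the electrode's capacitance, so d ~ k / delta_C.
    # k is a hypothetical per-electrode calibration constant.
    delta = raw_count - baseline
    if delta <= 0:
        return float("inf")  # nothing detected within range
    return k / delta

near = estimate_distance(110.0, 100.0)  # larger capacitance shift -> closer object
far = estimate_distance(102.0, 100.0)
```

In practice the baseline drifts with humidity and nearby conductors, which is one reason such sensors have limited resolution and typically need periodic recalibration.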

    Intelligent in-vehicle interaction technologies

    Get PDF
    With rapid advances in the field of autonomous vehicles (AVs), the ways in which human–vehicle interaction (HVI) will take place inside the vehicle have attracted major interest and, as a result, intelligent interiors are being explored to improve user experience, acceptance, and trust. This is also fueled by parallel research in areas such as perception and control of robots, safe human–robot interaction, wearable systems, and the underpinning flexible/printed electronics technologies, some of which are being routed to AVs. A growing number of networked sensors are being integrated into vehicles for multimodal interaction, to draw correct inferences about the user's communicative cues and to vary the interaction dynamics depending on the user's cognitive state and the contextual driving scenario. In response to this growing trend, this timely article presents a comprehensive review of the technologies that are being used or developed to perceive a user's intentions for natural and intuitive in-vehicle interaction. The challenges that need to be overcome to attain truly interactive AVs, and their potential solutions, are discussed along with various new avenues for future research.

    Sensors for Robotic Hands: A Survey of State of the Art

    Get PDF
    Recent decades have seen significant progress in the field of artificial hands. Most surveys that try to capture the latest developments in this field have focused on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. In order to present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and also hints at future directions in sensor development for artificial hands.

    Application and validation of capacitive proximity sensing systems in smart environments

    Get PDF
    Smart environments feature a number of computing and sensing devices that support occupants in performing their tasks. In the last decades there have been a multitude of advances in miniaturizing sensors and computers while greatly increasing their performance. As a result, new devices with a plethora of functions are being introduced into our daily lives. Gathering information about the occupants is fundamental to adapting the smart environment to preference and situation. A large number of different sensing devices can provide information about the user, including cameras, accelerometers, GPS, acoustic systems, and capacitive sensors. The latter use the properties of an electric field to sense the presence and properties of conductive objects within range. They are commonly employed in the finger-controlled touch screens present in billions of devices. A less common variety is the capacitive proximity sensor, which can detect the presence of the human body over a distance, enabling interesting applications in smart environments. Choosing the right sensor technology is an important decision in designing a smart environment application. Apart from looking at previous use cases, this process can be supported by more formal methods. In this work I present a benchmarking model designed to support this decision process for applications in smart environments. Previous benchmarks for pervasive systems have been adapted towards sensor systems and include metrics that are specific to smart environments. Based on distinct sensor characteristics, different ratings are used as weighting factors in calculating a benchmarking score. The method is verified using popularity matching in two scientific databases. Additionally, there are extensions to cope with central tendency bias and normalization with regard to average feature rating.
    Four relevant application areas are identified by applying this benchmark to applications in smart environments and capacitive proximity sensors: indoor localization, smart appliances, physiological sensing, and gesture interaction. Each application area has a set of challenges regarding the required sensor technology, layout of the systems, and processing, which can be tackled using various new or improved methods. I present a collection of existing and novel methods that support processing the data generated by capacitive proximity sensors, in the areas of sparsely distributed sensors, model-driven fitting methods, heterogeneous sensor systems, image-based processing, and physiological signal processing. To evaluate the feasibility of these methods, several prototypes have been created and tested for performance and usability; six of them are presented in detail. Based on these evaluations and the knowledge generated in the design process, I classify capacitive proximity sensing in smart environments. This classification consists of a comparison to other popular sensing technologies in smart environments, the major benefits of capacitive proximity sensors, and their limitations. In order to support parties interested in developing smart environment applications using capacitive proximity sensors, I present a set of guidelines that support the decision process from technology selection to the choice of processing methods.
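The benchmarking score described in this abstract, ratings combined with application-specific weighting factors, can be sketched as a normalized weighted sum. The metric names, the 1-5 scale, and the example technology profiles below are illustrative assumptions, not the thesis's actual data.

```python
def benchmark_score(ratings, weights):
    # Normalized weighted sum: each metric's rating is scaled by its
    # application-specific weight; a higher score means a better fit.
    total = sum(weights.values())
    return sum(weights[m] * ratings[m] for m in weights) / total

# Illustrative ratings on a 1-5 scale (hypothetical values).
capacitive = {"range": 3, "resolution": 2, "cost": 5, "privacy": 5}
camera = {"range": 5, "resolution": 5, "cost": 3, "privacy": 1}
# A weighting profile that prizes privacy and low cost, e.g. in-home use.
profile = {"range": 2, "resolution": 1, "cost": 2, "privacy": 3}

cap_score = benchmark_score(capacitive, profile)
cam_score = benchmark_score(camera, profile)
```

With a privacy-heavy profile like this one, the capacitive technology outscores the camera, mirroring the kind of trade-off the benchmark is meant to surface.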

    An architecture for sensate robots: real-time social-gesture recognition using a full-body array of touch sensors

    Get PDF
    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references. Touch plays a central role in social expression but, so far, research into social touch behaviors for robots has been almost non-existent. Embodied machines have the unique capability to sense human body language, which will enable robots to better comprehend, anticipate, and respond to their human companions in a natural way. This thesis addresses the novel field of sensate touch by (1) creating the first robot with full-body sensate touch and on-screen visualization, (2) establishing a library of salient social gestures through behavioral studies, (3) implementing a first-pass touch gesture recognition system in real time, and (4) running a small pilot study with children to evaluate classifications and test the device's acceptance and utility with humans. Such research is on the critical path to conceiving and advancing the use of machine touch to better integrate robots into human social environments. All of the above will be incorporated into the huggable robotic teddy bear at the MIT Media Lab's Personal Robotics group and makes use of the Sensitive Skins circuit design created in Dan Stiehl's Master's thesis. This implementation substantially reduces his proposed total number and type of sensors, modularizes sensors into two uniform shapes, and extends his valuable work on a single body section to an evaluation of sensors over the entire surface of the robot. by Heather-Marie Callanan Knight. M.Eng.

    Finding Common Ground: A Survey of Capacitive Sensing in Human-Computer Interaction

    Get PDF
    For more than two decades, capacitive sensing has played a prominent role in human-computer interaction research. Capacitive sensing has become ubiquitous on mobile, wearable, and stationary devices, enabling fundamentally new interaction techniques on, above, and around them. The research community has also enabled human position estimation and whole-body gestural interaction in instrumented environments. However, the broad field of capacitive sensing research has become fragmented by the different approaches and terminology used across the various domains. This paper strives to unify the field by advocating consistent terminology and proposing a new taxonomy to classify capacitive sensing approaches. Our extensive survey provides an analysis and review of past research and identifies challenges for future work. We aim to create a common understanding within the field of human-computer interaction, for researchers and practitioners alike, and to stimulate and facilitate future research in capacitive sensing.

    An inertial measurement unit for user interfaces

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 2000. Includes bibliographical references (p. 131-135). Inertial measurement components, which sense either acceleration or angular rate, are being embedded into common user interface devices more frequently as their cost continues to drop dramatically. These devices hold a number of advantages over other sensing technologies: they measure relevant parameters for human interfaces and can easily be embedded into wireless, mobile platforms. The work in this dissertation demonstrates that inertial measurement can be used to acquire rich data about human gestures, that we can derive efficient algorithms for using this data in gesture recognition, and that the concept of parameterized atomic gesture recognition has merit. Further, we show that a framework combining these three levels of description can easily be used by designers to create robust applications. A wireless six-degree-of-freedom inertial measurement unit (IMU) with a cubical form factor (1.25 inches on a side) was constructed to collect the data, providing updates at 15 ms intervals. The data is analyzed for periods of activity using a windowed variance algorithm whose thresholds can be set analytically. These segments are then examined by the gesture recognition algorithms, which are applied to the data on an axis-by-axis basis. The recognized gestures are considered atomic (i.e. they cannot be decomposed) and are parameterized in terms of magnitude and duration. Given these atomic gestures, a simple scripting language is developed to allow designers to combine them into full gestures of interest. It allows matching of recognized atomic gestures to prototypes based on their type, parameters, and time of occurrence. Because our goal is to eventually create stand-alone devices, the algorithms designed for this framework have both low algorithmic complexity and low latency, at the price of a small loss in generality.
    To demonstrate this system, the gesture recognition portion of (void*): A Cast of Characters, an installation that used a pair of hand-held IMUs to capture gestural inputs, was implemented using this framework. This version ran much faster than the original (based on Hidden Markov Models), used less processing power, and performed at least as well. by Ari Yosef Benbasat. S.M.
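The windowed-variance activity detection described in this abstract can be sketched as follows. The window length, the robust threshold default, and the toy signal are assumptions for illustration; the thesis sets its thresholds analytically rather than from the data.

```python
import numpy as np

def activity_segments(signal, win=15, threshold=None):
    # Slide a window over the signal and compute the variance in each;
    # windows whose variance exceeds the threshold are marked "active".
    n = len(signal) - win + 1
    var = np.array([np.var(signal[i:i + win]) for i in range(n)])
    if threshold is None:
        # Simple robust default: well above the typical (mostly quiet) window.
        threshold = 10.0 * np.median(var)
    active = var > threshold
    # Collapse the boolean mask into (start, end) sample-index pairs.
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i + win - 1))
            start = None
    if start is not None:
        segments.append((start, len(signal) - 1))
    return segments

# Quiet sensor noise with one burst of "gesture" motion in the middle.
rng = np.random.default_rng(1)
x = 0.01 * rng.standard_normal(300)
x[120:180] += np.sin(np.linspace(0.0, 12.0, 60))
segs = activity_segments(x)  # one segment covering the burst
```

Each detected segment would then be handed to the per-axis atomic-gesture recognizers, which extract magnitude and duration parameters from the active span.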