
    Automotive gestures recognition based on capacitive sensing

    Integrated master's dissertation in Industrial Electronics and Computers Engineering. Driven by technological advancements, vehicles have steadily increased in sophistication, especially in the way drivers and passengers interact with them. For example, the driver-controlled systems of the BMW 7 Series contain over 700 functions. While this makes it easier to navigate streets, talk on the phone, and more, it can also lead to visual distraction, since when attention is paid to a task unrelated to driving, the brain focuses on that activity. According to studies, such distraction is the third leading cause of accidents, surpassed only by speeding and drunk driving. Driver distraction is stressed as the main concern by regulators, in particular the National Highway Traffic Safety Administration (NHTSA), which is developing recommended limits on the amount of time a driver needs to spend glancing away from the road to operate in-car features. Diverting attention from driving can be fatal; therefore, automakers have been challenged to design safer and more comfortable human-machine interfaces (HMIs) without giving up the latest technological achievements. This dissertation aims to mitigate driver distraction by developing a gesture recognition system that gives the user a more comfortable and intuitive experience while driving. The developed system outlines the algorithms used to recognize gestures with capacitive technology. This work has been financially supported by the Portugal Incentive System for Research and Technological Development, in the scope of the co-promotion projects number 036265/2013 (HMIExcel 2013-2015), number 002814/2015 (iFACTORY 2015-2018), and number 002797/2015 (INNOVCAR 2015-2018).
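    As a rough illustration of the kind of capacitive gesture recognition the dissertation targets, the sketch below detects a horizontal swipe from the activation order of a row of capacitive electrodes. The electrode layout, threshold, and function names are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Hypothetical sketch of swipe detection over a row of capacitive electrodes.
# Threshold, data layout, and names are illustrative assumptions only.
from typing import List, Optional

THRESHOLD = 0.5  # normalized capacitance change that counts as "hand present"

def detect_swipe(frames: List[List[float]]) -> Optional[str]:
    """frames: time-ordered samples, one normalized reading per electrode.
    Returns 'left-to-right', 'right-to-left', or None."""
    first_activation = {}
    for t, frame in enumerate(frames):
        for idx, value in enumerate(frame):
            if value > THRESHOLD and idx not in first_activation:
                first_activation[idx] = t  # remember when each electrode first fired
    if len(first_activation) < 2:
        return None  # not enough electrodes triggered to call it a swipe
    order = sorted(first_activation, key=first_activation.get)
    if order == sorted(order):
        return "left-to-right"
    if order == sorted(order, reverse=True):
        return "right-to-left"
    return None

# Example: a hand passes over electrodes 0, 1, 2 in successive samples.
print(detect_swipe([[0.8, 0.1, 0.0], [0.2, 0.9, 0.1], [0.0, 0.3, 0.7]]))  # left-to-right
```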

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. Since data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improvements in sensor performance and to the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey

    Because roughly sixty percent of the human body is composed of water, the body is inherently conductive: it forms an intrinsic electric field between itself and its surroundings, and it deforms the distribution of any existing electric field nearby. Body-area capacitive sensing, also called body-area electric field sensing, is becoming a promising alternative for wearable devices to accomplish certain tasks in human activity recognition and human-computer interaction. Over the last decade, researchers have explored a wealth of novel sensing systems based on the body-area electric field. Despite this pervasive exploration, however, no comprehensive survey exists to serve as a guideline, and the variety of hardware implementations, applied algorithms, and targeted applications makes a systematic overview of the subject challenging. This paper aims to fill that gap by comprehensively summarizing the existing work on body-area capacitive sensing so that researchers can have a better view of the current state of exploration. To this end, we first sort the explorations into three domains according to the body forms involved: body-part electric field, whole-body electric field, and body-to-body electric field, and enumerate the state-of-the-art works in each domain with a detailed survey of the underlying sensing techniques and targeted applications. We then summarize the three types of sensing frontends in circuit design, the most critical part of body-area capacitive sensing, and analyze the data processing pipeline, categorized into three kinds of approaches. Finally, we describe the challenges and outlook of body-area electric field sensing.

    Sense, Think, Grasp: A study on visual and tactile information processing for autonomous manipulation

    Interacting with the environment using hands is one of the distinctive abilities of humans with respect to other species. This aptitude is reflected in the crucial role played by object manipulation in the world we have shaped for ourselves. With a view to bringing robots outside industry to support people in everyday life, the ability to manipulate objects autonomously and in unstructured environments is therefore one of the basic skills they need. Autonomous manipulation is characterized by great complexity, especially regarding the processing of sensor information to perceive the surrounding environment. Humans rely on vision for wide-ranging three-dimensional information, proprioception for awareness of the relative position of their own body in space, and the sense of touch for local information when physical interaction with objects happens. The study of autonomous manipulation in robotics aims at transferring similar perceptive skills to robots so that, combined with state-of-the-art control techniques, they can achieve similar performance in manipulating objects. The great complexity of this task makes autonomous manipulation one of the open problems in robotics, and it has been drawing increasing research attention in recent years. In this Thesis, we propose possible solutions to some key components of autonomous manipulation, focusing in particular on the perception problem and testing the developed approaches on the humanoid robotic platform iCub. When available, vision is the first source of information to be processed for inferring how to interact with objects. The object modeling and grasping pipeline we designed, based on superquadric functions, meets this need: it reconstructs the object's 3D model from a partial point cloud and computes a suitable hand pose for grasping the object. Retrieving object information with touch sensors alone is a relevant skill that becomes crucial when vision is occluded, as happens, for instance, during physical interaction with the object. We addressed this problem with the design of a novel tactile localization algorithm, named Memory Unscented Particle Filter, capable of localizing and recognizing objects relying solely on 3D contact points collected on the object surface. Another key point of autonomous manipulation reported in this Thesis is bi-manual coordination. The execution of more advanced manipulation tasks may in fact require the use and coordination of two arms. Tool use, for instance, often requires a proper in-hand object pose that can be obtained via dual-arm re-grasping. In pick-and-place tasks, the initial and target positions of the object sometimes do not belong to the same arm's workspace, requiring one hand to lift the object and the other to place it in the new position. In this regard, we implemented a pipeline for executing the handover task, i.e. the sequence of actions for autonomously passing an object from one robot hand to the other. The contributions described thus far address specific subproblems of the more complex task of autonomous manipulation. This differs from what humans do, in that humans develop their manipulation skills by learning through experience and trial and error. A proper mathematical formulation for encoding this learning approach is given by Deep Reinforcement Learning, which has recently proved successful in many robotics applications.
    For this reason, this Thesis also reports on the six-month experience carried out at the Berkeley Artificial Intelligence Research laboratory with the goal of studying Deep Reinforcement Learning and its application to autonomous manipulation.
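    The grasping pipeline mentioned above relies on superquadric functions. As a point of reference, the sketch below evaluates the standard superquadric inside-outside function and the least-squares cost commonly minimized when fitting a superquadric to a partial point cloud; the notation follows the classical formulation and is not necessarily the thesis's own implementation.

```python
# Sketch of the standard superquadric inside-outside function and a common
# fitting cost. Parameter names (semi-axes a1, a2, a3; shape exponents e1, e2)
# follow the classical formulation, not necessarily the thesis's notation.
import numpy as np

def superquadric_F(points: np.ndarray, a1, a2, a3, e1, e2) -> np.ndarray:
    """Evaluate F for an (N, 3) array of points expressed in the object frame.
    F < 1 inside the superquadric, F = 1 on the surface, F > 1 outside."""
    x = np.abs(points[:, 0]) / a1
    y = np.abs(points[:, 1]) / a2
    z = np.abs(points[:, 2]) / a3
    return (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)

def fit_cost(points: np.ndarray, params) -> float:
    """Least-squares cost often minimized with a generic nonlinear optimizer
    when fitting a superquadric to a partial point cloud."""
    a1, a2, a3, e1, e2 = params
    F = superquadric_F(points, a1, a2, a3, e1, e2)
    # The sqrt(a1*a2*a3) factor biases the fit toward the smallest enclosing shape.
    return float(np.sum((np.sqrt(a1 * a2 * a3) * (F ** e1 - 1.0)) ** 2))
```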

    Neuromorphic vision-based tactile sensor for robotic grasp

    Tactile sensors are developed to mimic the human sense of touch in robotics. The sense of touch is essential for machines to interact with the environment. Several approaches have been studied to obtain rich information from the contact point in order to correct the robot's actions and acquire further information about the objects. Vision-based tactile sensors aim to extract tactile information by observing the contact point between the robot's hand and the environment and applying computer vision algorithms. In this thesis, a novel class of vision-based tactile sensors, the "Neuromorphic Vision-Based Tactile Sensor", is proposed to estimate the contact force and classify materials in a grasp. This novel approach utilises a neuromorphic vision sensor to capture intensity changes (events) at the contact point. The triggered events represent changes in the contact force at each pixel with microsecond resolution. The proposed sensor has a high temporal resolution and dynamic range, which makes it suitable for high-speed robotic applications. Initially, a general framework is presented to show the sensor's operation, and the relationship between events and contact force is described. Afterwards, methods based on Time-Delay Neural Networks (TDNN), Gaussian Processes (GP) and Deep Neural Networks (DNN) are developed to estimate the contact force and classify object materials from the accumulation of events. The results indicate a low mean squared error of 0.17 N against a force sensor for force estimation using the TDNN. Moreover, object materials are classified with 79.12% accuracy, which is 30% higher than with piezoresistive force sensors. This is followed by an approach to preserve spatio-temporal information during the learning process: the triggered events are framed (event-frames) within a time window to preserve spatial information. Multiple types of Long Short-Term Memory (LSTM) networks with convolutional layers are then developed to estimate the contact force for objects of different sizes. The results are validated against a force sensor and achieve a mean squared error of less than 0.1 N. Finally, algorithmic augmentation techniques are investigated to improve the networks' accuracy over a wider range of forces. Image-based and time-series augmentation methods are developed to generate artificial samples for training the networks, and a novel time-domain approach, Temporal Event Shifting (TES), is proposed to augment events while preserving their spatial information. The results are validated in real experiments, which indicate that time-domain and hybrid augmentation methods significantly improve the networks' accuracy for objects of different sizes.
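    To make the event-frame idea concrete, the following sketch accumulates neuromorphic events into per-window frames of the kind that could then be fed to convolutional LSTM networks. The event tuple layout (x, y, timestamp in microseconds, polarity) and the window length are assumptions for illustration, not the thesis's exact format.

```python
# Illustrative accumulation of event-camera events into "event-frames" over
# fixed time windows; data layout and window size are assumptions.
import numpy as np

def events_to_frames(events, height, width, window_us=10_000):
    """events: list of (x, y, t_us, polarity) with t_us ascending.
    Returns a list of (height, width) arrays, one per time window, where each
    pixel holds the signed count of events in that window."""
    frames, frame = [], np.zeros((height, width), dtype=np.int32)
    if not events:
        return frames
    window_end = events[0][2] + window_us
    for x, y, t_us, pol in events:
        while t_us >= window_end:            # close the current window
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.int32)
            window_end += window_us
        frame[y, x] += 1 if pol > 0 else -1  # signed accumulation per pixel
    frames.append(frame)
    return frames

# Example: three events split across two 10 ms windows on an 8x8 sensor.
frames = events_to_frames([(2, 3, 0, 1), (2, 3, 4000, 1), (5, 1, 12000, 0)], 8, 8)
print(len(frames), frames[0][3, 2], frames[1][1, 5])  # 2 2 -1
```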

    Application and validation of capacitive proximity sensing systems in smart environments

    Smart environments feature a number of computing and sensing devices that support occupants in performing their tasks. In recent decades there have been many advances in miniaturizing sensors and computers while greatly increasing their performance. As a result, new devices with a plethora of functions are being introduced into our daily lives. Gathering information about the occupants is fundamental to adapting the smart environment to preference and situation. A large number of different sensing devices can provide information about the user, including cameras, accelerometers, GPS, acoustic systems, and capacitive sensors. The latter use the properties of an electric field to sense the presence and properties of conductive objects within range. They are commonly employed in the finger-controlled touch screens that are present in billions of devices. A less common variety is the capacitive proximity sensor, which can detect the presence of the human body over a distance, enabling interesting applications in smart environments. Choosing the right sensor technology is an important decision in designing a smart environment application. Apart from looking at previous use cases, this process can be supported by more formal methods. In this work I present a benchmarking model designed to support this decision process for applications in smart environments. Previous benchmarks for pervasive systems have been adapted towards sensor systems and include metrics that are specific to smart environments. Based on distinct sensor characteristics, different ratings are used as weighting factors in calculating a benchmarking score. The method is verified using popularity matching in two scientific databases, and extensions are provided to cope with central tendency bias and to normalize with regard to the average feature rating. By applying this benchmark to applications in smart environments and capacitive proximity sensors, four relevant application areas are identified: indoor localization, smart appliances, physiological sensing and gesture interaction. Each application area has a set of challenges regarding the required sensor technology, system layout, and processing that can be tackled using various new or improved methods. I present a collection of existing and novel methods that support processing the data generated by capacitive proximity sensors, in the areas of sparsely distributed sensors, model-driven fitting methods, heterogeneous sensor systems, image-based processing and physiological signal processing. To evaluate the feasibility of these methods, several prototypes have been created and tested for performance and usability; six of them are presented in detail. Based on these evaluations and the knowledge generated in the design process, I classify capacitive proximity sensing in smart environments. This classification consists of a comparison to other popular sensing technologies in smart environments, the major benefits of capacitive proximity sensors, and their limitations. In order to support parties interested in developing smart environment applications using capacitive proximity sensors, I present a set of guidelines that support the decision process from technology selection to the choice of processing methods.
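    As a minimal illustration of how such a benchmarking score could be combined from ratings and weights, the sketch below averages per-characteristic ratings weighted by application-specific importance factors. The characteristic names and numbers are invented for illustration and are not taken from the thesis.

```python
# Hypothetical weighted benchmarking score: ratings per sensor characteristic,
# weighted by how much each characteristic matters for an application area.
def benchmark_score(ratings: dict, weights: dict) -> float:
    """ratings: characteristic -> rating (e.g. 1..5); weights: characteristic -> importance."""
    total_weight = sum(weights.values())
    return sum(ratings.get(c, 0) * w for c, w in weights.items()) / total_weight

# Invented example values for capacitive proximity sensing vs. indoor localization.
capacitive_proximity = {"range": 3, "resolution": 3, "cost": 5, "privacy": 5, "setup_effort": 4}
indoor_localization_weights = {"range": 3, "resolution": 2, "cost": 2, "privacy": 3, "setup_effort": 1}
print(round(benchmark_score(capacitive_proximity, indoor_localization_weights), 2))  # 4.0
```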

    Acoustic-based Smart Tactile Sensing in Social Robots

    International Mention in the doctoral degree. The sense of touch is a crucial component of human social interaction and is unique among the five senses. As the only proximal sense, touch requires close or direct physical contact to register information. This fact makes touch an interaction modality full of possibilities regarding social communication. Through touch, we are able to ascertain the other person's intention and communicate emotions. From this idea emerges the concept of social touch as the act of touching another person in a social context. It can serve various purposes, such as greeting, showing affection, persuasion, and regulating emotional and physical well-being. Recently, the number of people interacting with artificial systems and agents has increased, mainly due to the rise of technological devices such as smartphones and smart speakers. Still, these devices are limited in their interaction capabilities. To deal with this issue, recent developments in social robotics have improved the interaction possibilities to make agents more seamless and useful. In this sense, social robots are designed to facilitate natural interactions between humans and artificial agents. In this context, the sense of touch is revealed as a natural interaction vehicle that can improve Human-Robot Interaction (HRI) due to its communicative relevance. Moreover, for a social robot, the relationship between social touch and its embodiment is direct, since it has a physical body with which to apply or receive touches. From a technical standpoint, tactile sensing systems have recently been the subject of further research, mostly devoted to understanding this sense in order to create intelligent systems that can improve people's lives. Currently, social robots are popular devices that include technologies for touch sensing. This is motivated by the fact that robots may encounter expected or unexpected physical contact with humans, which can either enhance or interfere with the execution of their behaviours. There is, therefore, a need to detect human touch in robot applications. Some methods even include touch-gesture recognition, although they often require significant hardware deployments involving multiple sensors. Additionally, the dependability of those sensing technologies is constrained, because the majority of them still struggle with issues like false positives or poor recognition rates. Acoustic sensing, in this sense, can provide a set of features that can alleviate the aforementioned shortcomings. Even though it is a technology that has been utilised in various research fields, it has yet to be integrated into human-robot touch interaction. Therefore, in this work we propose the Acoustic Touch Recognition (ATR) system, a smart tactile sensing system based on acoustic sensing and designed to improve human-robot social interaction. Our system is developed to classify touch gestures and locate their source. It has also been integrated into real social robotic platforms and tested successfully in real-world applications.
    Our proposal is approached from two standpoints, one technical and the other related to social touch. On the one hand, the technical motivation of this work is centred on achieving a cost-efficient, modular and portable tactile system; for that, we explore the fields of touch sensing technologies, smart tactile sensing systems and their application in HRI. On the other hand, part of the research is centred on the affective impact of social touch during human-robot interaction, resulting in two studies exploring this idea. Doctoral Programme in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee: President: Pedro Manuel Urbano de Almeida Lima; Secretary: María Dolores Blanco Rojas; Member: Antonio Fernández Caballer

    Neuromorphic Computing Systems for Tactile Sensing Perception

    Touch sensing plays an important role in humans' daily lives. Tasks like exploring, grasping and manipulating objects deeply rely on it. As such, robots and hand prostheses endowed with the sense of touch can manipulate objects better and more easily, and physically collaborate with other agents. Towards this goal, information about touched objects and surfaces has to be inferred from raw sensor data. The orientation of edges, which is employed as a pre-processing stage in both artificial vision and touch, is a key cue for object discrimination. Inspired by the encoding of edges in human first-order tactile afferents, we developed a biologically inspired spiking architecture that mimics human tactile perception with computational primitives implementable on low-power subthreshold neuromorphic hardware. The network architecture uses three layers of Leaky Integrate-and-Fire (LIF) neurons to distinguish different edge orientations of a bar pressed on the artificial skin of the iCub robot. We demonstrated that the network architecture can learn the appropriate connectivity through unsupervised spike-based learning, and that the number and spatial distribution of sensitive areas within receptive fields are important for edge orientation discrimination. The unconstrained and random structure of the connectivity among layers can produce unbalanced activity in the output neurons, which are driven by a variable amount of synaptic input. We explored two different mechanisms of synaptic normalization (weight normalization and homeostasis), showing how they can be useful during the learning and inference phases. With homeostasis and weight normalization, the network successfully discriminates 35 of the 36 orientations (0° to 180° in 5° steps). Besides edge orientation discrimination, we modified the network architecture to classify six different touch modalities (poke, press, grab, squeeze, push, and rolling a wheel), demonstrating the network's ability to learn appropriate connectivity patterns for the classification and achieving a total accuracy of 88.3%. Furthermore, another application scenario, tactile object shape recognition, has been considered because of its importance in robotic manipulation. We showed that a network architecture with two layers of spiking neurons was able to discriminate tactile object shapes with 100% accuracy, after integrating an array of 160 piezoresistive tactile sensors onto which the object shapes are applied.
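    For readers unfamiliar with the neuron model named above, the sketch below shows a minimal Leaky Integrate-and-Fire update step of the kind the three-layer architecture is built from; the time constant, threshold, and input values are placeholders rather than the thesis's parameters.

```python
# Minimal Leaky Integrate-and-Fire (LIF) neuron step; all constants are placeholders.
def lif_step(v, input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """One Euler step of the LIF membrane equation; returns (new_v, spiked)."""
    dv = (-(v - v_rest) + input_current) * (dt / tau)  # leaky integration of the input
    v = v + dv
    if v >= v_thresh:          # threshold crossing emits a spike and resets the membrane
        return v_reset, True
    return v, False

# Drive one neuron with a constant input and count spikes over 100 ms.
v, spikes = 0.0, 0
for _ in range(100):
    v, fired = lif_step(v, input_current=1.5)
    spikes += fired
print(spikes)
```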

    Machine Learning Techniques for Device-Free Indoor Person Tracking

    The abstract is in the attachment.