1,011 research outputs found

    Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey

    Full text link
    Because roughly sixty percent of the human body is water, the body is inherently conductive: it both forms an intrinsic electric field from the body to its surroundings and deforms the distribution of any existing electric field near the body. Body-area capacitive sensing, also called body-area electric field sensing, is becoming a promising alternative for wearable devices in human activity recognition and human-computer interaction. Over the last decade, researchers have explored many novel sensing systems built on the body-area electric field. Despite this pervasive exploration, however, no comprehensive survey exists to serve as a guideline, and the variety of hardware implementations, applied algorithms, and target applications makes a systematic overview of the subject challenging. This paper fills that gap by comprehensively summarizing existing work on body-area capacitive sensing so that researchers can better assess the current state of exploration. To this end, we first sorted the explorations into three domains according to the body forms involved: body-part electric field, whole-body electric field, and body-to-body electric field, and enumerated the state-of-the-art works in each domain with a detailed survey of the underlying sensing techniques and target applications. We then summarized the three types of sensing frontends in circuit design, the most critical part of body-area capacitive sensing, and analyzed the data processing pipeline, categorized into three kinds of approaches. Finally, we described the challenges and outlook of body-area electric field sensing.
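
    Below is a minimal, hypothetical Python sketch of the kind of data processing pipeline the survey categorizes: filtering a raw capacitance trace, windowing it, and extracting statistical features. The sample rate, band limits, and synthetic signal are assumptions for illustration, not taken from any surveyed system.

        # Hypothetical body-area capacitive sensing pipeline: filter -> window -> features.
        # All signal values and parameters are invented for illustration.
        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 100.0  # assumed sample rate in Hz

        def bandpass(x, lo=0.5, hi=10.0, fs=FS, order=4):
            """Suppress DC drift and high-frequency noise in a capacitance trace."""
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def window_features(x, win=100):
            """Per-window statistical features commonly used in HAR pipelines."""
            feats = []
            for i in range(0, len(x) - win + 1, win):
                w = x[i:i + win]
                feats.append([w.mean(), w.std(), w.max() - w.min()])
            return np.array(feats)

        # Synthetic data standing in for a capacitive frontend's output.
        t = np.arange(0, 10, 1 / FS)
        raw = 0.2 * np.sin(2 * np.pi * 2.0 * t) + 0.05 * np.random.randn(len(t)) + 3.3
        features = window_features(bandpass(raw), win=100)
        print(features.shape)  # one feature row per 1-second window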

    EdgeGlass: Exploring Tapping Performance on Smart Glasses while Sitting and Walking

    Get PDF
    Currently, smart glasses support touch sensing only through front-mounted touch pads; touches on the top, front, and bottom sides of a glasses-mounted touchpad have not yet been explored. We built a customized touch sensor (length: 5-6 cm, height: 1 cm, width: 0.5 cm) that senses on its top, front, and bottom surfaces. For this we used capacitive touch sensing technology (MPR121 chips) with an electrode size of ~4.5 mm square, typical of modern touchscreens. The resulting hardware system comprises 48 separate touch sensors. We investigated interaction with it in both sitting and walking conditions, using single-finger sequential tapping and paired-finger simultaneous tapping. We divided each side into three equal target areas, yielding a total of 36 combinations. Our quantitative results showed that paired-finger simultaneous taps were faster and less error-prone than single-finger sequential taps in the walking condition, whereas single-finger sequential taps were slower but less error-prone than paired-finger simultaneous taps in the sitting condition. Single-finger sequential taps were also slower but much less error-prone when sitting than when walking. Interestingly, paired-finger taps performed similarly in both error rate and completion time across sitting and walking. Mental demand, physical demand, performance, and effort were unaffected by tapping type or body pose. For temporal demand, mean time-pressure workload was higher for single-finger sequential tapping than for paired-finger simultaneous tapping, but body pose did not affect time-pressure workload for either tapping type. For frustration, mean workload was higher for single-finger sequential tapping than for paired-finger simultaneous tapping, and walking produced higher mean frustration than sitting. The subjective measure of overall workload during the performance study showed no significant difference across either independent variable: body pose (sitting vs. walking) or tapping type (single-finger sequential vs. paired-finger simultaneous).
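
    As a rough illustration of the sensing layer described above, the sketch below polls one MPR121 capacitive touch controller over I2C from Python. It is not the authors' firmware; the paper's 48-sensor rig would use several such chips. The bus number and the default address 0x5A are assumptions, and the chip is presumed already configured into run mode.

        # Minimal sketch: polling one MPR121 touch controller over I2C on a Linux host.
        from smbus2 import SMBus

        MPR121_ADDR = 0x5A       # MPR121 default I2C address (assumed here)
        TOUCH_STATUS_REG = 0x00  # registers 0x00/0x01 hold the 12 electrode touch bits

        def read_touched(bus, addr=MPR121_ADDR):
            """Return the electrode indices (0-11) currently reported as touched."""
            status = bus.read_word_data(addr, TOUCH_STATUS_REG)  # little-endian word
            return [e for e in range(12) if status & (1 << e)]

        # Assumes the chip sits on I2C bus 1 and has already been initialized
        # (taken out of stop mode with thresholds configured).
        with SMBus(1) as bus:
            print(read_touched(bus))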

    Grasp-sensitive surfaces

    Get PDF
    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction: For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* of sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementation of grasp-sensitive surfaces is still hard, that researchers are often unaware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing.
TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
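
    To make the capture/identify/interpret workflow concrete, here is a hedged Python sketch with synthetic data: four capacitive readings stand in for HandSense-style sensors, a nearest-neighbor model identifies the grasp, and a lookup interprets it. The labels, values, and classifier choice are illustrative assumptions, not the dissertation's implementation.

        # Illustrative capture -> identify -> interpret pipeline with invented data.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)

        # capture: synthetic readings from four capacitive sensors, two grasp types
        X_train = np.vstack([rng.normal(0.2, 0.05, (20, 4)),   # "camera grip"
                             rng.normal(0.7, 0.05, (20, 4))])  # "phone-call grip"
        y_train = ["camera"] * 20 + ["call"] * 20

        # identify: map raw sensor values to a grasp label
        clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
        grasp = clf.predict(rng.normal(0.7, 0.05, (1, 4)))[0]

        # interpret: attach a meaning/action to the identified grasp, ideally
        # combined with other context information as the dissertation argues
        ACTIONS = {"camera": "launch camera app", "call": "open contact list"}
        print(grasp, "->", ACTIONS[grasp])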

    Wearable Smart Rings for Multi-Finger Gesture Recognition Using Supervised Learning

    Get PDF
    This thesis presents a wearable smart ring with an integrated Bluetooth Low Energy (BLE) module. The system uses an accelerometer and a gyroscope to collect finger motion data. A prototype was manufactured and its performance tested. To detect complex finger movements, two rings are worn, on the index finger and the thumb, while performing the gestures. Nine pre-defined finger movements were introduced to verify the feasibility of the proposed method. Data pre-processing techniques, including normalization, statistical feature extraction, random forest recursive feature elimination (RF-RFE), and k-nearest neighbors sequential forward floating selection (KNN-SFFS), were applied to select well-distinguished feature vectors and enhance gesture recognition accuracy. Three supervised machine learning algorithms were used for gesture classification: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB). We demonstrated that using the KNN-SFFS-recommended features as the machine learning input not only significantly reduces the dimension of the feature vector, yielding faster response times and preventing overfitting, but also achieves prediction accuracy approximately equal to using all elements of the feature vector. With KNN as the primary classifier, the system accurately recognizes six one-finger and three two-finger gestures with 97.1% and 97.0% accuracy, respectively.
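
    The following sketch exercises the two named feature-selection techniques with scikit-learn's RFE and mlxtend's sequential selector. The dataset shape, parameter values, and the chaining of the two stages are placeholders for illustration, not the thesis's actual settings.

        # Hedged sketch of RF-RFE followed by KNN-SFFS on synthetic gesture features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import RFE
        from sklearn.neighbors import KNeighborsClassifier
        from mlxtend.feature_selection import SequentialFeatureSelector as SFS

        rng = np.random.default_rng(1)
        X = rng.normal(size=(180, 40))   # 180 gesture windows, 40 statistical features
        y = np.repeat(np.arange(9), 20)  # 9 pre-defined gestures, 20 windows each

        # Stage 1: random forest recursive feature elimination (RF-RFE)
        rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                  n_features_to_select=20).fit(X, y)
        X_rfe = X[:, rfe.support_]

        # Stage 2: KNN sequential forward floating selection (KNN-SFFS)
        sffs = SFS(KNeighborsClassifier(n_neighbors=5), k_features=10,
                   forward=True, floating=True, scoring="accuracy", cv=5).fit(X_rfe, y)
        print(sffs.k_feature_idx_)  # indices of the selected feature subset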

    High-accuracy Motion Estimation for MEMS Devices with Capacitive Sensors

    Full text link
    With the development of micro-electro-mechanical system (MEMS) technologies, emerging applications such as in-situ MEMS IMU calibration, medical imaging via endomicroscopy, and feedback control for nano-positioning and laser scanning demand especially accurate measurement of motion using on-chip sensors. Thanks to their simple fabrication and easy integration into system-level architectures, capacitive sensors are a primary choice for motion tracking in these applications. Challenges arise, however, because the capacitive sensing scheme in these applications is often unconventional, owing to the nature of the application and/or the design and fabrication restrictions imposed, and because MEMS sensors are traditionally susceptible to accuracy errors from nonlinear sensor behavior, gain and bias drift, feedthrough disturbances, and the like. These challenges prevent traditional sensing and estimation techniques from fulfilling the accuracy requirements of the candidate applications. The goal of this dissertation is to provide a framework for such MEMS devices to achieve high-accuracy motion estimation, focusing on innovative sensing and estimation techniques that leverage unconventional capacitive sensing schemes to improve estimation accuracy. Several research studies with this aim have been conducted, and the methodologies, results, and findings are presented in the context of three applications. The general procedure of each study includes proposing and devising the capacitive sensing scheme; deriving a sensor model from first principles of the capacitor configuration and sensing circuit; analyzing the sensor's characteristics in simulation while tuning key parameters; conducting experimental investigations by constructing testbeds and identifying actuation and sensing models; formulating estimation schemes that include the identified actuation dynamics and sensor models; and validating the estimation schemes and evaluating their performance against ground-truth measurements. The studies show that the proposed techniques are valid and effective, as the estimation schemes adopted either fulfill the requirements imposed or improve overall estimation performance. Highlighted results include a scale factor calibration accuracy of 286 ppm for a MEMS gyroscope (Chapter 3), a 15.1% improvement in angular displacement estimation accuracy from a threshold sensing technique for a scanning micro-mirror (Chapter 4), and a phase shift prediction error of 0.39 degrees for an electrostatic micro-scanner using shared electrodes for actuation and sensing (Chapter 5). PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147568/1/davidsky_1.pd
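
    As a toy illustration of one highlighted result, the sketch below calibrates a sensor's scale factor and bias by least squares against a reference stimulus. All numbers are synthetic; the dissertation's actual estimation schemes are model-based and considerably richer.

        # Minimal scale-factor/bias calibration by least squares (synthetic data).
        import numpy as np

        rng = np.random.default_rng(2)
        omega_ref = np.linspace(-100, 100, 41)       # reference rate, deg/s
        true_scale, true_bias = 1.000286, 0.15       # e.g. a 286 ppm scale error
        meas = true_scale * omega_ref + true_bias + rng.normal(0, 0.05, omega_ref.size)

        # Solve meas ~= scale * omega_ref + bias in the least-squares sense
        A = np.column_stack([omega_ref, np.ones_like(omega_ref)])
        (scale, bias), *_ = np.linalg.lstsq(A, meas, rcond=None)
        print(f"scale error = {(scale - 1) * 1e6:.0f} ppm, bias = {bias:.3f} deg/s")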

    Novel active sweat pores based liveness detection techniques for fingerprint biometrics

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Liveness detection in automatic fingerprint identification systems (AFIS) is an issue that still prevents their use in many unsupervised security applications. In the last decade, various hardware and software solutions for detecting liveness from fingerprints have been proposed by academic research groups, but these methods have not yet been practically implemented with existing AFIS, and a large amount of research is needed before liveness detection can be deployed in commercial AFIS. In this research, novel active-pore-based liveness detection methods are proposed for AFIS. They rely on detecting active pores on fingertip ridges and measuring ionic activity in the sweat fluid that appears at the openings of active pores. The literature is critically reviewed with respect to liveness detection issues; existing fingerprint technology and the hardware and software solutions proposed for liveness detection are also examined. A comparative study of commercial and purpose-collected fingerprint databases concluded that images in these datasets do not contain any visible evidence of liveness. They have been used to test various liveness detection algorithms; however, implementing proper liveness detection in fingerprint systems requires a new database with fine details of fingertips. A new high-resolution Brunel Fingerprint Biometric Database (B-FBDB) was therefore captured and collected for this research. The first proposed liveness detection method is a High Pass Correlation Filtering Algorithm (HCFA), an image processing algorithm developed in Matlab and tested on B-FBDB images. The HCFA results prove the idea behind the research, as they clearly demonstrate the possibility of liveness detection through active pore detection in high-resolution images. The second method is based on experimental evidence: it detects liveness by measuring ionic activity above a sample of ionic sweat fluid. A Micro Needle Electrode (MNE) setup was used to measure the ionic activity. Charges of 5.9 pC to 6.5 pC were detected at ten MNE positions (50 μm to 360 μm) above the surface of the ionic sweat fluid. These measurements are further proof of liveness from active fingertip pores, and the technique could be used to implement future liveness detection solutions. The interaction of the MNE and the ionic fluid was modelled in COMSOL Multiphysics, and the effect of electric field variations on the MNE was recorded at positions 5 μm to 360 μm above the ionic fluid. This study was funded by the University of Sindh, Jamshoro, Pakistan and the Higher Education Commission of Pakistan.
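
    The abstract does not detail HCFA itself, but a generic high-pass-plus-correlation pore detector might look like the sketch below (the original was implemented in Matlab on B-FBDB images). The template shape, filter scales, and threshold are assumptions, not the thesis's parameters.

        # Generic high-pass filtering + template correlation for pore candidates.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.signal import correlate2d

        img = np.random.rand(128, 128)  # stand-in for a high-resolution fingerprint
        highpass = img - gaussian_filter(img, sigma=3)  # remove slow ridge shading

        # Correlate with a small difference-of-Gaussians blob approximating a pore
        t = np.zeros((7, 7)); t[3, 3] = 1.0
        template = gaussian_filter(t, sigma=1) - gaussian_filter(t, sigma=2)
        response = correlate2d(highpass, template, mode="same")

        # Candidate pore locations: strong positive correlation peaks
        ys, xs = np.where(response > response.mean() + 3 * response.std())
        print(len(ys), "pore candidates")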

    Toward New Ecologies of Cyberphysical Representational Forms, Scales, and Modalities

    Get PDF
    Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with screen-based multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established mainstream interaction paradigms. This dissertation describes interfaces that support complementary information mediations, representational forms, and scales toward an ecology of systems embodying hybrid interaction modalities. I investigate systems combining tangible and multi-touch, as well as systems combining tangible and virtual reality interaction. For each of them, I describe work focusing on design and fabrication aspects, as well as work focusing on reproducibility, engagement, legibility, and perception aspects

    Exploring human-object interaction through force vector measurement

    Get PDF
    Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 101-107). I introduce SCALE, a project aiming to further understand human-object interaction through real-time analysis of force vector signals, which I define in this thesis as "Force-based Interaction." Force conveys fundamental information in force-based interaction, including force intensity, force direction, and object weight - information otherwise difficult to access or infer from other sensing modalities. To explore the design space of force-based interaction, I developed the SCALE toolkit, composed of modularized 3-axis force sensors and application APIs. In collaboration with major industry partners, this system has been applied to a variety of application domains and settings, including a retail store, a smart home, and a farmers market. In this thesis, I propose the base system SCALE and two additional advanced projects, KI/OSK and DepthTouch, which build upon it. by Takatoshi Yoshida.
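
    As a hypothetical illustration of force-based interaction's primitives, the sketch below turns one 3-axis force sample into intensity, direction, and a static weight estimate. It does not use the SCALE toolkit's APIs; axis conventions, names, and values are invented.

        # Turning a 3-axis force sample into intensity, direction, and weight.
        import numpy as np

        G = 9.81  # gravitational acceleration, m/s^2

        def describe_force(fx, fy, fz):
            f = np.array([fx, fy, fz])
            intensity = np.linalg.norm(f)  # force magnitude in newtons
            direction = f / intensity      # unit vector of the applied force
            return intensity, direction

        # A static object resting on the sensor: vertical force ~ weight
        intensity, direction = describe_force(0.3, -0.1, 14.7)
        print(f"|F| = {intensity:.2f} N, direction = {np.round(direction, 2)}")
        print(f"estimated mass = {intensity / G:.2f} kg")  # valid only at rest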

    SENSING MECHANISM AND APPLICATION OF MECHANICAL STRAIN SENSOR: A MINI-REVIEW

    Get PDF
    This study reviews the potential of flexible strain sensors based on nanomaterials such as carbon nanotubes (CNTs), graphene, and metal nanowires (NWs). These nanomaterials have excellent flexibility, conductivity, and mechanical properties, which enable them to be integrated into clothing or attached to the skin for the real-time monitoring of various activities. However, the main challenge is balancing high stretchability and sensitivity. This paper explains the basic concept of strain sensors that can convert mechanical deformation into electrical signals. Moreover, this paper focuses on simple, flexible, and stretchable resistive and capacitive sensors. It also discusses the important factors in choosing materials and fabrication methods, emphasizing the crucial role of suitable polymers in high-performance strain sensing. This study reviews the fabrication processes, mechanisms, performance, and applications of stretchable strain sensors in detail. It analyzes key aspects, such as sensitivity, stretchability, linearity, response time, and durability. This review provides useful insights into the current status and prospects of stretchable strain sensors in wearable technology and human–machine interfaces
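
    For the resistive sensors discussed, sensitivity is conventionally quantified by the gauge factor GF = (ΔR/R0)/ε, where ε is the applied strain. A small worked example with illustrative numbers:

        # Worked example: gauge factor of a resistive strain sensor (invented values).
        R0 = 1000.0     # unstrained resistance, ohms
        R = 1024.0      # resistance at 0.4 % applied strain
        strain = 0.004  # epsilon = delta L / L0

        gf = ((R - R0) / R0) / strain
        print(f"gauge factor = {gf:.1f}")  # (24/1000)/0.004 = 6.0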