1,435 research outputs found

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) for word-level translation and a probabilistic framework based on Connectionist Temporal Classification (CTC) for sentence-level translation. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate when translating unseen ASL sentences. Given this promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has the potential to fundamentally change deaf people's lives.
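    The 8.2% word error rate quoted above is the standard sequence-transcription metric: the word-level edit distance between the recognized sentence and the reference, normalized by the reference length. A minimal sketch of how it is conventionally computed (the sentence pair below is a made-up illustration, not data from the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j] (Wagner-Fischer)
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(r)][len(h)] / len(r)

# Hypothetical example: one dropped word out of four -> WER 0.25
print(wer("i want to eat", "i want eat"))  # 0.25
```

    An 8.2% sentence-level WER thus means that, on average, fewer than one word in ten needs to be inserted, deleted, or substituted to recover the reference sentence.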

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the following: computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.

    Biosignal‐based human–machine interfaces for assistance and rehabilitation : a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application by considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade. In contrast, studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has experienced a considerable rise, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Computational design and optimization of electro-physiological sensors

    Electro-physiological sensing devices are becoming increasingly common in diverse applications. However, designing such sensors in compact form factors and for high-quality signal acquisition is a challenging task even for experts: it is typically done using heuristics and requires extensive training. Our work proposes a computational approach to designing multi-modal electro-physiological sensors. By employing an optimization-based approach alongside an integrated predictive model for multiple modalities, compact sensors can be created that offer an optimal trade-off between high signal quality and small device size. The task is assisted by a graphical tool that allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. They demonstrate that the generated designs can achieve an optimal balance between the size of the sensor and its signal acquisition capability, outperforming expert-generated solutions.
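    The quality-versus-size trade-off the abstract describes can be illustrated with a toy scalarized objective: score each candidate design by predicted signal quality minus a weighted size penalty and keep the best. The quality model and candidate sizes below are invented placeholders for illustration, not the paper's predictive model:

```python
def toy_quality(size_cm2: float) -> float:
    # Placeholder model (made up): signal quality grows with electrode
    # area but saturates, so ever-larger devices yield diminishing returns.
    return size_cm2 / (size_cm2 + 1.0)

def best_design(sizes, weight: float) -> float:
    # Weighted-sum trade-off: maximize quality minus a size penalty.
    # A larger weight biases the optimizer toward smaller devices.
    return max(sizes, key=lambda s: toy_quality(s) - weight * s)

# Hypothetical candidate electrode areas in cm^2:
candidates = [0.5, 1.0, 2.0, 4.0, 8.0]
print(best_design(candidates, weight=0.05))  # 4.0
```

    Sweeping the weight traces out the trade-off curve between device size and acquisition capability; a designer-in-the-loop tool lets the user pick the preferred point on that curve interactively.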

    Review of the Augmented Reality Systems for Shoulder Rehabilitation

    The literature shows increasing interest in the development of augmented reality (AR) applications in several fields, including rehabilitation. Current studies show the need for new rehabilitation tools for the upper extremity, since traditional interventions are less effective there than in other body regions. This review aims to study to what extent AR applications are used in shoulder rehabilitation, to examine the wearable/non-wearable technologies employed, and to investigate the evidence supporting AR effectiveness. Nine AR systems were identified and analyzed in terms of tracking methods, visualization technologies, integrated feedback, rehabilitation setting, and clinical evaluation. Our findings show that all these systems utilize vision-based registration, mainly with wearable marker-based tracking, and spatial displays. No system uses head-mounted displays, and only one system (11%) integrates a wearable interface (for tactile feedback). Three systems (33%) provide only visual feedback, while 66% present multimodal feedback: 33% visual-audio, 22% visual-audio with biofeedback, and 11% visual-audio with haptic feedback. Moreover, several systems (44%) are designed primarily for home settings. Three systems (33%) have been successfully evaluated in clinical trials with more than 10 patients, showing advantages over traditional rehabilitation methods. Further clinical studies are needed to generalize these findings and support the effectiveness of AR applications.

    Intimate interfaces in action: assessing the usability and subtlety of emg-based motionless gestures

    Mobile communication devices, such as mobile phones and networked personal digital assistants (PDAs), allow users to be constantly connected and to communicate anywhere and at any time, often resulting in personal and private communication taking place in public spaces. This private–public contrast can be problematic. As a remedy, we promote intimate interfaces: interfaces that allow subtle and minimal mobile interaction without disrupting the surrounding environment. In particular, motionless gestures sensed through the electromyographic (EMG) signal have been proposed as a solution to allow subtle input in a mobile context. In this paper we present an expansion of the work on EMG-based motionless gestures, including (1) a novel study of their usability in a mobile context for controlling a realistic, multimodal interface and (2) a formal assessment of how noticeable they are to informed observers. Experimental results confirm that subtle gestures can be profitably used within a multimodal interface and that it is difficult for observers to guess when someone is performing a gesture, confirming the hypothesis of subtlety.

    From wearable towards epidermal computing : soft wearable devices for rich interaction on the skin

    Human skin provides a large, always available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than a human hair; they elastically deform when the body is moving, and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed for advancing this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes.
    Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (Electrocardiogram), EMG (Electromyogram), and EDA (Electro-Dermal Activity). We also contribute a computational design and optimization method based on underlying human anatomical models to create optimized device designs that provide an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline the roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges. Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.
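    The signal-to-noise ratio reported in the technical evaluations above is conventionally expressed in decibels as ten times the base-10 logarithm of the ratio of signal power to noise power. A minimal sketch with made-up sample values (not measurements from the thesis):

```python
import math

def power(samples) -> float:
    # Mean squared amplitude of a signal segment.
    return sum(s * s for s in samples) / len(samples)

def snr_db(signal, noise) -> float:
    # Conventional SNR in decibels: 10 * log10(P_signal / P_noise).
    return 10 * math.log10(power(signal) / power(noise))

# Hypothetical values: signal power 9, noise power 0.09 -> ratio 100 -> 20 dB
clean = [3.0, -3.0, 3.0, -3.0]
residual = [0.3, -0.3, 0.3, -0.3]
print(round(snr_db(clean, residual), 1))  # 20.0
```

    In on-skin sensing, the grounding condition changes the noise floor (the denominator), which is why evaluations report SNR separately for each grounding configuration.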