26 research outputs found

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation.

    Navigational assistance aims to help visually impaired people move through their environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies using monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have substantially improved the mobility of impaired people. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for avoiding short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
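The unified-perception idea above, one segmentation pass serving several assistive tasks at once, can be illustrated with a small sketch. The class names, IDs, and the tiny label map below are hypothetical placeholders; a real system would obtain the per-pixel labels from a real-time deep segmentation network.

```python
# Sketch: derive navigation cues from a per-pixel semantic label map.
# Class IDs and the toy 4x4 label map are illustrative placeholders for
# the output of a real-time segmentation network.

CLASSES = ["road", "sidewalk", "stairs", "water", "obstacle"]
TRAVERSABLE = {"road", "sidewalk"}
HAZARDS = {"stairs", "water", "obstacle"}

def navigation_cues(label_map):
    """Summarize a 2-D grid of class IDs into assistive cues."""
    total = sum(len(row) for row in label_map)
    counts = {}
    for row in label_map:
        for cid in row:
            name = CLASSES[cid]
            counts[name] = counts.get(name, 0) + 1
    traversable_ratio = sum(counts.get(c, 0) for c in TRAVERSABLE) / total
    hazards_present = sorted(c for c in HAZARDS if counts.get(c, 0) > 0)
    return {"traversable_ratio": traversable_ratio, "hazards": hazards_present}

# A toy 4x4 frame: mostly sidewalk, with a patch of stairs ahead.
frame = [
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 2, 2],
    [0, 0, 2, 2],
]
cues = navigation_cues(frame)
```

The same label map serves terrain awareness (the traversable ratio) and hazard avoidance (the hazard list) in one pass, which is the efficiency argument the abstract makes against running separate detectors.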

    Secure and Usable Behavioural User Authentication for Resource-Constrained Devices

    Robust user authentication on small form-factor and resource-constrained smart devices, such as smartphones, wearables and IoT devices, remains an important problem, especially as such devices increasingly become stores of sensitive personal data, such as daily digital payment traces, health/wellness records and contact e-mails. Hence, a secure, usable and practical authentication mechanism to restrict access by unauthorized users is a basic requirement for such devices. Existing password-based user authentication methods place a mental demand on the user and are not secure. Behavioural-biometric-based authentication provides an attractive alternative that can replace passwords while offering high security and usability. To this end, we devise and study novel schemes and modalities and investigate how behaviour-based user authentication can be practically realized on resource-constrained devices. In the first part of the thesis, we implemented and evaluated the performance of touch-based behavioural biometrics on wearables and smartphones. Our results show that touch-based behavioural authentication can yield very high accuracy and a small inference time without imposing large resource requirements on wearable devices. The second part of the thesis focuses on designing a novel hybrid scheme named BehavioCog, which combines touch gestures (a behavioural biometric) with challenge-response-based cognitive authentication. Touch-based behavioural authentication is highly usable but prone to observation attacks, while cognitive authentication schemes are highly resistant to observation attacks but not very usable. The hybrid scheme improves the usability of cognitive authentication and, at the same time, the security of touch-based behavioural biometrics. Next, we introduce and evaluate a novel behavioural biometric modality named BreathPrint, based on the acoustics of an individual's breathing gestures.
Breathing-based authentication is highly usable and secure: it only requires a person to breathe, and its low observability makes it resistant to spoofing and replay attacks. Our investigation with BreathPrint showed that it could be used for efficient real-time authentication on multiple standalone smart devices, especially using deep learning models.
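One simple way to picture behavioural-biometric authentication is template matching: enroll a user from a few gesture feature vectors, then accept a new sample if it lies close to the enrolled template. This is a hedged sketch only; the feature values and the distance threshold are invented, and the thesis itself uses learned classifiers rather than this toy scheme.

```python
import math

# Sketch of template-based behavioural authentication: enroll a user from
# a few touch-gesture feature vectors (e.g. duration, pressure, velocity),
# then accept a new sample if it lies close to the enrolled template.
# Feature values and the threshold are illustrative, not from the thesis.

def enroll(samples):
    """Template = per-feature mean over the enrollment samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def authenticate(template, sample, threshold=1.0):
    """Accept iff the Euclidean distance to the template is small enough."""
    dist = math.sqrt(sum((t - x) ** 2 for t, x in zip(template, sample)))
    return dist <= threshold

# Three genuine enrollment gestures (hypothetical feature vectors).
genuine = [[0.9, 0.5, 1.2], [1.1, 0.6, 1.0], [1.0, 0.55, 1.1]]
template = enroll(genuine)

accepted = authenticate(template, [1.0, 0.55, 1.1])  # close to template
rejected = authenticate(template, [3.0, 2.0, 4.0])   # far-away impostor
```

In practice the threshold trades false accepts against false rejects, which is exactly the accuracy/usability tension the abstract describes.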

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and direction. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. For each of these components, we also examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive to date and allows users, scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse ecosystem and find their opportunities and potential for contribution.

    Recent Advances in Wearable Sensing Technologies

    Wearable sensing technologies are having a worldwide impact on the creation of novel business opportunities and application services that benefit the common citizen. Using these technologies, people have transformed the way they live, interact with each other and their surroundings, organize their daily routines, and monitor their health conditions. We review recent advances in wearable sensing technologies, focusing on aspects such as sensor technologies, communication infrastructures, service infrastructures, security, and privacy. We also review the use of consumer wearables during the coronavirus disease 2019 (COVID-19) pandemic caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and we discuss open challenges that must be addressed to further improve the efficacy of wearable sensing systems in the future.

    Automatic emotion recognition in clinical scenario: a systematic review of methods

    Automatic emotion recognition offers powerful opportunities in the clinical field, but several critical aspects remain open, such as the heterogeneity of methodologies or technologies tested mainly on healthy people. This systematic review surveys automatic emotion recognition systems applied in real clinical contexts, analysing in depth both the clinical and technical aspects, how they were addressed, and the relationships between them. The literature review was conducted on IEEEXplore, ScienceDirect, Scopus, PubMed, and ACM. Inclusion criteria were the presence of an automatic emotion recognition algorithm and the enrollment of at least 2 patients in the experimental protocol. The review process followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. The works were also analysed according to a reference model to examine both clinical and technical topics in depth. 52 scientific papers passed the inclusion criteria. Most clinical scenarios involved neurodevelopmental, neurological and psychiatric disorders, with the aims of diagnosing, monitoring, or treating emotional symptoms. The most adopted signals are video and audio, while supervised shallow learning is mostly used for emotion recognition. Poor study design, small samples, and the absence of a control group emerged as methodological weaknesses. Heterogeneity of performance metrics, datasets and algorithms challenges the comparability, robustness, reliability and reproducibility of results.

    THE FUTURE OF DIGITAL WORK - USE CASES FOR AUGMENTED REALITY GLASSES

    Microsoft’s HoloLens enables true augmented reality (AR) by placing virtual objects within the real world. This paper presents trades (based on ISIC) that can benefit from AR, as well as possible use cases. First, the authors conducted a systematic literature search to identify relevant papers. Six databases (including EBSCOhost, ScienceDirect and SpringerLink) were scanned for the term “HoloLens”. Out of 680 results, two researchers identified 150 articles as thematically relevant. Second, these papers were analysed using qualitative content analysis. Findings reveal 26 trades where AR glasses are in use for practice or research purposes, the most frequent being human health, education and research. In addition, we provide a catalogue of 7 main use cases, such as Process Guidance or Data Access and Visualisation, as well as 27 sub use cases addressing corresponding functionalities in more detail. The results of this paper are trades and application scenarios for AR glasses. Thus, this article contributes to research in the field of service systems design, especially AR glasses-based service systems, and provides evidence for the future of digital work.

    Advanced Sensing and Image Processing Techniques for Healthcare Applications

    This Special Issue aims to attract the latest research and findings in the design, development and experimentation of healthcare-related technologies. This includes, but is not limited to, using novel sensing, imaging, data processing, machine learning, and artificially intelligent devices and algorithms to assist and monitor the elderly, patients, and the disabled population.

    Earables: Wearable Computing on the Ears

    Headphones have become established among consumers because they offer private audio channels, for example for listening to music, watching the latest films while commuting, or hands-free calling. Thanks to this clear primary use case, headphones have already achieved wider adoption than other wearables such as smart glasses. In recent years, a new class of wearables known as "earables" has emerged. These devices are designed to be worn in or around the ears and contain various sensors that extend the functionality of headphones. The spatial proximity of earables to important anatomical structures of the human body provides an excellent platform for sensing a wide range of properties, processes and activities. Although some progress has already been made in earables research, its potential is not yet fully exploited. The goal of this dissertation is therefore to provide new insights into the possibilities of earables by exploring advanced sensing approaches that enable the detection of previously inaccessible phenomena. By introducing novel hardware and algorithms, this dissertation aims to push the boundaries of what is achievable with earables and ultimately to establish them as a versatile sensing platform for augmenting human capabilities. To lay a solid foundation, this work synthesizes the state of the art in ear-based sensing and presents a uniquely comprehensive taxonomy based on 271 relevant publications.
By linking low-level sensing principles with higher-level phenomena, the dissertation then summarizes work from various areas, including (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. Building on existing research in physiological monitoring and health with earables, this dissertation presents advanced algorithms, statistical evaluations and empirical studies to demonstrate the feasibility of measuring respiratory rate and detecting episodes of increased coughing frequency using in-ear accelerometers and gyroscopes. These novel sensing capabilities underline the potential of earables to promote a healthier lifestyle and enable proactive healthcare. Furthermore, this dissertation introduces an innovative eye-tracking approach called "earEOG" intended to facilitate activity recognition. By systematically evaluating electrode potentials measured around the ears with a modified headphone, this dissertation opens a new way to measure gaze direction that is less intrusive and more comfortable than previous approaches. In addition, a regression model is introduced to predict absolute changes in gaze angle based on earEOG. This development opens up new possibilities for research that integrates seamlessly into everyday life and enables deeper insights into human behaviour. This work further shows how the unique form factor of earables can be combined with sensing to detect novel phenomena.
To improve the interaction capabilities of earables, this dissertation introduces a discreet input technique called "EarRumble", which relies on voluntary control of the tensor tympani muscle in the middle ear. The dissertation offers insights into the prevalence, usability and comfort of EarRumble, together with practical applications in two real-world scenarios. The EarRumble approach extends the ear from a purely receptive organ to one that can not only receive signals but also produce output, in essence using the ear as an additional interactive medium for hands-free and eyes-free human-machine communication. EarRumble presents an interaction technique described by users as "magical and almost telepathic" and reveals considerable untapped potential in the earables field. Building on the preceding findings from the various application areas, the dissertation culminates in an open hardware and software platform for earables called "OpenEarable". OpenEarable comprises a range of advanced sensing capabilities suitable for various ear-based research applications while remaining easy to manufacture, lowering the barriers to entry into ear-based sensing research and thus helping to unlock the full potential of earables. In addition, the dissertation contributes fundamental design guidelines and reference architectures for earables. Through this research, the dissertation closes the gap between fundamental ear-based sensing research and its practical use in real-world scenarios.
In summary, the dissertation delivers new usage scenarios, algorithms, hardware prototypes, statistical evaluations, empirical studies and design guidelines to advance the field of earable computing. Moreover, it expands the traditional scope of headphones by extending these audio-focused devices into a platform offering a broad range of advanced sensing capabilities to capture properties, processes and activities. This reorientation enables earables to establish themselves as a significant wearable category, and the vision of earables as a versatile sensing platform for augmenting human capabilities becomes increasingly real.
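The earEOG regression idea, mapping around-ear electrode potentials to gaze-angle changes, can be pictured as an ordinary least-squares fit. The voltages and angles below are synthetic; the dissertation's actual model and calibration data are not reproduced here.

```python
# Sketch: fit gaze-angle change as a linear function of an around-ear
# EOG potential difference, via simple least squares. Data are synthetic.

def fit_linear(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration pairs: potential difference (uV) vs. gaze angle (deg).
potentials = [-20.0, -10.0, 0.0, 10.0, 20.0]
angles = [-30.0, -15.0, 0.0, 15.0, 30.0]

slope, intercept = fit_linear(potentials, angles)
predicted = slope * 10.0 + intercept  # gaze-angle estimate for a new reading
```

A real earEOG pipeline would additionally deal with drift, blinks, and per-user calibration, but the core calibration step is a regression of this shape.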

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in the early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulties in PRT training are how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents’ fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks: the domain of the PRT video probes can be expected to exhibit high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms, which is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important: the clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures.
Facilitating this relationship, along with automated feedback, not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
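The human-in-the-loop idea, routing the model's most uncertain predictions to a clinician for labeling, is commonly realised as uncertainty sampling. A minimal sketch, with illustrative probabilities and a margin-based uncertainty criterion (the abstract does not specify which criterion the system uses):

```python
# Sketch: pick the video segments whose predicted class probabilities are
# least confident (smallest margin between the top two classes) and queue
# them for clinician labeling. Segment IDs and probabilities are made up.

def margin(probs):
    """Difference between the two highest class probabilities."""
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def select_for_review(predictions, k=2):
    """predictions: {segment_id: [class probs]} -> k least-confident IDs."""
    ranked = sorted(predictions, key=lambda sid: margin(predictions[sid]))
    return ranked[:k]

predictions = {
    "seg_a": [0.95, 0.03, 0.02],  # confident
    "seg_b": [0.40, 0.35, 0.25],  # ambiguous
    "seg_c": [0.50, 0.45, 0.05],  # ambiguous
    "seg_d": [0.85, 0.10, 0.05],  # fairly confident
}
to_review = select_for_review(predictions, k=2)
```

Each clinician label then becomes a new training sample, so annotation effort concentrates where the model is least certain.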

    Affective Computing for Emotion Detection using Vision and Wearable Sensors

    This research explores the opportunities, challenges, and limitations of, and presents advancements in, computing that relates to, arises from, or deliberately influences emotions (Picard, 1997). The field is referred to as Affective Computing (AC) and is expected to play a major role in the engineering and development of computationally and cognitively intelligent systems, processors and applications in the future. Today the field of AC is bolstered by the emergence of multiple sources of affective data and is fuelled by developments under various Internet of Things (IoT) projects and the fusion potential of multiple sensory affective data streams. The core focus of this thesis is to investigate whether the sensitivity and specificity (predictive performance) of AC, based on the fusion of multi-sensor data streams, is fit for purpose: can such AC-powered technologies and techniques truly deliver increasingly accurate emotion predictions of subjects in the real world? The thesis begins by presenting a number of research justifications and AC research questions that are used to formulate the original thesis hypothesis and objectives. As part of the research conducted, a detailed state-of-the-art investigation explored many aspects of AC from both a scientific and a technological perspective. The complexity of AC as a multi-sensor, multi-modality data fusion problem unfolded during this investigation, ultimately leading to novel thinking in the form of an AC conceptual architecture that acts as a practical and theoretical foundation for the engineering of future AC platforms and solutions.
The AC conceptual architecture developed as a result of this research was applied to the engineering of a series of software artifacts that were combined to create a prototypical AC multi-sensor platform known as the Emotion Fusion Server (EFS), used in the AC experimentation phases of the research. The thesis research used the EFS platform to conduct a detailed series of AC experiments to investigate whether the fusion of multiple sensory sources of affective data can significantly increase the accuracy of emotion prediction by computationally intelligent means. The research involved numerous controlled experiments along with statistical analysis of the performance of sensors for the purposes of AC, the findings of which serve to assess the feasibility of AC in various domains and point to future directions for the field. The data investigations conducted in relation to the thesis hypothesis used applied statistical methods and techniques, and the results, analytics and evaluations are presented throughout the two thesis research volumes. The thesis concludes by providing a detailed set of formal findings, conclusions and decisions in relation to the overarching research hypothesis on the sensitivity and specificity of the fusion of vision and wearable sensor modalities, and offers foresights and guidance into the many problems, challenges and projections for the AC field in the future.
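The multi-sensor fusion question at the heart of the thesis can be pictured with decision-level fusion, one simple strategy from the AC fusion literature: each modality produces a per-class emotion probability vector, and a weighted average combines them. The modality names, weights, and probability vectors below are invented for the sketch and are not the EFS design.

```python
# Sketch: decision-level fusion of emotion predictions from two modalities
# (vision and a wearable), combined as a weighted average of per-class
# probabilities. Weights and input probabilities are illustrative.

EMOTIONS = ["happy", "sad", "neutral"]

def fuse(modality_probs, weights):
    """Weighted average of per-modality probability vectors."""
    total_w = sum(weights.values())
    fused = [0.0] * len(EMOTIONS)
    for name, probs in modality_probs.items():
        w = weights[name] / total_w
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

probs = {
    "vision":   [0.6, 0.1, 0.3],
    "wearable": [0.8, 0.1, 0.1],
}
weights = {"vision": 0.5, "wearable": 0.5}

fused = fuse(probs, weights)
prediction = EMOTIONS[fused.index(max(fused))]
```

Tuning the modality weights against ground truth is one way the sensitivity/specificity trade-off the thesis investigates would surface in practice.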