
    Eavesdropping Whilst You're Shopping: Balancing Personalisation and Privacy in Connected Retail Spaces

    Physical retailers, who once led the way in tracking with loyalty cards and `reverse appends', now lag behind online competitors. Yet the tables may be turning, as many increasingly deploy technologies ranging from simple sensors to advanced emotion detection systems, even enabling them to tailor prices and shopping experiences on a per-customer basis. Here, we examine these in-store tracking technologies in the retail context and evaluate them from both technical and regulatory standpoints. We first introduce the relevant technologies in context, before considering privacy impacts, the current remedies individuals might seek through technology and the law, and those remedies' limitations. To illustrate challenging tensions in this space we consider the feasibility of technical and legal approaches to both a) the recent `Go' store concept from Amazon, which requires fine-grained, multi-modal tracking to function as a shop, and b) current challenges in opting in or out of increasingly pervasive passive Wi-Fi tracking. The `Go' store presents significant challenges: its legality in Europe is significantly unclear, and unilateral technical measures to avoid biometric tracking are likely ineffective. In the case of MAC addresses, we see a difficult-to-reconcile clash between privacy-as-confidentiality and privacy-as-control, and suggest a technical framework which might help balance the two. Significant challenges exist when seeking to balance personalisation with privacy, and researchers must work together, including across the boundaries of preferred privacy definitions, to come up with solutions that draw on both technology and legal frameworks to provide effective and proportionate protection.
Retailers, simultaneously, must ensure that their tracking is not just legal, but worthy of the trust of concerned data subjects.
Comment: 10 pages, 1 figure, Proceedings of the PETRAS/IoTUK/IET Living in the Internet of Things Conference, London, United Kingdom, 28-29 March 2018
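The passive Wi-Fi tracking discussed above relies on devices broadcasting their MAC addresses in probe requests. Below is a minimal sketch of how a retailer could pseudonymise those identifiers with a rotating secret salt while honouring an opt-out list; the class, the salt rotation policy, and the opt-out mechanism are illustrative assumptions, not the framework the paper proposes.

```python
import hashlib
import secrets

class PseudonymisingTracker:
    """Illustrative sketch: store salted hashes, never raw MAC addresses."""

    def __init__(self):
        self.salt = secrets.token_bytes(16)  # rotated periodically, e.g. daily
        self.opt_out = set()                 # salted hashes of opted-out MACs

    def _hash(self, mac: str) -> str:
        # Salted SHA-256 so the stored value cannot be reversed to a MAC
        # without the salt, and pseudonyms change when the salt rotates.
        return hashlib.sha256(self.salt + mac.lower().encode()).hexdigest()

    def register_opt_out(self, mac: str) -> None:
        self.opt_out.add(self._hash(mac))

    def observe(self, mac: str):
        """Return a pseudonym for analytics, or None if the device opted out."""
        h = self._hash(mac)
        return None if h in self.opt_out else h

tracker = PseudonymisingTracker()
tracker.register_opt_out("AA:BB:CC:DD:EE:FF")
assert tracker.observe("AA:BB:CC:DD:EE:FF") is None      # opted-out device dropped
assert tracker.observe("11:22:33:44:55:66") is not None  # others get a pseudonym
```

Note that modern devices randomise their MAC addresses while scanning, which limits the reliability of any such scheme and is part of the confidentiality-versus-control tension the paper describes.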

    Analysis and use of the emotional context with wearable devices for games and intelligent assistants

    In this paper, we consider the use of wearable sensors for providing affect-based adaptation in Ambient Intelligence (AmI) systems. We begin with a discussion of selected issues regarding the application of affective computing techniques. We describe our experiments on affect change detection with a range of wearable devices, such as wristbands and the BITalino platform, and discuss an original software solution which we developed for this purpose. Furthermore, as a test-bed application for our work, we selected computer games. We discuss the state of the art in affect-based adaptation in games, described in terms of the so-called affective loop. We present our original proposal of a conceptual design framework for games, called affective game design patterns. As a proof-of-concept realization of this approach, we discuss some original game prototypes which we have developed, involving emotion-based control and adaptation. Finally, we comment on a software framework for context-aware systems, which we previously developed, that uses human emotional contexts. This framework provides means for implementing adaptive systems using mobile devices with wearable sensors.

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communication. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss current challenges of the Metaverse that could potentially be addressed by BCI, such as motion sickness when users experience virtual environments, or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called the Human Digital Twin, in which digital twins can create an intelligent and interactive avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.

    Affective Computing in the Area of Autism

    The prevalence rate of Autism Spectrum Disorders (ASD) is increasing at an alarming rate (1 in 68 children). With this increase comes the need for early diagnosis of ASD, timely intervention, and an understanding of the conditions that can be comorbid with ASD. Understanding comorbid anxiety and its interaction with emotion comprehension and production in ASD is a growing and multifaceted area of research. Recognizing and producing contingent emotional expressions is a complex task, which is even more difficult for individuals with ASD. First, I investigate the arousal experienced by adolescents with ASD in a group therapy setting. In this study I identify the instances in which physiological arousal is experienced by adolescents with ASD (have-it), see if the facial expressions of these adolescents indicate their arousal (show-it), and determine whether the adolescents are self-aware of this arousal (know-it). In order to establish a relationship across these three components of emotion expression and recognition, a multi-modal approach for data collection is utilized. Machine learning techniques are used to determine whether still video images of facial expressions can be used to predict Electrodermal Activity (EDA) data. Implications for the understanding of emotion and social communication difficulties in ASD, as well as future targets for intervention, are discussed. Second, it is hypothesized that a well-designed intervention technique helps the overall development of children with ASD by improving their level of functioning. I designed and validated a mobile-based intervention for teaching social skills to children with ASD, and evaluated this social skill intervention. Last, I present the research goals behind an mHealth-based screening tool for early diagnosis of ASD in toddlers. This tool is designed to help people from low-income groups who have limited access to resources, without burdening physicians, their staff, or insurance companies.
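The "show-it" question above, whether facial expressions predict physiological arousal, can be framed as a regression from per-frame facial features to EDA. The sketch below uses synthetic data and ordinary least squares as stand-ins; the feature count, the linear model, and the train/test split are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_features = 500, 17          # e.g. 17 facial action-unit intensities
X = rng.normal(size=(n_frames, n_features))
# Simulated EDA signal that partly depends on the facial features plus noise
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.5 * rng.normal(size=n_frames)

# Hold out the last 100 frames for evaluation
X_tr, y_tr = X[:400], y[:400]
X_te, y_te = X[400:], y[400:]

# Fit regression weights by least squares
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Coefficient of determination on held-out frames
pred = X_te @ w
ss_res = np.sum((y_te - pred) ** 2)
ss_tot = np.sum((y_te - y_te.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 on held-out frames: {r2:.2f}")
```

On real data, a low R^2 here would correspond to the thesis's interesting case: adolescents who "have" arousal without "showing" it in their faces.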

    Sensor Technologies to Manage the Physiological Traits of Chronic Pain: A Review

    Non-oncologic chronic pain is a common high-morbidity impairment worldwide, acknowledged as a condition with a significant impact on quality of life. Pain intensity is largely perceived as a subjective experience, which makes its objective measurement challenging. However, the physiological traces of pain make it possible to correlate it with vital signs, such as heart rate variability, skin conductance, or the electromyogram, or with health performance metrics derived from daily activity monitoring or facial expressions, which can be acquired with diverse sensor technologies and multisensory approaches. As the assessment and management of pain are essential issues for a wide range of clinical disorders and treatments, this paper reviews different sensor-based approaches applied to the objective evaluation of non-oncological chronic pain. The space of available technologies and resources aimed at pain assessment represents a diversified set of alternatives that can be exploited to address the multidimensional nature of pain. Funding: Ministerio de Economía y Competitividad (Instituto de Salud Carlos III) PI15/00306; Junta de Andalucía PIN-0394-2017; Unión Europea "FRAIL
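Heart rate variability, one of the vital signs named above, is typically summarised by time-domain metrics computed from inter-beat (RR) intervals. A minimal sketch of two standard metrics, with made-up sample intervals for illustration:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals: overall variability."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive differences: short-term variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 831, 778, 802, 795]  # RR intervals in milliseconds (made up)
print(f"SDNN:  {sdnn(rr):.1f} ms")
print(f"RMSSD: {rmssd(rr):.1f} ms")
```

Wearable sensors of the kind the review covers report such metrics per time window; depressed variability is one of the physiological traces studied in relation to chronic pain.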

    Neuro-critical multimodal Edge-AI monitoring algorithm and IoT system design and development

    In recent years, with the continuous development of neurocritical medicine, the success rate of treating patients with traumatic brain injury (TBI) has continued to increase, and prognoses have also improved. TBI patients' conditions are usually very complicated, and after treatment patients often need an extended time to recover; the degree of recovery is also related to prognosis. However, as a young discipline, neurocritical medicine still has many shortcomings. In most hospitals especially, the conditions of the Neuro-intensive Care Unit (NICU) are uneven, the equipment has limited functionality, and there is no unified data specification. Most of the instruments are cumbersome and expensive, and patients often need to pay high medical expenses. Recent years have seen a rapid development of big data and artificial intelligence (AI) technology, which is advancing the medical IoT field; however, further development and a wider range of applications of these technologies are needed to achieve widespread adoption. Based on the above premises, the main contributions of this thesis are the following. First, the design and development of a multi-modal brain monitoring system acquiring 8-channel electroencephalography (EEG) signals, dual-channel NIRS signals, and intracranial pressure (ICP) signals. Furthermore, an integrated platform for multi-modal physiological data was designed to display and analyse signals in real time. This thesis also introduces the use of the Qt signal-and-slot event processing mechanism and multi-threading to improve the real-time performance of data processing. In addition, multi-modal electrophysiological data storage and processing was realized on a cloud server. The system also includes a custom-built Django cloud server which realizes real-time transmission between the server and a WeChat applet; based on the WebSocket protocol, the data transmission delay is less than 10 ms.
The analysis platform can be equipped with deep learning models to realize the monitoring of patients with epileptic seizures and to assess the level of consciousness of patients with Disorders of Consciousness (DOC). This thesis combines the standard open-source CHB-MIT data set, a clinical data set provided by Huashan Hospital, and additional data collected by the system described in this thesis. These data sets are merged to build a deep learning network model and to develop related applications for automatic disease diagnosis in smart medical IoT systems. This mainly includes using the clinical data to analyze the characteristics of the EEG signals of DOC patients and building a CNN model to evaluate the patient's level of consciousness automatically. Epilepsy is also a common disease in neuro-intensive care; in this regard, this thesis analyzes the differences in performance of various deep learning models between the CHB-MIT data set and the clinical data set for epilepsy monitoring, in order to select the most appropriate model for the system being designed and developed. Finally, this thesis also verifies the AI-assisted analysis models. The results show that the accuracy of the CNN network model for the evaluation of disorders of consciousness reaches 82% on the clinical data set, while the CNN+STFT network model for epilepsy monitoring reaches 90% accuracy on clinical data. The multi-modal brain monitoring system built is also fully verified: the EEG signal collected by this system has a high signal-to-noise ratio and strong anti-interference ability, and the system performs well in terms of real-time behaviour and stability. Keywords: TBI, neurocritical care, multi-modal, consciousness assessment, seizure detection, deep learning, CNN, IoT
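The CNN+STFT front end mentioned above turns a raw 1-D EEG channel into a time-frequency image that a CNN can ingest. A hedged sketch with illustrative window parameters and a synthetic 10 Hz test signal, not the thesis's exact configuration:

```python
import numpy as np

def stft_magnitude(signal, win=256, hop=128):
    """Magnitude short-time Fourier transform (frames x frequency bins)."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames)

fs = 256                                  # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1 / fs)               # 4 s of synthetic "EEG"
eeg = np.sin(2 * np.pi * 10 * t) \
      + 0.3 * np.random.default_rng(0).normal(size=t.size)

spec = stft_magnitude(eeg)
print(spec.shape)  # (frames, win // 2 + 1), here (7, 129): the CNN's input "image"
```

Stacking such spectrograms per channel and per window yields the image-like tensors a 2-D CNN classifier consumes; with the parameters above, the 10 Hz component shows up as a bright row at frequency bin 10.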

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than other wearable sensors, with respect to the hindrance the latter may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time, and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products, and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances, and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability, and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.