
    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people experiencing transportation-limiting visual impairments worldwide. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three submitted papers in high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided the investigation of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV accessibility for BVI users to date. By considering both pre-journey and in-vehicle accessibility, results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    Remote Collaborative BIM-based Mixed Reality Approach for Supporting Facilities Management Field Tasks

    Facilities Management (FM) day-to-day tasks require suitable methods to facilitate work orders and improve performance through better collaboration between the office and the field. Building Information Modeling (BIM) provides opportunities to support collaboration and to improve the efficiency of Computerized Maintenance Management Systems (CMMSs) by sharing building information between different applications and users throughout the lifecycle of the facility. However, manual retrieval of building element information can be challenging and time-consuming for field workers during FM operations. Mixed Reality (MR) is a visualization technique that can improve the visual perception of the facility by superimposing 3D virtual objects and textual information on top of the view of real-world building objects. The objectives of this research are: (1) investigating an automated method to capture and record task-related data (e.g., defects) with respect to a georeferenced BIM model and share them directly with the remote office based on the field worker's point of view in mobile situations; (2) investigating the potential of using MR, BIM, and sensory data for FM tasks to provide improved visualization and perception that satisfy the needs of the facility manager at the office and the field workers with less visual and mental disturbance; and (3) developing an effective method for interactive visual collaboration to improve FM field tasks. This research discusses the development of a collaborative BIM-based MR approach to support facilities field tasks. The research framework integrates multisource facilities information, BIM models, and hybrid tracking in an MR-based setting to retrieve information based on time (e.g., inspection schedule) and the location of the field worker, visualize inspection and maintenance operations, and support remote collaboration and visual communication between the field worker and the manager at the office. The field worker uses an Augmented Reality (AR) application installed on his/her tablet. The manager at the office uses an Immersive Augmented Virtuality (IAV) application installed on a desktop computer. Based on the field worker's location, as well as the inspection or maintenance schedule, the field worker is assigned work orders and instructions from the office. Other sensory data (e.g., infrared thermography) can provide additional layers of information by augmenting the actual view of the field worker and supporting him/her in making effective decisions about existing and potential problems while communicating with the office in an Interactive Virtual Collaboration (IVC) mode. The contributions of this research are: (1) developing an MR framework for facilities management that has a field AR module and an office IAV module, which can be used independently or combined through remote IVC; (2) developing visualization methods for MR, including the virtual hatch and multilayer views, to enhance visual depth and context perception; (3) developing methods for AR and IAV modeling, including BIM-based data integration and customization suitable for each MR method; and (4) enhancing indoor tracking for AR FM systems by developing a hybrid tracking method. To investigate the applicability of the research method, a prototype system called the Collaborative BIM-based Markerless Mixed Reality Facility Management System (CBIM3R-FMS) was developed and tested in a case study. The usability testing and validation show that the proposed methods have high potential to improve FM field tasks.
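    As a rough illustration of the location- and schedule-based work-order retrieval described above (not the authors' CBIM3R-FMS implementation), the sketch below pairs work orders with georeferenced BIM elements and selects those that are due and near the field worker; all class names, fields, identifiers, and the selection radius are illustrative assumptions.

        # Hypothetical sketch in Python; the data model and selection rule are assumptions.
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class BIMElement:
            guid: str                 # identifier of the building element (illustrative)
            name: str
            location: tuple           # (x, y, z) in the georeferenced model frame, metres

        @dataclass
        class WorkOrder:
            order_id: str
            element: BIMElement
            task: str                 # e.g. "inspect HVAC diffuser"
            scheduled: datetime

        def orders_for_worker(orders, worker_pos, now, radius_m=10.0):
            """Return work orders whose target element lies near the worker
            and whose scheduled time has arrived (assumed selection rule)."""
            def dist(a, b):
                return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            return [o for o in orders
                    if o.scheduled <= now and dist(o.element.location, worker_pos) <= radius_m]

        # Example: the office issues two orders; the AR tablet pulls the one that is due and nearby.
        hvac = BIMElement("guid-hvac-3F-12", "HVAC diffuser 3F-12", (12.0, 4.5, 9.0))
        pump = BIMElement("guid-pump-B1-02", "Pump room valve", (80.0, 40.0, 0.0))
        orders = [WorkOrder("WO-101", hvac, "inspect diffuser", datetime(2023, 5, 1, 9, 0)),
                  WorkOrder("WO-102", pump, "replace gasket", datetime(2023, 5, 2, 9, 0))]
        print(orders_for_worker(orders, worker_pos=(10.0, 5.0, 9.0), now=datetime(2023, 5, 1, 10, 0)))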

    Pushing the limits of remote RF sensing by reading lips under the face mask

    Lip-reading has become an important research challenge in recent years. The goal is to recognise speech from lip movements. Most of the lip-reading technologies developed so far are camera-based and require video recording of the target. However, these technologies have well-known limitations of occlusion and ambient lighting, along with serious privacy concerns. Furthermore, vision-based technologies are not useful for multi-modal hearing aids in the coronavirus (COVID-19) environment, where face masks have become a norm. This paper aims to solve the fundamental limitations of camera-based systems by proposing a radio frequency (RF) based lip-reading framework with the ability to read lips under face masks. The framework employs Wi-Fi and radar technologies as enablers of RF sensing based lip-reading. A dataset comprising the vowels A, E, I, O, U and empty (static/closed lips) is collected using both technologies, with a face mask worn. The collected data is used to train machine learning (ML) and deep learning (DL) models. A high classification accuracy of 95% is achieved on the Wi-Fi data utilising neural network (NN) models. Moreover, similar accuracy is achieved by the VGG16 deep learning model on the collected radar-based dataset.
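    A minimal sketch of the classification stage this abstract describes, assuming a small feed-forward neural network over pre-extracted Wi-Fi channel features for the six classes (A, E, I, O, U, empty). The random features below are placeholders for the paper's real Wi-Fi/radar data, and the architecture, feature dimension, and split are assumptions; accuracy on synthetic noise will be near chance, whereas the paper reports about 95% on its real data.

        # Illustrative sketch only; features, labels, and model settings are assumed.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        CLASSES = ["A", "E", "I", "O", "U", "empty"]

        # Placeholder feature matrix: 600 samples x 256 channel features per utterance window.
        X = rng.normal(size=(600, 256))
        y = rng.integers(0, len(CLASSES), size=600)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
        clf.fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))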

    Near field sensing and antenna design for wireless body area network

    Wireless body area network (WBAN) has emerged in recent years as a special class of wireless sensor network; hence, WBAN inherits the wireless sensor network challenges of interference by passive objects in indoor environments. However, attaching wireless nodes to a person's body imposes a unique challenge, presented by continuous changes in the working environment due to the normal activities of the monitored person. Basic activities, like sitting on a metallic chair or standing near a metallic door, drastically change the antenna behaviour when the metallic object is within the antenna near field. Although antenna coupling with the human body has been investigated by many recent studies, the coupling of the WBAN node antenna with other objects in the surrounding environment has not been thoroughly studied. To address these problems, the thesis investigates the state of the art of WBAN, examines the influence of metallic objects near an antenna through experimental studies, and proposes antenna designs and their applications for near-field environments. The approach taken to this challenge is to examine and improve the WBAN interaction with its surroundings by enabling the WBAN node to detect nearby objects based solely on changes in antenna measurements. The thesis studies the interference caused by passive objects on the WBAN node antenna, extracts relevant features to sense an object's presence within the near field, and proposes a new WBAN antenna design suitable for this purpose. The major contributions of this study can be summarised as follows. First, it observes and defines the changes in the return loss of a narrowband antenna when a metallic object is introduced in its near field. Two methods were proposed to detect the object, based on the reflection coefficient and transmission coefficient of an antenna in free space. Then, the thesis introduces a new antenna design that conforms to the WBAN size requirements while achieving very low sensitivity to the human body. This was achieved by combining two opposite Vivaldi shapes on one PCB and using a metallic sheet as a reflector, which minimised the antenna coupling with the human body and reduced the radiation towards the body. Finally, the proposed antennas were tested on several human body parts with nearby metallic objects, to compare the change in antenna S-parameters due to the presence of the human body and the presence of the metallic object. Based on the measurements, basic statistical indicators and Principal Component Analysis were proposed to detect object presence and estimate its distance. In conclusion, the thesis successfully shows the WBAN antenna's ability to detect nearby metallic objects through a set of proposed indicators and a novel antenna design. The thesis concludes with the suggestion to investigate time-domain features and modulated signals for future work in WBAN near-field sensing.
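    A rough sketch of the detection idea summarised above: compare measured return-loss (S11) sweeps against a free-space baseline using a simple deviation indicator and Principal Component Analysis. The synthetic spectra, detuning model, and 3-sigma threshold below are illustrative assumptions standing in for the thesis's actual measurements.

        # Toy data and thresholds are assumptions; only the overall approach follows the text.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        freqs = np.linspace(2.3e9, 2.5e9, 201)                      # sweep points (Hz)
        baseline = -20 * np.exp(-((freqs - 2.4e9) / 2e7) ** 2)      # free-space S11 dip (dB) at 2.4 GHz

        def measure(detune_hz=0.0, noise_db=0.3):
            """Simulated S11 sweep; a nearby metallic object detunes the resonance."""
            return (-20 * np.exp(-((freqs - (2.4e9 + detune_hz)) / 2e7) ** 2)
                    + rng.normal(0, noise_db, freqs.size))

        sweeps = np.array([measure() for _ in range(20)] +              # free space
                          [measure(detune_hz=8e6) for _ in range(20)])  # metal nearby

        # Indicator 1: deviation energy of each sweep from the free-space baseline.
        deviation = np.sum((sweeps - baseline) ** 2, axis=1)

        # Indicator 2: the first principal component separates the two conditions.
        pc1 = PCA(n_components=1).fit_transform(sweeps - baseline).ravel()

        threshold = deviation[:20].mean() + 3 * deviation[:20].std()
        print("sweeps flagged as 'object present':", np.nonzero(deviation > threshold)[0])
        print("mean PC1, free space vs. metal:", pc1[:20].mean(), pc1[20:].mean())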

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help facing these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Autonomous wheelchair with a smart driving mode and a Wi-Fi positioning system

    Wheelchairs are an important aid that enhances the mobility of people with several types of disabilities. Therefore, there has been considerable research and development on wheelchairs to meet the needs of the disabled. From the early manual wheelchairs to their more recent electric-powered counterparts, advancements have focused on improving autonomy in mobility. Meanwhile, advances in Internet technologies have given rise to the concept of the Internet of Things (IoT), a promising area that has been studied to enhance the independent operation of electric wheelchairs by enabling autonomous navigation and obstacle avoidance. This dissertation briefly describes the design of an autonomous wheelchair of the IPL/IT (Instituto Politécnico de Leiria/Instituto de Telecomunicações) with smart driving features for persons with visual impairments. The objective is to improve the prototype of an intelligent wheelchair. The first prototype of the wheelchair was built to be controlled by voice, ocular movements, and GPS (Global Positioning System). Furthermore, the IPL/IT wheelchair acquired a remote-control feature, which could prove useful for persons with low levels of visual impairment. This tele-assistance mode will be helpful to the family of the wheelchair user or, simply, to a health care assistant. Indoor and outdoor positioning systems, with printed directional Wi-Fi antennas, have been deployed to enable precise location of the wheelchair. The underlying framework for the wheelchair system is the IPL/IT low-cost autonomous wheelchair prototype, which is based on IoT technology for improved affordability.
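    A hedged sketch of one common way a Wi-Fi positioning system of this kind can estimate the wheelchair's location: convert the RSSI measured from fixed access points into range estimates with a log-distance path-loss model, then solve for the position by least squares. The access-point coordinates, path-loss parameters, and RSSI readings below are invented for illustration and are not taken from the dissertation.

        # Illustrative values only; the dissertation's antennas and calibration are not modelled here.
        import numpy as np
        from scipy.optimize import least_squares

        APS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])   # known access-point positions (m)
        RSSI = np.array([-52.0, -61.0, -58.0])                  # measured signal strengths (dBm)
        P0, N = -40.0, 2.2                                      # assumed RSSI at 1 m and path-loss exponent

        ranges = 10 ** ((P0 - RSSI) / (10 * N))                 # log-distance model: RSSI -> distance (m)

        def residuals(p):
            """Difference between geometric distances to each AP and the RSSI-derived ranges."""
            return np.linalg.norm(APS - p, axis=1) - ranges

        estimate = least_squares(residuals, x0=np.array([5.0, 4.0])).x
        print("estimated wheelchair position (m):", estimate)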

    Towards the internet of smart clothing: a review on IoT wearables and garments for creating intelligent connected e-textiles

    Technology has become ubiquitous; it is all around us and is becoming part of us. Together with the rise of the Internet of Things (IoT) paradigm and enabling technologies (e.g., Augmented Reality (AR), Cyber-Physical Systems, Artificial Intelligence (AI), blockchain or edge computing), smart wearables and IoT-based garments can potentially have a great deal of influence by harmonizing functionality and the delight created by fashion. Thus, smart clothes look for a balance among fashion, engineering, interaction, user experience, cybersecurity, design and science to reinvent technologies that can anticipate needs and desires. Nowadays, the rapid convergence of textiles and electronics is enabling the seamless and massive integration of sensors into textiles and the development of conductive yarn. The potential of smart fabrics, which can communicate with smartphones to process biometric information such as heart rate, temperature, breathing, stress, movement, acceleration, or even hormone levels, promises a new era for retail. This article reviews the main requirements for developing smart IoT-enabled garments and shows the potential impact of smart clothing on business models in the medium term. Specifically, a global IoT architecture is proposed, the main types and components of smart IoT wearables and garments are presented, their main requirements are analyzed, and some of the most recent smart clothing applications are studied. In this way, this article reviews the past and present of smart garments in order to provide guidelines for the future developers of a network where garments will be connected like other IoT objects: the Internet of Smart Clothing.
    Funding: Xunta de Galicia (ED431C 2016-045; ED341D R2016/012; ED431G/01); Agencia Estatal de Investigación de España (TEC2013-47141-C4-1-R; TEC2016-75067-C4-1-R; TEC2015-69648-RED).
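    As a purely illustrative sketch (not from the article), the snippet below shows the kind of structured payload a sensor-equipped garment might hand to a smartphone gateway in an architecture like the one reviewed above. The field names, units, and JSON layout are assumptions, and the transport (e.g., BLE or MQTT) is left out.

        # Hypothetical data model; nothing here is defined by the reviewed article.
        import json
        from dataclasses import dataclass, asdict
        from datetime import datetime, timezone

        @dataclass
        class GarmentReading:
            garment_id: str
            timestamp: str
            heart_rate_bpm: int
            skin_temp_c: float
            respiration_rpm: int
            accel_g: tuple            # (x, y, z) acceleration in g

        reading = GarmentReading(
            garment_id="shirt-0042",
            timestamp=datetime.now(timezone.utc).isoformat(),
            heart_rate_bpm=72,
            skin_temp_c=33.8,
            respiration_rpm=14,
            accel_g=(0.01, -0.02, 0.98),
        )
        payload = json.dumps(asdict(reading))   # what the smartphone gateway would forward upstream
        print(payload)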