9 research outputs found

    Augmented Reality for Real-time Navigation Assistance to Wheelchair Users with Obstacles' Management

    Get PDF
    Despite rapid technological evolution in the field of technical assistance for people with motor disabilities, their ability to move independently in a wheelchair is still limited. New information and communication technologies (NICT) such as augmented reality (AR) are a real opportunity to integrate people with disabilities into their everyday life and work. AR can afford real-time information about the accessibility of buildings and locations through mobile applications that give the user a clear view of the building details. By interacting with augmented environments that appear in the real world through a smart device, users with disabilities gain more control over their environment. In this paper, we propose a decision support system using AR for navigation assistance for people with motor disabilities. We describe a real-time wheelchair navigation system equipped with geolocated mapping that indicates the access path to a desired location and the shortest route towards it, and identifies obstacles to avoid. The prototyped wheelchair navigation system was developed for use within the University of Lille campus.
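The abstract does not specify the routing algorithm; a minimal sketch of the "shortest route with obstacles to avoid" step, assuming an occupancy grid of the campus and uniform-cost (Dijkstra) search, might look like this. All names and the grid representation are hypothetical, not taken from the paper.

```python
import heapq

def shortest_accessible_path(grid, start, goal):
    """Dijkstra over a 4-connected occupancy grid.

    Hypothetical sketch: cells marked 1 are obstacles the wheelchair
    must avoid; cells marked 0 are traversable. Returns the list of
    cells from start to goal, or None if no accessible route exists.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None
```

In a real deployment the grid would be derived from the geolocated map, and detected obstacles would flip cells to 1 before re-planning.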

    An Advanced Technique for User Identification Using Partial Fingerprint

    Get PDF
    User identification is a very interesting and complex task. Invasive biometrics is based on the uniqueness of traits and their immutability over time. In the forensic field, fingerprints have always been considered an essential element for personal recognition. The traditional approach focuses on matching full fingerprint images. In this paper, an advanced technique for personal recognition based on partial fingerprints is proposed. The system is based on local fingerprint analysis and the extraction of micro-features: endpoints and bifurcations. The proposed approach starts with minutiae extraction from a partial fingerprint image and ends with the final matching score between fingerprint pairs. Likelihood ratios for fingerprint identification are computed by trying every possible overlap of the partial image with the complete image. The first experimental results, conducted on the free PolyU (Hong Kong Polytechnic University) database, show encouraging performance in terms of identification accuracy.
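The "try every possible overlap" step can be illustrated with a deliberately simplified sketch: minutiae as (x, y, type) points, scored by sliding the partial set over the full set in integer translations. This is hypothetical and omits rotation alignment and the actual likelihood-ratio computation the paper uses.

```python
def overlap_match_score(partial, full, tol=5, offsets=range(-50, 51, 5)):
    """Best minutiae-match count over a grid of candidate translations.

    Hypothetical simplification of partial-fingerprint matching:
    each minutia is (x, y, type) with type 'end' (endpoint) or
    'bif' (bifurcation); a pair matches when types agree and the
    translated positions fall within a distance tolerance.
    """
    best = 0
    for dx in offsets:
        for dy in offsets:
            hits = 0
            for (px, py, ptype) in partial:
                for (fx, fy, ftype) in full:
                    if (ptype == ftype
                            and abs(px + dx - fx) <= tol
                            and abs(py + dy - fy) <= tol):
                        hits += 1
                        break  # each partial minutia matches at most once
            best = max(best, hits)
    return best
```

A real system would normalize this count into a similarity score (or likelihood ratio) against the number of minutiae in the overlapping region.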

    Internet of things: why we are not there yet

    Get PDF
    Twenty-one years have passed since Weiser’s vision of ubiquitous computing (UbiComp) was written, and it has yet to be fully fulfilled despite almost all the needed technologies already being available. Still, the widespread interest in UbiComp and the results in some of its fields pose a question: why are we not there yet? It seems we are missing the ‘octopus head’. In this paper, we try to depict the reasons why we are not there yet from three different points of view: interaction media, device integration, and applications.

    Illumination Correction on Biomedical Images

    Get PDF
    The RF-inhomogeneity (aka bias) artifact is an important research field in Magnetic Resonance Imaging (MRI). Bias corrupts MR images by altering their illumination, even when they are acquired with the most recent scanners. Homomorphic Unsharp Masking (HUM) is a filtering technique aimed at correcting illumination inhomogeneity, but it produces a halo around the edges as a side effect. In this paper, a novel correction scheme based on HUM is proposed that corrects the artifact mentioned above without introducing the halo. Extensive experimentation has been performed on MR images. The method has been tuned and evaluated using the simulated Brainweb image database. In this framework, the approach has been compared successfully against the Guillemaud filter and the SPM2 method. Moreover, the method has been successfully applied to several real MR images of the brain (0.18 T, 1.5 T and 7 T). The description of the overall technique is reported along with experimental results that show its effectiveness in different anatomical regions and its ability to compensate both underexposed and overexposed areas. Our approach is also effective on non-radiological images, such as retinal ones.
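The core HUM idea, dividing each pixel by a local (low-pass) intensity estimate and rescaling by the global mean, can be sketched as follows. This is an illustrative pure-Python box-filter version, not the paper's halo-free scheme; real implementations use much larger smoothing kernels and often work on masked or log-domain intensities.

```python
def homomorphic_unsharp_mask(img, radius=1):
    """Illustrative HUM bias correction on a 2-D list of intensities.

    Each pixel is divided by the local mean over a (2*radius+1)^2
    box (a crude low-pass estimate of the bias field) and rescaled
    by the global mean so overall brightness is preserved.
    """
    rows, cols = len(img), len(img[0])
    eps = 1e-6  # guard against division by zero in dark regions
    global_mean = sum(map(sum, img)) / (rows * cols)
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Local mean over the box, clipped at the image borders.
            vals = [img[rr][cc]
                    for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1))]
            local_mean = sum(vals) / len(vals)
            row.append(img[r][c] * global_mean / (local_mean + eps))
        out.append(row)
    return out
```

On a perfectly uniform image the output equals the input, which is the sanity check one expects from any bias-correction filter.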

    Towards exploring future landscapes using augmented reality

    Get PDF
    With increasing pressure to better manage the environment, many government and private organisations are studying the relationships between social, economic and environmental factors to determine how these can best be optimised for increased sustainability. The analysis of such relationships is undertaken using computer-based Integrated Catchment Models (ICM). These models are capable of generating multiple scenarios depicting alternative land uses at a variety of temporal and spatial scales, which present (potentially) better Triple-Bottom-Line (TBL) outcomes than the prevailing situation. Dissemination of this data is (for the most part) reliant on traditional, static map products; however, the ability of such products to display complexity and temporal aspects is limited, and this ultimately undervalues both the knowledge incorporated in the models and the capacity of stakeholders to disseminate the complexities through other means. Geovisualization provides tools and methods for disseminating large volumes of spatial (and associated non-spatial) data. Virtual Environments (VE) have been utilised for various aspects of landscape planning for more than a decade. While such systems are capable of visualizing large volumes of data at ever-increasing levels of realism, they restrict the user's ability to accurately perceive the (virtual) space. Augmented Reality (AR) is a visualization technique which allows users the freedom to explore a physical space and have that space augmented with additional, spatially referenced information. A review of existing mobile AR systems forms the basis of this research. A theoretical mobile outdoor AR system using Commercial-Off-The-Shelf (COTS) hardware and open-source software is developed. The specific requirements for visualizing land use scenarios in a mobile AR system were derived using a usability engineering approach known as Scenario-Based Design (SBD). This determined the elements required in the user interfaces, resulting in the development of a low-fidelity, computer-based prototype. The prototype user interfaces were evaluated by participants from two targeted stakeholder groups undertaking hypothetical use scenarios. Feedback from participants was collected using the cognitive walk-through technique and supplemented by evaluator observations of participants' physical actions. Results from this research suggest that the prototype user interfaces did provide the necessary functionality for interacting with land use scenarios. While there were some concerns about the potential implementation of "yet another" system, participants were able to envisage the benefits of visualizing land use scenario data in the physical environment.

    Interacting with Augmented Environments

    No full text
    Pervasive systems augment environments by integrating information processing into everyday objects and activities. They consist of two parts: a visible part populated by animate (visitors, operators) or inanimate (AI) entities interacting with the environment through digital devices, and an invisible part composed of software objects performing specific tasks in an underlying framework. This paper presents ongoing work from the University of Palermo's Department of Computer Science and Engineering that addresses two issues related to simplifying and broadening access to augmented environments.

    C-space: Fostering new creative paradigms based on recording and sharing 'casual' videos through the internet

    No full text
    A key theme in ubiquitous computing is to create smart environments in which there is seamless integration of people, information, and physical reality. In this manuscript, we describe a set of tools that facilitate the creation of such environments, e.g., a service to transform videos recorded with mobile devices into navigable 3D scenes, a service to compute and describe the emotional processes that occur during the user's interaction with such content, a service that takes into account certain dynamic needs of users in personalizing solutions for allocating their leisure time and activities, a gamified crowdsourcing application, and a set of projection-based tools for creating and interacting with augmented environments. Ultimately, our objective is to have a framework that seamlessly integrates all these components to foster creative processes.

    Combining wearable finger haptics and Augmented Reality: User evaluation using an external camera and the Microsoft HoloLens

    Get PDF
    Augmented Reality (AR) enriches our physical world with digital content and media, such as 3D models and videos, overlaid in real time on the camera view of our smartphone, tablet, laptop, or glasses. Despite the recent massive interest in this technology, it is still not possible to receive rich haptic feedback when interacting with augmented environments. This lack is mainly due to the poor diffusion of suitable haptic interfaces, which should be easy to wear, lightweight, compact, and inexpensive. In this paper, we briefly review the state of the art on wearable haptics and its application in AR. Then, we present three AR use cases, considering tasks of manipulation, guidance, and gaming, using both external cameras with standard screens and a fully-wearable solution, the Microsoft HoloLens. We evaluate these tasks with a total of 34 enrolled subjects, analyzing performance and user experience when using a 3-DoF wearable device for the fingertip, a 2-DoF wearable device for the proximal finger phalanx, a vibrotactile ring, and a popular sensory substitution technique (interaction force displayed as a colored bar). Results show that providing haptic feedback through the wearable devices significantly improves the performance, intuitiveness, and comfort of the considered AR tasks.
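The sensory-substitution baseline (interaction force rendered as a colored bar) can be sketched with a hypothetical force-to-color mapping; the paper does not specify its color scale, so the linear green-to-red ramp and the 5 N saturation value below are assumptions for illustration only.

```python
def force_to_color(force_n, f_max=5.0):
    """Map an interaction force (newtons) to an RGB bar color.

    Hypothetical sensory-substitution cue: zero force renders green,
    forces at or above f_max render red, with a linear blend between.
    Returns an (R, G, B) tuple of 0-255 integers.
    """
    t = max(0.0, min(1.0, force_n / f_max))  # clamp to [0, 1]
    return (int(255 * t), int(255 * (1 - t)), 0)
```

In the evaluated condition, such a bar would be updated each frame from the simulated contact force, standing in for the haptic feedback the wearable devices deliver directly to the finger.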