
    Overcoming barriers and increasing independence: service robots for elderly and disabled people

    This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.

    GUARDIANS final report

    Emergencies in industrial warehouses are a major concern for firefighters. The large dimensions of such buildings, together with the development of dense smoke that drastically reduces visibility, represent major challenges. The GUARDIANS robot swarm is designed to assist firefighters in searching a large warehouse. In this report we discuss the technology developed for a swarm of robots searching and assisting firefighters. We explain the swarming algorithms which provide the functionality by which the robots react to and follow humans while no communication is required. Next we discuss the wireless communication system, which is a so-called mobile ad-hoc network. The communication network also provides one of the means to locate the robots and humans; thus the robot swarm is able to locate itself and provide guidance information to the humans. Together with the firefighters we explored how the robot swarm should feed information back to the human firefighter, and we have designed and experimented with interfaces for presenting swarm-based information to human beings.
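    The report's abstract does not spell out how the ad-hoc network is used for localisation; a common approach, sketched below purely for illustration, turns received signal strength into a rough range estimate with the log-distance path-loss model. The reference power and path-loss exponent are assumptions, not values from the GUARDIANS system.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exponent=2.7):
    """Estimate range (m) from a received signal strength sample.

    Standard log-distance path-loss model; the reference RSSI at 1 m and
    the path-loss exponent are placeholder values that would have to be
    calibrated for a smoke-filled warehouse, not GUARDIANS parameters.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a neighbour heard at -67 dBm is roughly 10 m away under these assumptions.
print(round(rssi_to_distance(-67.0), 1))
```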

    Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns

    We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that enhances the understanding people and ubiquitous technologies have of their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions. (Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems.)
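    The abstract does not give the network architecture; the sketch below is a minimal, hypothetical convolutional classifier for single-channel thermal texture patches, assuming 64x64 inputs and 32 material classes, and is not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class ThermalTextureNet(nn.Module):
    """Minimal CNN mapping a 1-channel thermal patch to material logits.

    Layer sizes and the 64x64 input resolution are illustrative assumptions.
    """
    def __init__(self, num_classes=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)  # (N, 64, 8, 8) for 64x64 inputs
        return self.classifier(torch.flatten(x, 1))

# A batch of four 64x64 thermal patches yields four 32-way predictions.
logits = ThermalTextureNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 32])
```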

    An Indoor Navigation System Using a Sensor Fusion Scheme on Android Platform

    With the development of wireless communication networks, smartphones have become a necessity of daily life; they meet not only users' needs for basic functions such as sending a message or making a phone call, but also their demands for entertainment, surfing the Internet and socializing. Navigation functions are commonly used; however, they are usually based on GPS (Global Positioning System) in outdoor environments, whereas a number of applications need to navigate indoors. This paper presents a system for highly accurate indoor navigation on the Android platform. To this end, we design a sensor fusion scheme and divide the system into three main modules: a distance measurement module, an orientation detection module and a position update module. In the distance measurement module we use an efficient way to estimate stride length and use the step sensor to count steps. In the orientation detection module, in order to obtain the optimal orientation estimate, we introduce a Kalman filter to de-noise the data collected from the different sensors. In the last module, we combine the data from the previous modules and calculate the current location. Experimental results show that our system works well and achieves high accuracy in indoor situations.
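    The abstract does not give the position-update equations; a common pedestrian dead-reckoning form, assumed here only for illustration, advances the position by one stride length along the fused heading at each detected step.

```python
import math

def update_position(x, y, stride_m, heading_rad):
    """One pedestrian dead-reckoning step: move one stride along the heading.

    stride_m and heading_rad would come from the distance-measurement and
    orientation-detection modules; this specific update rule is a standard
    assumption, not taken from the paper.
    """
    return x + stride_m * math.cos(heading_rad), y + stride_m * math.sin(heading_rad)

# Walk three 0.7 m steps heading "east" (0 rad), then two heading "north".
pos = (0.0, 0.0)
for heading in [0.0, 0.0, 0.0, math.pi / 2, math.pi / 2]:
    pos = update_position(*pos, 0.7, heading)
print(pos)  # roughly (2.1, 1.4)
```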

    Multi-robot team formation control in the GUARDIANS project

    Purpose: The GUARDIANS multi-robot team is to be deployed in a large warehouse in smoke. The team is to assist firefighters in searching the warehouse in the event or danger of a fire. The large dimensions of the environment, together with the development of smoke which drastically reduces visibility, represent major challenges for search and rescue operations. The GUARDIANS robots guide and accompany the firefighters on site whilst indicating possible obstacles and the locations of danger and maintaining communication links. Design/methodology/approach: In order to fulfil these tasks the robots need to exhibit certain behaviours. Among the basic behaviours are the capabilities to stay together as a group, that is, to generate a formation and to navigate while keeping this formation. The control model used to generate these behaviours is based on the so-called social potential field framework, which we adapt to the specific tasks required for the GUARDIANS scenario. All tasks can be achieved without central control, and some of the behaviours can be performed without explicit communication between the robots. Findings: The GUARDIANS environment requires flexible formations of the robot team: the formation has to adapt itself to the circumstances. Thus the application has forced us to redefine the concept of a formation. Using graph-theoretic terminology, a formation may be stretched out as a path or be compact as a star or wheel. We have implemented the developed behaviours in simulation environments as well as on real ERA-MOBI robots, commonly referred to as Erratics. We discuss the advantages and shortcomings of our model, based on the simulations as well as on the implementation with a team of Erratics.
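    The abstract names the social potential field framework but gives no formulas; the sketch below uses a commonly cited inverse-power attraction/repulsion force between robot pairs, with gains and exponents chosen purely for illustration rather than taken from the GUARDIANS controllers.

```python
import numpy as np

def social_potential_force(p_i, p_j, c_rep=1.0, c_att=0.5, sigma=2, tau=1):
    """Pairwise social-potential force on robot i from robot j.

    f(r) = -c_rep / r**sigma + c_att / r**tau, acting along the line between
    the robots: repulsive when too close, attractive when far apart.  The
    gains and exponents are illustrative assumptions.
    """
    diff = p_j - p_i
    r = np.linalg.norm(diff)
    magnitude = -c_rep / r**sigma + c_att / r**tau
    return magnitude * diff / r

# Total force on robot 0 in a three-robot team is the sum over its neighbours.
positions = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 3.0])]
force = sum(social_potential_force(positions[0], p) for p in positions[1:])
print(force)  # pushed away from the close robot, pulled toward the distant one
```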

    Unlimited-workspace teleoperation

    Thesis (Master)--Izmir Institute of Technology, Mechanical Engineering, Izmir, 2012. Includes bibliographical references (leaves: 100-105). Text in English; abstract in Turkish and English. xiv, 109 leaves.
    Teleoperation is, briefly, the operation of a vehicle or a manipulator from a distance. It is used to reduce mission cost, to protect humans from accidents that can occur during a mission, and to perform complex missions in areas that are difficult to reach or dangerous for humans. According to the direction of information flow, teleoperation is divided into two main categories, unilateral and bilateral: the flow can be configured in one direction only (from master to slave) or in two directions (from master to slave and from slave to master). In unlimited-workspace teleoperation, one type of bilateral teleoperation, mobile robots are controlled by the operator and environmental information is transferred from the mobile robot back to the operator. Teleoperated vehicles can be used in a variety of missions in air, on the ground and in water, so different constructional types of robots can be designed for different types of missions. This thesis aims to design and develop an unlimited-workspace teleoperation system, with an omnidirectional mobile robot as the slave, to be used in further research. Initially, an omnidirectional mobile robot was manufactured, and robot-operator interaction and efficient data transfer were provided through the established communication line. Wheel velocities were measured in real time by Hall-effect sensors mounted on the robot chassis for integration into the controllers. A dynamic obstacle detection system suitable for omnidirectional mobility was developed, and two obstacle avoidance algorithms (semi-autonomous and force-reflecting) were created and tested. Distance information between the robot and the obstacles was collected by an array of sensors mounted on the robot. In the semi-autonomous teleoperation scenario, the distance information is used to avoid obstacles autonomously; in the force-reflecting scenario, obstacles are signalled to the user by sending back artificially created forces acting on the slave robot. The test results indicate that the obstacle avoidance performance of the developed vehicle with the two algorithms is acceptable in all test scenarios. In addition, two control models (kinematic and dynamic) were developed for the local controller of the slave robot, and the kinematic controller was supported by a gyroscope.
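    The abstract does not give the force-reflection law; a typical choice, assumed here for illustration, generates a repulsive feedback force that grows as an obstacle comes closer than some influence distance and sums the contributions of all range sensors. The profile and gain below are not the thesis's values.

```python
import math

def reflected_force(sensor_readings, influence_dist=1.0, gain=5.0):
    """Sum a repulsive feedback force from an array of range sensors.

    sensor_readings: list of (distance_m, bearing_rad) pairs.  Obstacles
    closer than influence_dist push back along the opposite bearing; the
    1/d - 1/d0 profile and the gain are illustrative assumptions.
    """
    fx = fy = 0.0
    for dist, bearing in sensor_readings:
        if 0.0 < dist < influence_dist:
            mag = gain * (1.0 / dist - 1.0 / influence_dist)
            fx -= mag * math.cos(bearing)
            fy -= mag * math.sin(bearing)
    return fx, fy

# An obstacle 0.25 m ahead dominates one 0.8 m to the left; the far one is ignored.
print(reflected_force([(0.25, 0.0), (0.8, math.pi / 2), (2.0, math.pi)]))
```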

    Vision Based Environment Mapping By Network Connected Multi-Robotic System.

    Conventional environment mapping solutions are computationally very expensive and cannot effectively be used in a multi-robotic environment where small robots with limited memory and processing resources are deployed. This study provides an environment mapping solution in which a group of small robots extract simple distance-vector features from on-board camera images. The robots share these features with each other using a wireless communication network set up in infrastructure mode. To map the distance-vector features onto a global map and to demonstrate a collective map-building operation, the robots need accurate location and heading information. This information is computed using two ceiling-mounted cameras, which collectively localise the robots. Experimental results show that the proposed method provides the required environmental map, which can facilitate robot navigation in the environment. It was observed that, using the proposed approach, nearby object boundaries can be mapped with higher accuracy than far-lying objects.
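    The abstract does not define the exact feature format; assuming a feature is a (range, bearing) measurement in the robot's frame, the sketch below shows the standard coordinate transform that would place it on the global map using the pose from the ceiling-camera localiser.

```python
import math

def feature_to_global(robot_x, robot_y, robot_heading, feature_range, feature_bearing):
    """Project a camera-derived distance-vector feature into the global map.

    The (range, bearing) feature representation and this transform are
    assumptions for illustration; the robot pose would come from the
    ceiling-mounted camera localiser described in the paper.
    """
    angle = robot_heading + feature_bearing
    return (robot_x + feature_range * math.cos(angle),
            robot_y + feature_range * math.sin(angle))

# A robot at (2, 1) facing 90 degrees sees an obstacle edge 1.5 m dead ahead.
print(feature_to_global(2.0, 1.0, math.radians(90), 1.5, 0.0))  # ~(2.0, 2.5)
```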

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM), which enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside a hospital. A common problem in large hospital buildings today is that people unfamiliar with the premises cannot find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor fusion approach combining active RFID, stereo vision and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the speed of the robot can be adjusted automatically according to the pace of the follower for physical comfort. Furthermore, the module performs these tasks solely from the robot's onboard perceptual resources, without requiring a specially instrumented environment, in order to limit hardware installation costs and the need for indoor infrastructure support. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor. To support standardized communication between the different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with a PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application as well as the necessary user acceptance.
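    The abstract does not say how the robot's speed is matched to the follower; a simple proportional rule, assumed here purely as a sketch, slows the robot when the person falls behind a desired following distance and speeds it up again as the gap closes. The gains, nominal speed and desired gap are illustrative, not PGM values.

```python
def adjust_guide_speed(gap_m, desired_gap_m=1.2, nominal_speed=0.6,
                       gain=0.5, max_speed=0.9):
    """Proportional speed adjustment for a guiding robot.

    gap_m is the measured distance to the follower (e.g. from a fused
    RFID/stereo-vision estimate).  All constants here are assumptions.
    """
    speed = nominal_speed - gain * (gap_m - desired_gap_m)
    return max(0.0, min(max_speed, speed))

# Follower lagging 2.0 m behind -> robot slows; follower at 1.0 m -> robot speeds up.
print(adjust_guide_speed(2.0), adjust_guide_speed(1.0))
```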