23 research outputs found

    Kinect Enabled Monte Carlo Localisation for a Robotic Wheelchair

    Get PDF
    Proximity sensors and 2D vision methods have been shown to work robustly in particle filter-based Monte Carlo Localisation (MCL). It is worth examining, however, whether modern 3D vision sensors are equally effective for localising a robotic wheelchair with MCL. In this work, we introduce a visual Region Locator Descriptor, acquired from a 3D map using the Kinect sensor, to conduct localisation. The descriptor segments the Kinect's depth map into a grid of 36 regions, where the depth of each column-cell is used as a distance range for the measurement model of a particle filter. The experimental work concentrated on a comparison of three different localisation cases: (a) an odometry model without MCL; (b) MCL with sonar sensors only; (c) MCL with the Kinect sensor only. The comparative study demonstrated the efficiency of a modern 3D depth sensor such as the Kinect, which can be used reliably for wheelchair localisation.
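    The abstract does not spell out the measurement model, so the following Python sketch only illustrates the general idea under stated assumptions: each particle's weight is a product of per-region Gaussian likelihoods comparing the observed region depths with the depths expected from the particle's pose. The `expected_for` callback, the region list, and the `sigma` noise parameter are all hypothetical, not taken from the paper.

```python
import math

def region_likelihood(observed, expected, sigma=0.15):
    """Product of Gaussian likelihoods over per-region depths (metres)."""
    w = 1.0
    for z, z_hat in zip(observed, expected):
        w *= math.exp(-((z - z_hat) ** 2) / (2 * sigma ** 2))
    return w

def reweight_particles(particles, observed, expected_for, sigma=0.15):
    """Weight each particle by how well the depth descriptor expected from
    its pose matches the observed descriptor, then normalise to sum to 1."""
    weights = [region_likelihood(observed, expected_for(p), sigma)
               for p in particles]
    total = sum(weights) or 1.0  # guard against all-zero weights
    return [w / total for w in weights]
```

    In a full MCL loop this reweighting step would be followed by resampling; a real descriptor would hold 36 region depths rather than the short toy lists used here.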

    Gesture Based Navigation and Localization of a Smart Wheelchair using Fiducial Markers

    Get PDF
    With the rise in the aging population, about 6.8 million American residents depend on mobility devices for their day-to-day activities. More than 40% of these users have difficulty moving the mobility device on their own. These numbers motivate the development of a system that can assist with device manipulation using simple muscle activity and localize the mobility device in the user's home in case of medical emergencies. This research is aimed at creating a user interface based on an electromyographic sensor, attached to the forearm, incorporated with present smart wheelchairs, together with a simple localization technique using fiducial markers. The main outcome of the research is a simulator of the smart wheelchair used to analyze the results of this work.

    User modelling for robotic companions using stochastic context-free grammars

    Get PDF
    Creating models about others is a sophisticated human ability that robotic companions need to develop in order to have successful interactions. This thesis proposes user modelling frameworks to personalise the interaction between a robot and its user and devises novel scenarios where robotic companions may apply these user modelling techniques. We tackle the creation of user models in a hierarchical manner, using a streamlined version of the Hierarchical Attentive Multiple-Models for Execution and Recognition (HAMMER) architecture to detect low-level user actions and taking advantage of Stochastic Context-Free Grammars (SCFGs) to instantiate higher-level models which recognise uncertain and recursive sequences of low-level actions. We discuss two distinct scenarios for robotic companions: a humanoid sidekick for power-wheelchair users and a companion for hospital patients. Next, we address the limitations of the previous scenarios by applying our user modelling techniques and designing two further scenarios that fully take advantage of the user model. These scenarios are: a wheelchair driving tutor which models the user's abilities, and a musical collaborator which learns the preferences of its users. The methodology produced interesting results in all scenarios: users preferred the actual robot over a simulator as a wheelchair sidekick. Hospital patients rated their interactions with the companion positively, independently of their age. Moreover, most users agreed that the music collaborator had become a better accompanist with our framework. Finally, we observed that users' driving performance improved when the robotic tutor instructed them to repeat a task. As our workforce ages and the care requirements in our society grow, robots will need to play a role in helping us lead better lives. This thesis shows that, through the use of SCFGs, adaptive user models may be generated which can then be used by robots to assist their users.
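    The SCFG machinery for recognising uncertain, recursive action sequences is standard; as an illustration, the Python sketch below computes the inside (probabilistic CYK) probability that a sequence of low-level actions is derived by a grammar. The grammar format (Chomsky-normal-form rules with attached probabilities) and the toy symbols are assumptions for illustration, not the thesis's actual user-model grammars.

```python
from collections import defaultdict

def scfg_probability(tokens, lexical, binary, start="S"):
    """Inside probability that `start` derives `tokens`.
    lexical: {(lhs, terminal): prob}; binary: {(lhs, rhs1, rhs2): prob} (CNF)."""
    n = len(tokens)
    # chart[i][j][A] = probability that nonterminal A derives tokens[i:j]
    chart = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, tok in enumerate(tokens):
        for (lhs, term), p in lexical.items():
            if term == tok:
                chart[i][i + 1][lhs] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):          # split point
                for (lhs, r1, r2), p in binary.items():
                    chart[i][j][lhs] += p * chart[i][k][r1] * chart[k][j][r2]
    return chart[0][n][start]
```

    A recursive rule such as S -> A S lets the same grammar score arbitrarily long repetitions of an action, which is the property the thesis exploits for recognising recursive behaviour.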

    Uncertainty and social considerations for mobile assistive robot navigation

    Get PDF
    Interest in mobile robots has increased over the past years. The wide range of possible applications, from vacuum cleaners to assistant robots, makes such robots an interesting solution to many everyday problems. A key requirement for the mass deployment of such robots is to ensure they can safely navigate around our daily living environments. A robot colliding with or bumping into a person may be, in some contexts, unacceptable. For example, if a robot working around elderly people collides with one of them, it may cause serious injuries. This thesis explores four major components required for effective robot navigation: sensing the static environment, detection and tracking of moving people, obstacle and people avoidance with uncertainty measurement, and basic social navigation considerations. First, to guarantee adherence to basic safety constraints, the sensors and algorithms required to measure the complex structure of our daily living environments are explored. Not only do the static components of the environment have to be measured, but so do any people present. A people detection and tracking algorithm aimed at crowded environments is proposed, thus enhancing the robot's perception capabilities. Our daily living environments present many inherent sources of uncertainty for robots, one of them arising from the robot's inability to know people's intentions as they move. To solve this problem, a motion model that assumes unknown long-term intentions is proposed. This is used in conjunction with a novel uncertainty-aware local planner to create feasible trajectories. In social situations, the presence of groups of people cannot be neglected when navigating. To avoid interrupting groups of people, the robot first needs to be able to detect such groups. A group detector is proposed which relies on a set of gaze- and geometric-based features.
    Avoiding group disruption is finally incorporated into the navigation algorithm by taking into account the probability of disrupting a group's activities. The effectiveness of the four components is evaluated using real-world and simulated data, demonstrating the benefits for mobile robot navigation.
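    The abstract does not detail the motion model beyond assuming unknown long-term intentions; one crude way to encode that assumption is a bounded-speed reachable set, sketched below in Python. The `speed_max` bound, the safety margin, and the binary risk value are illustrative simplifications, not the thesis's actual planner.

```python
import math

def predict_person(pos, speed_max, dt):
    """Unknown-intent prediction: the person may head in any direction at up
    to speed_max, so after dt they lie in a disc around the last observation.
    Returns (centre, radius)."""
    return pos, speed_max * dt

def collision_risk(robot_pos, person_pos, speed_max, dt, safety_margin=0.5):
    """1.0 if the robot sits inside the person's reachable disc plus a margin,
    else 0.0; a real planner would use a graded probability instead."""
    centre, radius = predict_person(person_pos, speed_max, dt)
    d = math.hypot(robot_pos[0] - centre[0], robot_pos[1] - centre[1])
    return 1.0 if d <= radius + safety_margin else 0.0
```

    An uncertainty-aware local planner would evaluate such a risk term along each candidate trajectory and reject those whose accumulated risk exceeds a threshold.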

    Geometric, Semantic, and System-Level Scene Understanding for Improved Construction and Operation of the Built Environment

    Full text link
    Recent advances in robotics and enabling fields such as computer vision, deep learning, and low-latency data passing offer significant potential for developing efficient and low-cost solutions for improved construction and operation of the built environment. Examples of such potential solutions include the introduction of automation in environment monitoring, infrastructure inspections, asset management, and building performance analyses. In an effort to advance the fundamental computational building blocks for such applications, this dissertation explored three categories of scene understanding capabilities: 1) Localization and mapping for geometric scene understanding, which enables a mobile agent (e.g., a robot) to locate itself in an environment, map the geometry of the environment, and navigate through it; 2) Object recognition for semantic scene understanding, which allows for automatic asset information extraction for asset tracking and resource management; 3) Distributed coupling analysis for system-level scene understanding, which allows for discovery of interdependencies between different built-environment processes for system-level performance analyses and response planning. First, this dissertation advanced Simultaneous Localization and Mapping (SLAM) techniques to provide more convenient and lower-cost locating capabilities than previous work. To provide a versatile Real-Time Location System (RTLS), a visual SLAM (vSLAM) system enhanced with occupancy grid mapping was developed to support path planning and continuous navigation, which cannot be implemented directly on vSLAM's original feature map. The system's localization accuracy was experimentally evaluated with a set of visual landmarks. The achieved marker position measurement accuracy ranges from 0.039 m to 0.186 m, proving the method's feasibility and applicability in providing real-time localization for a wide range of applications.
    In addition, a Self-Adaptive Feature Transform (SAFT) was proposed to improve such an RTLS's robustness in challenging environments. As an example implementation, the SAFT descriptor was implemented with a learning-based descriptor and integrated into a vSLAM system for experimentation. The evaluation results on two public datasets proved the feasibility and effectiveness of SAFT in improving the matching performance of learning-based descriptors for locating applications. Second, this dissertation explored vision-based 1D barcode marker extraction for automated object recognition and asset tracking that is more convenient and efficient than the traditional methods of using barcode or asset scanners. As an example application in inventory management, a 1D barcode extraction framework was designed to extract 1D barcodes from a video scan of a built environment. The performance of the framework was evaluated with video scan data collected from an active logistics warehouse near Detroit Metropolitan Airport (DTW), demonstrating its applicability in automating inventory tracking and management applications. Finally, this dissertation explored distributed coupling analysis for understanding interdependencies between processes affecting the built environment and its occupants, allowing for more accurate performance and response analyses than previous research. In this research, a Lightweight Communications and Marshalling (LCM)-based distributed coupling analysis framework and a message wrapper were designed. The proposed framework and message wrapper were tested with analysis models from wind engineering and structural engineering, where they demonstrated the ability to link analysis models from different domains and reveal key interdependencies between the involved built-environment processes.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/155042/1/lichaox_1.pd
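    The dissertation's extraction pipeline is not described in the abstract; as a hint of what the first stage of 1D barcode extraction typically involves, the Python sketch below binarises one scanline of grey-level pixels and run-length encodes it into alternating bar widths. The fixed threshold is an assumption; a real system would use adaptive binarisation and then match the width pattern against a symbology table.

```python
def scanline_runs(scanline, threshold=128):
    """Binarise one row of grey-level pixels (1 = dark bar, 0 = light space)
    and return (value, width) pairs for the alternating runs."""
    bits = [1 if px < threshold else 0 for px in scanline]
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([b, 1])     # start a new run
    return [(b, width) for b, width in runs]
```

    Decoding then reduces to normalising these widths and looking them up in the relevant symbology's encoding table.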

    Navegação de robô móvel em ambiente com humanos

    Get PDF
    Final Master's project for obtaining the degree of Master in Mechanical Engineering. Robotics is a subject that, since its introduction, has captured people's interest and imagination. Its less fanciful and more realistic application can be seen in industry. However, one of the greatest challenges lies in creating a robot capable of autonomous navigation, including recognising and avoiding people, in structured environments. Vision-based mobile robotics has developed over the last 30 years. Over this period, several mathematical models have been presented with the aim of solving the multiple challenges that the control of an autonomous robotic unit presents. Robotic vision, as in humans, gives the unit an understanding of the world in which it operates. When the environment is shared with people, the ability to interpret and distinguish a person from an object is fundamental to the physical and psychological safety of humans. This dissertation proposes the study and development of an autonomous mobile robot able to navigate in spaces with people. The work carried out focused on localisation in space, people recognition, and trajectory planning and control. With this set of solutions, we aim to contribute a robust autonomous system capable of interacting safely in work environments shared with people, with a high degree of confidence, so that it can be deployed in an industrial environment. Abstract: Robotics is a subject known to capture the imagination of people. It can be seen in a realistic fashion in many fields of modern industry. Current robotics challenges can be found in the deployment of autonomous mobile robots operating in human-shared environments. Vision-based mobile robotics is a subject that has been studied for over 30 years.
    Robotic vision supports the perception of the surrounding world, including the discrimination of objects from people; it is therefore paramount to guarantee the physical and psychological safety of humans. This work presents the study and development of a mobile robot with the ability to detect people and to modify its navigation path accordingly. This thesis aims to provide a robust autonomous robot system that can safely operate in human-shared environments, thus being suitable for a real industrial situation.

    Mobile Robots

    Get PDF
    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of a control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. Generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs, or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are shown. Training mobile robot operators is also a very difficult task because of several factors related to the execution of different tasks. The presented improvement relates to environment model generation based on autonomous mobile robot observations.

    A Framework for Learning by Demonstration in Multi-teacher Multi-robot Scenarios

    No full text
    As robots become more accessible to humans, more intuitive and human-friendly ways of programming them with interactive and group-aware behaviours are needed. This thesis addresses the gap between Learning by Demonstration and multi-robot systems. In particular, this thesis tackles the fundamental problem of learning multi-robot cooperative behaviour from concurrent multi-teacher demonstrations, a problem which had not been addressed prior to this work. The core contribution of this thesis is the design and implementation of a novel, multi-layered framework for multi-robot learning from simultaneous demonstrations, capable of deriving control policies at two different levels of abstraction. The lower level learns models of joint actions at the trajectory level, adapting such models to new scenarios via feature mapping. The higher level extracts the structure of cooperative tasks at the symbolic level, generating a sequence of robot actions composing multi-robot plans. To the best of the author's knowledge, the proposed framework is the first Learning by Demonstration system to enable multiple human demonstrators to simultaneously teach group behaviour to multiple robot learners. A series of experimental tests were conducted using real robots in a real human workspace environment. The results obtained from a comprehensive comparison confirm the applicability of the joint-action model adaptation method utilised. What is more, the results of several trials provide evidence that the proposed framework effectively extracts reasonable multi-robot plans from demonstrations. In addition, a case study of the impact of human communication when using the proposed framework was conducted, suggesting no evidence that communication affects the time to completion of a task, but that it may have a positive effect on the extraction of multi-robot plans.
    Furthermore, a multifaceted user study was conducted to analyse user workload and focus of attention, as well as to evaluate the usability of the teleoperation system, highlighting which parts needed improvement.

    A Holistic Approach to Energy Harvesting for Indoor Robots: Theoretical Framework and Experimental Validations

    Get PDF
    Service robotics is a fast-expanding market. Inside households, domestic robots can now accomplish numerous tasks, such as floor cleaning, surveillance, or remote presence. Their sales have increased considerably over the past years. Whereas 1.05 million domestic service robots were reportedly sold in 2009, at least 2.7 million units were sold in 2013. Consequently, this growth gives rise to an increase in the energy needed to power such a large and growing fleet of robots. However, the unique properties of mobile robots open some new fields of research. We must find technologies that are suitable for decreasing the energy requirements and thus further advance towards sustainable development. This thesis tackles two fundamental goals based on a holistic approach to the global problem. The first goal is to reduce the energy needs by identifying key technologies for making energy-efficient robots. The second goal is to leverage innovative indoor energy sources to increase the ratio of renewable energy scavenged from the environment. To achieve our first goal, new energy-wise metrics are applied to real robotic hardware. This gives us the means to assess the impact of some technologies on the overall energy balance. First, we analysed seven robotic vacuum cleaners from a representative sample of the market that encompasses a wide variety of technologies. Simultaneous Localisation and Mapping (SLAM) was identified as a key technology for reducing energy needs when carrying out such tasks: even if the instantaneous power is slightly increased, the completion time of the task is greatly reduced. We also analysed the sensors needed to achieve SLAM, as they are largely diversified, testing three sensors based on three different technologies and identifying several important metrics. As for our second goal, potential energy sources are compared to the needs of an indoor robot.
    The sunshine coming through a building's apertures is identified as a promising source of renewable power. Numerical simulations showed how a mobile robot is essential to take full advantage of this previously unexploited situation, as well as the influence of the geometric parameters on the yearly energy income under ideal sunny conditions. When considering a real system, the major difficulty to overcome is tracking the sunbeam throughout the day. The proposed algorithm uses a hybrid method: a high-level cognitive approach is responsible for the initial placement, and subsequent realignments during the day are performed by a low-level reactive behaviour. A solar harvesting module was developed for our research robot. Tests conducted inside a controlled environment demonstrate the feasibility of this concept and the good performance of the aforementioned algorithm. Based on a realistic scenario and weather conditions, we computed that between 1 and 14 days of recharge could be necessary for a single cleaning task. In the future, our innovative technology could greatly lower the energy needs of service robots. However, it is not completely possible to abandon the recharge station, due to occasional bad weather. The acceptance of this technology inside the user's home ecosystem remains to be studied.
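    The low-level reactive realignment behaviour can be pictured as a simple hill climb on measured panel power; the Python sketch below is an illustrative stand-in for the thesis's algorithm, with the `measure_power` callback, the step size, and the iteration cap all assumed rather than taken from the work.

```python
def realign(measure_power, heading, step=5.0, max_iters=20):
    """Nudge the robot's heading (degrees) in whichever direction increases
    the measured panel power, stopping at a local maximum or after
    max_iters adjustments. Returns the final (heading, power)."""
    best = measure_power(heading)
    for _ in range(max_iters):
        improved = False
        for delta in (step, -step):       # try both turn directions
            p = measure_power(heading + delta)
            if p > best:
                best, heading = p, heading + delta
                improved = True
                break
        if not improved:                  # local maximum reached
            break
    return heading, best
```

    In the hybrid scheme described above, the high-level cognitive layer would choose the initial `heading` from a sun-position estimate, leaving this reactive loop to correct for drift during the day.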

    Robot Games for Elderly: A Case-Based Approach

    Get PDF