
    Multisensor data fusion for joint people tracking and identification with a service robot

    Tracking and recognizing people are essential skills for modern service robots. The two tasks are generally performed independently, using ad-hoc solutions that first estimate the location of humans and then proceed with their identification. The solution presented in this paper, instead, is a general framework for tracking and recognizing people simultaneously with a mobile robot, where the estimates of the human location and identity are fused using probabilistic techniques. Our approach takes inspiration from recent implementations of joint tracking and classification, where the considered targets are mainly vehicles and aircraft in military and civilian applications. We illustrate how people can be robustly tracked and recognized with a service robot using improved histogram-based detection and multisensor data fusion. Experiments in real, challenging scenarios show the good performance of our solution.
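
    As a rough illustration of the joint tracking-and-identification idea described above, the following Python sketch maintains a joint belief over a person's position and identity and updates both with each fused observation. The grid size, identity set and likelihood shapes are illustrative assumptions, not the paper's actual models.

        import numpy as np

        # Hypothetical sketch: joint belief over a discrete grid of positions and
        # a set of known identities (plus "unknown"). Each observation updates both.
        IDENTITIES = ["alice", "bob", "unknown"]   # assumed identity set
        N_CELLS = 50                               # 1-D position grid for brevity

        # Joint belief P(position, identity), initialised uniformly.
        belief = np.full((N_CELLS, len(IDENTITIES)), 1.0 / (N_CELLS * len(IDENTITIES)))

        def predict(belief, motion_kernel):
            """Diffuse the position dimension with a motion model; identity is static."""
            for j in range(belief.shape[1]):
                belief[:, j] = np.convolve(belief[:, j], motion_kernel, mode="same")
            return belief / belief.sum()

        def update(belief, pos_likelihood, id_likelihood):
            """Fuse a position likelihood (e.g. from a laser detector) with an
            identity likelihood (e.g. from a face/appearance classifier)."""
            belief *= pos_likelihood[:, None] * id_likelihood[None, :]
            return belief / belief.sum()

        # One fused step: a detection near cell 20, appearance weakly suggesting "alice".
        motion = np.array([0.25, 0.5, 0.25])
        pos_like = np.exp(-0.5 * ((np.arange(N_CELLS) - 20) / 2.0) ** 2)
        id_like = np.array([0.6, 0.25, 0.15])
        belief = update(predict(belief, motion), pos_like, id_like)
        print("MAP identity:", IDENTITIES[int(belief.sum(axis=0).argmax())])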

    A bank of unscented Kalman filters for multimodal human perception with mobile service robots

    A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothing and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solution can improve the robot's perception and recognition of humans, providing a useful contribution to the future application of service robotics.
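
    The filter-bank idea can be sketched as follows: one filter per identity hypothesis, with hypothesis weights updated by each filter's measurement likelihood. To keep the sketch short, plain one-dimensional Kalman filters stand in for the paper's Unscented Kalman Filters; all states and noise values are made up.

        import numpy as np

        class KF1D:
            """Minimal 1-D Kalman filter standing in for one bank member."""
            def __init__(self, x0, p0, q, r):
                self.x, self.p, self.q, self.r = x0, p0, q, r
            def predict(self):
                self.p += self.q                   # grow uncertainty over time
            def update(self, z):
                y = z - self.x                     # innovation (prior residual)
                s = self.p + self.r                # innovation covariance
                like = np.exp(-0.5 * y * y / s) / np.sqrt(2 * np.pi * s)
                k = self.p / s                     # Kalman gain
                self.x += k * y
                self.p *= (1 - k)
                return like                        # measurement likelihood

        # One hypothesis per known person, plus an "unknown" catch-all.
        hypotheses = {"alice": KF1D(0.0, 4.0, 0.1, 0.5),
                      "bob": KF1D(5.0, 4.0, 0.1, 0.5),
                      "unknown": KF1D(0.0, 100.0, 1.0, 2.0)}
        weights = {h: 1.0 / len(hypotheses) for h in hypotheses}

        for z in [0.4, 0.7, 0.9]:                  # synthetic range measurements
            for name, kf in hypotheses.items():
                kf.predict()
                weights[name] *= kf.update(z)      # re-weight the hypothesis
            total = sum(weights.values())
            weights = {h: w / total for h, w in weights.items()}
        print("most likely identity:", max(weights, key=weights.get))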

    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In the present paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
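
    A minimal sketch of the laser-based leg detection step might look like the following: a scan is split into segments at range discontinuities, and segments whose width matches a plausible leg diameter are kept as candidates. The thresholds are illustrative assumptions, not the trained leg patterns used in the paper.

        import numpy as np

        def leg_candidates(ranges, angle_inc, jump=0.12, leg_width=(0.05, 0.25)):
            """Return (x, y) positions of leg-sized segments in a laser scan."""
            segments, start = [], 0
            for i in range(1, len(ranges)):
                if abs(ranges[i] - ranges[i - 1]) > jump:   # range discontinuity
                    segments.append((start, i))
                    start = i
            segments.append((start, len(ranges)))

            legs = []
            for s, e in segments:
                if e - s < 2:
                    continue
                r = float(np.median(ranges[s:e]))
                width = r * (e - s - 1) * angle_inc         # approximate chord length
                if leg_width[0] <= width <= leg_width[1]:
                    theta = (s + e) / 2 * angle_inc
                    legs.append((r * np.cos(theta), r * np.sin(theta)))
            return legs

        scan = np.full(360, 4.0)                            # synthetic 360-beam scan
        scan[40:46] = 1.2                                   # a leg-sized blob at ~1.2 m
        print(leg_candidates(scan, np.deg2rad(1.0)))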

    READUP BUILDUP. Thync - instant α-readings

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using lightweight wearable sensors, without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition and discusses its future.
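
    The scoring rule lends itself to a short worked example. The sketch below computes the third-quartile accuracy score from per-landmark horizontal errors plus a fixed penalty per floor misdetection; the 15 m-per-floor penalty is the value commonly used in EvAAL/IPIN evaluations and is assumed here rather than taken from the paper.

        import numpy as np

        def accuracy_score(horiz_err_m, est_floor, true_floor, floor_penalty=15.0):
            """Third quartile of horizontal error plus a per-floor penalty."""
            combined = np.asarray(horiz_err_m) + floor_penalty * np.abs(
                np.asarray(est_floor) - np.asarray(true_floor))
            return float(np.percentile(combined, 75))   # 75th percentile

        # Four ground-truth landmarks; the third has a one-floor misdetection.
        print(accuracy_score([1.2, 3.4, 0.8, 2.0], [0, 1, 1, 2], [0, 1, 2, 2]))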

    A multi-modal person perception framework for socially interactive mobile service robots

    In order to meet the increasing demands of mobile service robot applications, a dedicated perception module is an essential requirement for interaction with users in real-world scenarios. In particular, multisensor fusion and human re-identification are recognized as active research fronts. With this paper we contribute to the topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that, besides the position, can incorporate face and color-based appearance features in order to re-identify persons when tracking gets interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification and attribute estimation. This allows a stable estimate of a mobile robot’s user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvements achieved when face and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
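
    A hypothetical sketch of such probability-based data association follows: the score for pairing a track with a detection multiplies a position likelihood with an appearance (re-identification) likelihood, and the Hungarian algorithm selects the globally best assignment. The Gaussian position model and cosine appearance similarity are assumptions for illustration, not the paper's exact formulation.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def associate(track_pos, track_feat, det_pos, det_feat, sigma=0.5):
            """Match tracks to detections by joint position/appearance likelihood."""
            d2 = ((track_pos[:, None, :] - det_pos[None, :, :]) ** 2).sum(-1)
            pos_like = np.exp(-0.5 * d2 / sigma ** 2)       # Gaussian position model
            # Cosine similarity of appearance embeddings, mapped into (0, 1).
            tf = track_feat / np.linalg.norm(track_feat, axis=1, keepdims=True)
            df = det_feat / np.linalg.norm(det_feat, axis=1, keepdims=True)
            app_like = 0.5 * (tf @ df.T + 1.0)
            cost = -np.log(pos_like * app_like + 1e-12)     # joint neg. log-likelihood
            rows, cols = linear_sum_assignment(cost)        # Hungarian assignment
            return [(int(r), int(c)) for r, c in zip(rows, cols)]

        tracks = np.array([[0.0, 0.0], [2.0, 1.0]])
        feats_t = np.array([[1.0, 0.0], [0.0, 1.0]])
        dets = np.array([[2.1, 0.9], [0.2, -0.1]])
        feats_d = np.array([[0.1, 0.9], [0.9, 0.1]])
        print(associate(tracks, feats_t, dets, feats_d))    # expect [(0, 1), (1, 0)]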

    Human activity recognition using multisensor data fusion based on Reservoir Computing

    Activity recognition plays a key role in providing activity assistance and care for users in smart homes. In this work, we present an activity recognition system that classifies, in near real-time, a set of common daily activities, exploiting both the data sampled by sensors embedded in a smartphone carried by the user and the reciprocal Received Signal Strength (RSS) values coming from worn wireless sensor devices and from sensors deployed in the environment. In order to achieve an effective and responsive classification, a decision tree based on the multisensor data stream is applied, fusing data coming from the sensors embedded in the smartphone and from the environmental sensors before processing the RSS stream. To this end, we model the RSS stream, obtained from a Wireless Sensor Network (WSN), using Recurrent Neural Networks (RNNs) implemented as efficient Echo State Networks (ESNs), within the Reservoir Computing (RC) paradigm. We targeted the system at the EvAAL scenario, an international competition that aims at establishing benchmarks and evaluation metrics for comparing Ambient Assisted Living (AAL) solutions. In this paper, the performance of the proposed activity recognition system is assessed on a purposely collected real-world dataset, also taking into account a competitive neural network approach for performance comparison. Our results show that, with an appropriate configuration of the information fusion chain, the proposed system reaches very good accuracy with a low deployment cost.
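
    The ESN component can be illustrated with a minimal Reservoir Computing sketch: a fixed random recurrent reservoir is driven by the input stream, and only a linear readout is trained, here by ridge regression. The reservoir size, scalings and stand-in data are illustrative assumptions, not the system's actual configuration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_res = 4, 100                               # e.g. 4 RSS channels
        W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))       # fixed input weights
        W = rng.uniform(-1, 1, (n_res, n_res))             # fixed reservoir weights
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # spectral radius < 1

        def run_reservoir(U):
            """Drive the reservoir with input sequence U; collect states."""
            x, states = np.zeros(n_res), []
            for u in U:
                x = np.tanh(W_in @ u + W @ x)              # recurrent state update
                states.append(x.copy())
            return np.array(states)

        U = rng.normal(size=(500, n_in))                   # stand-in RSS stream
        y = (U[:, 0] > 0).astype(float)                    # stand-in activity labels
        X = run_reservoir(U)
        ridge = 1e-2                                       # ridge-regression readout
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
        pred = (X @ W_out > 0.5).astype(float)
        print("train accuracy:", (pred == y).mean())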

    Developing integrated data fusion algorithms for a portable cargo screening detection system

    Towards a one-size-fits-all solution for cocaine detection at borders, this thesis proposes a systematic cocaine detection methodology that can use the raw data output of a fibre optic sensor to produce a set of unique features whose decisions can be combined to yield a reliable output. This multidisciplinary research makes use of real data sourced from a cocaine-analyte-detecting fibre optic sensor developed by one of the collaborators, City University London. This research advocates a two-step approach. In the first step, the raw sensor data are collected and stored; level-one fusion, i.e. analysis, pre-processing and feature extraction, is performed at this stage. In the second step, using experimentally pre-determined thresholds, each feature decides on the detection of cocaine or otherwise, with a corresponding posterior probability. High-level sensor fusion is then performed locally on this output to combine these decisions and their probabilities at each time interval. The output from every time interval is stored in the database and used as prior data for the next time interval. The final output is a decision on the detection of cocaine. The key contributions of this thesis include investigating the use of data fusion techniques as a solution for overcoming challenges in the real-time detection of cocaine using fibre optic sensor technology, together with an innovative user interface design. A generalizable sensor fusion architecture is suggested and implemented using the Bayesian and Dempster-Shafer techniques. The results from the implemented experiments show great promise for this architecture, especially in overcoming sensor limitations. A 5-fold cross-validation scheme using a 12-13-1 neural network was used to validate the feature selection process. This validation step yielded true positive and false alarm rates of 89.5% and 10.5%, respectively, with a correlation coefficient of 0.8. Using the Bayesian technique, it is possible to achieve 100% detection, whilst the Dempster-Shafer technique achieves 95% detection using the same features as inputs to the data fusion system.
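
    The decision-combination step can be illustrated with Dempster's rule of combination over a two-element frame of discernment ({cocaine, clean}), where mass assigned to the full frame encodes "unknown". The example mass values are invented for illustration and are not the thesis' measured figures.

        from itertools import product

        # Frame of discernment: C = cocaine present, N = clean; CN = unknown.
        C, N, UNK = frozenset("C"), frozenset("N"), frozenset("CN")

        def combine(m1, m2):
            """Dempster's rule: multiply masses, keep intersections, renormalize."""
            fused, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb                    # mass lost to conflict
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        sensor_a = {C: 0.6, N: 0.1, UNK: 0.3}              # e.g. wavelength-shift feature
        sensor_b = {C: 0.5, N: 0.2, UNK: 0.3}              # e.g. intensity feature
        print(combine(sensor_a, sensor_b))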

    Design and Development of Sensor Integrated Robotic Hand

    Most automated systems using robots as agents use only a few sensors, according to need. However, there are situations where the tasks carried out by the end-effector, or by the robot hand, need multiple sensors. To make the best use of these sensors and behave autonomously, the hand requires a set of appropriate types of sensors integrated in a proper manner. The present research work aims at developing a sensor-integrated robot hand that can collect information related to the assigned tasks, assimilate it correctly and then act as appropriate. The process of development involves selecting sensors of the right types and specifications, locating them at proper places in the hand, checking their functionality individually and calibrating them for the envisaged process. Since the sensors need to be integrated so that they perform in the desired manner collectively, an integration platform is created using an NI PXIe-1082. A set of algorithms is developed for achieving the integrated model. The entire process is first modelled and simulated offline for possible modification, in order to ensure that all the sensors contribute towards the autonomy of the hand for the desired activity. This work also involves the design of a two-fingered gripper. The design is made in such a way that it is capable of carrying out the desired tasks and can accommodate all the sensors within its fold. The developed sensor-integrated hand has been put to work and its performance has been tested. This hand can be very useful for part-assembly work in industry for parts of any shape, within a limit on part size.

    The broad aim is to design, model, simulate and develop an advanced robotic hand. Sensors for pick-up contact, pressure, force, torque, position and surface profile/shape, using suitable sensing elements, are introduced into the robot hand. The hand is a complex structure with a large number of degrees of freedom and multiple sensing capabilities, apart from the associated sensing assistance from other organs. The present work is envisaged to add multiple sensors to a two-fingered robotic hand having motion capabilities and constraints similar to the human hand. Although there has been a good amount of research and development in this field during the last two decades, a lot remains to be explored and achieved. The objective of the proposed work is to design, simulate and develop a sensor-integrated robotic hand. Its potential applications lie in industrial environments and in the healthcare field. The industrial applications include electronic assembly tasks, lighter inspection tasks, etc. Applications in healthcare could be in the areas of rehabilitation and assistive techniques. The work also aims to establish the requirements of the robotic hand for the target application areas, and to identify the suitable kinds and models of sensors that can be integrated into the hand control system. The functioning of the motors in the robotic hand and the integration of appropriate sensors for the desired motion are explained for the control of the various elements of the hand. Additional sensors, capable of collecting external information and information about the object to be manipulated, are explored. Processes are designed using various software and hardware tools, such as MATLAB for mathematical computation, the OpenCV library and a LabVIEW 2013 DAQ system as applicable, validated theoretically and finally implemented to develop an intelligent robotic hand.

    The multiple smart sensors are installed on a standard six-degree-of-freedom KAWASAKI RS06L articulated industrial manipulator, with the two-finger pneumatic SCHUNK robotic hand or the designed prototype, and the robot control programs are integrated in such a manner that allows easy application of grasping in an industrial pick-and-place operation where the characteristics of the object can vary or are unknown. The effectiveness of the proposed structure is demonstrated by experiments involving the calibration of the sensors and the manipulator. The dissertation concludes with a summary of the contributions and the scope of further work.
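
    As a loose illustration of sensor-guided grasping in such a setup, the following sketch closes a gripper until a fingertip force threshold is reached. read_force() and gripper_step() are placeholder stubs invented for this example; the real system's NI DAQ channels and pneumatic gripper commands are not specified in the abstract.

        import time

        TARGET_FORCE_N = 5.0        # assumed safe grip force
        MAX_STEPS = 100

        def read_force():
            """Stub: return fingertip force in newtons from the DAQ (fake ramp)."""
            read_force.f = getattr(read_force, "f", 0.0) + 0.3
            return read_force.f

        def gripper_step(close=True):
            """Stub: jog the two-finger gripper by one increment."""
            pass

        def grasp():
            for _ in range(MAX_STEPS):
                if read_force() >= TARGET_FORCE_N:  # stop when contact force
                    return True                     # reaches the target
                gripper_step(close=True)
                time.sleep(0.01)
            return False                            # object missed or too small

        print("grasp ok:", grasp())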