
    Radar Target Classification Technologies


    Indoor Geo-location And Tracking Of Mobile Autonomous Robot

    The field of robotics has fascinated people since the days of the Terminator films. Even though we still do not have robots that can truly replicate human action and intelligence, progress is being made in the right direction. Robotic applications range from defense to civilian uses such as public safety and fire fighting. With the increase in urban warfare, tracking robots inside buildings and in cities has become a very important application: uses range from munitions tracking to replacing soldiers in gathering reconnaissance information, and fire fighters use robots to survey affected areas. Robot tracking has traditionally been limited to the local area under consideration; decision making is inhibited by this limited local knowledge, and approximations have to be made. Effective decision making requires tracking the robot in earth coordinates such as latitude and longitude. The GPS signal provides sufficient and reliable data for such decision making, but its main drawbacks are that it is unavailable indoors and attenuated outdoors. Indoor geolocation forms the basis of tracking robots inside buildings and other places where GPS signals are unavailable; it has traditionally been the domain of wireless networks, using techniques such as low-frequency RF signals and ultra-wideband antennas. In this thesis we propose a novel method for achieving geolocation and enabling tracking. Geolocation and tracking are achieved by combining a gyroscope and wheel encoders, together referred to as the Inertial Navigation System (INS). Gyroscopes have been widely used in aerospace applications for stabilizing aircraft; in our case the gyroscope determines the heading of the robot, and commands can be sent to the robot when it is off balance or off track. Sensors are inherently error prone, so the geolocation process is complicated and limited by imperfect mathematical modeling of the input noise. We use a Kalman filter to process the erroneous sensor data, as it provides a robust and stable algorithm: the error characteristics of the sensors are input to the Kalman filter and filtered data is obtained. We have performed a large set of experiments, both indoors and outdoors, to test the reliability of the system. Outdoors we use the GPS signal to aid the INS measurements; indoors we use the last known position and extrapolate to obtain the GPS coordinates.
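    As a rough, minimal sketch of GPS-aided dead reckoning in the spirit described above (not the thesis's actual filter: the state vector, motion model, and noise values below are illustrative assumptions), an extended-Kalman-style fusion of encoder/gyroscope predictions with occasional GPS fixes might look like this:

```python
import numpy as np

# Minimal planar EKF-style sketch: fuse encoder/gyro dead reckoning (INS)
# with occasional GPS position fixes. All noise values are illustrative.

class InsGpsFilter:
    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)   # state: [x, y, heading]
        self.P = np.asarray(P0, dtype=float)   # state covariance (3x3)
        self.Q = np.diag([0.05, 0.05, 0.01])   # process noise (assumed)
        self.R = np.diag([3.0, 3.0])           # GPS noise, metres (assumed)

    def predict(self, d, dtheta):
        """Dead-reckoning step: d = encoder distance, dtheta = gyro heading change."""
        x, y, th = self.x
        self.x = np.array([x + d * np.cos(th + dtheta),
                           y + d * np.sin(th + dtheta),
                           th + dtheta])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1, 0, -d * np.sin(th + dtheta)],
                      [0, 1,  d * np.cos(th + dtheta)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, z):
        """Correction step with a GPS fix z = [x_gps, y_gps] in a local frame."""
        H = np.array([[1, 0, 0],
                      [0, 1, 0]])
        residual = np.asarray(z, dtype=float) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ residual
        self.P = (np.eye(3) - K @ H) @ self.P

# Outdoors: predict on every encoder/gyro tick, correct whenever GPS is valid.
# Indoors: keep predicting only; the last corrected state is extrapolated.
```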

    Perception of the urban environment and navigation using robotic vision: design and implementation applied to an autonomous vehicle

    Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
    Abstract: The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing life comfort and providing cost savings. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Camera sensors have recently received considerable attention because they are cheap, easy to employ and provide rich data. Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex, objects such as trees, bicycles and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusions. The perception process therefore needs to be able to deal with uncertainty in the knowledge of the world around the car. While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on the decision-making process in autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring previous knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel method based on machine learning is proposed to extract the semantic context from a pair of stereo images, which is merged into an evidential occupancy grid that models the uncertainties of an unknown urban environment by applying the Dempster-Shafer theory. For decision making in path planning, the virtual tentacle approach is applied to generate possible paths starting from the ego-referenced car, and based on it two new strategies are proposed: first, a new strategy for selecting the correct path to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a new closed-loop control based on visual odometry and the virtual tentacle, modeled for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real conditions using an experimental autonomous car. The results show that the developed approach successfully performs safe local navigation based on camera sensors.
    Doctorate - Solid Mechanics and Mechanical Design - Doctor of Mechanical Engineering.
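    To illustrate the evidential-grid ingredient mentioned above, the sketch below applies Dempster's rule of combination to a single grid cell; this is a generic sketch, not the author's implementation, and the mass values are invented for the example:

```python
from itertools import product

# Frame of discernment for one evidential-grid cell: Free (F), Occupied (O),
# and the ignorance set Omega = {F, O}. All masses below are illustrative.
OMEGA = frozenset({"F", "O"})

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Example: the semantic classifier suggests mostly free, the stereo occupancy
# evidence suggests mildly occupied; combination keeps some ignorance mass.
m_semantic = {frozenset({"F"}): 0.7, frozenset({"O"}): 0.1, OMEGA: 0.2}
m_stereo   = {frozenset({"F"}): 0.3, frozenset({"O"}): 0.4, OMEGA: 0.3}

cell = dempster_combine(m_semantic, m_stereo)
print({tuple(sorted(k)): round(v, 3) for k, v in cell.items()})
```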

    Context classification for service robots

    This dissertation presents a solution for environment sensing using sensor fusion techniques and a context/environment classification of the surroundings of a service robot, so that it can change its behavior according to the different reasoning outputs. For example, if a robot knows it is outdoors in a field environment, the ground may be sandy, in which case it should slow down; in indoor environments, by contrast, sandy ground is statistically unlikely. This simple assumption shows the importance of context awareness in automated guided vehicles.
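    A loose, hypothetical illustration of the idea, not the dissertation's classifier: a few fused sensor cues (all invented here) map to a coarse context label, which in turn selects a speed limit:

```python
from dataclasses import dataclass

# Hypothetical fused sensor cues for context classification (illustrative only).
@dataclass
class Cues:
    gps_fix: bool          # GPS available -> likely outdoors
    ambient_light: float   # lux
    wheel_slip: float      # 0..1, high slip suggests loose/sandy ground

def classify_context(c: Cues) -> str:
    """Very coarse rule-based context classifier."""
    if not c.gps_fix and c.ambient_light < 200:
        return "indoor"
    return "outdoor_field" if c.wheel_slip > 0.3 else "outdoor_paved"

# Behaviour adaptation: slow down where sandy or loose ground is plausible.
SPEED_LIMIT = {"indoor": 0.8, "outdoor_paved": 1.5, "outdoor_field": 0.5}  # m/s

ctx = classify_context(Cues(gps_fix=True, ambient_light=20000.0, wheel_slip=0.45))
print(ctx, SPEED_LIMIT[ctx])   # -> outdoor_field 0.5
```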

    CONFIDENCE-BASED DECISION-MAKING SUPPORT FOR MULTI-SENSOR SYSTEMS

    We live in a world where computer systems are omnipresent and are connected to more and more sensors. Ranging from small individual electronic assistants like smartphones to complex autonomous robots, and from personal wearable health devices to professional eHealth frameworks, all these systems use the sensors' data to make appropriate decisions according to the context they measure. However, in addition to complete failures leading to a lack of data delivery, sensors can also send bad data due to environmental influences, which can be hard to detect when the computer system checks each sensor individually. The computer system should be able to use its set of sensors as a whole in order to mitigate the influence of malfunctioning sensors, to overcome the absence of data from broken sensors, and to handle conflicting information coming from several sensors. In this thesis, we propose a computational model based on a two-layer software architecture to overcome this challenge. In the first layer, classification algorithms check for malfunctioning sensors and assign a confidence value to each sensor. In the second layer, a rule-based proactive engine builds a representation of the context of the system and uses it, along with empirical knowledge about the weaknesses of the different sensors, to further adjust this confidence value. The system then checks for conflicting data between sensors. This can be done by having several sensors that measure the same parameters, or by having multiple sensors that can be used together to calculate an estimate of a parameter given by another sensor. A confidence value is calculated for this estimate as well, based on the confidence values of the related sensors. The successive design-refinement steps of our model are shown over the course of three experiments. The first two experiments, located in the eHealth domain, were used to better identify the challenges of such multi-sensor systems, while the third experiment, a virtual robot simulation, acts as a proof of concept for the semi-generic model proposed in this thesis.
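    A heavily simplified sketch of the two-layer idea (sensor names, thresholds, and formulas are assumptions, not the thesis's model): layer one assigns each sensor a confidence from a plausibility check, and layer two lowers that confidence when the reading conflicts with an estimate derived from other sensors:

```python
# Layer 1: per-sensor confidence from a simple range/plausibility check.
# Layer 2: cross-check a sensor against an estimate derived from other sensors.
# Sensor names, plausible ranges, and weighting formulas are illustrative.

def layer1_confidence(value, low, high):
    """1.0 inside the plausible range, decaying toward 0.0 outside it."""
    if low <= value <= high:
        return 1.0
    overshoot = min(abs(value - low), abs(value - high))
    return max(0.0, 1.0 - overshoot / (high - low))

def layer2_crosscheck(measured, estimated, conf_measured, conf_estimate, tol):
    """Lower the confidence of a reading that conflicts with a redundant estimate."""
    agreement = max(0.0, 1.0 - abs(measured - estimated) / tol)
    return conf_measured * (0.5 + 0.5 * agreement * conf_estimate)

# Example: ground speed reported by GPS vs. estimated from wheel encoders.
gps_speed, enc_speed = 12.0, 7.5            # m/s (made-up readings)
c_gps = layer1_confidence(gps_speed, 0, 60)
c_enc = layer1_confidence(enc_speed, 0, 60)
c_gps = layer2_crosscheck(gps_speed, enc_speed, c_gps, c_enc, tol=5.0)
print(round(c_gps, 2))   # confidence drops because the two sources disagree
```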

    Combination of Evidence in Dempster-Shafer Theory


    Concurrent Cognitive Mapping and Localization Using Expectation Maximization

    Robot mapping remains one of the most challenging problems in robot programming. Most successful methods use some form of occupancy grid to represent a mapped region: a two-dimensional array whose cells represent (x, y) coordinates of a Cartesian map. This approach becomes problematic when mapping large environments, as the map quickly becomes too large to process and store. Rather than storing the map as a single occupancy grid, our robot (equipped with ultrasonic sonars) views the world as a series of connected spaces. These spaces are initially mapped as occupancy grids in a room-by-room fashion using a modified version of the Histogram In Motion Mapping (HIMM) algorithm extended in this thesis. As the robot leaves a space, signaled by passing through a doorway, it converts the grid to a polygonal representation using a novel edge-detection technique. It then stores the polygonal representation as rooms and hallways in a set of Absolute Space Representations (ASRs) describing how the spaces connect. This representation makes navigation and localization easier for the robot to process. The system also performs localization on the simplified cognitive version of the map using an iterative method that estimates the maximum likelihood of the robot's correct position, accomplished with the Expectation Maximization algorithm. Treating vector directions from the polygonal map as a Gaussian distribution, the Expectation Maximization algorithm is applied, for the first time, to find the most probable pose within a cognitive mapping approach.
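    The HIMM-style grid update the thesis builds on can be pictured with the toy sketch below; the grid size, increment/decrement values, and clipping bound are illustrative, and the thesis's modified algorithm differs in its details:

```python
import numpy as np

# Toy HIMM-style occupancy update: increment the cell where the sonar echo
# lands, decrement the cells the beam passed through in front of it.
# Grid size, increments, and clipping bounds are illustrative assumptions.
GRID = np.zeros((50, 50), dtype=int)
INC, DEC, MAX_VAL = 3, 1, 15

def himm_update(grid, beam_cells):
    """beam_cells: cells from the robot to the echo; the last one is the echo cell."""
    for cell in beam_cells[:-1]:             # traversed cells: more likely free
        grid[cell] = max(0, grid[cell] - DEC)
    echo = beam_cells[-1]                    # cell at the measured range: occupied
    grid[echo] = min(MAX_VAL, grid[echo] + INC)

# One reading straight ahead of a robot sitting at cell (25, 25), echo 12 cells away.
beam = [(25, 25 + i) for i in range(1, 13)]
himm_update(GRID, beam)
print(GRID[25, 26:38])   # -> [0 0 0 0 0 0 0 0 0 0 0 3]
```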

    Developing integrated data fusion algorithms for a portable cargo screening detection system

    Towards a one-size-fits-all solution to cocaine detection at borders, this thesis proposes a systematic cocaine detection methodology that can use the raw data output of a fibre optic sensor to produce a set of unique features whose decisions can be combined to give a reliable output. This multidisciplinary research makes use of real data sourced from a cocaine-analyte-detecting fibre optic sensor developed by one of the collaborators, City University, London. The research advocates a two-step approach. In the first step, the raw sensor data are collected and stored, and level-one fusion, i.e. analysis, pre-processing and feature extraction, is performed. In the second step, using experimentally pre-determined thresholds, each feature decides whether cocaine has been detected, with a corresponding posterior probability. High-level sensor fusion is then performed locally on this output to combine these decisions and their probabilities at each time interval. The output of every time interval is stored in the database and used as prior data for the next time interval. The final output is a decision on whether cocaine has been detected. The key contributions of this thesis include investigating the use of data fusion techniques as a solution for overcoming challenges in the real-time detection of cocaine using fibre optic sensor technology, together with an innovative user interface design. A generalizable sensor fusion architecture is proposed and implemented using Bayesian and Dempster-Shafer techniques. The results from the implemented experiments show great promise for this architecture, especially in overcoming sensor limitations. A 5-fold cross-validation scheme using a 12-13-1 neural network was used to validate the feature selection process; this validation step yielded a true positive rate of 89.5% and a false alarm rate of 10.5%, with a correlation coefficient of 0.8. Using the Bayesian technique it is possible to achieve 100% detection, while the Dempster-Shafer technique achieves 95% detection using the same features as inputs to the data fusion system.
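    The interval-to-interval Bayesian fusion described above might be sketched as follows. This is a naive-Bayes style stand-in, not the thesis's actual fusion code: the per-feature true-positive and false-alarm rates reuse the reported 89.5%/10.5% figures purely for illustration, and the feature decisions are invented.

```python
# Sketch of the interval-wise Bayesian fusion idea: each feature reports a
# binary decision with assumed true-positive / false-alarm rates, and the
# posterior after one interval becomes the prior for the next interval.

def update(prior, decisions, tpr=0.895, far=0.105):
    """Naive-Bayes style update of P(cocaine present) from feature decisions."""
    p = prior
    for detected in decisions:
        like_present = tpr if detected else (1 - tpr)
        like_absent = far if detected else (1 - far)
        evidence = like_present * p + like_absent * (1 - p)
        p = like_present * p / evidence
    return p

prior = 0.5                                      # uninformative starting prior
for interval_decisions in [[True, False, True], [True, True, True]]:
    prior = update(prior, interval_decisions)    # posterior feeds the next interval
    print(round(prior, 3))
```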

    Task-Driven Integrity Assessment and Control for Vehicular Hybrid Localization Systems

    Throughout the last decade, vehicle localization has attracted significant attention in a wide range of applications, including navigation systems, road tolling, smart parking, and collision avoidance. To deliver on their requirements, these applications need specific localization accuracy. However, current localization techniques lack the required accuracy, especially for mission-critical applications. Although various approaches for improving localization accuracy have been reported in the literature, there is still a need for more efficient and effective measures that can ascribe a level of accuracy to the localization process. Such measures would enable localization systems to manage the localization process and resources so as to achieve the highest possible accuracy, and to mitigate the impact of inadequate accuracy on the target application. In this thesis, a framework for fusing different localization techniques is introduced in order to estimate the location of a vehicle together with a location integrity assessment that captures the impact of the measurement conditions on localization quality. Knowledge about estimate integrity allows the system to plan the use of its localization resources so as to match the target accuracy of the application. The framework provides the tools for modeling the impact of the operating conditions on estimate accuracy and integrity, and thus enables more robust system performance, in three steps. First, localization system parameters are used to construct a feature space that constitutes probable accuracy classes. Because of the strong overlap among accuracy classes in the feature space, a hierarchical classification strategy is developed to address the class-ambiguity problem via a class-unfolding approach (HCCU); HCCU is shown to be superior to other hierarchical configurations. Furthermore, a Context-Based Accuracy Classification (CBAC) algorithm is introduced to enhance the performance of the classification process; in this algorithm, knowledge about the surrounding environment is utilized to optimize classification performance as a function of the observation conditions. Second, a task-driven integrity (TDI) model is developed to make the application modules aware of the trust level of the localization output. This trust level is typically a function of the measurement conditions; therefore, the TDI model monitors specific parameters of the localization technique and, accordingly, infers the impact of changes in the environmental conditions on the quality of the localization process. A generalized TDI solution is also introduced to handle cases where sufficient information about the sensing parameters is unavailable. Finally, the outputs of the employed localization techniques (i.e., location estimates, accuracy, and integrity-level assessments) need to be fused. These techniques are, however, hybrid in nature, and their pieces of information conflict in many situations. Therefore, a novel evidence structure model called the Spatial Evidence Structure Model (SESM) is developed and used to construct a frame of discernment comprising discretized spatial data. SESM-based fusion paradigms are capable of performing the fusion using the information provided by the employed techniques, and both the location-estimate accuracy and the aggregated integrity resulting from the fusion demonstrate superiority over the individual localization techniques.
    Furthermore, a context-aware, task-driven resource allocation mechanism is developed to manage the fusion process. The main objective of this mechanism is to optimize the usage of system resources and achieve task-driven performance. Extensive experimental work is conducted on real-life and simulated data to validate the models developed in this thesis. The experimental results show that task-driven integrity assessment and control is applicable and effective for hybrid localization systems.
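    As a much simpler stand-in for the fusion stage, and explicitly not the SESM evidential model itself, the sketch below merely combines two techniques' location estimates using their accuracy and integrity values as weights, to show the kind of inputs and outputs such a fusion consumes and produces; all numbers are invented.

```python
import numpy as np

# Integrity-weighted combination of two localization outputs (a simplified
# stand-in, not the thesis's SESM fusion). Each input carries a position,
# an accuracy estimate (sigma), and an integrity score in [0, 1].

def fuse(estimates):
    """estimates: list of (position, sigma, integrity). Returns a fused position,
    an effective sigma, and an aggregated integrity."""
    positions = np.array([p for p, _, _ in estimates], dtype=float)
    weights = np.array([integ / (sigma ** 2) for _, sigma, integ in estimates])
    w = weights / weights.sum()
    fused_pos = (w[:, None] * positions).sum(axis=0)
    fused_sigma = 1.0 / np.sqrt(weights.sum())
    fused_integrity = float((w * [i for _, _, i in estimates]).sum())
    return fused_pos, fused_sigma, fused_integrity

gnss  = (np.array([120.4, 45.1]), 5.0, 0.6)   # position (m), sigma (m), integrity
radio = (np.array([118.9, 47.0]), 8.0, 0.9)
print(fuse([gnss, radio]))
```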

    A Decision-Rule Topological Map-Matching Algorithm with Multiple Spatial Data
