53 research outputs found

    An LBP based Iris Recognition System using Feed Forward Back Propagation Neural Network

    Get PDF
    An iris recognition system using the LBP feature extraction technique with a feed-forward back-propagation neural network is presented. Since iris localization and segmentation are critical for extracting features from eye images, the proposed work uses the Hough circular transform (HCT) to segment the iris region from the eye images. The Local Binary Pattern (LBP) technique is then used to extract features from the segmented iris region, and a feed-forward back-propagation neural network serves as the classifier, operating in the usual two phases of training and testing. LBP is a simple yet very efficient feature operator that labels the pixels of an iris image by thresholding the neighbourhood of each pixel and encoding the result as a binary number. Owing to its discriminative power and computational simplicity, the LBP feature extractor has become a popular approach in various recognition systems. The proposed method decreases both the FAR and the FRR and improves system performance on the given dataset. The average accuracy of the proposed iris recognition system exceeds 97%.
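    To make the operator concrete, here is a minimal NumPy sketch of the basic 8-neighbour LBP described above; the radius-1 neighbourhood and the histogram normalisation are assumptions, since the paper's exact LBP variant is not specified here.

```python
import numpy as np

def lbp_8neighbour(image):
    """Basic 8-neighbour LBP: threshold each pixel's neighbours against
    the centre pixel and pack the comparison bits into one byte."""
    img = np.asarray(image, dtype=np.int32)
    # Offsets of the 8 neighbours, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(codes):
    """256-bin normalised histogram of LBP codes, the usual feature vector."""
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

    The histogram of codes over the segmented iris region would then serve as the input feature vector to the neural-network classifier.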

    An Overview of Advances of Pattern Recognition Systems in Computer Vision

    Get PDF
    First of all, let us give a tentative answer to the following question: what is pattern recognition (PR)? Among all possible answers, the one we consider best suited to the concern of this chapter is: "pattern recognition is the scientific discipline of machine learning (or artificial intelligence) that aims at classifying data (patterns) into a number of categories or classes". But what is a pattern? A pattern recognition system (PRS) is an automatic system that classifies an input pattern into a specific class. It proceeds in two successive tasks: (1) analysis (or description), which extracts the characteristics of the pattern under study, and (2) classification (or recognition), which recognises an object (or pattern) using the characteristics derived from the first task.
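    To make the two-task structure concrete, here is a toy PRS in Python, with PCA standing in for the analysis task and a k-nearest-neighbour classifier for the classification task; the dataset and both components are illustrative choices, not taken from the chapter.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Task 1 (analysis): extract a compact description of each pattern (PCA here).
# Task 2 (classification): assign the described pattern to a class (k-NN here).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

prs = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
prs.fit(X_train, y_train)
print("held-out accuracy:", prs.score(X_test, y_test))
```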

    Inertial Navigation and Mapping for Autonomous Vehicles

    Full text link

    Learning in Non-Cooperative Configurable Markov Decision Processes

    Get PDF
    The Configurable Markov Decision Process framework involves two entities: a Reinforcement Learning agent and a configurator that can modify some environmental parameters to improve the agent's performance. This presupposes that the two actors share the same reward function. What if the configurator does not have the same intentions as the agent? This paper introduces the Non-Cooperative Configurable Markov Decision Process, a setting that allows two (possibly different) reward functions for the configurator and the agent. We then consider an online learning problem in which the configurator has to find the best among a finite set of possible configurations. We propose two learning algorithms, depending on the agent's feedback, that minimize the configurator's expected regret by exploiting the problem's structure. While a naive application of the UCB algorithm yields regret that grows indefinitely over time, we show that our approach suffers only bounded regret. Furthermore, we empirically demonstrate the performance of our algorithms in simulated domains.
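    For reference, the naive baseline the abstract contrasts against is standard UCB1 applied over the finite configuration set; a minimal sketch follows (the paper's own bounded-regret algorithms are not reproduced here, and the reward setup is illustrative).

```python
import math
import random

def ucb1(n_configs, pull, horizon):
    """Naive UCB1 over a finite set of configurations: treat each
    configuration as an arm and the observed return as the arm's reward."""
    counts = [0] * n_configs
    means = [0.0] * n_configs
    for t in range(1, horizon + 1):
        if t <= n_configs:          # play each configuration once first
            arm = t - 1
        else:                       # then pick the highest upper confidence bound
            arm = max(range(n_configs),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)          # deploy the configuration, observe the return
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means, counts

# Toy usage: three hypothetical configurations with Bernoulli returns.
means, counts = ucb1(3, lambda a: float(random.random() < [0.2, 0.5, 0.7][a]), 2000)
print(counts)  # the best configuration should dominate the play counts
```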

    Investigation on the mobile robot navigation in an unknown environment

    Get PDF
    Mobile robots could be used to search for, find, and relocate objects in many types of manufacturing operations and environments. In this scenario, the target objects might reside with equal probability at any location in the environment; the robot must therefore navigate and search the whole area autonomously and be equipped with suitable sensors to detect objects. Novel challenges exist in developing a control system that helps a mobile robot achieve such tasks, including constructing enhanced systems for navigation and vision-based object recognition. The latter is important for the exploration task, which requires an optimal object-recognition technique. In this thesis, these challenges, for an indoor environment, were divided into three sub-problems. In the first, the navigation task involved discovering an appropriate exploration path for the entire environment with minimal sensing requirements; Bug-algorithm strategies were adapted for modelling the environment and implementing the exploration path. The second was a visual-search process, which consisted of employing appropriate image-processing techniques and choosing a suitable viewpoint field for the camera; this study placed particular emphasis on colour segmentation, template matching and Speeded-Up Robust Features (SURF) for object detection. The third problem was the relocating process, which involved using the robot's gripper to grasp the detected target object and move it to the assigned final location; this also included approaching both the target and the delivery site using a visual tracking technique. All code was developed in C++ and C, and libraries including OpenCV and OpenSURF were used for image processing. Each control-system function was tested separately and then in combination as a whole control program. System performance was evaluated on two types of mobile robot: legged and wheeled. In this study, it was necessary to develop a wheeled search robot with a high-performance processor. The experimental results demonstrated that the methodology used for the search robots was highly efficient provided the processor was adequate. It was concluded that a navigation system can be implemented with a minimum number of sensors if they are located and used effectively on the robot's body. The main challenge within a visual-search process is that the environmental conditions are difficult to control, because the search robot executes its tasks in dynamic environments. The additional challenges of scaling these small robots up to useful industrial capabilities were also explored.
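    As an illustration of one of the object-detection techniques mentioned above, the following sketch uses OpenCV's template matching to locate a known object in a camera frame. The thesis's implementation was in C++/C; this Python version shows the same OpenCV call, and the file names and threshold are placeholders.

```python
import cv2

def find_object(scene_path, template_path, threshold=0.8):
    """Locate a known object in a frame by normalised cross-correlation
    template matching; returns a bounding box and score, or None."""
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    if scene is None or template is None:
        raise FileNotFoundError("could not read scene or template image")
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                      # object not visible in this frame
    h, w = template.shape
    x, y = max_loc
    return (x, y, w, h), max_val         # bounding box and match score

print(find_object("frame.png", "target_object.png"))
```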

    Tactile Sensing for Assistive Robotics

    Get PDF

    An Insect-Inspired Target Tracking Mechanism for Autonomous Vehicles

    Get PDF
    Target tracking is a complicated task from an engineering perspective, especially where targets are small and seen against complex natural environments. Owing to the high demand for robust target-tracking algorithms, a great deal of research has focused on this area. However, most engineering solutions developed for this purpose are either unreliable in real-world conditions or too computationally expensive for real-time applications. While engineering methods attack target detection and tracking with high-resolution input images, fast processors and typically computationally expensive methods, a quick glance at nature provides evidence that practical real-world solutions exist. Many animals track targets for predation, territorial or mating purposes, and with millions of years of evolution behind them, it seems reasonable to assume that these solutions are highly efficient. For instance, despite their low-resolution compound eyes and tiny brains, many flying insects have evolved superb abilities to track targets in visual clutter, even in the presence of distracting stimuli such as swarms of prey and conspecifics. The accessibility of the dragonfly for stable electrophysiological recordings makes this insect an ideal and tractable model system for investigating the neuronal correlates of complex tasks such as target pursuit. Studies on dragonflies have identified and characterized a set of neurons likely to mediate target detection and pursuit, referred to as 'small target motion detector' (STMD) neurons. These neurons are selective for tiny targets, velocity-tuned and contrast-sensitive, and they respond robustly to targets even against background motion. They exhibit several higher-order properties that may contribute to the dragonfly's ability to pursue prey with over a 97% success rate, including the recently observed response 'facilitation' (a slow build-up of response to targets that move on long, continuous trajectories) and 'selective attention', a competitive mechanism that selects one target from alternatives. In this thesis, I adopted a bio-inspired approach to the problem of target tracking and pursuit. Directly inspired by recent physiological breakthroughs in understanding the insect brain, I developed a closed-loop target-tracking system that uses an active saccadic gaze-fixation strategy inspired by insect pursuit. First, I tested this model in virtual-world simulations using MATLAB/Simulink. The results show robust performance of this insect-inspired model, achieving high prey-capture success even amid complex background clutter, low contrast and high relative speed of the pursued prey. They also show that including facilitation not only substantially improves success for even short-duration pursuits, but also enhances the ability to 'attend' to one target in the presence of distracters. This insect-inspired system has a relatively simple image-processing strategy compared with state-of-the-art trackers recently developed for computer-vision applications, which incorporate elaborations to handle challenges and non-idealities of natural environments such as local flicker and illumination changes, and non-smooth, non-linear target trajectories. The question therefore arises whether this insect-inspired tracker can match their performance when given similar challenges. I investigated this question by testing both the efficacy and the efficiency of the model in open loop, using a widely used set of videos recorded under natural conditions, and directly compared its performance with several state-of-the-art engineering algorithms using the same hardware, software environment and stimuli. The insect-inspired model tracks small moving targets robustly even in very challenging natural scenarios, outperforming the best of the engineered approaches, and it operates more efficiently, in some cases dramatically so. The computer-vision literature traditionally tests target-tracking algorithms only in open loop, yet one of the main purposes of developing these algorithms is implementation in real-time robotic applications. It therefore remains unclear how these algorithms would perform in closed-loop real-world applications, where sensors and actuators on a physical robot add latency that can affect the stability of the feedback process. Moreover, studies show that animals interact with the target through eye or body movements, which in turn modulate the visual inputs underlying the detection and selection task (via closed-loop feedback); this active vision may be key to how the simple insect brain exploits visual information for complex tasks such as target tracking. I therefore implemented the insect-inspired model, together with insect active vision, on a robotic platform and tested it in indoor and outdoor environments against real-world challenges such as vibration, illumination variation, and distracting stimuli. The experimental results show that the robotic implementation handles these challenges and robustly pursues a target even in highly challenging scenarios.
    Thesis (Ph.D.) -- University of Adelaide, School of Mechanical Engineering, 201
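    As a rough illustration of the facilitation concept described in this abstract (not the thesis's actual model), a first-order gain that builds up while the target stays on a continuous trajectory and decays when the trajectory breaks might look like the sketch below; all time constants are invented for illustration.

```python
class FacilitationGain:
    """Toy model of response 'facilitation': a multiplicative gain that
    builds up while a target moves on a long, continuous trajectory and
    decays back to baseline otherwise. Parameters are illustrative only."""

    def __init__(self, tau_rise=0.5, tau_decay=0.2, dt=0.01, g_max=3.0):
        self.g = 1.0                      # baseline gain
        self.tau_rise, self.tau_decay = tau_rise, tau_decay
        self.dt, self.g_max = dt, g_max

    def update(self, target_near_previous):
        if target_near_previous:          # continuous trajectory: build up
            self.g += (self.g_max - self.g) * self.dt / self.tau_rise
        else:                             # trajectory broken: decay to baseline
            self.g += (1.0 - self.g) * self.dt / self.tau_decay
        return self.g

# Toy usage: gain grows over a smooth 1-second pursuit, then relaxes.
fac = FacilitationGain()
for _ in range(100):
    fac.update(True)
print(round(fac.g, 3))
```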

    Bio-Inspired Robotics

    Get PDF
    Modern robotic technologies have enabled robots to operate in a variety of unstructured and dynamically changing environments, in addition to traditional structured ones, and robots have thus become an important element of our everyday lives. One key approach to developing such intelligent and autonomous robots is to draw inspiration from biological systems. Biological structures, mechanisms, and underlying principles have the potential to provide new ideas that improve conventional robotic designs and control. Such principles usually originate from animal or even plant models, for robots that can sense, think, walk, swim, crawl, jump or even fly. These bio-inspired methods are therefore becoming increasingly important in the face of complex applications. Bio-inspired robotics is leading to the study of innovative structures and of computing with sensory-motor coordination and learning, to achieve intelligence, flexibility, stability, and adaptation in emergent robotic applications such as manipulation, learning, and control. This Special Issue invited original papers with innovative ideas and concepts, new discoveries and improvements, and novel applications and business models relevant to the selected topics of "Bio-Inspired Robotics", a broad and continually expanding field. It collates 30 papers that address some of the important challenges and opportunities in this field.

    Multi-sensor data fusion for the detection and tracking of moving objects from an autonomous vehicle

    Get PDF
    Perception is one of the important steps in the functioning of an autonomous vehicle, or even of a vehicle providing only driver-assistance functions. The vehicle observes the external world with its sensors and builds an internal model of the outer environment, which it keeps up to date using the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle in this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time. Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners these problems, especially DATMO, remain a challenge, producing many false alarms, missed detections, or both. In this thesis we propose that by using a vision sensor (mono or stereo) alongside the laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level within a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level and track-level fusion. For the low level we propose a grid-based fusion technique: by giving appropriate weights (depending on the sensor properties) to each sensor's grid, a fused grid can be obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects that carry more information than their individual versions; we use a Bayesian fusion technique at this level. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the tracks; fusion at this level helps remove false tracks. The second contribution of this research is a fast technique for finding road borders from noisy laser data and using this border information to remove false moving objects: we have observed that many false moving objects appear near road borders due to sensor noise, and if they are not filtered out they result in many false tracks close to the vehicle, falsely causing it to brake or to issue warning messages to the driver. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used for the European Union project INTERSAFE-2. This project is concerned with safety at intersections and aims at reducing injuries and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca, Romania, and INRIA Paris to provide a complete perception and risk-assessment solution for the Volkswagen demonstrator.
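    As an illustration of the low-level (grid-based) fusion described above, the following sketch combines per-sensor occupancy grids by a weighted sum of log-odds; the log-odds formulation and the specific weights are assumptions, since the abstract specifies only a weighted grid combination.

```python
import numpy as np

def fuse_grids_logodds(grids, weights):
    """Low-level fusion: combine per-sensor occupancy grids by a weighted
    sum of log-odds, then map back to occupancy probabilities. The weights
    reflect each sensor's assumed reliability and must be tuned."""
    eps = 1e-6
    fused_logodds = np.zeros_like(grids[0], dtype=np.float64)
    for grid, w in zip(grids, weights):
        p = np.clip(grid, eps, 1.0 - eps)        # avoid log(0)
        fused_logodds += w * np.log(p / (1.0 - p))
    return 1.0 / (1.0 + np.exp(-fused_logodds))  # back to probabilities

# Toy usage: a lidar grid and a stereo-vision grid over the same 2x2 area,
# with the lidar trusted more than the (noisier) stereo sensor.
lidar  = np.array([[0.9, 0.5], [0.1, 0.5]])
stereo = np.array([[0.8, 0.6], [0.2, 0.5]])
print(fuse_grids_logodds([lidar, stereo], weights=[1.0, 0.5]))
```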