
    A survey on acoustic positioning systems for location-based services

    Positioning systems have become increasingly popular over the last decade for location-based services such as navigation, asset tracking, and asset management. In contrast to outdoor positioning, where the global navigation satellite system has become the standard technology, no consensus has yet emerged for indoor environments despite the availability of several technologies, such as radio frequency, magnetic field, visible light communications, and acoustics. Among these options, acoustics has emerged as a promising alternative for obtaining high-accuracy, low-cost systems. Nevertheless, acoustic signals face very demanding propagation conditions, particularly in terms of multipath and the Doppler effect. Therefore, even though many acoustic positioning systems have been proposed over the last decades, the topic remains active and challenging. This article surveys the prototypes and commercial systems presented from their first appearance around the 1980s up to 2022. We classify these systems into groups according to the observable they use to calculate the user position, such as time-of-flight, received signal strength, or the acoustic spectrum. Furthermore, we summarize the main properties of these systems in terms of accuracy, coverage area, and update rate, among others. Finally, we evaluate the limitations of these groups based on the link-budget approach, which gives an overview of a system's coverage from parameters such as source and noise level, detection threshold, attenuation, and processing gain. Agencia Estatal de Investigación; Research Council of Norway
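The link-budget approach mentioned in the abstract lends itself to a small worked example. The sketch below is a generic one-way acoustic link budget in Python; the function names and all numeric values are illustrative assumptions, not figures from the survey.

```python
def echo_snr_db(source_level_db, attenuation_db_per_m, distance_m,
                noise_level_db, processing_gain_db):
    """SNR of a received acoustic signal after one-way propagation.

    Generic link budget: SNR = SL - alpha * d - NL + PG (all in dB).
    """
    return (source_level_db
            - attenuation_db_per_m * distance_m
            - noise_level_db
            + processing_gain_db)


def max_range_m(source_level_db, attenuation_db_per_m,
                noise_level_db, processing_gain_db, detection_threshold_db):
    """Largest distance at which the SNR still meets the detection threshold."""
    margin_db = (source_level_db - noise_level_db
                 + processing_gain_db - detection_threshold_db)
    return max(margin_db, 0.0) / attenuation_db_per_m


# Illustrative numbers: 100 dB source level, 40 dB ambient noise,
# 1.2 dB/m atmospheric attenuation at ultrasonic frequencies,
# 20 dB processing (coding) gain, 15 dB detection threshold.
print(round(max_range_m(100, 1.2, 40, 20, 15), 1))  # coverage in metres
```

Varying the processing gain in this sketch shows why long coded signals extend coverage: at this assumed attenuation, each extra 1.2 dB of gain buys roughly one metre of range.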

    Multimodal machine learning for intelligent mobility

    Scientific problems are solved by finding the optimal solution for a specific task. Some problems can be solved analytically, while others are solved using data-driven methods. The use of digital technologies to improve the transportation of people and goods, referred to as intelligent mobility, is one of the principal beneficiaries of data-driven solutions. Autonomous vehicles are at the heart of the developments that propel intelligent mobility. Because of the high dimensionality and complexity of real-world environments, data-driven solutions need to become commonplace in intelligent mobility, as it is near impossible to program decision-making logic for every eventuality manually. While recent developments in data-driven solutions such as deep learning enable machines to learn effectively from large datasets, applications of these techniques within safety-critical systems such as driverless cars remain scarce. Autonomous vehicles need to make context-driven decisions autonomously in the different environments in which they operate. The recent literature on driverless-vehicle research is heavily focused on road and highway environments and has discounted pedestrianized areas and indoor environments. These unstructured environments tend to have more clutter and change rapidly over time. Therefore, for intelligent mobility to make a significant impact on human life, it is vital to extend its application beyond structured environments. To further advance intelligent mobility, researchers need to take cues from multiple sensor streams and multiple machine learning algorithms so that decisions can be robust and reliable. Only then will machines be able to operate safely in unstructured and dynamic environments. Towards addressing these limitations, this thesis investigates data-driven solutions for crucial building blocks of intelligent mobility.
Specifically, the thesis investigates multimodal sensor data fusion, machine learning, multimodal deep representation learning, and their application to intelligent mobility. This work demonstrates that mobile robots can use multimodal machine learning to derive driving policy and therefore make autonomous decisions. To facilitate the autonomous decisions necessary for safe driving algorithms, we present algorithms for free-space detection and human activity recognition. Driving these decision-making algorithms are datasets collected throughout this study: the Loughborough London Autonomous Vehicle dataset and the Loughborough London Human Activity Recognition dataset. The datasets were collected using an autonomous platform designed and developed in-house as part of this research activity. The proposed framework for free-space detection is based on an active-learning paradigm that leverages the relative uncertainty of multimodal sensor data streams (ultrasound and camera). It uses an online learning methodology to continuously update the learnt model whenever the vehicle experiences new environments. The proposed free-space detection algorithm enables an autonomous vehicle to self-learn, evolve, and adapt to environments never encountered before. The results illustrate that the online learning mechanism is superior to one-off training of deep neural networks, which requires large datasets to generalize to unfamiliar surroundings. The thesis takes the view that humans should be at the centre of any technological development related to artificial intelligence. This is imperative within the spectrum of intelligent mobility, where an autonomous vehicle should be aware of what humans are doing in its vicinity. Towards improving the robustness of human activity recognition, this thesis proposes a novel algorithm that classifies point-cloud data originating from Light Detection and Ranging sensors.
The proposed algorithm leverages multimodality by using camera data to identify humans and segment the region of interest in the point-cloud data. The corresponding 3-dimensional data are converted to a Fisher Vector representation before being classified by a deep convolutional neural network. The proposed algorithm classifies the indoor activities performed by a human subject with an average precision of 90.3%. Compared to an alternative point-cloud classifier, PointNet[1], [2], the proposed framework outperformed on all classes. The developed autonomous testbed for data collection and algorithm validation, together with the multimodal data-driven solutions for driverless cars, are the major contributions of this thesis. It is anticipated that these results and the testbed will have significant implications for the future of intelligent mobility by amplifying the development of intelligent driverless vehicles.
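The idea of leveraging the relative uncertainty of two sensor streams can be sketched with a simple inverse-variance weighting rule. This is an illustration only: the function name, the weighting rule, and the numbers are assumptions, not the thesis's actual fusion algorithm.

```python
def fuse_free_space(p_camera, var_camera, p_ultrasound, var_ultrasound):
    """Fuse two free-space probability estimates by inverse-variance weighting.

    Each modality reports a probability that a cell is free, plus a variance
    expressing its own uncertainty; the less certain stream gets less weight.
    (Assumed scheme for illustration, not the thesis's implementation.)
    """
    w_cam = 1.0 / var_camera
    w_us = 1.0 / var_ultrasound
    return (w_cam * p_camera + w_us * p_ultrasound) / (w_cam + w_us)


# Camera is confident the cell is free; ultrasound is noisy and unsure,
# so the fused estimate stays close to the camera's belief.
print(round(fuse_free_space(0.9, 0.01, 0.4, 0.25), 3))
```

When one stream's variance grows (e.g. the camera in darkness), the same rule automatically shifts trust to the other modality, which is the intuition behind uncertainty-driven fusion.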

    Sensor-based Collision Avoidance System for the Walking Machine ALDURO

    This work presents a sensor system developed for the robot ALDURO (Anthropomorphically Legged and Wheeled Duisburg Robot) to allow it to detect and avoid obstacles when moving over unstructured terrain. The robot is a large-scale, hydraulically driven, four-legged walking machine developed at the University of Duisburg-Essen, with 16 degrees of freedom at each leg, and is steered by an operator sitting in a cab on the robot body. The operator's Cartesian instructions are processed by a control computer, which converts them into appropriate autonomous leg movements; this makes it necessary for the robot to automatically recognize the obstacles (rocks, trunks, holes, etc.) in its way, locate them, and avoid them. A system based on ultrasound sensors was developed to carry out this task, but such sensors have intrinsic problems related to their poor angular precision. To overcome this, a fuzzy model of the ultrasound sensor, based on the characteristics of the real one, was developed to include the uncertainties of the measurements. A subsequent fuzzy inference step builds, from the measured data, a map of the robot's surroundings to be used as input to the navigation system. The whole sensor system was implemented at a test stand where a full-size leg of the robot is fully functional. The sensors are connected on an I2C network that uses a microcontroller as the interface to the main controller (a personal computer); this relieves the main controller of some data processing, which is carried out by the microcontroller. The sensor system was tested together with the fuzzy data inference, and different sensor arrangements and inference settings were tried in order to achieve a satisfactory result.
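The angular uncertainty that motivates the fuzzy sensor model can be illustrated with a toy membership function. The triangular shape, the 15° beam half-width, and the max-based fuzzy OR below are assumptions chosen for illustration, not the model actually used in this work.

```python
def angular_membership(bearing_deg, beam_halfwidth_deg=15.0):
    """Fuzzy membership that an echo's reflector lies at a given bearing.

    An ultrasonic sensor reports range but not direction: the reflector may
    lie anywhere inside the beam cone. This angular uncertainty is modelled
    here with a triangular membership function peaking on the sensor axis.
    """
    a = abs(bearing_deg)
    if a >= beam_halfwidth_deg:
        return 0.0
    return 1.0 - a / beam_halfwidth_deg


def fuse_occupancy(memberships):
    """Combine evidence from overlapping sensor readings with a fuzzy OR (max)."""
    occ = 0.0
    for m in memberships:
        occ = max(occ, m)
    return occ


# A map cell seen near the axis of one sensor and at the edge of another:
# the confident reading dominates the fused occupancy value.
print(round(fuse_occupancy([angular_membership(2.0), angular_membership(14.0)]), 3))
```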

    BIO-INSPIRED SONAR IN COMPLEX ENVIRONMENTS: ATTENTIVE TRACKING AND VIEW RECOGNITION

    Bats are known for their unique ability to sense the world through echolocation. This allows them to perceive the world in a way that few animals do, but not without some difficulties. This dissertation explores two such tasks using a bio-inspired sonar system: tracking a target object in cluttered environments, and echo view recognition. The use of echolocation for navigating in dense, cluttered environments can be a challenge due to the need for rapid sampling of nearby objects in the face of delayed echoes from distant objects. If long-delay echoes from a distant object are received after the next pulse is sent out, these “aliased” echoes appear as close-range phantom objects. This dissertation presents three reactive strategies for a high pulse-rate sonar system to combat aliased echoes: (1) changing the interpulse interval to move the aliased echoes away in time from the tracked target, (2) changing positions to create a geometry without aliasing, and (3) a phase-based, transmission beam-shaping strategy to illuminate the target and not the aliasing object. While this task relates to immediate sensing needs and lower-level motor loops, view recognition is involved in higher-level navigation and planning. Neurons in the mammalian brain (specifically in the hippocampal formation) named “place cells” are thought to reflect this recognition of place and are involved in implementing a spatial map that can be used for path planning and memory recall. We propose hypothetical “echo view cells” that could contribute (along with odometry) to the creation of the place-cell representations actually observed in bats. We strive to recognize views over extended regions that are many body lengths in size, reducing the number of places to be remembered for a map.
We have successfully demonstrated some of this spatial invariance by training feed-forward neural networks (traditional neural networks and spiking neural networks) to recognize 66 distinct places in a laboratory environment over a limited range of translations and rotations. We further show how the echo view cells respond in between known places and how the population of cell outputs can be combined over time for continuity.
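Strategy (1), shifting the interpulse interval away from aliased clutter, can be sketched as follows. The guard interval and the adjustment rule are illustrative assumptions, not the dissertation's exact policy.

```python
def is_aliased(echo_delay_s, interpulse_interval_s):
    """An echo is aliased if it returns after the next pulse has been sent."""
    return echo_delay_s > interpulse_interval_s


def adjust_ipi(target_delay_s, clutter_delay_s, interpulse_interval_s,
               guard_s=0.002):
    """Shift the interpulse interval (IPI) so an aliased clutter echo does not
    land near the tracked target's echo in the following cycle.

    The guard interval and the lengthening rule are assumed for illustration.
    """
    # Where the long-delay clutter echo appears within the following cycle:
    phantom_delay = clutter_delay_s % interpulse_interval_s
    if abs(phantom_delay - target_delay_s) < guard_s:
        # Phantom overlaps the target: lengthen the IPI to move it away.
        return interpulse_interval_s + 2 * guard_s
    return interpulse_interval_s


# Clutter at 35 ms delay with a 30 ms IPI aliases to a 5 ms phantom,
# colliding with a target at 5 ms, so the IPI is lengthened.
print(is_aliased(0.035, 0.03))
print(round(adjust_ipi(0.005, 0.035, 0.03), 3))
```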

    Towards a bionic bat: A biomimetic investigation of active sensing, Doppler-shift estimation, and ear morphology design for mobile robots.

    Institute of Perception, Action and Behaviour
    So-called CF-FM bats are highly mobile creatures that emit long calls in which much of the energy is concentrated at a single frequency. These bats face sensor-interpretation problems very similar to those of mobile robots equipped with ultrasonic sensors navigating cluttered environments. This dissertation presents biologically inspired engineering on the use of narrowband sonar in mobile robotics. It replicates, using robotics as a modelling medium, how CF-FM bats process and use the constant-frequency part of their emitted call for several tasks, aiming to improve the design and use of narrowband ultrasonic sensors for mobile-robot navigation. The experimental platform for the work is RoBat, the biomimetic sonarhead designed by Peremans and Hallam, mounted on a commercial mobile platform as part of the work reported in this dissertation. System integration, including signal-processing capabilities inspired by the bat's auditory system and closed-loop control of both sonarhead and mobile-base movements, was designed and implemented. The result is a versatile tool for studying the relationship between environmental features, their acoustic correlates, and the cues computable from them, in the context of both static and dynamic real-time closed-loop behaviour. Two models of the signal processing performed by the bat's cochlea were implemented, based on sets of bandpass filters followed by full-wave rectification and low-pass filtering. One filterbank uses Butterworth filters whose centre frequencies vary linearly across the set; the alternative uses gammatone filters, with centre frequencies varying non-linearly across the set. Two methods of estimating Doppler shift from the returning echoes after cochlear signal processing were implemented: the first a simple energy-weighted average of filter centre frequencies, the second a novel neural-network-based technique.
Each method was tested with each of the cochlear models and evaluated in the context of several dynamic tasks in which RoBat was moved at different velocities towards stationary echo sources such as walls and posts. Overall, the performance of the linear filterbank was more consistent than that of the gammatone filterbank. The same applies to the ANN, which showed consistently better noise performance than the weighted average. The effect of multiple reflectors contained in a single echo was also analysed in terms of the error in Doppler-shift estimation when assuming a single wider reflector. Inspired by the Doppler-shift compensation and obstacle-avoidance behaviours found in CF-FM bats, a Doppler-based controller suitable for collision detection and convoy navigation in robots was devised and implemented on RoBat. The performance of the controller is satisfactory despite the low Doppler-shift resolution caused by the lower velocity of the robot compared to real bats. Barshan and Kuc's 2D object-localisation method was implemented and adapted to the geometry of RoBat's sonarhead. Different TOF estimation methods were tested, parabola fitting being the most accurate. Arc scanning, the ear-movement technique to recover elevation cues proposed by Walker and tested in simulation by her, Peremans, and Hallam, was here implemented on RoBat and integrated with Barshan and Kuc's method in a preliminary narrowband 3D tracker. Finally, joint work with Kim, Kämpchen, and Hallam on designing optimal reflector surfaces inspired by the CF-FM bat's large pinnae is presented. Genetic algorithms are used to improve the echolocating capabilities of the sonarhead for both arc-scanning and IID behaviours. Multiple reflectors around the transducer are evolved using a simple light-ray-like model of sound propagation. Results show phase-cancellation problems and the need for a more complete model of wave propagation.
Inspired by a physical model of sound diffraction and reflection in the human concha, a new model is devised and used to evolve pinna surfaces made of finite elements. Some interesting paraboloid shapes are obtained, improving performance significantly with respect to the bare transducer.
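The first Doppler estimator described above, an energy-weighted average of filter centre frequencies, is simple enough to sketch directly. The filterbank centre frequencies and energies below are made-up illustrative numbers, not data from RoBat.

```python
def doppler_shift_weighted_average(centre_freqs_hz, energies, emitted_freq_hz):
    """Estimate Doppler shift as the energy-weighted mean of the cochlear
    filterbank centre frequencies, minus the emission frequency.

    Each filter's output energy weights its centre frequency, so echo energy
    concentrated above the emitted frequency yields a positive shift.
    """
    total_energy = sum(energies)
    mean_freq = sum(f * e for f, e in zip(centre_freqs_hz, energies)) / total_energy
    return mean_freq - emitted_freq_hz


# Illustrative 40 kHz emission; echo energy sits mostly above 40 kHz,
# as it would when approaching a stationary reflector.
freqs = [39500, 40000, 40500, 41000]
energy = [0.05, 0.25, 0.55, 0.15]
print(round(doppler_shift_weighted_average(freqs, energy, 40000), 1))
```

The estimator's resolution is bounded by the filter spacing, which is consistent with the low Doppler-shift resolution reported for the slow-moving robot.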

    An Incremental Navigation Localization Methodology for Application to Semi-Autonomous Mobile Robotic Platforms to Assist Individuals Having Severe Motor Disabilities.

    In the present work, the author explores the issues surrounding the design and development of an intelligent wheelchair platform incorporating the semi-autonomous system paradigm to meet the needs of individuals with severe motor disabilities. The author presents a discussion of the navigation problems that must be solved before any system of this type can be instantiated, and enumerates the general design issues that must be addressed by designers of such systems. This discussion includes reviews of various methodologies that have been proposed as solutions to the problems considered. Next, the author introduces a new navigation method, called Incremental Signature Recognition (ISR), for use by semi-autonomous systems in structured environments. This method is based on the recognition, recording, and tracking of environmental discontinuities: sensor-reported anomalies in measured environmental parameters. The author then proposes a robust, redundant, dynamic, self-diagnosing sensing methodology for detecting and compensating for hidden failures of single sensors and for sensor idiosyncrasies. This technique is optimized for the detection of spatial discontinuity anomalies. Finally, the author gives details of an effort to realize a prototype ISR-based system, along with insights into the various implementation choices made.
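A minimal sketch of the discontinuity detection at the heart of ISR is given below; the threshold rule and the range data are illustrative assumptions, not the prototype's actual signature extraction.

```python
def find_discontinuities(readings, threshold):
    """Indices where consecutive sensor readings jump by more than `threshold`.

    In an ISR-style scheme, such jumps (e.g. a sudden range change as a
    doorway passes the sensor) form the environmental signature that is
    recorded and later matched against for localization.
    """
    return [i for i in range(1, len(readings))
            if abs(readings[i] - readings[i - 1]) > threshold]


# A corridor wall at ~1.0 m with a doorway opening to ~3.0 m and closing again:
# the two jumps are the discontinuity signature for this stretch of corridor.
ranges = [1.0, 1.02, 0.98, 3.1, 3.0, 1.01, 0.99]
print(find_discontinuities(ranges, 0.5))  # → [3, 5]
```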

    Autonomous wheelchair with a smart driving mode and a Wi-Fi positioning system

    Wheelchairs are an important aid that enhances the mobility of people with several types of disabilities, and there has been considerable research and development on wheelchairs to meet the needs of the disabled. From the early manual wheelchairs to their more recent electric-powered counterparts, advancements have focused on improving autonomy of mobility. Other developments, such as advances in Internet technologies, have given rise to the concept of the Internet of Things (IoT), a promising area that has been studied to enhance the independent operation of electric wheelchairs by enabling autonomous navigation and obstacle avoidance. This dissertation briefly describes the design of an autonomous wheelchair of the IPL/IT (Instituto Politécnico de Leiria/Instituto de Telecomunicações) with smart driving features for persons with visual impairments. The objective is to improve the prototype of an intelligent wheelchair. The first prototype of the wheelchair was built to be controlled by voice, ocular movements, and GPS (Global Positioning System). Furthermore, the IPL/IT wheelchair acquired a remote-control feature, which could prove useful for persons with low levels of visual impairment. This tele-assistance mode will be helpful to the family of the wheelchair user or, simply, to a healthcare assistant. Indoor and outdoor positioning systems, with printed directional Wi-Fi antennas, have been deployed to enable precise location of the wheelchair. The underlying framework for the wheelchair system is the IPL/IT low-cost autonomous wheelchair prototype, which is based on IoT technology for improved affordability.
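Wi-Fi positioning of this kind typically starts from a received-signal-strength model. The sketch below uses the standard log-distance path-loss model; the calibration constants are generic assumptions, not values measured for the IPL/IT wheelchair system.

```python
def rssi_to_distance_m(rssi_dbm, rssi_at_1m_dbm=-40.0, path_loss_exponent=2.5):
    """Estimate distance from a Wi-Fi access point using the log-distance
    path-loss model: RSSI(d) = RSSI(1 m) - 10 * n * log10(d).

    The reference RSSI at 1 m and the exponent n are assumed calibration
    constants; in practice both depend on the antenna and the environment.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))


# A -65 dBm reading under the assumed calibration maps to 10 m.
print(rssi_to_distance_m(-65.0))
```

Distances estimated this way from three or more access points can then be combined by trilateration to fix the wheelchair's position; directional antennas help by making each reading less noisy.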

    The development of fire detection robot

    The aim of this thesis is to design and manufacture a fire detection robot that operates especially in industrial areas for fire inspection and early detection. The robot is designed and implemented to patrol prescribed paths, avoiding obstacles through its obstacle-avoidance and motion-planning units, and to scan the environment for fire sources using its fire-detection unit. The robot can follow varying patrol routes along virtual lines defined in the motion-planning unit. The design and implementation of the robot comprised three stages: the design and development of the mechanical system, the design and development of the electronic system, and the preparation of the necessary software. For the mechanical system, computer-aided design and solid-modelling programs were used for the sketch drawings, dimensioning, and 3D modelling of the robot. The robot's carrier body is made of wood and rigid plastic foam, materials that are cheap, strong, and easy to machine. Differential steering was selected for the semi-autonomous robot's drive system, which is powered by four brushed DC (direct current) motors. For the electronic system, custom data-acquisition and control circuits were designed and produced instead of buying commercial boards. Both the schematic diagrams and the printed circuit boards were prepared using the Proteus electronic design program. These circuits control the motion of the motors and establish a data flow between the laptop and the peripheral sensing components. For the software, intelligent algorithms for obstacle avoidance and path tracking were developed.
A multi-sensor detection and evaluation algorithm was also developed to obtain more reliable fire detection information. In conclusion, a fire inspection and detection robot with various functions, intended especially for industrial areas, was designed and manufactured. Tests showed that the system can detect a fire source up to 100 cm away while the robot moves forward at a speed of 0.5 m/s.