
    Optimized Flight Path for Localization Using Line of Bearing

    This research develops optimized flight paths for localization of a target using LOB measurements. The target area is expressed as an error ellipse derived from the measurement errors of the LOBs, and the optimization approach focuses on minimizing the size of this error ellipse. The algorithm for the optimized path is generated and compared with typical flight paths. The optimization routine is based on results revised from similar previous research in the literature, combining a geometrical method for estimating the error ellipse with optimal control. Each LOB gives a possible target area, and this area can be reduced by overlapping the areas obtained from multiple LOBs. The algorithm based on this method is tested in simulation with a single target and with multiple targets. In addition to the analytical simulations, a real-world test is conducted using a remotely controlled truck. From the simulations and the real-world test, the change of the semi-major axis of the error ellipse with an increasing number of measurements, and the total number of measurements needed to achieve a predefined semi-major axis, are verified.
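    As a rough illustration of the geometry involved, the sketch below estimates a target position and the 1-sigma error-ellipse semi-axes from several noisy LOBs by weighted least squares, weighting each bearing by its range-scaled cross-range error. This is a generic construction under our own assumptions, not the authors' algorithm; the function name localize_from_lobs and the weighting model are illustrative.

        import numpy as np

        def localize_from_lobs(sensor_xy, bearings_rad, sigma_bearing):
            """Weighted least-squares intersection of lines of bearing (LOBs).

            A bearing theta taken from sensor position (px, py) constrains the target
            to the line  sin(theta)*x - cos(theta)*y = sin(theta)*px - cos(theta)*py.
            Returns the target estimate and the 1-sigma error-ellipse semi-axes.
            """
            sensor_xy = np.asarray(sensor_xy, dtype=float)
            A = np.column_stack([np.sin(bearings_rad), -np.cos(bearings_rad)])
            b = np.sum(A * sensor_xy, axis=1)

            # Unweighted solve first, to get ranges for the measurement weights.
            x0, *_ = np.linalg.lstsq(A, b, rcond=None)
            ranges = np.linalg.norm(sensor_xy - x0, axis=1)

            # Cross-range error of each LOB grows with range: std ~ range * sigma.
            W = np.diag(1.0 / (ranges * sigma_bearing) ** 2)
            cov = np.linalg.inv(A.T @ W @ A)            # covariance of the estimate
            x_hat = cov @ A.T @ W @ b                   # weighted LS position estimate

            eigvals = np.linalg.eigvalsh(cov)
            semi_minor, semi_major = np.sqrt(eigvals)   # 1-sigma ellipse semi-axes
            return x_hat, semi_major, semi_minor

        # Example: three bearings taken along a flight path, target near (50, 80).
        sensors = np.array([(0.0, 0.0), (40.0, 0.0), (80.0, 10.0)])
        target = np.array([50.0, 80.0])
        bearings = np.arctan2(target[1] - sensors[:, 1], target[0] - sensors[:, 0])
        print(localize_from_lobs(sensors, bearings, sigma_bearing=np.radians(2.0)))

    Adding measurements (rows of A) shrinks the covariance, which is the mechanism the flight-path optimization exploits when it tracks the semi-major axis against the number of LOBs.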

    Optimal sensor arrangements in Angle of Arrival (AoA) and range based localization with linear sensor arrays

    This paper investigates the linear separation requirements for Angle-of-Arrival (AoA) and range sensors needed to achieve optimal performance in estimating the position of a target from multiple, typically noisy, sensor measurements. We analyse the sensor-target geometry in terms of the Cramér-Rao inequality and the corresponding Fisher information matrix in order to characterize localization performance with respect to the linear spatial distribution of sensors. Both fixed and adjustable linear sensor arrays are considered.
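    The Fisher-information analysis referred to above can be illustrated numerically. For bearing-only sensors with independent Gaussian noise, each sensor contributes (1/(sigma^2 r_i^2)) u_i u_i^T to the Fisher information, where u_i is the unit vector orthogonal to the line of sight and r_i the range; the trace of the inverse bounds the mean squared position error. The sketch below compares a short and a long linear array, but the specific geometries and noise level are our own illustrative choices, not values from the paper.

        import numpy as np

        def aoa_crb_trace(sensor_xy, target_xy, sigma_bearing):
            """Trace of the Cramer-Rao bound on 2-D position error, bearing-only sensors.

            Each sensor contributes (1 / (sigma^2 * r_i^2)) * u_i u_i^T to the Fisher
            information, where u_i is the unit vector orthogonal to the sensor-target
            line of sight and r_i the sensor-target range.
            """
            d = np.asarray(target_xy, float) - np.asarray(sensor_xy, float)
            r = np.linalg.norm(d, axis=1)
            theta = np.arctan2(d[:, 1], d[:, 0])
            u = np.column_stack([-np.sin(theta), np.cos(theta)])
            fim = (u.T * (1.0 / (sigma_bearing ** 2 * r ** 2))) @ u
            return np.trace(np.linalg.inv(fim))          # bound on mean squared error

        # Two linear arrays on the x-axis, observing a target at (0, 50):
        target = (0.0, 50.0)
        short_array = [(-5.0, 0.0), (0.0, 0.0), (5.0, 0.0)]
        long_array = [(-50.0, 0.0), (0.0, 0.0), (50.0, 0.0)]
        sigma = np.radians(1.0)
        print(aoa_crb_trace(short_array, target, sigma))   # nearly parallel lines of sight, weak down-range information
        print(aoa_crb_trace(long_array, target, sigma))    # wider baseline, much smaller bound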

    Autonomous Swarm Navigation

    Robotic swarm systems attract increasing attention in a wide variety of applications, where a multitude of self-organized robotic entities collectively accomplish sensing or exploration tasks. Compared to a single robot, a swarm system offers advantages in terms of exploration speed, robustness against single points of failure, and collective observation of spatio-temporal processes. Autonomous swarm navigation, including swarm self-localization, the localization of external sources, and swarm control, is essential for the success of an autonomous swarm application. However, as a newly emerging technology, a thorough study of autonomous swarm navigation is still missing. In this thesis, we systematically study swarm navigation systems, with particular emphasis on their collective performance. The general theory of swarm navigation as well as an in-depth study of a specific swarm navigation system proposed for future Mars exploration missions are covered. Concerning swarm localization, a decentralized algorithm is proposed, which achieves near-optimal performance with low complexity for a dense swarm network. Regarding swarm control, a position-aware swarm control concept is proposed: the swarm is aware not only of the position estimates and estimation uncertainties of itself and the sources, but also of the potential motions that would enrich position information. As a result, the swarm actively adapts its formation to improve localization performance, without losing track of other objectives such as goal approaching and collision avoidance. The autonomous swarm navigation concept described in this thesis is verified for a specific Mars swarm exploration system. More importantly, the concept is generally adaptable to an extensive range of swarm applications.

    A new Measure for Optimization of Field Sensor Network with Application to LiDAR

    This thesis proposes a solution to the problem of modeling and optimizing a field sensor network in terms of coverage performance. The term field sensor refers to a class of sensors that detect regions in 2D/3D space through non-contact measurements; the most widely used field sensors include cameras, LiDAR, ultrasonic sensors, and RADAR. The key challenge in applications of field sensor networks, such as area coverage, is to develop an effective performance measure that involves both sensor and environment parameters. The spatially distributed nature of field sensing makes such a development difficult and, hence, an interesting research problem. Several attempts have been made in the literature to tackle this problem; however, they do not provide a comprehensive approach applicable to distinct types of field sensors (in 3D), as only the coverage of a particular sensor is usually addressed at a time. In addition, no coverage model has yet been proposed for some types of field sensors, such as LiDAR. In this dissertation, a coverage model is obtained for field sensors based on the transformation of sensor and task parameters into a sensor geometric model. By providing a mathematical description of the sensor's sensing region, a performance measure is introduced that characterizes the closeness between a single sensor configuration and the target configuration. The first contribution is the development of an infinity-norm-based measure that describes the target's distance to the closure of the sensing region, expressed by an area-based approach. The second contribution can be geometrically interpreted as mapping the sensor's sensing region to an n-ball using a homeomorphism and developing a performance measure on that ball. The third contribution is introducing the measurement principle and establishing the coverage model for the class of solid-state (flash) LiDAR sensors. The fourth contribution is a point-density analysis and the coverage model for the class of mechanical (prism rotating mechanism) LiDAR sensors. Finally, the effectiveness of the proposed coverage model is illustrated by simulations, experiments, and comparisons throughout the dissertation. The coverage model is a powerful tool, as it applies to a variety of field sensors.
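    One plausible reading of an infinity-norm closeness measure, restricted to the special case of an axis-aligned box-shaped sensing region, is sketched below: the target coordinates are normalized by the region's half-extents, so the measure is at most 1 when the target is covered and grows with the distance to the region otherwise. The dissertation's measures are defined for general sensing-region geometries (including the n-ball mapping), so this box-only version is purely an illustrative assumption.

        import numpy as np

        def box_coverage_measure(target, center, half_extents):
            """Infinity-norm closeness of a target to a box-shaped sensing region.

            Coordinates are normalized so the region boundary maps to the unit cube:
            the measure is <= 1 when the target is covered and > 1 otherwise, growing
            with the distance from the region.
            """
            z = (np.asarray(target, float) - np.asarray(center, float)) / np.asarray(half_extents, float)
            return np.linalg.norm(z, ord=np.inf)

        # A hypothetical sensing region of 4 m x 2 m x 1 m centred at (5, 0, 1):
        print(box_coverage_measure((6.0, 0.5, 1.2), center=(5, 0, 1), half_extents=(2, 1, 0.5)))  # 0.5 -> covered
        print(box_coverage_measure((9.0, 0.0, 1.0), center=(5, 0, 1), half_extents=(2, 1, 0.5)))  # 2.0 -> not covered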

    Towards End-to-End Acoustic Localization using Deep Learning: from Audio Signal to Source Position Coordinates

    Full text link
    This paper presents a novel approach to indoor acoustic source localization using microphone arrays, based on a Convolutional Neural Network (CNN). The proposed solution is, to the best of our knowledge, the first published work in which a CNN is designed to directly estimate the three-dimensional position of an acoustic source from the raw audio signal, avoiding the use of hand-crafted audio features. Given the limited amount of available localization data, we propose a two-step training strategy. We first train the network on semi-synthetic data generated from close-talk speech recordings, in which we simulate the time delays and distortion suffered by the signal as it propagates from the source to the microphone array. We then fine-tune this network using a small amount of real data. Our experimental results show that this strategy produces networks that significantly improve on existing localization methods based on SRP-PHAT strategies. In addition, our experiments show that the CNN method is more robust to varying speaker gender and to different window sizes than the other methods.
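    The end-to-end idea (raw multichannel waveform in, source coordinates out) can be sketched with a minimal model as below. The layer sizes, strides, and number of microphones are arbitrary choices of ours and not the architecture used in the paper; the two-step strategy (pre-training on semi-synthetic data, fine-tuning on real recordings) would then be applied to such a model.

        import torch
        import torch.nn as nn

        class RawAudioLocalizer(nn.Module):
            """Minimal CNN regressing a 3-D source position from raw multichannel audio.

            Input: (batch, n_mics, n_samples) raw waveforms; output: (batch, 3) = (x, y, z).
            """
            def __init__(self, n_mics=4):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_mics, 32, kernel_size=64, stride=4), nn.ReLU(),
                    nn.Conv1d(32, 64, kernel_size=32, stride=4), nn.ReLU(),
                    nn.Conv1d(64, 128, kernel_size=16, stride=4), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),               # collapse the time axis
                )
                self.regressor = nn.Linear(128, 3)         # (x, y, z) coordinates

            def forward(self, wav):
                return self.regressor(self.features(wav).squeeze(-1))

        model = RawAudioLocalizer(n_mics=4)
        wav = torch.randn(8, 4, 16000)                     # e.g. a batch of 1 s frames at 16 kHz
        loss = nn.functional.mse_loss(model(wav), torch.zeros(8, 3))
        loss.backward()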

    Sensor for Distance Measurement Using Pixel Grey-Level Information

    An alternative method for distance measurement is presented, based on a radiometric approach to the image formation process. The proposed methodology uses images of an infrared emitting diode (IRED) to estimate the distance between the camera and the IRED. Camera output grey-level intensities are a function of the accumulated image irradiance, which in turn is related to the camera-IRED distance by the inverse-square law. The magnitudes that affect image grey-level intensities, and therefore accumulated image irradiance, were analysed and integrated into a differential model, which was calibrated and used for distance estimation over a range of 200 to 600 cm. In this preliminary model, the camera and the emitter were aligned.
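    A toy version of the underlying radiometric relationship is sketched below, assuming a simple model G(d) = k/d^2 + g0 between the accumulated grey level and the camera-IRED distance, calibrated from two reference readings and then inverted. The paper itself uses a calibrated differential model, so the functional form and all numbers here are illustrative assumptions only.

        import numpy as np

        # Toy inverse-square model: accumulated grey level  G(d) = k / d**2 + g0.

        def calibrate(d1, g1, d2, g2):
            """Solve for k and g0 from two (distance, grey-level) calibration points."""
            k = (g1 - g2) / (1.0 / d1 ** 2 - 1.0 / d2 ** 2)
            g0 = g1 - k / d1 ** 2
            return k, g0

        def estimate_distance(g, k, g0):
            """Invert the model to recover distance from an observed grey level."""
            return np.sqrt(k / (g - g0))

        k, g0 = calibrate(200.0, 5000.0, 600.0, 800.0)     # distances in cm, arbitrary grey levels
        print(estimate_distance(2000.0, k, g0))            # intermediate reading -> roughly 331 cm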

    Detecting relative amplitude of IR signals with active sensors and its application to a positioning system

    Nowadays there is increasing interest in smart systems, e.g., smart metering or smart spaces, for which active sensing plays an important role. In such systems, the sample or environment to be measured is irradiated with a signal (acoustic, infrared, radio-frequency, etc.) and some of its features are determined from the transmitted or reflected part of the original signal. In this work, infrared (IR) signals are emitted from different sources (four in this case) and received by a single quadrature angular diversity aperture (QADA) sensor. A code division multiple access (CDMA) technique is used to handle the simultaneous transmission of all the signals and their separation by source at the receiver's processing stage. Furthermore, correlation techniques allow the receiver to determine the amount of energy received from each transmitter by quantifying the main correlation peaks. This technique can be used in any system requiring active sensing; in the particular case of the IR positioning system presented here, the relative amplitudes of those peaks are used to determine the central incidence point of the light from each emitter on the QADA. The proposal tackles the typical phenomena involved, such as distortions caused by the transducer impulse response, the near-far effect in CDMA-based systems, multipath transmissions, and the correlation degradation from non-coherent demodulation. Finally, for each emitter, the angle of incidence on the QADA receiver is estimated, assuming the receiver lies on a horizontal plane, although possibly with an arbitrary rotation about the vertical axis Z. With the estimated angles and the known positions of the LED emitters, the position (x, y, z) of the receiver is determined. The system is validated at different positions in a volume of 3 × 3 × 3.4 m³, obtaining average errors of 7.1, 5.4, and 47.3 cm in the X, Y and Z axes, respectively.
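    The two processing steps described above (per-emitter energy via code correlation, then incidence point from the relative quadrant amplitudes) can be sketched as follows. The correlation step is generic CDMA despreading and the incidence-point expression is the standard quadrant-detector relation; neither is claimed to match the paper's exact formulation, and the quadrant labelling and half_size scaling are assumptions.

        import numpy as np

        def emitter_quadrant_amplitudes(quadrant_signals, codes):
            """Peak correlation magnitude of each emitter's code on each QADA quadrant.

            quadrant_signals: (4, n_samples) received signal per quadrant.
            codes: (n_emitters, code_len) spreading code per emitter.
            """
            amps = np.empty((codes.shape[0], quadrant_signals.shape[0]))
            for e, code in enumerate(codes):
                for q, sig in enumerate(quadrant_signals):
                    amps[e, q] = np.max(np.abs(np.correlate(sig, code, mode="valid")))
            return amps

        def incidence_point(a, b, c, d, half_size):
            """Quadrant-detector estimate of the light-spot centre (x, y) on the QADA."""
            s = a + b + c + d
            x = half_size * ((a + d) - (b + c)) / s
            y = half_size * ((a + b) - (c + d)) / s
            return x, y

        rng = np.random.default_rng(0)
        codes = rng.choice([-1.0, 1.0], size=(4, 255))                # one code per LED emitter
        gains = np.array([0.6, 0.9, 1.1, 0.8])                        # synthetic quadrant gains for LED 0
        signals = gains[:, None] * codes[0] + 0.05 * rng.standard_normal((4, 255))
        amps = emitter_quadrant_amplitudes(signals, codes)
        print(incidence_point(*amps[0], half_size=10.0))              # spot centre, same units as half_size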

    mmWave V2V Localization in MU-MIMO Hybrid Beamforming

    Recent trends in vehicular localization in millimetre-wave (mmWave) channels include employing a combination of parameters such as the angle of arrival (AOA), angle of departure (AOD), and time of arrival (TOA) of the transmitted/received signals. These parameters are challenging to estimate, which, along with the scattering and random nature of mmWave channels and vehicle mobility, leads to localization errors. To circumvent these challenges, this paper proposes mmWave vehicular localization that employs differences of arrival in time and frequency, together with multiuser (MU) multiple-input-multiple-output (MIMO) hybrid beamforming, rather than relying on AOD/AOA/TOA estimates. The vehicular localization can exploit the number of vehicles present, as an increase in the number of vehicles reduces the Cramér-Rao bound (CRB) of the estimation error. At 10 dB signal-to-noise ratio (SNR), spatial multiplexing and beamforming yield comparable localization errors; at lower SNR values, spatial multiplexing leads to larger errors than beamforming due to the formation of spurious peaks in the cross-ambiguity function. The accuracy of the estimated parameters is improved by employing an extended Kalman filter, leading to a root mean square (RMS) localization error of approximately 6.3 meters.
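    The cross-ambiguity function mentioned above, whose peak yields the time and frequency differences of arrival, can be computed by brute force as sketched below. This is a generic textbook construction, not the paper's implementation; the sign convention, sampling rate, and search grids are assumptions.

        import numpy as np

        def cross_ambiguity(s1, s2, fs, max_delay, doppler_bins):
            """Brute-force cross-ambiguity function over delay (samples) and Doppler (Hz).

            The location of the peak of |CAF| gives the time/frequency difference of
            arrival between the two received signals s1 and s2.
            """
            t = np.arange(len(s1)) / fs
            delays = np.arange(-max_delay, max_delay + 1)
            caf = np.empty((len(delays), len(doppler_bins)))
            for i, d in enumerate(delays):
                s2_shift = np.roll(s2, d)
                for j, fd in enumerate(doppler_bins):
                    caf[i, j] = np.abs(np.sum(s1 * np.conj(s2_shift) * np.exp(-2j * np.pi * fd * t)))
            return caf, delays

        # Synthetic check: s2 is s1 delayed by 5 samples and Doppler-shifted by 50 Hz.
        fs = 8000.0
        t = np.arange(2048) / fs
        rng = np.random.default_rng(1)
        s1 = rng.standard_normal(2048) + 1j * rng.standard_normal(2048)
        s2 = np.roll(s1, 5) * np.exp(2j * np.pi * 50.0 * t)
        dopplers = np.arange(-100.0, 101.0, 10.0)
        caf, delays = cross_ambiguity(s1, s2, fs, max_delay=10, doppler_bins=dopplers)
        i, j = np.unravel_index(np.argmax(caf), caf.shape)
        print(delays[i], dopplers[j])   # peak at (-5 samples, -50 Hz) under this sign convention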