
    Accumulator-free Hough Transform for Sequence Collinear Points

    Perception, localization, and navigation of the environment are essential capabilities for autonomous mobile robots and vehicles. For this reason, 2D laser rangefinder sensors are widely used in mobile robot applications to measure the distance from the robot to its surrounding objects. The measurement data generated by the sensor are transmitted to the controller, where they are processed in several steps by one or more suitable algorithms to extract the desired information. The Universal Hough Transform (UHT) is an appropriate and popular algorithm for extracting primitive geometry such as straight lines, which is then used in further data-processing steps. However, the UHT has high computational complexity and requires a so-called accumulator array, making it less suitable for real-time applications that demand high-speed, low-complexity computation. In this study, an Accumulator-free Hough Transform (AfHT) is proposed to reduce the computational complexity and eliminate the need for the accumulator array. The proposed algorithm is validated using measurement data from a 2D laser scanner and compared to the standard Hough Transform. The values extracted by the AfHT agree well with those of the UHT while significantly reducing both the computational complexity and the memory requirements.
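
    For context, the following is a minimal sketch of the accumulator-based line extraction that the AfHT aims to replace, not of the AfHT itself (the abstract does not detail it): every scan point votes along its sinusoid ρ = x·cos θ + y·sin θ in a (ρ, θ) accumulator, and the strongest cell gives the dominant line. The resolutions and the toy data are illustrative assumptions.

```python
import numpy as np

def hough_lines(points, rho_res=0.05, theta_bins=180):
    """Standard accumulator-based Hough transform for line extraction
    from 2D laser-scan points (a baseline sketch; resolutions are
    illustrative assumptions, not values from the paper)."""
    thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
    max_rho = np.max(np.hypot(points[:, 0], points[:, 1]))
    rho_bins = int(np.ceil(2 * max_rho / rho_res)) + 1
    acc = np.zeros((rho_bins, theta_bins), dtype=np.int32)
    for x, y in points:
        # Each point votes along its sinusoid rho = x*cos(theta) + y*sin(theta).
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((rhos + max_rho) / rho_res).astype(int)
        acc[r_idx, np.arange(theta_bins)] += 1
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r * rho_res - max_rho, thetas[t]  # (rho, theta) of the strongest line

# Example: noisy points on the line y = 0.5*x + 1.
xs = np.linspace(0, 5, 100)
pts = np.column_stack([xs, 0.5 * xs + 1 + 0.01 * np.random.randn(100)])
print(hough_lines(pts))
```

    The 2D array `acc` is exactly the accumulator memory and per-point voting cost that the AfHT is designed to eliminate.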

    Explaining Away Results in Accurate and Tolerant Template Matching


    Novel Methodologies for Pattern Recognition of Charged Particle Trajectories in the ATLAS Detector

    By 2029, the Large Hadron Collider will enter its High Luminosity phase (HL-LHC) in order to achieve an unprecedented capacity for discovery. As this phase is entered, it is essential for many physics analyses that the efficiency of the reconstruction of charged particle trajectories in the ATLAS detector is maintained. With levels of pile-up expected to reach ⟨μ⟩ = 200, the number of track candidates that must be processed will increase exponentially in the current pattern matching regime. In this thesis, a novel method for charged particle pattern recognition is developed based on the popular computer vision technique known as the Hough Transform (HT). Our method differs from previous attempts to use the HT for tracking in its data-driven choice of track parameterisation using Principal Component Analysis (PCA), and in its division of the detector space into very narrow tunnels known as sectors. This results in well-separated Hough images across the layers of the detector and relatively little noise from pile-up. Additionally, we show that the memory requirements of a pattern-based track finding algorithm can be reduced by approximately a factor of 5 through a two-stage compression process, without sacrificing any significant track finding efficiency. The new tracking algorithm is compared with an existing pattern matching algorithm, which matches detector hits against a collection of pre-defined patterns of hits generated from simulated muon tracks. Our algorithm is shown to achieve similar track finding efficiency while reducing the number of track candidates per event.
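
    The data-driven parameterisation idea can be pictured with a short sketch: derive a low-dimensional track basis with PCA from simulated hit patterns, then represent each candidate by its projection coefficients, which serve as the axes of the Hough image. The names, array shapes, and toy straight-track model below are illustrative assumptions, not the thesis' detector geometry or sector scheme.

```python
import numpy as np

def pca_track_basis(track_hits, n_params=2):
    """Data-driven track parameterisation via PCA (a sketch of the idea
    described above). track_hits: (n_tracks, n_layers) array of, e.g.,
    one hit coordinate per detector layer for simulated training tracks."""
    mean = track_hits.mean(axis=0)
    centred = track_hits - mean
    # Principal axes of the hit patterns: right singular vectors of the data.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_params]  # top components span the track manifold

def to_hough_params(hits, mean, basis):
    """Project one track candidate onto the PCA basis; the resulting
    coefficients act as the (binned) axes of the Hough image."""
    return basis @ (hits - mean)

# Toy example: straight 'tracks' phi(layer) = a + b*layer across 8 layers.
rng = np.random.default_rng(0)
layers = np.arange(8)
train = np.array([a + b * layers for a, b in rng.normal(0, 1, (500, 2))])
mean, basis = pca_track_basis(train)
print(to_hough_params(train[0], mean, basis))  # ~2 coefficients per track
```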

    A Track Detection Algorithm Based on Computing the Correlation of Traces in the Hough Accumulator

    Introduction. Track detection is one of the main tasks to be solved in trajectory processing. This task can be solved efficiently using the Hough Transform. A track is considered detected if the number of position measurements received over a number of consecutive radar scans and falling into the same cell of the parameter space (the accumulator) exceeds the detection threshold. However, effective practical application of the Hough transform requires a sufficiently long measurement time. When only a small number of scans is available for track detection, measurements also accumulate in those accumulator cells where their traces intersect. Therefore, in order to detect true tracks, additional processing is required to distinguish measurement clusters from different targets based on their geometric proximity. In addition, the large amount of memory and the computational operations needed to maintain the accumulator significantly increase the load on the trajectory processor. Aim. To design a simple algorithm, resilient to false detections, for detecting tracks without maintaining the Hough accumulator in processor memory. Materials and methods. In the proposed algorithm, the construction of measurement traces in the Hough accumulator, followed by selection of the cells with the largest number of traces passing through them, is replaced by computation of the cross-correlations of the traces and clustering of the measurements based on the maximum similarity of their traces. Results. Mathematical simulation with the scenario parameters selected in the paper confirmed that the proposed algorithm detects all tracks existing in the radar field of view and performs error-free association of target position measurements. Conclusion. A track detection algorithm resilient to false detections was created based on the Hough transform. The algorithm does not require the Hough accumulator in processor memory.
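
    A minimal sketch of the accumulator-free idea follows. The paper's similarity measure is the cross-correlation of the traces; as a simplified stand-in, this sketch uses the equivalent geometric fact that the traces of measurements from one straight-line track all pass through a single common point (θ, ρ) in parameter space. The threshold, minimum cluster size, and two-target toy scene are illustrative assumptions.

```python
import numpy as np

def trace_rho(point, theta):
    """Hough trace of a measurement: rho(theta) = x*cos(theta) + y*sin(theta)."""
    x, y = point
    return x * np.cos(theta) + y * np.sin(theta)

def detect_tracks(points, eps=0.1, min_size=4):
    """Accumulator-free track detection sketch: cluster measurements whose
    traces pass through a common point in (theta, rho) space, instead of
    accumulating votes in an array. eps/min_size are illustrative."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    unused, clusters = set(range(n)), []
    for i in range(n):
        for j in range(i + 1, n):
            if i not in unused or j not in unused:
                continue
            # Intersection of traces i and j = parameters of the line i-j.
            dx, dy = pts[j] - pts[i]
            theta = np.arctan2(dx, -dy) % np.pi  # normal direction of the line
            rho = trace_rho(pts[i], theta)
            # Collect all measurements whose trace passes near this point.
            members = [k for k in unused
                       if abs(trace_rho(pts[k], theta) - rho) < eps]
            if len(members) >= min_size:
                clusters.append(sorted(members))
                unused -= set(members)
    return clusters

# Two constant-velocity targets observed over five radar scans.
t = np.arange(5.0)
scene = np.vstack([np.column_stack([t, 2 * t + 1]),
                   np.column_stack([t, -t + 12])])
print(detect_tracks(scene))  # expect two clusters of five measurements each
```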

    Heuristic 3d Reconstruction Of Irregular Spaced Lidar

    As more data sources have become abundantly available, increased interest in 3D reconstruction has emerged in the image processing community. Applications of 3D reconstruction of urban and residential buildings include urban planning, network planning for mobile communication, tourism information systems, spatial analysis of air pollution and noise nuisance, microclimate investigations, and Geographical Information Systems (GISs). Previous, classical 3D reconstruction algorithms utilized aerial photography alone. With the advent of LIDAR systems, current algorithms explore captured LIDAR data as an additional feasible source of information for 3D reconstruction. Preprocessing techniques are proposed for the development of an autonomous 3D reconstruction algorithm, designed to derive three-dimensional models of urban and residential buildings from raw LIDAR data. First, a greedy insertion triangulation algorithm, modified with a proposed noise filtering technique, triangulates the raw LIDAR data. The normal vectors of the resulting triangles are then passed to an unsupervised clustering algorithm, Fuzzy Simplified Adaptive Resonance Theory (Fuzzy SART), which returns a rough grouping of coplanar triangles. A proposed multiple regression algorithm then refines the coplanar grouping by removing further outliers and deriving an improved planar segmentation of the raw LIDAR data. Finally, the intersections of the best-fit roof planes are calculated and nearby points are moved onto those intersections, yielding straight roof ridges. These techniques culminate in a well-defined model approximating the building depicted by the LIDAR data.
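
    The regression-based refinement step can be pictured with a short sketch: fit a plane z = a·x + b·y + c to a rough coplanar group by least squares, then iteratively discard residual outliers. The k-sigma rejection rule and the toy roof facet below are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def refine_plane(points, n_iters=3, k_sigma=2.0):
    """Sketch of regression-based plane refinement: fit z = a*x + b*y + c
    by least squares, drop points with large residuals, and refit. The
    k-sigma rule is an illustrative assumption."""
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iters):
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
        residuals = pts[:, 2] - A @ coeffs
        keep = np.abs(residuals) < k_sigma * residuals.std() + 1e-12
        if keep.all():
            break
        pts = pts[keep]  # discard outliers and refit on the inliers
    return coeffs, pts   # plane z = a*x + b*y + c and the inlier points

# Toy roof facet: z = 0.3x + 0.1y + 5 with noise and two spurious returns.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, (200, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + 5 + 0.02 * rng.normal(size=200)
z[:2] += 3.0  # outliers, e.g. returns from a chimney
plane, inliers = refine_plane(np.column_stack([xy, z]))
print(np.round(plane, 3))  # approximately [0.3, 0.1, 5.0]
```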

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    The Discriminative Generalized Hough Transform for Localization of Highly Variable Objects and its Application for Surveillance Recordings

    This work addresses the localization of arbitrary objects in 2D images in general, and the localization of persons in video surveillance recordings in particular. More precisely, it addresses the localization of specific landmarks. The possibilities and limitations of localization approaches based on the Generalized Hough Transform (GHT), especially the Discriminative Generalized Hough Transform (DGHT), are evaluated. GHT-based approaches determine the number of matching model and feature points; the most likely target point position is the one with the highest number of matching model and feature points. Additionally, the DGHT comprises a statistical learning approach to generate optimal DGHT models, achieving good results on medical images. This work shows that the DGHT is not restricted to medical tasks, but has issues with the large target object variabilities that are frequent in video surveillance tasks. Like all GHT-based approaches, the DGHT considers only the number of matching model-feature-point combinations, which means that all model points are treated independently. This work shows that model points are not independent of each other and that treating them independently results in high error rates. This drawback is analyzed, and a universal solution is presented that is applicable not only to the DGHT but to all GHT-based approaches. The solution is based on an additional classifier that takes the whole set of matching model-feature-point combinations into account to estimate a confidence score. On all tested databases, this approach reduced the error rates drastically, by up to 94.9%. Furthermore, this work presents a general approach for combining multiple GHT models into a deeper model. This can be used to combine the localization results of different object landmarks such as the mouth, nose, and eyes. Similar to Convolutional Neural Networks (CNNs), this splits the target object variability into multiple, smaller variabilities. A comparison of GHT-based approaches with CNNs and a description of the advantages, disadvantages, and potential applications of both approaches conclude this work.
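
    A minimal sketch of the GHT voting scheme discussed above: a model is stored as an R-table of offsets from each model point to a reference point, indexed by a quantized local feature (here simply a gradient-direction bin, an illustrative choice), and every matching model-feature combination casts one independent vote. That the votes are counted independently is exactly the weakness this work addresses with its additional classifier. The names and toy data are assumptions, not the thesis' implementation.

```python
import numpy as np
from collections import defaultdict

def build_r_table(model_points, reference):
    """GHT model: for each (point, direction) pair, store the offset from
    the model point to the reference (target) point, indexed by the
    quantized direction."""
    table = defaultdict(list)
    for point, direction in model_points:
        table[direction].append(np.asarray(reference) - np.asarray(point))
    return table

def ght_vote(feature_points, table, shape):
    """Each matching model-feature combination votes for one candidate
    target position; the accumulator peak is the localization estimate.
    Note that all model points are treated independently here."""
    acc = np.zeros(shape, dtype=np.int32)
    for point, direction in feature_points:
        for offset in table[direction]:
            y, x = (np.asarray(point) + offset).astype(int)
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                acc[y, x] += 1
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return peak, int(acc[peak])  # position and its number of matching points

# Toy model: three landmarks with direction bins 0..2 and reference (5, 5),
# re-found in a feature image where the shape is shifted by (+10, +20).
model = [((3, 4), 0), ((7, 5), 1), ((5, 8), 2)]
table = build_r_table(model, reference=(5, 5))
feats = [((13, 24), 0), ((17, 25), 1), ((15, 28), 2)]
print(ght_vote(feats, table, shape=(40, 40)))  # ((15, 25), 3)
```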