
    A Detailed Investigation into Low-Level Feature Detection in Spectrogram Images

    Being the first stage of analysis within an image, low-level feature detection is a crucial step in the image analysis process and, as such, deserves suitable attention. This paper presents a systematic investigation into low-level feature detection in spectrogram images, the result of which is the identification of frequency tracks. Analysis of the literature identifies different strategies for accomplishing low-level feature detection; however, the advantages and disadvantages of each have not been explicitly investigated. Three model-based detection strategies are outlined, each extracting an increasing amount of information from the spectrogram, and ROC analysis shows that detection rates increase with the level of extraction. Nevertheless, further investigation suggests that model-based detection has a limitation: it is not computationally feasible to fully evaluate the model of even a simple sinusoidal track. Therefore, alternative approaches, such as dimensionality reduction, are investigated to reduce the complex search space. It is shown that, if carefully selected, these techniques can approach the detection rates of model-based strategies that perform the same level of information extraction. The implementations used to derive the results presented within this paper are available online from http://stdetect.googlecode.com
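
    As a rough illustration of the ROC comparison described above, the sketch below scores a synthetic spectrogram with two detectors, one using per-pixel intensity alone and one exploiting track continuity along time, and compares their areas under the ROC curve. The data and both detectors are illustrative stand-ins, not the paper's model-based strategies.

```python
# Minimal sketch: comparing detectors for frequency tracks in a spectrogram
# via ROC analysis. Synthetic data; detector choices are illustrative, not
# the paper's exact models.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
T, F = 256, 128                      # time frames x frequency bins
spec = rng.rayleigh(1.0, (T, F))     # noise background
track_bins = (64 + 20 * np.sin(2 * np.pi * np.arange(T) / T)).astype(int)
labels = np.zeros((T, F), bool)
labels[np.arange(T), track_bins] = True
spec[labels] += 2.0                  # embed a faint sinusoidal track

# Detector 1: per-pixel intensity (no structural information).
score_pixel = spec.ravel()

# Detector 2: local averaging along time (exploits track continuity,
# i.e. extracts more information from the spectrogram).
score_smooth = uniform_filter(spec, size=(9, 1)).ravel()

for name, s in [("pixel", score_pixel), ("smoothed", score_smooth)]:
    print(name, "AUC =", round(roc_auc_score(labels.ravel(), s), 3))
```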

    Simultaneous localization and mapping with limited sensing using Extended Kalman Filter and Hough transform

    The problem of a robot building a map of an unknown environment while correcting its own position from that map and its sensor data is called the Simultaneous Localization and Mapping (SLAM) problem. As the accuracy and precision of the sensors play an important role in this problem, most proposed systems rely on high-cost laser range sensors or on the relatively newer and cheaper RGB-D cameras. Laser range sensors are too expensive for some applications, and RGB-D cameras impose high power, CPU, or communication requirements for processing data on-board or on a PC. To build a low-cost robot it is more appropriate to use low-cost sensors (such as infrared and sonar). This study aims to map an unknown environment using a low-cost robot, an Extended Kalman Filter, and linear features such as walls and furniture. A loop-closing approach is also proposed. Experiments are performed in the Webots simulation environment.
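
    For illustration, the sketch below works through a single Extended Kalman Filter measurement update for one line landmark (a wall) parameterised as (rho, alpha) in the world frame, the kind of feature a Hough transform would extract from infrared or sonar scans. The pose, noise values, and measurement are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an EKF measurement update against a known wall
# (line landmark). Motion prediction is omitted; numbers are illustrative.
import numpy as np

x = np.array([0.5, 0.2, 0.1])            # robot pose [x, y, theta]
P = np.diag([0.05, 0.05, 0.02])          # pose covariance
rho, alpha = 2.0, 0.0                    # known wall: x*cos(a) + y*sin(a) = rho
R = np.diag([0.01, 0.005])               # measurement noise (range, bearing)

def h(x):
    # Expected (rho, alpha) of the wall as seen from the robot frame.
    return np.array([rho - (x[0] * np.cos(alpha) + x[1] * np.sin(alpha)),
                     alpha - x[2]])

z = np.array([1.45, -0.12])              # line extracted via Hough transform
H = np.array([[-np.cos(alpha), -np.sin(alpha), 0.0],
              [0.0, 0.0, -1.0]])         # Jacobian of h w.r.t. the pose
S = H @ P @ H.T + R                      # innovation covariance
K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
x = x + K @ (z - h(x))                   # corrected pose
P = (np.eye(3) - K @ H) @ P              # corrected covariance
print("updated pose:", x)
```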

    Region-based license plate detection

    Automatic license plate recognition (ALPR) is one of the most important applications of computer techniques in intelligent transportation systems. To recognize a license plate efficiently, however, the location of the license plate must in most cases be detected first. For this reason, detecting the accurate location of a license plate in a vehicle image is considered the most crucial step of an ALPR system, greatly affecting the recognition rate and speed of the whole system. In this paper, a region-based license plate detection method is proposed. In this method, mean shift is first used to filter and segment a color vehicle image in order to obtain candidate regions. These candidate regions are then analyzed and classified to decide whether each contains a license plate. Unlike other existing license plate detection methods, the proposed method focuses on regions, which is demonstrated to be more robust to interference characters and more accurate when compared with other methods. © 2006 Elsevier Ltd. All rights reserved.
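
    A minimal sketch of the region-based idea is given below: mean-shift filtering to produce colour-homogeneous regions, followed by simple geometric tests to keep plate-like candidates. The file name and thresholds are illustrative assumptions; the paper's region classification stage is more sophisticated than an aspect-ratio test.

```python
# Minimal sketch: mean-shift segmentation, then region-level filtering
# for plate-like candidates. Thresholds are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("vehicle.jpg")                      # hypothetical input image
ms = cv2.pyrMeanShiftFiltering(img, sp=10, sr=30)    # spatial / colour radii
gray = cv2.cvtColor(ms, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, _, stats, _ = cv2.connectedComponentsWithStats(bw)

candidates = []
for i in range(1, n):                                # label 0 is background
    x, y, w, h, area = stats[i]
    aspect = w / float(h)
    if 2.0 < aspect < 6.0 and 1000 < area < 30000:   # plate-like geometry
        candidates.append((x, y, w, h))
print(candidates)
```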

    Leveraging External Sensor Data for Enhanced Space Situational Awareness

    Reliable Space Situational Awareness (SSA) is a recognized requirement in the current congested, contested, and competitive environment of space operations. A shortage of available sensors and reliable data sources are current limiting factors for maintaining SSA, and cost constraints prohibit drastically increasing the sensor inventory. Alternative methods are therefore sought to enhance current SSA, including using non-traditional data sources (external sensors) to perform basic SSA catalog maintenance functions. Astronomical imagery, for example, routinely captures serendipitous satellite streaks in the course of observing deep space, but tactics, techniques, and procedures designed to glean useful information from those collections have yet to be rigorously developed. This work examines the feasibility and utility of performing ephemeris positional updates for a Resident Space Object (RSO) catalog using metric data obtained from RSO streaks gathered by astronomical telescopes. The focus of this work is on processing data from three possible streak categories: streaks that only enter, only exit, or cross completely through the astronomical image. Successful use of this data will aid in resolving uncorrelated tracks, space object identification, and threat detection. Incorporating external data sources will also reduce the number of routine collections required by existing SSA sensors, freeing them for more demanding tasks. The results demonstrate that accurate orbital reconstruction can be performed using an RSO streak in a distorted image without applying calibration frames, and that partially bound streaks provide results similar to traditional data, with a mean degradation of 6.2% in right ascension and 42.69% in declination. The methodology developed can also be applied to dedicated SSA sensors to extract data from serendipitous streaks gathered while observing other RSOs.
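
    The sketch below illustrates one plausible front end for such metric extraction, assuming a FITS frame with a valid WCS solution: detect the longest streak with a probabilistic Hough transform and map its endpoints to right ascension and declination. The file name and detection thresholds are assumptions, and the paper's handling of partially bound streaks is not reproduced here.

```python
# Minimal sketch: find a satellite streak in an astronomical frame and
# convert its pixel endpoints to sky coordinates via the image WCS.
import cv2
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS

hdu = fits.open("field.fits")[0]                     # hypothetical frame
wcs = WCS(hdu.header)
img = cv2.normalize(hdu.data.astype(np.float32), None, 0, 255,
                    cv2.NORM_MINMAX).astype(np.uint8)

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=5)
if lines is not None:
    # Keep the longest detected segment as the streak.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    sky = wcs.pixel_to_world([x1, x2], [y1, y2])     # endpoint RA/Dec
    print(sky)
```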

    Automatic Fracture Orientation Extraction from SfM Point Clouds

    Geology seeks to understand the history of the Earth and its surface processes through characterisation of surface formations and rock units. Chief among the geologist's tools are rock unit orientation measurements, such as Strike, Dip and Dip Direction. These allow an understanding of both surface and sub-structure at both the local and macro scale. Although the ways these techniques can be used to characterise geology are well understood, the need to collect the measurements by hand adds time and expense to the geologist's work, precludes spontaneity in field work, and limits coverage to where the geologist can physically reach. In robotics and computer vision, multi-view geometry techniques such as Structure from Motion (SfM) allow reconstruction of objects and scenes from multiple camera views. SfM-based techniques provide advantages over Lidar-type techniques in cost and flexibility of use in more varied environmental conditions, while sacrificing extreme levels of fidelity; even so, camera-based techniques such as SfM have developed to the point where accuracy in the decimetre range is possible. Here a system is presented to automate the measurement of Strike, Dip and Dip Direction using multi-view geometry from video. Rather than deriving measurements with a method applied to the images, such as the Hough transform, this method takes measurements directly from the software-generated point cloud. Point cloud noise is mitigated using a Mahalanobis distance implementation, significant structure is characterised using a k-nearest neighbour region growing algorithm, and final surface orientations are quantified using the fitted plane and its normal direction cosines.
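
    The final step lends itself to a short worked example. The sketch below fits a plane to a synthetic point-cloud patch by SVD and converts the normal's direction cosines to dip, dip direction and strike (using the convention strike = dip direction - 90 degrees); the input points are illustrative.

```python
# Minimal sketch: plane fit by SVD, then dip / dip direction / strike
# from the unit normal's direction cosines. Synthetic patch of a rock face.
import numpy as np

pts = np.array([[0, 0, 0], [1, 0, 0.5], [0, 1, 0.2],
                [1, 1, 0.7], [0.5, 0.5, 0.35]], float)
centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered)
n = vt[-1]                                # normal = least-variance direction
n = n if n[2] > 0 else -n                 # force an upward-pointing normal

dip = np.degrees(np.arccos(n[2]))                     # angle from vertical
dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360    # azimuth of steepest descent
strike = (dip_dir - 90) % 360                         # right-hand-rule strike
print(f"dip={dip:.1f}, dip direction={dip_dir:.1f}, strike={strike:.1f}")
```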

    Automated retinal analysis

    Diabetes is a chronic disease affecting over 2% of the population in the UK [1]. Long-term complications of diabetes can affect many different systems of the body, including the retina of the eye. In the retina, diabetes can lead to a disease called diabetic retinopathy, one of the leading causes of blindness in the working population of industrialised countries. The risk of visual loss from diabetic retinopathy can be reduced if treatment is given at the onset of sight-threatening retinopathy. To detect early indicators of the disease, the UK National Screening Committee has recommended that diabetic patients receive annual screening by digital colour fundal photography [2]. Manually grading retinal images is a subjective and costly process requiring highly skilled staff. This thesis describes an automated diagnostic system based on image processing and neural network techniques, which analyses digital fundus images so that early signs of sight-threatening retinopathy can be identified. Within retinal analysis this research has concentrated on the development of four algorithms: optic nerve head segmentation, lesion segmentation, image quality assessment and vessel width measurement. The research amalgamated these four algorithms with two existing techniques to form an integrated diagnostic system. Used as a 'pre-filtering' tool, the diagnostic system successfully reduced the number of images requiring human grading by 74.3%; this was achieved by identifying and excluding images without sight-threatening maculopathy from manual screening.
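
    A minimal sketch of the 'pre-filtering' decision logic, as described, might look like the following; the inputs stand in for outputs of the thesis's quality-assessment and lesion-segmentation algorithms and are illustrative assumptions.

```python
# Minimal sketch of pre-filter gating: only images that pass quality
# assessment AND show no maculopathy indicators are excluded from manual
# grading. Component outputs are hypothetical stand-ins.
def prefilter(quality_ok: bool, lesions_near_macula: int) -> str:
    if not quality_ok:
        return "manual grading"        # ungradable images are always referred
    if lesions_near_macula > 0:
        return "manual grading"        # possible sight-threatening maculopathy
    return "exclude from grading"      # source of the reported 74.3% reduction

print(prefilter(quality_ok=True, lesions_near_macula=0))
```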

    Adaptive object segmentation and tracking

    Efficient tracking of deformable objects moving with variable velocities is an important current research problem. In this thesis a robust tracking model is proposed for the automatic detection, recognition and tracking of target objects which are subject to variable orientations and velocities and are viewed under variable ambient lighting conditions. The tracking model can be applied to efficiently track fast-moving vehicles and other objects in various complex scenarios. The tracking model is evaluated on both colour visible-band and infra-red band video sequences acquired from the air by the Sussex police helicopter and other collaborators, and the observations made validate the improved performance of the model over existing methods. The thesis is divided into three major sections. The first section details the development of an enhanced active contour for object segmentation. The second section describes an implementation of a global active contour orientation model. The third section describes the tracking model and assesses its performance on the aerial video sequences. In the first part of the thesis an enhanced active contour snake model using the difference of Gaussian (DoG) filter is reported and discussed in detail, and an acquisition method based on this enhanced active contour model, which can assist the proposed tracking system, is tested. The active contour model is further enhanced by a disambiguation framework designed to assist multiple object segmentation, which is used to demonstrate that the enhanced active contour model can support robust multiple object segmentation and tracking. The active contour model developed not only facilitates the efficient update of the tracking filter but also decreases the latency involved in tracking targets in real time. As far as computational effort is concerned, the active contour model presented reduces the computational cost by 85% compared to existing active contour models. The second part of the thesis introduces the global active contour orientation (GACO) technique for statistical measurement of contoured object orientation. It is an overall object orientation measurement method which uses the proposed active contour model along with statistical measurement techniques. The use of the GACO technique, incorporating the active contour model, to measure object orientation angle is discussed in detail. A real-time door surveillance application based on the GACO technique is developed and evaluated on the i-LIDS door surveillance dataset provided by the UK Home Office, where GACO achieves a success rate of 92%. Finally, a combined approach involving the proposed active contour model and an optimal trade-off maximum average correlation height (OT-MACH) filter for tracking is presented. The implementation of methods for controlling the area of support of the OT-MACH filter is discussed in detail. Using the proposed active contour as the area of support for the OT-MACH filter is shown to significantly improve the filter's ability to track vehicles moving within highly cluttered visible and infra-red band video sequences.
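
    As a rough illustration of the first component, the sketch below band-passes an image with a difference-of-Gaussian filter and evolves a snake on the result using scikit-image's active contour implementation. The test image, sigmas, snake parameters and circular initialisation are illustrative assumptions, not the thesis's enhanced model.

```python
# Minimal sketch of a DoG-enhanced snake: band-pass the frame with a
# difference-of-Gaussian filter, then evolve an active contour on it.
import numpy as np
from skimage import data, filters, segmentation

img = data.camera().astype(float)
dog = filters.gaussian(img, 1.0) - filters.gaussian(img, 4.0)  # DoG edge map

t = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([256 + 120 * np.sin(t),     # circular initialisation
                        256 + 120 * np.cos(t)])    # (row, col) coordinates
snake = segmentation.active_contour(dog, init, alpha=0.015, beta=10,
                                    w_edge=1, gamma=0.001)
print(snake.shape)                                 # (200, 2) contour points
```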

    Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments

    In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse short-term precise IMU measurements with long-term accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments where neither GNSS nor any other a priori information about the environment is available, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. Fortunately, an opportunity arises from employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls, and extracting attitude from these surfaces can serve as an accurate aiding source which directly mitigates errors arising from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the new aiding method, initial expectations of its performance benefit via simulation, and a hardware implementation of the new algorithm that verifies its validity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
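
    The aiding principle can be sketched in a few lines: fit a plane to depth-camera floor points by SVD and read the camera's tilt from the plane normal, yielding an attitude measurement that a Kalman filter could fuse with gyroscope propagation. The synthetic cloud, noise level and axis conventions below are illustrative assumptions.

```python
# Minimal sketch: floor-plane fit from a depth-camera point cloud, with
# the camera's tilt recovered from the plane normal as an attitude aid.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic floor patch seen by a camera tilted ~2.0 deg about one
# horizontal axis and ~1.0 deg about the other, ~1.2 m above the floor.
xy = rng.uniform(-2.0, 2.0, (500, 2))
z = 0.035 * xy[:, 0] + 0.017 * xy[:, 1] - 1.2 + rng.normal(0, 0.005, 500)
pts = np.column_stack([xy, z])

c = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(c)              # plane fit: least-variance direction
n = vt[-1]
n = n if n[2] > 0 else -n                # keep the upward-pointing normal

tilt_x = np.degrees(np.arctan2(-n[0], n[2]))   # floor slope along x
tilt_y = np.degrees(np.arctan2(-n[1], n[2]))   # floor slope along y
print(f"tilt about y: {tilt_x:.2f} deg, tilt about x: {tilt_y:.2f} deg")
```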