
    Automatic Update of Airport GIS by Remote Sensing Image Analysis

    This project investigates ways to automatically update Geographic Information Systems (GIS) for airports by analysing Very High Resolution (VHR) remote sensing images. These GIS databases map the physical layout of an airport by representing a broad range of features (such as runways, taxiways and roads) as georeferenced vector objects. Updating such systems therefore involves both automatic detection of relevant objects from remotely sensed images, and comparison of these objects between bi-temporal images. The size of the VHR images and the diversity of the object types to be captured in the GIS databases make this a very large and complex problem, so we split it into smaller parts which can be framed as instances of image processing problems. The aim of this project is to apply a range of methodologies to these problems and compare their results, providing quantitative data where possible. In this report, we devote a chapter to each sub-problem addressed. Chapter 1 introduces the background and motivation of the project, and describes the problem in more detail. Chapter 2 presents a method for detecting and segmenting runways by detecting their distinctive markings and feeding them into a modified Hough transform; the algorithm was tested on a dataset of six bi-temporal remote sensing image pairs and validated against manually generated ground-truth GIS data provided by Jeppesen. Chapter 3 investigates co-registration of bi-temporal images, a necessary precursor to most direct change detection algorithms. Chapter 4 then tests a range of bi-temporal change detection algorithms (some standard, some novel) on co-registered images of airports, with the aim of producing a change heat-map which may help a human operator rapidly focus attention on areas that have changed significantly. Chapter 5 explores a number of approaches to detecting curvilinear AMDB features such as taxilines and stopbars, by enhancing such features and suppressing others prior to thresholding. Finally, Chapter 6 develops a method for distinguishing between AMDB lines and other curvilinear structures that may occur in an image, by analysing the connectivity between such features and the runways.
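
    The runway detection idea in Chapter 2 can be illustrated with a minimal, hedged sketch: enhance thin bright markings, extract edges, and collect long straight segments with a probabilistic Hough transform. The sketch below uses OpenCV's standard HoughLinesP as a stand-in for the modified Hough transform described above; the file name and all thresholds are illustrative assumptions, not values from the project.

    import cv2
    import numpy as np

    # Illustrative input; a real VHR scene would be tiled before processing.
    img = cv2.imread("airport_vhr.tif", cv2.IMREAD_GRAYSCALE)

    # Top-hat emphasises thin bright structures (painted markings) on
    # darker asphalt before edge detection.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
    edges = cv2.Canny(tophat, 50, 150)

    # Keep only long, nearly unbroken segments; a runway shows up as a
    # cluster of parallel segments sharing one dominant orientation.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=120,
                            minLineLength=200, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            print(f"segment ({x1},{y1})-({x2},{y2}) at {angle:.1f} deg")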

    Development of a Novel Dataset and Tools for Non-Invasive Fetal Electrocardiography Research

    This PhD thesis presents the development of a novel open multi-modal dataset for advanced studies on fetal cardiological assessment, along with a set of signal processing tools for its exploitation. The Non-Invasive Fetal Electrocardiography (ECG) Analysis (NInFEA) dataset features multi-channel electrophysiological recordings characterized by high sampling frequency and digital resolution, a maternal respiration signal, synchronized fetal trans-abdominal pulsed-wave Doppler (PWD) recordings, and clinical annotations provided by expert clinicians at the time of signal collection. To the best of our knowledge, no similar dataset is available. The signal processing tools target both the PWD and the non-invasive fetal ECG, exploiting the recorded dataset. Regarding the former, the study focuses on preparing the signal for the automatic measurement of relevant morphological features already adopted in clinical practice for cardiac assessment. To this aim, a relevant step is the automatic identification of the complete and measurable cardiac cycles in the PWD videos: a rigorous methodology was deployed for the analysis of the different processing steps involved in the automatic delineation of the PWD envelope, followed by different approaches for the supervised classification of cardiac cycles, discriminating between complete and measurable cycles and malformed or incomplete ones. Finally, preliminary measurement algorithms were developed to extract clinically relevant parameters from the PWD. Regarding the fetal ECG, this thesis concentrates on the systematic analysis of adaptive filter performance for non-invasive fetal ECG extraction, identified as the reference tool throughout the thesis. Two further studies are then reported: one on wavelet-based denoising of the extracted fetal ECG, and another on fetal ECG quality assessment from the analysis of the raw abdominal recordings. Overall, the thesis represents an important milestone in the field, promoting the open-data approach and introducing automated analysis tools that could easily be integrated into future medical devices.
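
    For the adaptive-filtering part, a minimal sketch may help fix ideas: a normalised LMS (NLMS) filter predicts the maternal component of an abdominal lead from a reference signal, and the residual approximates the fetal ECG. This is a generic textbook scheme, not the thesis's code; the synthetic signals, filter order and step size are illustrative assumptions.

    import numpy as np

    def nlms_cancel(abdominal, reference, order=32, mu=0.5, eps=1e-6):
        """Subtract the adaptively predicted maternal component."""
        w = np.zeros(order)
        residual = np.zeros_like(abdominal)
        for n in range(order, len(abdominal)):
            x = reference[n - order:n][::-1]    # reference tap vector
            y = w @ x                           # predicted maternal part
            e = abdominal[n] - y                # residual ~ fetal ECG
            w += mu * e * x / (eps + x @ x)     # normalised LMS update
            residual[n] = e
        return residual

    # Synthetic placeholders: ~72 bpm maternal, ~144 bpm fetal component.
    t = np.arange(0, 10, 1 / 1000)
    maternal = np.sin(2 * np.pi * 1.2 * t)
    fetal = 0.2 * np.sin(2 * np.pi * 2.4 * t)
    abdominal = maternal + fetal + 0.05 * np.random.randn(t.size)
    fetal_est = nlms_cancel(abdominal, maternal)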

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being, and will continue to be, generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high-quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Image Processing and Simulation Toolboxes of Microscopy Images of Bacterial Cells

    Recent advances in microscopy imaging technology have allowed the characterization of the dynamics of cellular processes at the single-cell and single-molecule level. Particularly in bacterial cell studies, using E. coli as a case study, these techniques have been used to detect and track internal cell structures such as the nucleoid and the cell wall, and fluorescently tagged molecular aggregates such as FtsZ proteins, Min system proteins, inclusion bodies and the different types of RNA molecules. These studies have been performed using multi-modal, multi-process, time-lapse microscopy, producing both morphological and functional images. To facilitate the finding of relationships between cellular processes, from small-scale (such as gene expression) to large-scale (such as cell division), an image processing toolbox was implemented with several automatic and/or manual features, such as cell segmentation and tracking, intra-modal and inter-modal image registration, and the detection, counting and characterization of several cellular components. Two cellular-component segmentation algorithms were implemented, the first based on the Gaussian distribution and the second on thresholding and morphological structuring functions. These algorithms were used to segment nucleoids and to identify the different stages of FtsZ ring formation (allied with the use of machine learning algorithms), which made it possible to understand how temperature influences the physical properties of the nucleoid and to correlate those properties with the exclusion of protein aggregates from the centre of the cell. Another study used the segmentation algorithms to examine how temperature affects the formation of the FtsZ ring. The validation of the developed image processing methods and techniques was based on benchmark databases manually produced and curated by experts. When dealing with thousands of cells and hundreds of images, these manually generated datasets can become the biggest cost in a research project. To expedite these studies and reduce the cost of manual labour, an image simulation toolbox was implemented to generate realistic artificial images. The proposed toolbox can generate biologically inspired objects that mimic the spatial and temporal organization of bacterial cells and their processes, such as cell growth and division, cell motility, and cell morphology (shape, size and cluster organization). The image simulation toolbox was shown to be useful in the validation of three cell tracking algorithms: Simple Nearest-Neighbour, Nearest-Neighbour with Morphology, and the DBSCAN cluster identification algorithm. It was shown that Simple Nearest-Neighbour performed with great reliability for objects with small velocities, while the other algorithms performed better at higher velocities and when larger clusters were present.
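
    A minimal sketch of the thresholding-and-morphology segmentation route may clarify the pipeline: threshold the image, clean the mask with morphological opening and closing, then label and measure the connected components. scikit-image is used here as a stand-in for the toolbox's own implementation; the file name, threshold choice and structuring-element sizes are illustrative assumptions.

    from skimage import io, filters, measure, morphology

    img = io.imread("ecoli_phase_contrast.tif", as_gray=True)  # illustrative file

    # Global Otsu threshold (dark cells on a bright background), then
    # opening to remove speckle and closing to fill small gaps.
    mask = img < filters.threshold_otsu(img)
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.binary_closing(mask, morphology.disk(2))
    mask = morphology.remove_small_objects(mask, min_size=50)

    # Label connected components and report per-cell morphology.
    labels = measure.label(mask)
    for region in measure.regionprops(labels):
        print(region.label, region.area, region.eccentricity)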

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion and applications, among others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research in the field of pattern recognition.

    X-Ray Image Processing and Visualization for Remote Assistance of Airport Luggage Screeners

    X-ray technology is widely used for airport luggage inspection nowadays. However, the ever-increasing sophistication of threat-concealment measures and types of threats, together with the natural complexity inherent in the contents of each piece of luggage, makes raw x-ray images obtained directly from inspection systems unsuitable for clearly showing various luggage and threat items, particularly low-density objects, which poses a great challenge for airport screeners. This thesis presents efforts to improve the rate of threat detection using image processing and visualization technologies. The principles of x-ray imaging for airport luggage inspection and the characteristics of single-energy and dual-energy x-ray data are first introduced. The image processing and visualization algorithms, selected and proposed for improving single-energy and dual-energy x-ray images, are then presented in four categories: (1) gray-level enhancement, (2) image segmentation, (3) pseudo-coloring, and (4) image fusion. The major contributions of this research include the identification of optimum combinations of common segmentation and enhancement methods, HSI-based color-coding approaches, and dual-energy image fusion algorithms (spatial-information-based and wavelet-based image fusion). Experimental results generated with these image processing and visualization algorithms are shown and compared. Objective image quality measures are also explored in an effort to reduce the overhead of human subjective assessments and to provide more reliable evaluation results. Two software applications were developed: an x-ray image processing application (XIP) and a wireless tablet-PC-based remote supervision system (RSS). In XIP, we implemented the preceding image processing and visualization algorithms in a user-friendly GUI. In RSS, we ported the available image processing and visualization methods to a wireless mobile supervisory station for screener assistance and supervision. Quantitative and on-site qualitative evaluations of various processed and fused x-ray luggage images demonstrate that the proposed image processing and visualization algorithms constitute an effective and feasible means of improving airport luggage inspection.
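
    The wavelet-based fusion contribution can be illustrated with a common textbook rule: decompose both energy channels, average the approximation coefficients, and keep the larger-magnitude detail coefficients. The sketch below, using PyWavelets, is a generic stand-in for the fusion algorithms developed in the thesis; the file names, wavelet and decomposition level are illustrative assumptions.

    import numpy as np
    import pywt

    # Illustrative inputs; both arrays must share the same shape.
    low = np.load("xray_low_energy.npy").astype(float)
    high = np.load("xray_high_energy.npy").astype(float)

    c_low = pywt.wavedec2(low, "db4", level=3)
    c_high = pywt.wavedec2(high, "db4", level=3)

    # Average the coarse approximations; per level, keep whichever
    # detail coefficient has the larger magnitude.
    fused = [(c_low[0] + c_high[0]) / 2.0]
    for dl, dh in zip(c_low[1:], c_high[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dl, dh)))

    result = pywt.waverec2(fused, "db4")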

    Simple identification tools in FishBase

    Simple identification tools for fish species were included in the FishBase information system from its inception. Early tools made use of the relational model and characters such as fin-ray meristics. Soon pictures and drawings were added as a further help, similar to a field guide. Later came the computerization of existing dichotomous keys, again in combination with pictures and other information, and the ability to restrict possible species by country, area, or taxonomic group. Today, www.FishBase.org offers four different ways to identify species. This paper describes these tools with their advantages and disadvantages, and suggests various options for further development. It explores the possibility of a holistic and integrated computer-aided strategy.

    A Multi-Anatomical Retinal Structure Segmentation System For Automatic Eye Screening Using Morphological Adaptive Fuzzy Thresholding

    An eye exam can be as efficacious as a physical one in revealing health concerns. Retina screening can provide the very first clue to a variety of hidden health issues, including pre-diabetes and diabetes. In the process of clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retina fundus image, where the accuracy of the segmented vessels, optic disc and abnormal lesions strongly affects the diagnosis accuracy, which in turn affects the subsequent clinical treatment steps. This thesis proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in their features and characteristics, retinal vessels, the optic disc and exudate lesions are each extracted by a subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, the proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. It is validated on four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures.
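
    As a rough illustration of the hybrid thresholding-plus-morphology idea, the sketch below enhances dark vessels with a morphological black-hat and then applies a locally adaptive threshold. Note that plain adaptive mean thresholding is used here in place of the adaptive fuzzy thresholding the thesis actually proposes; the file name, kernel and window sizes are illustrative assumptions.

    import cv2

    fundus = cv2.imread("drive_fundus.png")        # illustrative file name
    green = fundus[:, :, 1]                        # vessels contrast best in green

    # Black-hat turns dark vessels on a bright background into bright
    # ridges, which a locally adaptive threshold can then pick out.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    vessels = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)

    binary = cv2.adaptiveThreshold(vessels, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 25, -5)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                              cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    cv2.imwrite("vessel_mask.png", binary)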

    AUTOMATED FEATURE EXTRACTION AND CONTENT-BASED RETRIEVAL OF PATHOLOGY MICROSCOPIC IMAGES USING K-MEANS CLUSTERING AND CODE RUN-LENGTH PROBABILITY DISTRIBUTION

    The dissertation starts with an extensive literature survey of current issues in content-based image retrieval (CBIR) research and the state-of-the-art theories, methodologies, and implementations, covering topics such as general information retrieval theory, imaging, image feature identification and extraction, feature indexing and multimedia database search, user-system interaction, relevance feedback, and performance evaluation. A general CBIR framework is proposed with three layers: image document space, feature space, and concept space. The framework emphasizes that while the projection from the image document space to the feature space is algorithmic and unrestricted, the connection between the feature space and the concept space is based on statistics rather than semantics. The scheme favors image features that do not rely on excessive assumptions about image content. As an attempt to design a new CBIR methodology following this framework, k-means clustering color quantization is applied to pathology microscopic images, followed by code run-length probability distribution feature extraction. Kullback-Leibler divergence is used as the distance measure for feature comparison. For content-based retrieval, the distance between two images is defined as a function of all the individual features. The process is highly automated and the system is capable of working effectively across different tissues without human interference. Possible improvements and future directions are also discussed.
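
    The retrieval pipeline named in the title is concrete enough to sketch end-to-end: quantise colors with k-means, histogram the run lengths of the resulting code image into a probability distribution, and compare two images with a symmetrised Kullback-Leibler divergence. The sketch below assumes horizontal runs; parameters such as k and the maximum run length are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def run_length_feature(image_rgb, k=8, max_run=32):
        """Colour-code the image with k-means, then histogram the
        horizontal run lengths of the code image into a distribution."""
        h, w, _ = image_rgb.shape
        codes = KMeans(n_clusters=k, n_init=10).fit_predict(
            image_rgb.reshape(-1, 3)).reshape(h, w)
        hist = np.zeros(max_run)
        for row in codes:
            run = 1
            for a, b in zip(row[:-1], row[1:]):
                if a == b:
                    run += 1
                else:
                    hist[min(run, max_run) - 1] += 1
                    run = 1
            hist[min(run, max_run) - 1] += 1       # close the last run
        return hist / hist.sum()

    def kl_distance(p, q, eps=1e-9):
        """Symmetrised Kullback-Leibler divergence between two features."""
        p, q = p + eps, q + eps
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    Retrieval then amounts to ranking database images by kl_distance between their stored features and the query's feature.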

    Non-linear dynamical analysis of biosignals

    Biosignals are physiological signals recorded from various parts of the body. Some of the major biosignals are electromyograms (EMG), electroencephalograms (EEG) and electrocardiograms (ECG). These signals are of great clinical and diagnostic importance, and are analysed to understand their behaviour and to extract maximum information from them. However, they tend to be random and unpredictable (non-linear) in nature, so conventional linear methods of analysis are insufficient; analysis using non-linear dynamical system theory, chaos theory and fractal dimensions is proving to be very beneficial. In this project, ECG signals are of interest. Changes in the normal rhythm of a human heart may result in different cardiac arrhythmias, which may be fatal or cause irreparable damage to the heart when sustained over long periods of time. Hence the ability to identify arrhythmias from ECG recordings is important for clinical diagnosis and treatment, and also for understanding the electrophysiological mechanisms of arrhythmias. To achieve this aim, algorithms were developed in MATLAB®. The classical logic of correlation was used to develop algorithms that place signals into the various categories of cardiac arrhythmias. A sample set of 35 known ECG signals was obtained from the Physionet website for testing purposes; later, 5 unknown ECG signals were used to determine the efficiency of the algorithms. A peak detection algorithm was written to detect the QRS complex, the most prominent waveform within an ECG signal, whose shape, duration and time of occurrence provide valuable information about the current state of the heart. The peak detection algorithm, developed using classical linear techniques, gave excellent results with very good accuracy for all the downloaded ECG signals. Later, a peak detection algorithm using the discrete wavelet transform (DWT) was implemented. This code was developed using non-linear techniques and was amenable to implementation; its reduced execution time makes it ideal for real-time processing. Finally, algorithms were developed to calculate the Kolmogorov complexity and Lyapunov exponent, non-linear descriptors that allow the randomness and chaotic nature of ECG signals to be estimated; these measures enable the correct interrogative methods to be applied to a signal to extract maximum information. The codes developed gave fair results: it was possible to differentiate between normal ECGs and ECGs with ventricular fibrillation. The results show that the Kolmogorov complexity measure increases with pathology, from approximately 12.90 for normal ECGs to 13.87–14.39 for ECGs with ventricular fibrillation and ventricular tachycardia. Similar results were obtained for the Lyapunov exponent, with a notable difference between normal ECGs (0–0.0095) and ECGs with ventricular fibrillation (0.1114–0.1799). However, it was difficult to differentiate between different types of arrhythmias.
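
    The Kolmogorov complexity measure reported above is commonly estimated via Lempel-Ziv parsing of a binarised signal. The sketch below shows one such estimator: binarise the ECG about its median, count the distinct phrases an LZ76-style parser needs, and normalise by n / log2(n). It is a generic illustration in Python rather than the project's MATLAB code; the placeholder signal is random noise, not a Physionet record.

    import numpy as np

    def lempel_ziv_complexity(signal):
        """Normalised LZ76 phrase count of the median-binarised signal."""
        med = np.median(signal)
        s = "".join("1" if x > med else "0" for x in signal)
        phrases, i, n = set(), 0, len(s)
        while i < n:
            j = i + 1
            while j <= n and s[i:j] in phrases:    # extend until the phrase is new
                j += 1
            phrases.add(s[i:j])
            i = j
        return len(phrases) * np.log2(n) / n       # length-normalised count

    ecg = np.random.randn(5000)                    # placeholder, not a real record
    print(lempel_ziv_complexity(ecg))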