422 research outputs found

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data that are being, and will continue to be, generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high-quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Bio-inspired log-polar based color image pattern analysis in multiple frequency channels

    The main topic addressed in this thesis is the implementation of color image pattern recognition based on the lateral inhibition subtraction phenomenon combined with a complex log-polar mapping in multiple spatial frequency channels. It is shown that the individual red, green and blue channels have different recognition performances when put in the context of former work done by Dragan Vidacic. It is observed that the green channel performs better than the other two channels, with the blue channel having the poorest performance. Following the application of a contrast stretching function, the object recognition performance is improved in all channels. Multiple spatial frequency filters were designed to simulate the filtering channels that occur in the human visual system. After these preprocessing steps, Dragan Vidacic's methodology is followed in order to determine the benefits obtained from the preprocessing steps being investigated. It is shown that performance gains are realized by using such preprocessing steps.
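
    As a rough illustration of the preprocessing described above (not the actual pipeline used in the thesis or in Vidacic's work), the sketch below applies per-channel contrast stretching followed by a log-polar resampling of an RGB image in plain NumPy; the function names, grid sizes, and nearest-neighbour sampling are illustrative choices.

```python
# Minimal sketch: contrast stretching plus log-polar resampling, per color channel.
# All names and parameters are illustrative, not taken from the original work.
import numpy as np

def contrast_stretch(channel):
    """Linearly rescale a single channel to the full [0, 1] range."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo + 1e-12)

def log_polar(channel, n_rho=128, n_theta=256):
    """Resample a channel onto a log-polar grid centred on the image centre."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    rho = np.linspace(0.0, np.log(max_r), n_rho)            # log-radius axis
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r = np.exp(rho)[:, None]                                 # (n_rho, 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)), 0, h - 1).astype(int)
    xs = np.clip(np.round(cx + r * np.cos(theta)), 0, w - 1).astype(int)
    return channel[ys, xs]                                   # (n_rho, n_theta)

rgb = np.random.rand(256, 256, 3)                            # placeholder image
mapped = np.stack([log_polar(contrast_stretch(rgb[..., c])) for c in range(3)], axis=-1)
```

    In the full system described above, band-pass filters simulating the spatial frequency channels of the human visual system would be applied in addition to these steps.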

    Computational Depth-resolved Imaging and Metrology

    In this thesis, the main research challenge boils down to extracting 3D spatial information about an object from 2D measurements using light. Our goal is to achieve depth-resolved tomographic imaging of transparent or semi-transparent 3D objects, and to perform topography characterization of rough surfaces. The essential tool we use is computational imaging, where, depending on the experimental scheme, indirect measurements are often taken and tailored algorithms are employed to perform image reconstruction. The computational imaging approach enables us to relax the hardware requirements of an imaging system, which is essential when using light in the EUV and x-ray regimes, where high-quality optics are not readily available. In this thesis, visible and infrared light sources are used, where computational imaging also offers several advantages. First of all, it often leads to a simple, flexible imaging system with low cost. In the case of a lensless configuration, where no lenses are involved in the final image-forming stage between the object and the detector, aberration-free image reconstructions can be obtained. More importantly, computational imaging provides quantitative reconstructions of scalar electric fields, enabling phase imaging, numerical refocus, as well as 3D imaging.
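
    The "numerical refocus" capability mentioned above can be illustrated with the standard angular-spectrum propagator, a common building block in computational imaging; the sketch below is a generic example under that assumption, not the specific reconstruction algorithms developed in the thesis, and all names and parameter values are illustrative.

```python
# Angular-spectrum propagation of a complex scalar field (generic sketch).
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex field by a distance dz (all lengths in metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    # Keep propagating components only; evanescent waves are suppressed.
    transfer = np.exp(1j * kz * dz) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: refocus a simulated field by 1 mm at 633 nm with 5 µm pixels.
field = np.exp(1j * np.random.rand(512, 512))
refocused = angular_spectrum_propagate(field, dz=1e-3, wavelength=633e-9, dx=5e-6)
```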

    Optical Coherence Tomography and Its Non-medical Applications

    Optical coherence tomography (OCT) is a promising non-invasive, non-contact 3D imaging technique that can be used to evaluate and inspect material surfaces, multilayer polymer films, fiber coils, and coatings. OCT can be used for the examination of cultural heritage objects and 3D imaging of microstructures. With its subsurface 3D fingerprint imaging capability, OCT could be a valuable tool for enhancing security in biometric applications. OCT can also be used for the evaluation of fastener flushness to improve the aerodynamic performance of high-speed aircraft. More and more non-medical applications of OCT are emerging. In this book, we present some recent advancements in OCT technology and non-medical applications.

    Improved 3D MR Image Acquisition and Processing in Congenital Heart Disease

    Congenital heart disease (CHD) is the most common type of birth defect, affecting about 1% of the population. MRI is an essential tool in the assessment of CHD, including diagnosis, intervention planning and follow-up. Three-dimensional MRI can provide particularly rich visualization and information. However, it is often complicated by long scan times, cardiorespiratory motion, injection of contrast agents, and complex and time-consuming postprocessing. This thesis comprises four pieces of work that attempt to respond to some of these challenges. The first piece of work aims to enable fast acquisition of 3D time-resolved cardiac imaging during free breathing. Rapid imaging was achieved using an efficient spiral sequence and a sparse parallel imaging reconstruction. The feasibility of this approach was demonstrated on a population of 10 patients with CHD, and areas of improvement were identified. The second piece of work is an integrated software tool designed to simplify and accelerate the development of machine learning (ML) applications in MRI research. It also exploits the strengths of recently developed ML libraries for efficient MR image reconstruction and processing. The third piece of work aims to reduce contrast dose in contrast-enhanced MR angiography (MRA). This would reduce risks and costs associated with contrast agents. A deep learning-based contrast enhancement technique was developed and shown to improve image quality in real low-dose MRA in a population of 40 children and adults with CHD. The fourth and final piece of work aims to simplify the creation of computational models for hemodynamic assessment of the great arteries. A deep learning technique for 3D segmentation of the aorta and the pulmonary arteries was developed and shown to enable accurate calculation of clinically relevant biomarkers in a population of 10 patients with CHD.
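
    For context, a "sparse parallel imaging reconstruction" of the kind referred to above is commonly posed as a regularized SENSE-type inverse problem; the formulation below is the generic textbook version, and the sampling pattern, regularizer, and solver used in the thesis may differ. Here y_c denotes the undersampled k-space data of coil c, S_c the corresponding coil sensitivity map, F_Ω the Fourier transform restricted to the sampled locations Ω, Ψ a sparsifying transform, and λ a regularization weight.

```latex
% Generic regularized SENSE-type reconstruction objective (textbook form;
% the exact formulation used in the thesis may differ).
\hat{x} \;=\; \arg\min_{x}\;
  \tfrac{1}{2}\sum_{c=1}^{N_c} \bigl\lVert F_\Omega S_c\, x - y_c \bigr\rVert_2^2
  \;+\; \lambda \bigl\lVert \Psi x \bigr\rVert_1
```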

    Statistical methods for sparse functional object data: elastic curves, shapes and densities

    Many applications naturally yield data that can be viewed as elements in non-linear spaces. Consequently, there is a need for non-standard statistical methods capable of handling such data. The work presented here deals with the analysis of data in complex spaces derived from functional L2-spaces as quotient spaces (or subsets of such spaces). These data types include elastic curves represented as d-dimensional functions modulo re-parametrization, planar shapes represented as 2-dimensional functions modulo rotation, scaling and translation, and elastic planar shapes combining all of these invariances. Moreover, probability densities can also be thought of as non-negative functions modulo scaling. Since these functional object data spaces lack a natural Hilbert space structure, this work proposes specialized methods that integrate techniques from functional data analysis with those for metric and manifold data. In particular, but not exclusively, novel regression methods for specific metric quotient spaces are discussed. Special attention is given to handling discrete observations, since in practice curves and shapes are typically observed only as a discrete (often sparse or irregular) set of points. Similarly, density functions are usually not directly observed, but a (small) sample from the corresponding probability distribution is available. Overall, this work comprises six contributions that propose new methods for sparse functional object data and apply them to relevant real-world datasets, predominantly in a biomedical context.
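
    To make "curves modulo re-parametrization" concrete, a standard construction from the elastic shape analysis literature represents a curve β by its square-root-velocity function q = β̇/√‖β̇‖ and measures distance in the quotient space by minimizing over re-parametrizations γ. This is given as common background only, not necessarily the exact construction adopted in this work.

```latex
% Elastic (quotient-space) distance between curves in the square-root-velocity
% framework (standard background; the construction used here may differ).
% \Gamma is the group of orientation-preserving re-parametrizations.
d\bigl([q_1],[q_2]\bigr)
  \;=\;
  \inf_{\gamma \in \Gamma}
  \bigl\lVert\, q_1 - (q_2 \circ \gamma)\,\sqrt{\dot{\gamma}} \,\bigr\rVert_{L^2}
```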

    Echocardiography

    The book "Echocardiography - New Techniques" brings worldwide contributions from highly acclaimed clinical and imaging science investigators, and representatives from academic medical centers. Each chapter is designed and written to be accessible to those with a basic knowledge of echocardiography. Additionally, the chapters are meant to be stimulating and educational to the experts and investigators in the field of echocardiography. This book is aimed primarily at cardiology fellows on their basic echocardiography rotation, fellows in general internal medicine, radiology and emergency medicine, and experts in the arena of echocardiography. Over the last few decades, the rate of technological advancements has developed dramatically, resulting in new techniques and improved echocardiographic imaging. The authors of this book focused on presenting the most advanced techniques useful in today's research and in daily clinical practice. These advanced techniques are utilized in the detection of different cardiac pathologies in patients, in contributing to their clinical decision, as well as follow-up and outcome predictions. In addition to the advanced techniques covered, this book expounds upon several special pathologies with respect to the functions of echocardiography

    Contributions to the Completeness and Complementarity of Local Image Features

    Doctoral thesis in Informatics Engineering presented to the Faculdade de Ciências e Tecnologia da Universidade de Coimbra.
    Local image feature detection (or extraction, if we want to use a more semantically correct term) is a central and extremely active research topic in the field of computer vision. Reliable solutions to prominent problems such as matching, content-based image retrieval, object (class) recognition, and symmetry detection often make use of local image features. It is widely accepted that a good local feature detector is one that efficiently retrieves distinctive, accurate, and repeatable features in the presence of a wide variety of photometric and geometric transformations. However, these requirements are not always the most important ones. In fact, not all applications require the same properties from a local feature detector. We can distinguish three broad categories of applications according to the required properties. The first category includes applications in which the semantic meaning of a particular type of features is exploited. For instance, edge or even ridge detection can be used to identify blood vessels in medical images or watercourses in aerial images. Another example in this category is the use of blob extraction to identify blob-like organisms in microscopic images. A second category includes tasks such as matching, tracking, and registration, which mainly require distinctive, repeatable, and accurate features. Finally, a third category comprises applications such as object (class) recognition, image retrieval, scene classification, and image compression. For this category, it is crucial that features preserve the most informative image content (robust image representation), while requirements such as repeatability and accuracy are of less importance. Our research work is mainly focused on the problem of providing a robust image representation through the use of local features. The limited number of feature types that a local feature extractor responds to might be insufficient to provide the so-called robust image representation. It is fundamental to analyze the completeness of local features, i.e., the amount of image information preserved by local features, as well as the often neglected complementarity between sets of features. The major contributions of this work come in the form of two substantially different local feature detectors aimed at providing considerably robust image representations. The first algorithm is an information theoretic-based keypoint extractor that responds to complementary local structures that are salient (highly informative) within the image context. This method represents a new paradigm in local feature extraction, as it introduces context-awareness principles. The second algorithm extracts Stable Salient Shapes, a novel type of regions obtained through a feature-driven detection of Maximally Stable Extremal Regions (MSER). This method provides compact and robust image representations and overcomes some of the major shortcomings of MSER detection. We empirically validate the methods by investigating the repeatability, accuracy, completeness, and complementarity of the proposed features on standard benchmarks. In light of these results, we discuss the applicability of both methods.
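
    For readers unfamiliar with MSER, the baseline detector that Stable Salient Shapes builds on can be run in a few lines with OpenCV; the sketch below shows plain MSER extraction only and does not reproduce the feature-driven selection or the information-theoretic keypoint detector proposed in the thesis. The input path is hypothetical.

```python
# Baseline MSER region extraction with OpenCV (default parameters).
# Assumes opencv-python is installed; "example.jpg" is a hypothetical input.
import cv2

img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)   # regions: lists of pixel coordinates

# Draw the convex hull of each detected region for visual inspection.
canvas = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
hulls = [cv2.convexHull(r.reshape(-1, 1, 2)) for r in regions]
cv2.polylines(canvas, hulls, True, (0, 255, 0))
cv2.imwrite("mser_regions.png", canvas)
```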