432 research outputs found

    Superpixels: An Evaluation of the State-of-the-Art

    Full text link
    Superpixels group perceptually similar pixels to create visually meaningful entities while heavily reducing the number of primitives for subsequent processing steps. Owing to these properties, superpixel algorithms have received much attention since their naming in 2003. Today, publicly available superpixel algorithms have become standard tools in low-level vision. As such, and due to their quick adoption in a wide range of applications, appropriate benchmarks are crucial for algorithm selection and comparison. Until now, the rapidly growing number of algorithms as well as varying experimental setups have hindered the development of a unifying benchmark. We present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms utilizing a benchmark focusing on fair comparison and designed to provide new insights relevant for applications. To this end, we explicitly discuss parameter optimization and the importance of strictly enforcing connectivity. Furthermore, by extending well-known metrics, we are able to summarize algorithm performance independently of the number of generated superpixels, thereby overcoming a major limitation of available benchmarks. We also discuss runtime, robustness against noise, blur, and affine transformations, implementation details, and aspects of visual quality. Finally, we present an overall ranking of superpixel algorithms which redefines the state-of-the-art and enables researchers to easily select appropriate algorithms and the corresponding implementations, which are made publicly available as part of our benchmark at davidstutz.de/projects/superpixel-benchmark/
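
    For illustration only (not the benchmark's own code): a minimal sketch of Boundary Recall, one of the well-known boundary-based measures such benchmarks build on, assuming plain NumPy/SciPy label maps; the tolerance radius r is a hypothetical choice.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def boundaries(labels):
        # Mark pixels whose right or lower neighbour carries a different label.
        b = np.zeros(labels.shape, dtype=bool)
        b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
        b[:-1, :] |= labels[:-1, :] != labels[1:, :]
        return b

    def boundary_recall(superpixels, ground_truth, r=2):
        # Fraction of ground-truth boundary pixels that have a superpixel
        # boundary within a (2r+1) x (2r+1) neighbourhood.
        sp_b = maximum_filter(boundaries(superpixels), size=2 * r + 1)
        gt_b = boundaries(ground_truth)
        return np.logical_and(gt_b, sp_b).sum() / max(gt_b.sum(), 1)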

    Integrated 2-D Optical Flow Sensor

    Get PDF
    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches, both with respect to the applied model of optical flow estimation and to the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth, and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity of the visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30x30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
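
    The analog circuit itself is not reproduced here; as a rough software analogue of the gradient-based, smoothness-regularized estimation described above, a minimal Horn-Schunck-style iteration (a textbook formulation, not the sensor's implementation), where the weight alpha plays the role of the globally adjustable smoothness strength:

    import numpy as np

    def global_flow(I1, I2, alpha=10.0, n_iter=200):
        # Gradient-based optical flow with a global smoothness term.
        # Small alpha: estimates dominated by the local brightness-constancy
        # constraint (close to normal flow); large alpha: heavy smoothing
        # towards a single global flow estimate.
        Ix = np.gradient(I1, axis=1)
        Iy = np.gradient(I1, axis=0)
        It = I2 - I1
        u = np.zeros_like(I1)
        v = np.zeros_like(I1)
        avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                         np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        for _ in range(n_iter):
            u_bar, v_bar = avg(u), avg(v)
            t = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
            u = u_bar - Ix * t
            v = v_bar - Iy * t
        return u, v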

    Recent Progress in Image Deblurring

    Full text link
    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Considering the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness that is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of development, the success of image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and often spatially variant. We provide a holistic understanding of and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
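
    As a concrete, hedged example of the non-blind, spatially invariant case (a standard Wiener deconvolution sketch, not any specific method from the review; the noise-to-signal ratio nsr is an assumed tuning parameter):

    import numpy as np

    def wiener_deblur(blurred, kernel, nsr=1e-2):
        # Non-blind, spatially invariant deblurring: invert a known blur
        # kernel in the frequency domain, with nsr regularizing the
        # otherwise ill-posed inversion.
        k = np.zeros_like(blurred, dtype=float)
        kh, kw = kernel.shape
        k[:kh, :kw] = kernel
        k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre kernel at origin
        K = np.fft.fft2(k)
        B = np.fft.fft2(blurred)
        X = np.conj(K) * B / (np.abs(K) ** 2 + nsr)
        return np.real(np.fft.ifft2(X))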

    Approches tomographiques structurelles pour l'analyse du milieu urbain par tomographie SAR THR : TomoSAR

    No full text
    SAR tomography consists in exploiting multiple images of the same area, acquired from slightly different viewpoints, to retrieve the 3-D distribution of the complex reflectivity on the ground. As the transmitted waves are coherent, the analysed data are complex-valued and the desired spatial information (along the vertical axis) is coded in the phase of the pixels. Many methods have been proposed in recent years to retrieve this information. However, the natural redundancies of the scene are generally not exploited to improve the tomographic estimation step. This Ph.D. thesis presents new approaches to regularize the estimated reflectivity density obtained through SAR tomography by exploiting the geometrical structures of urban areas.
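
    As a minimal illustration of the unregularized tomographic estimation step the thesis builds on (a standard beamforming sketch under idealized geometry assumptions, not the proposed regularized method):

    import numpy as np

    def tomosar_beamforming(g, baselines, heights, wavelength, slant_range):
        # Matched-filter (beamforming) estimate of the reflectivity profile
        # along elevation for a single pixel.
        #   g           complex pixel values across the N acquisitions
        #   baselines   perpendicular baselines of the N acquisitions (m)
        #   heights     elevation positions at which to evaluate the profile (m)
        xi = 2.0 * baselines / (wavelength * slant_range)   # spatial frequencies
        A = np.exp(2j * np.pi * np.outer(xi, heights))      # steering matrix, N x H
        return np.abs(A.conj().T @ g) / len(g)              # reflectivity magnitude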

    Mechatronic Systems

    Get PDF
    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in the time and cost of manufacturing. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices, such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in various fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics for the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of Chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Full text link
    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the resulting cost in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the various spectral variabilities arising in the HS imaging process and the complexity and redundancy of high-dimensional HS signals. Compared to convex models, non-convex modeling, which can characterize more complex real scenes and provide model interpretability both technically and theoretically, has proven to be a feasible way to reduce the gap between challenging HS vision tasks and currently advanced intelligent data processing models.
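
    As one toy example of what non-convex modeling can look like for HS data (an iterative hard-thresholding sketch for L0-constrained sparse unmixing; the endmember library E and sparsity level k are assumed inputs for illustration, not part of the cited work):

    import numpy as np

    def l0_sparse_unmix(E, pixel, k=3, n_iter=200):
        # Fit one HS pixel as a combination of at most k library spectra by
        # iterative hard thresholding, i.e. an L0-constrained (non-convex)
        # least-squares problem: min ||pixel - E a||^2 s.t. ||a||_0 <= k.
        step = 1.0 / np.linalg.norm(E, 2) ** 2
        a = np.zeros(E.shape[1])
        for _ in range(n_iter):
            a = a + step * E.T @ (pixel - E @ a)   # gradient step on the data fit
            keep = np.argsort(np.abs(a))[-k:]      # hard threshold: keep k largest entries
            mask = np.zeros_like(a, dtype=bool)
            mask[keep] = True
            a[~mask] = 0.0
        return a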

    Nephroblastoma in MRI Data

    Get PDF
    The main objective of this work is the mathematical analysis of nephroblastoma in MRI sequences. At the beginning we provide two different datasets for segmentation and classification. Based on the first dataset, we analyze the current clinical practice regarding therapy planning on the basis of annotations by a single radiologist. With our benchmark we show that this approach is not optimal and that there can be significant differences between human annotators, even radiologists. In addition, we demonstrate that the approximation of the tumor shape currently used is too coarse-grained and thus prone to errors. We address this problem and develop a method for interactive segmentation that allows an intuitive and accurate annotation of the tumor. While the first part of this thesis is mainly concerned with the segmentation of Wilms’ tumors, the second part deals with the reliability of diagnosis and the planning of the course of therapy. The second dataset we compiled allows us to develop a method that dramatically improves the differential diagnosis between nephroblastoma and its precursor lesion, nephroblastomatosis. Finally, we show that even the standard MRI modality for Wilms’ tumors is sufficient to estimate the developmental tendencies of nephroblastoma under chemotherapy.

    Segmentation and quantification of spinal cord gray matter–white matter structures in magnetic resonance images

    Get PDF
    This thesis focuses on finding ways to differentiate the gray matter (GM) and white matter (WM) in magnetic resonance (MR) images of the human spinal cord (SC). The aim of this project is to quantify tissue loss in these compartments to study its implications for the progression of multiple sclerosis (MS). To this end, we propose segmentation algorithms that we evaluate on MR images of healthy volunteers. Segmentation of GM and WM in MR images can be done manually by human experts, but manual segmentation is tedious and prone to intra- and inter-rater variability. Therefore, a deterministic automation of this task is necessary. On axial 2D images acquired with a recently proposed MR sequence, called AMIRA, we experiment with various automatic segmentation algorithms. We first use variational model-based segmentation approaches combined with appearance models and later directly apply supervised deep learning to train segmentation networks. Evaluation of the proposed methods shows accurate and precise results, which are on par with manual segmentations. We test the developed deep learning approach on images of conventional MR sequences in the context of a GM segmentation challenge, resulting in superior performance compared to the other competing methods. To further assess the quality of the AMIRA sequence, we apply an already published GM segmentation algorithm to our data, yielding higher accuracy than the same algorithm achieves on images of conventional MR sequences. On a different topic, but related to segmentation, we develop a high-order slice interpolation method to address the large slice distances of images acquired with the AMIRA protocol at different vertebral levels, enabling us to resample our data to intermediate slice positions. From the methodical point of view, this work provides an introduction to computer vision, a mathematically focused perspective on variational segmentation approaches and supervised deep learning, as well as a brief overview of the underlying project's anatomical and medical background.
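
    Purely as an illustration of how such automatic segmentations are commonly compared against manual references (a generic Dice similarity sketch, not the thesis's evaluation code):

    import numpy as np

    def dice(seg, ref):
        # Dice similarity coefficient between two binary masks; 1.0 means
        # perfect overlap with the manual reference segmentation.
        seg = seg.astype(bool)
        ref = ref.astype(bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0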

    Image Segmentation of Bacterial Cells in Biofilms

    Get PDF
    Bacterial biofilms are three-dimensional cell communities that live embedded in a self-produced extracellular matrix. Due to the protective properties of the dense coexistence of microorganisms, single bacteria inside the communities are hard to eradicate by antibacterial agents and bacteriophages. This increased resilience gives rise to severe problems in medical and technological settings. To fight the bacterial cells, a detailed understanding of the underlying mechanisms of biofilm formation and development is required. Due to spatio-temporal variations in environmental conditions inside a single biofilm, these mechanisms can only be investigated by probing single cells at different locations over time. Currently, the mechanistic information is primarily encoded in volumetric image data gathered with confocal fluorescence microscopy. To quantify features of single-cell behaviour, single objects need to be detected. This identification of objects inside biofilm image data is called segmentation and is a key step towards understanding the biological processes inside biofilms. In the first part of this work, a user-friendly computer program is presented which simplifies the analysis of bacterial biofilms. It provides a comprehensive set of tools to segment, analyse, and visualize fluorescence microscopy data without writing a single line of analysis code. This allows for faster feedback loops between experiment and analysis and provides fast insights into the gathered data. The single-cell segmentation accuracy of a recent segmentation algorithm is discussed in detail; points for improvement are identified and a new, optimized segmentation approach is presented. The improved algorithm achieves superior segmentation accuracy on bacterial biofilms compared to current state-of-the-art algorithms. Finally, the possibility of deep learning-based end-to-end segmentation of biofilm data is investigated. A method for the quick generation of training data is presented, and two single-cell segmentation approaches developed for eukaryotic cells are adapted to the segmentation of bacterial biofilms.
    Bacterial biofilms are three-dimensional cell clusters that produce their own matrix. The self-produced matrix offers the cells collective protection against external stress factors. These stress factors can be abiotic, such as temperature and nutrient fluctuations, or biotic, such as antibiotic treatment or bacteriophage infections. As a result, individual cells within these microbial communities exhibit increased resilience and pose a major challenge for medicine and technical applications. To combat biofilms effectively, the mechanisms underlying their growth and development must be deciphered. Owing to the high cell density within the communities, these mechanisms are not spatially and temporally invariant but depend, for example, on metabolite, nutrient, and oxygen gradients. Observations at the single-cell level are therefore indispensable for their description. The non-invasive investigation of individual cells within a biofilm relies on confocal fluorescence microscopy. To extract cell properties from the collected three-dimensional image data, the individual cells must be identified. The digital reconstruction of the cell morphology plays a particularly important role here; it is obtained by segmenting the image data, in which individual image elements are assigned to the imaged objects. This makes it possible to distinguish the individual objects from one another and to extract their properties. In the first part of this work, a user-friendly computer program is presented that considerably simplifies the segmentation and analysis of fluorescence microscopy data. It provides an extensive selection of traditional segmentation algorithms, parameter computations, and visualization options. All functions are accessible without programming knowledge, making them available to a large group of users. The implemented functions make it possible to significantly shorten the time between a completed experiment and the finished data analysis; through a rapid succession of continuously adapted experiments, scientific insights into biofilms can be gained quickly. As a complement to the existing methods for single-cell segmentation in biofilms, an improvement is presented that surpasses the accuracy of previous filter-based algorithms and represents a further step towards temporally and spatially resolved single-cell tracking within bacterial biofilms. Finally, the applicability of deep learning algorithms for segmentation in biofilms is evaluated. For this purpose, a method is presented that drastically reduces the annotation effort for training data compared to fully manual annotation. The generated data are used to train the algorithms, and the segmentation accuracy is examined on experimental data.
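
    As a toy baseline showing what "segmentation" means for such volumetric data (thresholding plus connected-component labelling with SciPy; the threshold is an assumed input and this is not the thesis's algorithm):

    from scipy import ndimage

    def segment_cells(volume, threshold):
        # Label every connected foreground region of a 3-D fluorescence
        # volume with its own integer id and report its voxel count.
        foreground = volume > threshold
        labels, n_objects = ndimage.label(foreground)
        sizes = ndimage.sum(foreground, labels, index=range(1, n_objects + 1))
        return labels, n_objects, sizes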