
    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well-posed and offer a priori a variety of information about the expected outcome. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to it. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. This allows using simple processing operators that are suitable for analyzing large-scale 3D+t microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion, and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploiting prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, makes it applicable to practically any other field with processing strategies arranged as linear pipelines. Comment: 39 pages, 12 figures
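    The fuzzy encoding of prior knowledge described in this abstract can be sketched as follows; the membership functions, parameter values, and names below are illustrative assumptions, not the authors' actual implementation:

    ```python
    # Hypothetical sketch: encode prior knowledge about expected object
    # properties as fuzzy membership functions and combine them into a
    # detection confidence. Not the paper's code; all parameters invented.

    def trapezoid(x, a, b, c, d):
        """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c],
        linear ramps in between."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    def detection_confidence(size, intensity):
        """Fuse priors on object size and intensity with a fuzzy AND (min);
        the resulting degree of membership can down-weight implausible
        detections in later pipeline stages."""
        mu_size = trapezoid(size, 5, 10, 20, 30)           # expected radius (voxels), assumed
        mu_int = trapezoid(intensity, 0.2, 0.4, 0.9, 1.0)  # normalized brightness, assumed
        return min(mu_size, mu_int)
    ```

    A seed point whose size and intensity both lie in the fully plausible range receives confidence 1.0, while a detection violating either prior is suppressed toward 0; propagating these degrees of membership through the pipeline is the general idea behind making simple operators sensitive to prior knowledge.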

    Interactive Constrained Boolean Matrix Factorization


    A Model For e-Government Digital Document

    The presence of a great amount of information is typical of bureaucratic processes, such as those pertaining to public and private administrations. This information is often recorded on paper or in heterogeneous digital formats, and its management is very expensive, both in terms of the space used to store documents and the time spent searching for documents of interest. Furthermore, the manual management of these documents is by no means error-free. To efficiently access the information contained in very large document repositories, such as public administration archives, techniques for syntactic and semantic document management are required, so as to enable a broad and thorough process of document dematerialization and to eliminate, or at least reduce, the quantity of paper documents. In this work we present a novel RDF model of digital documents for improving the effectiveness of dematerialization, which constitutes the starting point of an information system able to manage document streams in the most efficient way. The model takes into account a key requirement of several e-Government applications: providing different representations of the same multimedia content depending on the authority, the final user, or the time.
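    The audience-dependent representations mentioned in the abstract can be illustrated with a minimal, hypothetical RDF-style triple model; the URIs, predicates, and document contents below are invented for illustration and do not come from the paper:

    ```python
    # Hypothetical sketch of an RDF-style model for an e-government document
    # with multiple representations of the same content, selected by audience.
    # Triples are plain (subject, predicate, object) tuples; all identifiers
    # are invented examples, not the paper's actual schema.

    DOC = "http://example.org/doc/42"

    triples = [
        (DOC, "dc:title", "Building permit application"),
        (DOC, "ex:hasRepresentation", DOC + "#citizen"),
        (DOC, "ex:hasRepresentation", DOC + "#official"),
        (DOC + "#citizen", "ex:audience", "final-user"),
        (DOC + "#citizen", "ex:format", "application/pdf"),
        (DOC + "#official", "ex:audience", "authority"),
        (DOC + "#official", "ex:format", "application/xml"),
    ]

    def representations_for(audience):
        """Return the representations of the document addressed to the
        given audience, by matching the ex:audience predicate."""
        return sorted(s for s, p, o in triples
                      if p == "ex:audience" and o == audience)
    ```

    Querying by audience then yields the appropriate rendition of the same underlying content, which is the kind of representation-selection logic the abstract attributes to its RDF document model.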