
    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes. Comment: erratum: the image "man" is wrongly named "pepper" in the journal version.
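    To make the patch-based idea concrete, below is a minimal sketch of patch-PCA denoising for Poisson-noisy images. It is not the paper's algorithm: the paper adapts PCA to the Poisson likelihood directly and groups similar patches non-locally, whereas this sketch simply applies an Anscombe variance-stabilizing transform followed by ordinary PCA over all patches; the patch size and the number of retained components are assumptions chosen for illustration.

```python
# Illustrative patch-PCA denoising for Poisson-noisy images (a sketch, not the
# paper's Poisson PCA; the Anscombe transform, patch size, and component count
# are assumptions made for demonstration).
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: noise becomes approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased for very low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_patch_pca(img, patch=8, n_components=8):
    h, w = img.shape
    y = anscombe(img.astype(np.float64))
    # Collect all overlapping patches as row vectors.
    rows, coords = [], []
    for i in range(0, h - patch + 1):
        for j in range(0, w - patch + 1):
            rows.append(y[i:i + patch, j:j + patch].ravel())
            coords.append((i, j))
    P = np.array(rows)
    mean = P.mean(axis=0)
    U, S, Vt = np.linalg.svd(P - mean, full_matrices=False)
    # Keep only the leading principal components (hard truncation).
    P_hat = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components] + mean
    # Average the overlapping patch estimates back into an image.
    out = np.zeros_like(y)
    weight = np.zeros_like(y)
    for row, (i, j) in zip(P_hat, coords):
        out[i:i + patch, j:j + patch] += row.reshape(patch, patch)
        weight[i:i + patch, j:j + patch] += 1.0
    return inverse_anscombe(out / weight)

if __name__ == "__main__":
    clean = np.outer(np.linspace(1, 5, 64), np.linspace(1, 5, 64))  # smooth ramp, low counts
    noisy = np.random.poisson(clean).astype(np.float64)
    denoised = denoise_patch_pca(noisy)
    print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
    print("denoised MSE:", np.mean((denoised - clean) ** 2))
```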

    Homologous Recombination under the Single-Molecule Fluorescence Microscope

    Homologous recombination (HR) is a complex biological process that is central to meiosis and to the repair of DNA double-strand breaks. Although the HR process has been the subject of intensive study for more than three decades, the complex protein–protein and protein–DNA interactions during HR present a significant challenge for determining the molecular mechanism(s) of the process. This knowledge gap exists largely because the dynamic interactions between HR proteins and DNA are difficult to capture by routine biochemical or structural biology methods. In recent years, single-molecule fluorescence microscopy has become a popular method in the field of HR for visualizing these complex and dynamic interactions at high spatiotemporal resolution, revealing mechanistic insights into the process. In this review, we describe recent efforts that employ single-molecule fluorescence microscopy to investigate protein–protein and protein–DNA interactions operating on three key DNA substrates: single-stranded DNA (ssDNA), double-stranded DNA (dsDNA), and four-way DNA structures called Holliday junctions (HJs). We also outline the technological advances and several key insights revealed by these studies in terms of protein assembly on these DNA substrates, and highlight the foreseeable promise of single-molecule fluorescence microscopy in advancing our understanding of homologous recombination.

    TPU Cloud-Based Generalized U-Net for Eye Fundus Image Segmentation

    Medical images from different clinics are acquired with different instruments and settings. To perform segmentation on these images as a cloud-based service, we need to train with multiple datasets to increase the segmentation's independence from the source. We also require an efficient and fast segmentation network. In this work these two problems, which are essential for many practical medical imaging applications, are studied. As a segmentation network, U-Net has been selected. U-Net is a class of deep neural networks which have been shown to be effective for medical image segmentation. Many different U-Net implementations have been proposed. With the recent development of tensor processing units (TPUs), the execution times of these algorithms can be drastically reduced. This makes them attractive for cloud services. In this paper, we study, using Google's publicly available Colab environment, a generalized, fully configurable Keras U-Net implementation which uses Google TPU processors for training and prediction. As our application problem, we use the segmentation of the optic disc and cup, which can be applied to glaucoma detection. To obtain networks with a good performance, independently of the image acquisition source, we combine multiple publicly available datasets (RIM-One V3, DRISHTI and DRIONS). As a result of this study, we have developed a set of functions that allow the implementation of generalized U-Nets adapted to TPU execution and suitable for cloud-based service implementation. Ministerio de Economía y Competitividad TEC2016-77785-
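    As a rough illustration of the kind of configurable Keras U-Net described here, the following sketch builds a small U-Net whose depth and filter counts are parameters. The specific depth, filter sizes, input shape, and loss are illustrative assumptions, not the authors' released code; running it on a TPU would additionally require constructing the model inside a tf.distribute.TPUStrategy scope.

```python
# A small, configurable U-Net in Keras (a sketch; depth, filter counts, and the
# single-channel input are assumptions, not the paper's exact network).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(128, 128, 1), base_filters=16, depth=3, n_classes=1):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Contracting path: convolve, remember the skip connection, downsample.
    for d in range(depth):
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 2 ** depth)  # bottleneck
    # Expanding path: upsample, concatenate the skip, convolve.
    for d in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** d, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])
        x = conv_block(x, base_filters * 2 ** d)
    outputs = layers.Conv2D(n_classes, 1, activation="sigmoid")(x)  # per-pixel mask
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```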

    An Architecture for Dynamic Meta-Level Process Control for Model-Based Troubleshooting

    There are numerous methods used for troubleshooting devices. Each method has certain domains, knowledge requirements, and assumptions required for it to perform well. However, oftentimes no single method by itself is sufficient to completely solve a troubleshooting problem. Therefore, an architecture is required to control the combined use of many problem-solving methods. The combination of multiple problem-solving methods makes the troubleshooting process more robust in terms of the device domains that can be dealt with and the quality of diagnoses produced. Troubleshooting has two tasks: diagnosis and problem resolution. This research provides an architecture that allows dynamic method selection during diagnosis. Dynamic method selection factors in the current state of the diagnosis process, along with other method parameters, to determine which method to use to advance the diagnosis process. The architecture was developed by combining themes from diagnosis research focused on dynamic multimethod diagnosis and its control. This work has produced several results. It provides an architecture to organize the methods and a basis for making control decisions concerning method use during diagnosis. It identifies a substantial number of methods useful for performing diagnosis. It identifies the knowledge these methods require.
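    As a purely hypothetical sketch of the meta-level control idea, the loop below lets each diagnosis method score its own applicability against the current diagnosis state and repeatedly runs the best-scoring one. The method names, state fields, and scoring heuristics are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of dynamic method selection for diagnosis: each method
# rates its applicability to the current state, and the meta-level controller
# invokes the best-rated one until the candidate set is narrowed down.
# All names, fields, and heuristics here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DiagnosisState:
    candidate_faults: set
    observations: dict = field(default_factory=dict)
    has_structural_model: bool = False

class ModelBasedMethod:
    name = "model-based"
    def applicability(self, state):
        # Only useful when a structural/behavioural model of the device exists.
        return 0.9 if state.has_structural_model else 0.0
    def run(self, state):
        state.candidate_faults &= {"sensor", "valve"}  # placeholder model-based pruning
        return state

class CaseBasedMethod:
    name = "case-based"
    def applicability(self, state):
        # Useful when there are observations to match against stored cases.
        return 0.5 if state.observations else 0.1
    def run(self, state):
        state.candidate_faults.discard("valve")  # placeholder case-lookup result
        return state

def diagnose(state, methods, max_steps=5):
    applied = set()
    for _ in range(max_steps):
        if len(state.candidate_faults) <= 1:
            break  # diagnosis is specific enough; hand off to problem resolution
        usable = [m for m in methods if m.name not in applied]
        if not usable:
            break
        best = max(usable, key=lambda m: m.applicability(state))
        applied.add(best.name)
        state = best.run(state)
    return state

state = DiagnosisState({"sensor", "valve", "wiring"}, {"pressure": "low"}, True)
print(diagnose(state, [ModelBasedMethod(), CaseBasedMethod()]).candidate_faults)  # {'sensor'}
```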

    Domains and naïve theories

    Human cognition entails domain-specific cognitive processes that influence memory, attention, categorization, problem-solving, reasoning, and knowledge organization. This article examines domain-specific causal theories, which are of particular interest for permitting an examination of how knowledge structures change over time. We first describe the properties of commonsense theories, and how commonsense theories differ from scientific theories, illustrating with children's classification of biological and nonbiological kinds. We next consider the implications of domain-specificity for broader issues regarding cognitive development and conceptual change. We then examine the extent to which domain-specific theories interact, and how people reconcile competing causal frameworks. Future directions for research include examining how different content domains interact, the nature of theory change, the role of context (including culture, language, and social interaction) in inducing different frameworks, and the neural bases for domain-specific reasoning. WIREs Cogn Sci 2011, 2, 490–502. DOI: 10.1002/wcs.124

    Recommending on graphs: a comprehensive review from a data perspective

    Recent advances in graph-based learning approaches have demonstrated their effectiveness in modelling users' preferences and items' characteristics for Recommender Systems (RSs). Most of the data in RSs can be organized into graphs where various objects (e.g., users, items, and attributes) are explicitly or implicitly connected and influence each other via various relations. Such a graph-based organization brings benefits for exploiting potential properties of graph learning techniques (e.g., random walk and network embedding) to enrich the representations of the user and item nodes, which is an essential factor for successful recommendations. In this paper, we provide a comprehensive survey of Graph Learning-based Recommender Systems (GLRSs). Specifically, we start from a data-driven perspective to systematically categorize the various graphs in GLRSs and analyze their characteristics. Then, we discuss the state-of-the-art frameworks with a focus on the graph learning module and on how they address practical recommendation challenges such as scalability, fairness, diversity, explainability, and so on. Finally, we share some potential research directions in this rapidly growing area. Comment: Accepted by UMUA
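    As an illustration of one family of techniques such surveys cover, the sketch below scores items for a user by a random walk with restart (personalized PageRank) over a small user-item bipartite graph. The toy interaction data, restart probability, and walk length are invented for demonstration and are not taken from the paper.

```python
# Toy random-walk-with-restart recommender on a user-item bipartite graph
# (an illustrative sketch of one surveyed technique, not the paper's method;
# the interaction data and walk parameters are invented).
import numpy as np

# user -> items they interacted with (hypothetical data)
interactions = {
    "alice": ["matrix", "inception"],
    "bob": ["matrix", "up"],
    "carol": ["up", "coco"],
}
users = sorted(interactions)
items = sorted({i for its in interactions.values() for i in its})
nodes = users + items
idx = {n: k for k, n in enumerate(nodes)}

# Build the symmetric adjacency matrix of the bipartite graph.
A = np.zeros((len(nodes), len(nodes)))
for u, its in interactions.items():
    for it in its:
        A[idx[u], idx[it]] = A[idx[it], idx[u]] = 1.0
P = A / A.sum(axis=0, keepdims=True)  # column-stochastic transition matrix

def recommend(user, restart=0.15, steps=50):
    """Personalized PageRank from the user node, then rank unseen items."""
    e = np.zeros(len(nodes))
    e[idx[user]] = 1.0
    r = e.copy()
    for _ in range(steps):
        r = (1 - restart) * P @ r + restart * e
    seen = set(interactions[user])
    scores = {it: r[idx[it]] for it in items if it not in seen}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # e.g. ['up', 'coco'] via shared neighbours
```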

    Inferring Different Types of Lindenmayer Systems Using Artificial Intelligence

    Lindenmayer systems (L-systems) are formal grammar systems that consist of a set of rewriting rules. Each rewriting rule is composed of a symbol to replace (the predecessor), a replacement string (the successor), and an optional condition that must hold for the replacement to occur. Starting with an initial string, every symbol in the string is replaced in parallel in accordance with the conditions on the rewriting rules to produce a new string. The replacement process iterates as needed to produce a sequence of strings. There are different types of L-systems, which allow for different types of conditions and different methods of selecting the rules to apply. Some symbols of the alphabet can be interpreted as instructions for simulation software for process modelling, where each string describes another step of the simulated process. Typically, creating an L-system for a specific process is done by experts making meticulous measurements and using a priori knowledge about the process. It would be desirable to have a method to automatically learn the L-system (the simulation program) from data, such as from a temporal sequence of images. This thesis presents a suite of tools, collectively called the Plant Model Inference Tools or PMIT (despite the name, the tools are domain-agnostic), for inferring different types of L-systems using only a sequence of strings describing the process over some initial time period. Variants of PMIT are created for deterministic context-free L-systems, stochastic L-systems, and parametric L-systems. They are each evaluated using known deterministic and parametric L-systems from the literature, and procedurally generated stochastic L-systems. Accuracy can be assessed in various ways, such as checking whether the inferred L-system is equal to the original one. PMIT is able to correctly infer deterministic L-systems with up to 31 symbols in the alphabet, compared to the previous state-of-the-art algorithm's limit of 2 symbols. Stochastic L-systems allow symbols in the alphabet to have multiple rewriting rules, each with an associated probability of being selected. Evaluating stochastic L-system inference with 960 procedurally generated L-systems, using multiple sequences of strings as input, found the following: 1) when 3 input sequences are used, the inferred successors always matched the original successors for systems with up to 9 rewriting rules; and 2) when 6 sequences of strings are used, the difference between the associated probabilities of the inferred and the original L-system is approximately 1%. Parametric L-systems allow symbols to have multiple rewriting rules with parameters that are passed during rewriting; rule selection is based on an associated Boolean condition over the parameters that is evaluated to choose the rule to apply. Inference is done in two steps: in the first step, the successors are inferred, and in the second step, appropriate Boolean conditions are found. Parametric L-system inference was evaluated on 20 known parametric L-systems. For 18 of the 20 L-systems where all successors were non-empty, the successors were correctly identified, but the time taken was up to 26 days on a single-core CPU for the largest L-system. The second step, inferring the Boolean conditions, was successful for all 20 systems in the test set. No previous algorithm from the literature had implemented stochastic or parametric L-system inference.
Inferring L-systems of greater complexity algorithmically can save considerable time and effort versus constructing them manually; however, and perhaps more importantly, rather than relying on existing knowledge, inferring a simulation of a process from data can help reveal the underlying scientific principles of the process.
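    To make the rewriting process concrete, here is a minimal deterministic context-free (D0L) L-system interpreter. The example rules are Lindenmayer's classic algae system, chosen only to illustrate parallel rewriting; it is not one of the systems inferred by PMIT.

```python
# Minimal deterministic context-free (D0L) L-system: every symbol is rewritten
# in parallel at each step; symbols without a rule are copied unchanged.
def rewrite(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)  # parallel replacement
        yield s

algae_rules = {"A": "AB", "B": "A"}   # predecessor -> successor
for step, string in enumerate(rewrite("A", algae_rules, 5), start=1):
    print(step, string)
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
# 5 ABAABABAABAAB
```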