
    Histogram analysis of the human brain MR images based on the S-function membership and Shannon's entropy function

    The analysis of medical images for computer-aided diagnosis and therapy planning includes segmentation as a preliminary stage for visualization or quantification. In this paper, we present the first step of our fuzzy segmentation system, which is capable of segmenting magnetic resonance (MR) images of a human brain. Histogram analysis based on the S-function membership and Shannon's entropy function yields exact segmentation points. In the final stage, pixel classification is performed using rule-based fuzzy logic inference. Once segmentation is complete, attributes of the resulting classes may be determined (e.g., volumes), or the classes may be visualized as spatial objects. In contrast to other segmentation methods, such as thresholding and region-based algorithms, our method proceeds automatically and allows more exact delineation of the anatomical structures.
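    As a rough sketch of the histogram step, the code below scans S-function parameters and keeps the pair that maximizes the fuzzy Shannon entropy of the gray-level histogram. The exact membership parameterization and search strategy are not given in the abstract, so the exhaustive scan and all names here are illustrative assumptions.

```python
import numpy as np

def s_membership(x, a, c):
    """Zadeh's S-function with crossover point b = (a + c) / 2."""
    b = (a + c) / 2.0
    mu = np.zeros_like(x, dtype=float)
    left = (x > a) & (x <= b)
    right = (x > b) & (x < c)
    mu[left] = 2.0 * ((x[left] - a) / (c - a)) ** 2
    mu[right] = 1.0 - 2.0 * ((x[right] - c) / (c - a)) ** 2
    mu[x >= c] = 1.0
    return mu

def fuzzy_shannon_entropy(hist, a, c):
    """Shannon entropy of the fuzzy partition induced by the S-function."""
    p = hist / hist.sum()
    mu = s_membership(np.arange(len(hist)), a, c)
    eps = 1e-12  # guards log(0)
    h = -(mu * np.log(mu + eps) + (1.0 - mu) * np.log(1.0 - mu + eps))
    return float((p * h).sum())

def segmentation_point(image, n_levels=256):
    """Exhaustive scan over (a, c); the entropy maximum marks the split."""
    hist, _ = np.histogram(image, bins=n_levels, range=(0, n_levels))
    best = (-np.inf, 0, n_levels - 1)
    for a in range(n_levels - 2):
        for c in range(a + 2, n_levels):
            ent = fuzzy_shannon_entropy(hist, a, c)
            if ent > best[0]:
                best = (ent, a, c)
    return (best[1] + best[2]) // 2  # crossover point as the threshold
```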

    BERT & Family Eat Word Salad: Experiments with Text Understanding

    In this paper, we study the response of large models from the BERT family to incoherent inputs that should confuse any model that claims to understand natural language. We define simple heuristics to construct such examples. Our experiments show that state-of-the-art models consistently fail to recognize them as ill-formed and instead produce high-confidence predictions on them. As a consequence of this phenomenon, models trained on sentences with randomly permuted word order perform close to state-of-the-art models. To alleviate these issues, we show that if models are explicitly trained to recognize invalid inputs, they can be robust to such attacks without a drop in performance. Comment: Accepted at AAAI 2021, camera-ready version.
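    The "randomly permuted word order" heuristic from the abstract is simple enough to sketch directly; the function name and toy sentence below are illustrative, not the paper's.

```python
import random

def word_salad(sentence: str, seed: int = 0) -> str:
    """Destroy word order while keeping the bag of words intact."""
    rng = random.Random(seed)
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

# A model that truly understands language should flag the output as
# ill-formed rather than predict on it with high confidence.
print(word_salad("the plot is thin but the acting carries the film"))
```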

    PURIFY: a new approach to radio-interferometric imaging

    In a recent article series, the authors have promoted convex optimization algorithms for radio-interferometric imaging in the framework of compressed sensing, which leverages sparsity regularization priors for the associated inverse problem and defines a minimization problem for image reconstruction. This approach was shown, in theory and through simulations in a simple discrete visibility setting, to have the potential to significantly outperform CLEAN and its evolutions. In this work, we leverage the versatility of convex optimization in solving minimization problems to both handle realistic continuous visibilities and offer a highly parallelizable structure, paving the way to significant acceleration of the reconstruction and to high-dimensional data scalability. The new algorithmic structure relies on the simultaneous-direction method of multipliers (SDMM) and contrasts with the current major-minor cycle structure of CLEAN and its evolutions, which in particular cannot handle the state-of-the-art minimization problems under consideration, where neither the regularization term nor the data term is a differentiable function. We release a beta version of an SDMM-based imaging software package written in C and dubbed PURIFY (http://basp-group.github.io/purify/) that handles various sparsity priors, including our recent average sparsity approach, SARA. We evaluate the performance of different priors through simulations in the continuous visibility setting, confirming the superiority of SARA.
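    For readers unfamiliar with the proximal machinery behind SDMM, the sketch below shows the simplest member of the same algorithm family, ISTA, on a synthesis-form l1-regularized least-squares problem. PURIFY's actual SDMM solver, measurement operators, and SARA prior are considerably more involved; everything here is illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(y, Phi, lam=0.1, n_iter=200):
    """Minimize 0.5 * ||y - Phi x||_2^2 + lam * ||x||_1 by proximal
    gradient descent (ISTA), a simpler relative of SDMM."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / L, L = ||Phi||_2^2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)          # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```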

    Low to medium level image processing for a mobile robot

    The use of visual perception in autonomous mobile systems was approached with caution by mobile robot developers because of the high computational cost and large memory requirements of most image processing operations. When used, image processing is typically implemented on multiprocessors or on complex and expensive systems, thereby requiring the robot to be wired or radio-controlled from the base computer system.

    Development of an automatic thresholding method for wake meandering studies and its application to the data set from scanning wind lidar

    Wake meandering studies require knowledge of the instantaneous wake evolution. Scanning lidar data are used to identify the wind flow behind offshore wind turbines but do not immediately reveal the wake edges and centerline. Precise wake identification helps to build models predicting wake behavior. Conventional Gaussian fit methods are reliable in the near-wake area but lose precision with distance from the rotor and require good data resolution for an accurate fit. Thresholding methods, i.e., selection of a threshold that splits the data into background flow and wake, usually rely on a fixed value or manual estimation, which hinders wake identification on a large data set. We propose an automatic thresholding method for wake shape and centerline detection that is less dependent on data resolution and quality and can also be applied to image data. We show that the method performs reasonably well on large-eddy simulation data and apply it to a data set containing lidar measurements of two wakes. Along with the wake identification, we use image processing statistics, such as entropy analysis, to filter and classify lidar scans. The automatic thresholding method and the subsequent centerline search algorithm are developed to reduce dependency on supplementary data such as free-flow wind speed and direction. We focus on the technical aspects of the method and show that the wake shape and centerline found from the thresholded data are in good agreement with the manually detected centerline and the Gaussian fit method. We also briefly discuss a potential application of the method to separate the near and far wakes and to estimate the wake direction.
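    To make "automatic thresholding" concrete, the snippet below applies Otsu's classic between-class variance criterion to split a wind-speed field into background flow and wake. This is a stand-in for illustration only; the paper develops its own threshold selection tuned to lidar scans.

```python
import numpy as np

def otsu_threshold(values, n_bins=256):
    """Return the threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(values[np.isfinite(values)], bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # probability of the lower class
    m = np.cumsum(p * centers)     # cumulative mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (m[-1] * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# e.g., mask the velocity-deficit region behind the rotor:
# wake_mask = wind_speed_field < otsu_threshold(wind_speed_field)
```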

    Modeling textile images with Markov random fields (Modelagem de imagens têxteis com campos aleatórios markovianos)

    Advisor: Nancy Lopes Garcia. Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. Abstract: When new textile dyeing technologies are developed, evaluating the quality of these techniques involves measuring the resulting color homogeneity using digital images. The texture effect caused by the fabric creates a sophisticated dependence structure in pixel coloring. The Fourier transform is used with regularization techniques to remove the texture signal from the image. Images are then modeled as a hidden Markov random field with Fourier bases as covariates, allowing an entropy-based evaluation of color homogeneity using only the filtered signal that corresponds to the dyeing process. Master's degree in Statistics; funded by CAPES.
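    A minimal version of the entropy-based homogeneity score is easy to state: the Shannon entropy of a color histogram is low when the dye is uniform and rises as the coloring spreads. The dissertation applies this to the texture-filtered signal from the hidden Markov random field model; the sketch below uses a raw channel purely for illustration.

```python
import numpy as np

def color_entropy(channel, n_bins=64):
    """Shannon entropy (bits) of one color channel's histogram;
    lower values indicate more homogeneous dyeing."""
    hist, _ = np.histogram(channel, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```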

    A simple DNA gate motif for synthesizing large-scale circuits

    The prospects of programming molecular systems to perform complex autonomous tasks have motivated research into the design of synthetic biochemical circuits. Of particular interest to us are cell-free nucleic acid systems that exploit non-covalent hybridization and strand displacement reactions to create cascades that implement digital and analogue circuits. To date, circuits involving at most tens of gates have been demonstrated experimentally. Here, we propose a simple DNA gate architecture that appears suitable for practical synthesis of large-scale circuits involving possibly thousands of gates.
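    Strand-displacement cascades are commonly analyzed with mass-action kinetics. The toy ODE below simulates a single displacement step, X + G -> Y + W, in which an input strand X releases an output Y from a gate G; the rate constant and concentrations are illustrative orders of magnitude, not values from the paper.

```python
from scipy.integrate import solve_ivp

k = 1e4  # /M/s, a plausible toehold-mediated displacement rate

def rhs(t, z):
    """Mass-action kinetics for X + G -> Y + W (W is not tracked)."""
    x, g, y = z
    r = k * x * g
    return [-r, -r, r]

# 100 nM input and gate, no output initially; simulate one hour.
sol = solve_ivp(rhs, (0.0, 3600.0), [1e-7, 1e-7, 0.0], max_step=10.0)
print(f"output strand after 1 h: {sol.y[2, -1]:.2e} M")
```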

    A Hierarchical Image Processing Approach for Diagnostic Analysis of Microcirculation Videos

    Knowledge of the microcirculatory system has added significant value to the analysis of tissue oxygenation and perfusion. While developments in videomicroscopy technology have enabled medical researchers and physicians to observe the microvascular system, the available software tools are limited in their ability to determine quantitative features of microcirculation, either automatically or accurately. In particular, microvessel density has been a critical diagnostic measure in evaluating disease progression and a prognostic indicator in various clinical conditions. As a result, automated analysis of the microcirculatory system can be substantially beneficial in various real-time and off-line therapeutic medical applications, such as optimization of resuscitation. This study focuses on the development of an algorithm to automatically segment microvessels, calculate the density of capillaries in microcirculatory videos, and determine the distribution of blood circulation. The proposed technique is divided into four major steps: video stabilization, video enhancement, segmentation, and post-processing. The stabilization step estimates motion and corrects for motion artifacts using an appropriate motion model. Video enhancement improves the visual quality of video frames through preprocessing, vessel enhancement, and edge enhancement. The resulting frames are combined through an adjusted weighted median filter, and the combined frame is then thresholded using an entropic thresholding technique. Finally, a region growing technique is utilized to correct for the discontinuity of blood vessels. Using the final binary results, the most commonly used measure for the assessment of microcirculation, i.e., Functional Capillary Density (FCD), is calculated. The designed technique is applied to video recordings of healthy and diseased human and animal samples obtained by the MicroScan device, which is based on the Sidestream Dark Field (SDF) imaging modality. To validate the final results, the calculated FCD results are compared with those obtained by blind detailed inspection by three medical experts, who used the AVA (Automated Vascular Analysis) semi-automated microcirculation analysis software. Since there is neither a fully automated, accurate microcirculation analysis program nor a publicly available annotated database of microcirculation videos, the results acquired by the experts are considered the gold standard. Bland-Altman plots show that there is "good agreement" between the results of the algorithm and the gold standard. In summary, the main objectives of this study are to eliminate the need for human interaction to edit/correct results, to improve the accuracy of stabilization and segmentation, and to reduce the overall computation time. The proposed methodology impacts the field of computer science through the development of image processing techniques to discover the knowledge contained in grayscale video frames. The broad impact of this work is to assist physicians, medical researchers, and caregivers in making diagnostic and therapeutic decisions for microcirculatory abnormalities and in studying the human microcirculation.
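    Of the four stages, the entropic thresholding step is the most self-contained. The sketch below implements Kapur's maximum-entropy criterion, one classic entropic thresholding technique; the thesis abstract does not pin down which variant its pipeline uses, so this is a representative example rather than the published method.

```python
import numpy as np

def kapur_threshold(gray, n_bins=256):
    """Choose the threshold maximizing the summed Shannon entropies
    of the background and foreground histograms (Kapur et al.)."""
    hist, _ = np.histogram(gray, bins=n_bins, range=(0, n_bins))
    p = hist / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, n_bins - 1):
        w0, w1 = P[t - 1], 1.0 - P[t - 1]
        if w0 < 1e-12 or w1 < 1e-12:
            continue
        q0 = p[:t][p[:t] > 0] / w0   # normalized background histogram
        q1 = p[t:][p[t:] > 0] / w1   # normalized foreground histogram
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```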

    Automated image analysis for petrographic image assessments

    In this thesis, the algorithms developed for an automated image analysis toolkit called PetrograFX for petrographic image assessments, particularly of thin section images, are presented. These algorithms perform two main functions: porosity determination and quartz grain measurements. For porosity determination, the pore space is segmented using a seeded region growing scheme in color space, where the seeds are generated automatically based on the absolute R - B differential image. The porosity is then derived by pixel counting over the identified pore space regions. For quartz grain measurements, adaptive thresholding is applied to make the segmentation of the quartz grains robust to color variations across the image. Median filtering and blob analysis are used to remove lines of fluid inclusions, which appear as black speckles and spots on the quartz grains, before the subsequent measurement operations are performed. The distance transform and watershed transform are then applied to separate connected objects. A modified watershed transformation is developed to eliminate false watersheds based on the physical nature of quartz grains. Finally, the grains are characterized in terms of nominal sectional diameter (NSD), NSD distribution, and sorting.
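    The distance-transform-plus-watershed separation of touching grains follows a standard pattern, sketched below with scikit-image. The thesis's modified watershed, which removes false ridges using the physical nature of quartz grains, is not reproduced here, and the minimum peak distance is an assumed parameter.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_grains(binary_mask, min_distance=5):
    """Label touching grains: watershed on the negated distance
    transform, seeded at local maxima of the distance map."""
    dist = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(dist, min_distance=min_distance,
                           labels=binary_mask.astype(int))
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=binary_mask)
```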