
    A fully automatic gridding method for cDNA microarray images

    Background: Processing cDNA microarray images is a crucial step in gene expression analysis, since errors in early stages propagate to subsequent steps and may lead to erroneous biological conclusions. When processing the underlying images, accurately separating the sub-grids and spots is extremely important for subsequent steps, which include segmentation, quantification, normalization, and clustering.
    Results: We propose a parameterless and fully automatic approach that first detects the sub-grids in the entire microarray image, and then detects the locations of the spots in each sub-grid. The approach first detects and corrects rotations in the images by applying an affine transformation, followed by a polynomial-time optimal multi-level thresholding algorithm used to find the positions of the sub-grids in the image and the positions of the spots in each sub-grid. Additionally, a new validity index is proposed in order to find the correct number of sub-grids in the image and the correct number of spots in each sub-grid. Moreover, a refinement procedure is used to correct possible misalignments and increase the accuracy of the method.
    Conclusions: Extensive experiments on real-life microarray images and a comparison to other methods show that the proposed method performs these tasks fully automatically and with a very high degree of accuracy. Moreover, unlike previous methods, the proposed approach can be used on various types of microarray images with different resolutions and spot sizes, and does not need any parameter to be adjusted.
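
    The gridding idea above reduces, in its simplest form, to locating valleys in the image's row and column projection profiles. The sketch below (Python with NumPy/SciPy; all names are illustrative) uses plain valley detection on a projection as a simplified stand-in for the paper's polynomial-time optimal multi-level thresholding of the same profiles.

```python
import numpy as np
from scipy.signal import find_peaks

def grid_gap_positions(img, axis=0, min_separation=8):
    """Estimate boundary coordinates between spots/sub-grids along one axis.

    Sums pixel intensities along `axis` and treats deep valleys of the
    resulting profile as the gaps between rows or columns of spots; a
    simplified stand-in for optimally thresholding the same profile.
    """
    profile = img.sum(axis=axis).astype(float)
    valleys, _ = find_peaks(-profile, distance=min_separation)  # valleys = peaks of -profile
    return valleys

# Usage on a synthetic image: bright square spots on a regular lattice.
img = np.zeros((64, 64))
for cy in range(8, 64, 16):
    for cx in range(8, 64, 16):
        img[cy - 3:cy + 3, cx - 3:cx + 3] = 255.0
cols = grid_gap_positions(img, axis=0)  # x-coordinates of vertical gaps
rows = grid_gap_positions(img, axis=1)  # y-coordinates of horizontal gaps
```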

    Novel pattern recognition approaches for transcriptomics data analysis

    We propose a family of methods for transcriptomics and genomics data analysis based on a multi-level thresholding approach, including OMTG for sub-grid and spot detection in DNA microarrays, and OMT for detecting significant regions in next-generation sequencing data. Extensive experiments on real-life datasets and a comparison to other methods show that the proposed methods perform these tasks fully automatically and with a very high degree of accuracy. Moreover, unlike previous methods, the proposed approaches can be used in various types of transcriptome analysis problems, such as microarray image gridding with different resolutions and spot sizes, as well as finding the regions of DNA that interact with a protein of interest using ChIP-Seq data, without any need for parameter adjustment. We also developed constrained multi-level thresholding (CMT), an algorithm that detects enriched regions in ChIP-Seq data with the ability to target regions within a specific width range. We show that CMT detects enriched regions (peaks) with higher accuracy by objectively assessing its performance relative to previously proposed peak finders: three algorithms are tested on the well-known FoxA1 dataset, on four transcription factors (with a total of six antibodies) for Drosophila melanogaster, and on the H3K4ac antibody dataset. Finally, we propose a tree-based approach that conducts gene selection and builds a classifier simultaneously, in order to select the minimal number of genes that would reliably predict a given breast cancer subtype. Our results indicate that this modified approach to gene selection yields a small subset of genes that can predict subtypes with greater than 95% overall accuracy. In addition to providing a valuable list of targets for diagnostic purposes, the gene ontologies of the selected genes suggest that these methods have isolated a number of potential genes involved in breast cancer biology, etiology, and potentially novel therapeutics.
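
    The width-targeting behavior attributed to CMT can be illustrated with a toy version: threshold a read-coverage track and keep only the contiguous runs whose width falls inside a target range. The Python sketch below assumes a single fixed threshold in place of the optimal multi-level thresholding the thesis actually uses; names and parameters are illustrative.

```python
import numpy as np

def constrained_regions(coverage, threshold, min_width, max_width):
    """Find enriched regions whose width lies within a target range.

    A toy stand-in for CMT: one fixed threshold on the read-coverage
    track replaces optimal multi-level thresholding, but the width
    constraint on candidate regions is the same idea.
    """
    above = (np.asarray(coverage) >= threshold).astype(int)
    edges = np.diff(np.concatenate(([0], above, [0])))
    starts = np.flatnonzero(edges == 1)   # each run of enrichment begins
    ends = np.flatnonzero(edges == -1)    # and ends (exclusive index)
    return [(s, e) for s, e in zip(starts, ends)
            if min_width <= e - s <= max_width]

# Usage: one region passes the width filter, a 1-bp spike does not.
coverage = [0, 0, 5, 6, 7, 5, 0, 0, 9, 0]
print(constrained_regions(coverage, threshold=5, min_width=2, max_width=6))
# -> [(2, 6)]
```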

    Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs from the scattered infrared light captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noise. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired in the previous stage. After extracting the touch blobs from each captured image frame, a blob tracking and event recognition process analyzes the spatial and temporal information of these blobs across consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects errors and occlusions caused by noise during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the blob tracking phase establishes the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. Second, the touch event recognition phase identifies meaningful touch events based on the motion information of the touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions.
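
    The detection-then-tracking pipeline described here maps naturally onto a short sketch: segment bright pixels, label connected components, then associate blob centroids across consecutive frames. The Python code below (assuming NumPy and SciPy) uses a fixed threshold and greedy nearest-neighbour matching as simplified stand-ins for the system's automatic multilevel thresholding and full motion-correspondence analysis.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(frame, threshold):
    """Segment bright touch blobs and return their centroids.

    A fixed threshold stands in for the automatic multilevel histogram
    thresholding described above; connected-component labeling matches
    the pipeline's second stage.
    """
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

def associate_blobs(prev_pts, curr_pts, max_dist=20.0):
    """Greedy nearest-neighbour matching of centroids between two
    consecutive frames: a minimal form of motion correspondence."""
    matches, used = [], set()
    for i, p in enumerate(prev_pts):
        if not curr_pts:
            break
        d = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in curr_pts]
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in used:
            matches.append((i, j))   # blob i in frame t-1 -> blob j in frame t
            used.add(j)
    return matches
```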

    Adaptive Scattered Data Fitting with Tensor Product Spline-Wavelets

    The core of the work we present here is an algorithm that constructs a least squares approximation to a given set of unorganized points. The approximation is expressed as a linear combination of particular B-spline wavelets. This implies a multiresolution setting that constructs a hierarchy of approximations to the data with increasing level of detail, proceeding from the coarsest to the finest scales. It allows for an efficient selection of the degrees of freedom of the problem and avoids the introduction of an artificial uniform grid. In fact, an analysis of the data can be done at each scale of the hierarchy, which can be used to adaptively select a set of wavelets that can economically represent the characteristics of the cloud of points at the next level of detail. The data adaptation of our method is twofold, as it takes into account both the horizontal distribution and the vertical irregularities of the data. This strategy can lead to a striking reduction of the problem complexity. Furthermore, among the possible ways to achieve a multiscale formulation, the wavelet approach shows additional advantages, based on good conditioning properties and level-wise orthogonality. We exploit these features to enhance the efficiency of iterative solution methods for the system of normal equations of the problem. The combination of multiresolution adaptivity with the numerical properties of the wavelet basis gives rise to an algorithm well suited to problems requiring fast solution methods. We illustrate this by means of numerical experiments that compare the performance of the method on various data sets using different multiresolution bases. Afterwards, we use the equivalence relation between wavelets and Besov spaces to formulate the problem of data fitting with regularization. We find that the multiscale formulation allows for a flexible and efficient treatment of some aspects of this problem. Moreover, we study the problem known as robust fitting, in which the data are assumed to be corrupted by wrong measurements or outliers. We compare classical methods based on re-weighting of residuals to our setting, in which the wavelet representation of the data computed by our algorithm is used to locate the outliers. As a final application that couples two of the main applications of wavelets (data analysis and operator equations), we propose the use of this least squares data fitting method to evaluate the non-linear term in the wavelet-Galerkin formulation of non-linear PDE problems. At the end of this thesis we discuss efficient implementation issues, with special interest in the interplay between solution methods and data structures.
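
    At a single scale and in one variable, the least squares problem described above takes a familiar form: evaluate a B-spline basis at the scattered sites to build a design matrix A and solve the normal equations A^T A c = A^T z. The Python sketch below shows only this reduced setting; the wavelet hierarchy, the tensor-product extension, and the preconditioned iterative solvers the thesis develops are all omitted, and the names are illustrative.

```python
import numpy as np
from scipy.interpolate import BSpline

def lsq_spline_fit(x, z, knots, degree=3):
    """Least squares fit of scattered 1-D samples in a B-spline basis,
    solved via the normal equations A^T A c = A^T z.

    A dense direct solve suffices for this small example; for large or
    ill-conditioned systems an iterative solver would be preferable.
    """
    A = BSpline.design_matrix(x, knots, degree).toarray()
    coeffs = np.linalg.solve(A.T @ A, A.T @ z)
    return BSpline(knots, coeffs, degree)

# Usage: noisy samples of a smooth function at unorganized sites.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
z = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=200)
k = 3
interior = np.linspace(0.0, 1.0, 12)
knots = np.concatenate(([0.0] * k, interior, [1.0] * k))  # clamped knot vector
spline = lsq_spline_fit(x, z, knots, degree=k)
```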

    A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogeneous Dual-Core Embedded System Architecture

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules into a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes road-scene frames in front of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.
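
    The abstract does not spell out the collision warning rule, so the following Python fragment is purely hypothetical: it shows one common way such a rule can be phrased, using the growth of a tracked vehicle-light blob's image area as a crude time-to-contact proxy (for an approaching object, image area grows roughly as 1/distance^2, which gives TTC of about 2A / (dA/dt)).

```python
def collision_warning(areas, fps, ttc_threshold_s=2.0):
    """Warn when a time-to-contact proxy drops below a threshold.

    `areas` holds the pixel area of one tracked vehicle-light blob in
    consecutive frames. Hypothetical logic: the abstract does not
    describe the actual warning criterion used by the VIDASS system.
    """
    if len(areas) < 2 or areas[-1] <= areas[-2]:
        return False                       # blob not growing: no approach
    dA_dt = (areas[-1] - areas[-2]) * fps  # area growth per second
    ttc = 2.0 * areas[-1] / dA_dt          # TTC ~ 2A / (dA/dt)
    return ttc < ttc_threshold_s

# Usage: blob area swelling quickly across frames captured at 30 fps.
print(collision_warning([400, 460, 540], fps=30))  # True (TTC ~ 0.45 s)
```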

    A Comprehensive Overview of Computational Nuclei Segmentation Methods in Digital Pathology

    In the cancer diagnosis pipeline, digital pathology plays an instrumental role in the identification, staging, and grading of malignant areas on biopsy tissue specimens. High-resolution histology images are subject to high variance in appearance, sourcing either from the acquisition devices or from the H&E staining process. Nuclei segmentation is an important task, as it detects nuclei over background tissue and yields the topology, size, and count of nuclei, which are determinant factors for cancer detection. Yet it is a fairly time-consuming task for pathologists, with reportedly high subjectivity. Computer Aided Diagnosis (CAD) tools empowered by modern Artificial Intelligence (AI) models enable the automation of nuclei segmentation, which can reduce the subjectivity in analysis and the reading time. This paper provides an extensive review, beginning with earlier works that use traditional image processing techniques and reaching up to modern approaches following the Deep Learning (DL) paradigm. Our review also focuses on the weak-supervision aspect of the problem, motivated by the fact that annotated data is scarce. At the end, the advantages of different models and types of supervision are thoroughly discussed. Furthermore, we try to extrapolate and envision how future research lines will potentially develop, so as to minimize the need for labeled data while maintaining high performance. Future methods should emphasize efficient and explainable models with a transparent underlying process so that physicians can trust their output.
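
    As one concrete representative of the "traditional image processing" family the review covers, the Python sketch below chains three classical steps (Otsu thresholding, a distance transform, and marker-based watershed) to split touching nuclei. It is a generic textbook baseline, not the method of any single paper in the review, and assumes a grayscale H&E image in which nuclei appear darker than the surrounding tissue.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(gray):
    """Classical pipeline: Otsu threshold, distance transform, watershed.

    Generic baseline for splitting touching nuclei; not a specific
    published method from the review.
    """
    mask = gray < threshold_otsu(gray)          # dark nuclei -> foreground
    dist = ndimage.distance_transform_edt(mask)
    clumps, _ = ndimage.label(mask)
    peaks = peak_local_max(dist, min_distance=5, labels=clumps)
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)  # one label per nucleus
```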

    Asynchronous Representation and Processing of Analog Sparse Signals Using a Time-Scale Framework

    In this dissertation we investigate the problem of asynchronous representation and processing of analog sparse signals using a time-scale framework. Recently, the design of signal representations has focused on the use of application-driven constraints for optimality purposes. Appearing in many fields such as neuroscience, implantable biomedical diagnostic devices, and sensor network applications, sparse or burst-like signals are of great interest. A common challenge in the representation of such signals is that they exhibit non-stationary behavior with frequency-varying spectra. By ignoring that the maximum frequency of their spectra changes with time, uniform sampling of sparse signals collects samples in quiescent segments and results in high power dissipation. Continuous monitoring of signals also challenges data acquisition, storage, and processing, especially if remote monitoring is desired, as this would require that a large number of samples be generated, stored, and transmitted. Power consumption and the type of processing imposed by the size of the devices in the aforementioned applications have motivated the use of asynchronous approaches in our research. First, we work on establishing a new paradigm for the representation of analog sparse signals using a time-frequency representation. Second, we develop a scale-based signal decomposition framework which uses filter-bank structures for the representation-analysis-compression scheme of the sparse information. Using an asynchronous signal decomposition scheme leads to reduced computational requirements and lower power consumption; it is thus promising for hardware implementation. In addition, the proposed algorithm does not require prior knowledge of the bandwidth of the signal, and the effect of noise can still be alleviated. Finally, we consider the synthesis step, where the target signal is reconstructed from compressed data. We implement a perfect-reconstruction filter bank based on Slepian wavelets for the reconstruction of sparse signals from non-uniform samples. In this work, experiments on primary biomedical signal applications, such as electrocardiogram (ECG), swallowing signals, and heart sound recordings, have achieved significant improvements over traditional methods in the sensing and processing of sparse data. The results are also promising in applications including compression and denoising.
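
    Asynchronous acquisition of a sparse signal is easy to illustrate with level-crossing sampling: a sample is recorded only when the signal moves by a fixed amount from the last recorded value, so quiescent segments produce no samples. The Python sketch below shows this idea only; it is not the dissertation's time-scale decomposition or its filter-bank scheme, and all names are illustrative.

```python
import numpy as np

def level_crossing_sample(t, x, delta):
    """Asynchronous (level-crossing) sampling of a signal x(t).

    Records a sample only when the signal moves by `delta` from the
    last recorded value, so flat, quiescent segments of a sparse
    signal generate no samples at all.
    """
    samples = [(t[0], x[0])]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - samples[-1][1]) >= delta:
            samples.append((ti, xi))
    return samples

# Usage: a bursty test signal, flat everywhere except one short interval.
t = np.linspace(0.0, 1.0, 1000)
x = np.where((t > 0.4) & (t < 0.5), np.sin(60 * np.pi * t), 0.0)
pts = level_crossing_sample(t, x, delta=0.1)  # samples cluster in the burst
```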

    Characterization of Porosity Defects in Selectively Laser Melted IN718 and Ti-6Al-4V via Synchrotron X-Ray Computed Tomography

    Additive manufacturing (AM) is a method of fabrication in which feedstock material is joined together to form a structure. Additive manufacturing has been developed for use with polymers, ceramics, composites, biomaterials, and metals. Of the metal additive manufacturing techniques, one of the most commonly employed for commercial and government applications is selective laser melting (SLM). SLM operates by using a high-powered laser to melt feedstock metal powder, layer by layer, until the desired near-net shape is completed. Because of how AM, and SLM in particular, builds parts, it holds much promise for designing parts without geometrical constraint, manufacturing them cost-effectively, and reducing material waste. As a result, SLM has gained traction in the aerospace, automotive, and medical device industries, which often use uniquely shaped parts for specific functions. These industries also tend to use high-performance metallic alloys that can withstand the sometimes extreme operating conditions that the parts experience. Two alloys often used in such parts are Inconel 718 (IN718) and Ti-6Al-4V (Ti64). Both materials have been routinely used in SLM processing but are often marked by porosity defects in the as-built state. Since large amounts of porosity are known to limit mechanical performance, especially fatigue life, there is a general need to inspect and quantify this material characteristic before parts are used in these industries. One of the most advanced porosity inspection methods is X-ray computed tomography (CT). CT uses a detector to capture X-rays after they pass through the part. The detector images are then reconstructed to create a tomograph that can be analyzed using image processing techniques to visualize and quantify porosity. In this research, CT was performed on both materials at a 30 μm "low resolution" (LR) for different build orientations and processing conditions. Furthermore, a synchrotron beamline was used to conduct CT on small samples of the SLM IN718 and Ti64 specimens at a 0.65 μm "high resolution" (HR), which to the author's knowledge is the highest resolution (for SLM IN718) and matches the highest resolution (for SLM Ti64) reported for porosity CT investigations of these materials. Tomographs were reconstructed using TomoPy 1.0.0, processed using ImageJ and Avizo 9.0.2, and quantified in Avizo and MATLAB. Results showed a relatively low amount of porosity in the materials overall, but a several-order-of-magnitude increase in quantifiable porosity volume fraction from LR to HR observations. Furthermore, quantifications and visualizations showed a propensity for more and larger pores near the free surfaces of the specimens. Additionally, a plurality of pores in the HR samples were found to be in close proximity (10 μm or less) to each other.
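
    The quantification stage described above can be sketched compactly: segment pore voxels in the reconstructed volume, label them as connected components, and summarize volume fraction and pore sizes. The Python code below (NumPy/SciPy) assumes a simple global threshold in place of whatever segmentation was actually applied in ImageJ/Avizo; function and parameter names are illustrative.

```python
import numpy as np
from scipy import ndimage

def porosity_stats(volume, pore_threshold, voxel_size_um):
    """Summarize porosity in a reconstructed CT volume.

    Voxels darker than `pore_threshold` are treated as pores (a global
    threshold standing in for the actual segmentation workflow), then
    labeled as connected components and converted to physical sizes.
    """
    pores = volume < pore_threshold
    labels, n = ndimage.label(pores)
    volume_fraction = pores.sum() / pores.size
    # Voxel count per pore -> equivalent spherical diameter in micrometres.
    counts = ndimage.sum(pores, labels, range(1, n + 1))
    pore_volumes = counts * voxel_size_um ** 3
    diameters = 2.0 * (3.0 * pore_volumes / (4.0 * np.pi)) ** (1.0 / 3.0)
    return volume_fraction, diameters

# Usage: a toy 3-D volume with two dark "pores" in bright material.
vol = np.full((40, 40, 40), 200.0)
vol[10:13, 10:13, 10:13] = 10.0
vol[30:32, 30:32, 30:32] = 10.0
frac, diams = porosity_stats(vol, pore_threshold=50.0, voxel_size_um=0.65)
```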