
    Color Texture Classification Approach Based on Combination of Primitive Pattern Units and Statistical Features

    Texture classification has received considerable attention from image-processing researchers since the late 1980s, and many different methods have since been proposed to solve it. In most of these methods, the researchers attempt to describe and discriminate textures based on linear and non-linear patterns. The linear and non-linear patterns in any window arise from the formation of grain components in a particular order; a grain component is a primitive morphological unit whose occurrences carry most of the meaningful information. The approach proposed in this paper analyzes a texture through its grain components, builds a grain-component histogram, and classifies textures using statistical features extracted from that histogram. Finally, to increase classification accuracy, the approach is extended to color images, exploiting its ability to analyze each RGB channel individually. Although the approach is general and could be used in different applications, it has been tested on stone textures, and the results demonstrate its quality. Comment: The International Journal of Multimedia & Its Applications (IJMA) Vol.3, No.3, August 201
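The pipeline described above can be sketched in a few lines: quantize each channel, histogram small local patterns, and extract statistical features from the histogram. The 2x2 pattern coding, the particular features, and all parameters below are illustrative assumptions, not the paper's actual definitions of grain components.

```python
import numpy as np

def pattern_histogram(channel, levels=8):
    """Histogram of quantized 2x2 local patterns in one channel.

    Each 2x2 window is quantized to `levels` gray levels and mapped to a
    single pattern code; the normalized histogram of codes serves as a
    texture descriptor (a hypothetical stand-in for the paper's
    grain-component histogram).
    """
    q = (channel.astype(np.float64) / 256.0 * levels).astype(np.int64)
    a, b, c, d = q[:-1, :-1], q[:-1, 1:], q[1:, :-1], q[1:, 1:]
    codes = ((a * levels + b) * levels + c) * levels + d
    hist = np.bincount(codes.ravel(), minlength=levels**4).astype(np.float64)
    return hist / hist.sum()

def statistical_features(hist):
    """Mean, variance, energy and entropy of a normalized histogram."""
    idx = np.arange(hist.size)
    mean = (idx * hist).sum()
    var = ((idx - mean) ** 2 * hist).sum()
    energy = (hist ** 2).sum()
    entropy = -(hist[hist > 0] * np.log2(hist[hist > 0])).sum()
    return np.array([mean, var, energy, entropy])

def rgb_texture_descriptor(image):
    """Concatenate per-channel features, mirroring the per-RGB-channel analysis."""
    return np.concatenate([statistical_features(pattern_histogram(image[..., k]))
                           for k in range(3)])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3))
print(rgb_texture_descriptor(img).shape)  # 3 channels x 4 features
```

The resulting fixed-length feature vector could then be fed to any standard classifier.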

    Discrete Optimization in Early Vision - Model Tractability Versus Fidelity

    Early vision is the process occurring before any semantic interpretation of an image takes place. Motion estimation, object segmentation and detection are all parts of early vision, but recognition is not. Some models in early vision are easy to perform inference with---they are tractable. Others describe reality well---they have high fidelity. This thesis improves the tractability-fidelity trade-off of the current state of the art by introducing new discrete methods for image segmentation and other problems of early vision. The first part studies pseudo-boolean optimization, both from a theoretical perspective and a practical one, by introducing new algorithms. The main result is the generalization of the roof duality concept to polynomials of degree higher than two. Another focus is parallelization; discrete optimization methods for multi-core processors, computer clusters, and graphics processing units are presented. Remaining in an image segmentation context, the second part studies parametric problems where a set of model parameters and a segmentation are estimated simultaneously. For a small number of parameters these problems can still be solved optimally. One application is an optimal method for solving the two-phase Mumford-Shah functional. The third part shifts the focus to curvature regularization, where the commonly used length and area penalization is replaced by curvature in two and three dimensions. These problems can be discretized over a mesh, and special attention is given to the mesh geometry. Specifically, hexagonal meshes in the plane are compared to square ones, and a method for generating adaptive meshes is introduced and evaluated. The framework is then extended to curvature regularization of surfaces. Finally, the thesis is concluded by three applications to early vision problems: cardiac MRI segmentation, image registration, and cell classification.
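To make "pseudo-boolean optimization" concrete: the objects being minimized are polynomials in 0/1 variables, and cubic terms like the one below are exactly what a generalization of roof duality beyond degree two must handle. The coefficients here are invented for illustration; a tiny instance can simply be enumerated, whereas real early-vision energies have one variable per pixel and need the specialized algorithms the thesis develops.

```python
from itertools import product

# A tiny pseudo-boolean polynomial in three 0/1 variables with a
# degree-three term; coefficients are made up for illustration.
def energy(x):
    x1, x2, x3 = x
    return 2*x1 - 3*x2 + x3 + 4*x1*x2 - 5*x2*x3 + 6*x1*x2*x3

# Brute-force minimization over all 2^3 labelings.
best = min(product((0, 1), repeat=3), key=energy)
print(best, energy(best))  # → (0, 1, 1) -7
```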

    Advanced photonic and electronic systems - WILGA 2017

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers more than 350 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. WILGA is a very good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over WILGA is held by the Elektronika technical journal (SEP), IJET (PAN), and Proceedings of SPIE; the latter editorial series publishes more than 200 papers from WILGA annually. WILGA 2017 was the XL edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. This article is a digest of selected works presented during the WILGA 2017 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445.

    Artificial intelligence in sickle disease

    Artificial intelligence (AI) is rapidly becoming an established arm of medical science and clinical practice across numerous medical fields. Its applications have been expanding and are widely used in research, diagnostics, and treatment options for many pathologies, including sickle cell disease (SCD). AI has opened new ways to improve risk stratification and to diagnose SCD complications early, allowing rapid intervention and reallocation of resources to high-risk patients. We reviewed the literature for established and new AI applications that may enhance the management of SCD through advances in diagnosing SCD and its complications, risk stratification, and the role of AI in establishing an individualized approach to managing SCD patients in the future. Aim: to review the benefits and drawbacks of resources utilizing AI in clinical practice for improving the management of SCD cases. Open Access funding provided by the Qatar National Library.

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification, and it has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. This operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
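The structure of the preprocessing step can be sketched as follows: map the input image to a contour-like delineation map and feed that to the CNN instead of the raw pixels. Since the CORF push-pull operator itself is not reproduced here, a simple difference-of-Gaussians edge map serves as a hypothetical stand-in; only the shape of the pipeline is illustrated, not the operator's noise-suppression mechanism.

```python
import numpy as np

def delineation_map(img, sigma1=1.0, sigma2=2.0):
    """Crude contour map via difference of Gaussians -- a hypothetical
    stand-in for the CORF push-pull delineation operator."""
    def gaussian_blur(x, sigma):
        # Separable blur with a truncated Gaussian kernel.
        r = int(3 * sigma)
        t = np.arange(-r, r + 1)
        k = np.exp(-t**2 / (2 * sigma**2))
        k /= k.sum()
        out = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, x)
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, out)
    d = gaussian_blur(img.astype(np.float64), sigma1) - \
        gaussian_blur(img.astype(np.float64), sigma2)
    return np.abs(d) / (np.abs(d).max() + 1e-12)  # normalized to [0, 1]

# The map would replace the raw image as CNN input, e.g. on a noisy sample:
rng = np.random.default_rng(0)
img = rng.random((32, 32))
noisy = img + rng.normal(0.0, 0.1, img.shape)
dmap = delineation_map(noisy)
print(dmap.shape)
```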

    Reverse engineering of biological signaling networks via integration of data and knowledge using probabilistic graphical models

    Motivation: The postulate that biological molecules act together in intricate networks pioneered systems biology and popularized approaches to reconstruct and understand these networks. Such networks give insight into the underlying biological processes and into diseases involving aberrations in these pathways, such as cancer and neurodegenerative diseases. The networks can be reconstructed by two different approaches, namely data-driven and knowledge-driven methods, which raises the critical question of which to rely on. Relying completely on data-driven approaches brings in the issue of overfitting, whereas an entirely knowledge-driven approach leaves us without the acquisition of any new information. This thesis presents a hybrid approach that integrates high-throughput data and biological knowledge to reverse-engineer the structure of biological networks in a probabilistic way, and it showcases the resulting improvement. Accomplishments: The current work aims to learn networks from perturbation data. It extends the existing Nested Effects Models (NEMs) for pathway reconstruction to use time-course data, allowing the differentiation between direct and indirect effects and resolving feedback loops. The thesis also introduces an approach to learn the signaling network from phenotype data in the form of images/movies, widening the scope of NEMs, which was so far limited to gene expression data. Furthermore, the thesis introduces methodologies to integrate knowledge from different existing sources as probabilistic priors, which improved the reconstruction accuracy of the networks and made them biologically more rational. These methods were finally integrated for reverse engineering of more accurate and realistic networks.
Conclusion: The thesis added three dimensions to the existing scope of network reverse engineering, especially Nested Effects Models: the use of time-course data, the use of phenotype data, and the incorporation of prior biological knowledge from multiple sources. The approaches developed demonstrate their application to understanding signaling in stem cells, cell division, and breast cancer. Furthermore, the integrative approach shows the reconstruction of the AMPK/EGFR pathway, which was used to identify potential drug targets in lung cancer that were also validated experimentally, meeting one of the desired goals in systems biology.
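The integration of data and prior knowledge can be sketched abstractly: score each candidate network by data log-likelihood plus a log-prior over edges, then pick the maximum a posteriori structure. The toy edge priors and the stand-in likelihood below are invented for illustration and are not the NEM likelihood or the priors used in the thesis; they only show the probabilistic combination.

```python
import numpy as np
from itertools import product

# Prior belief that each edge exists, e.g. aggregated from pathway
# databases (values are made up for illustration).
edge_prior = {('A', 'B'): 0.9, ('B', 'C'): 0.8, ('A', 'C'): 0.2}

def log_prior(edges):
    """Independent Bernoulli prior over edges."""
    lp = 0.0
    for e, p in edge_prior.items():
        lp += np.log(p) if e in edges else np.log(1.0 - p)
    return lp

def log_likelihood(edges):
    """Stand-in for a NEM-style likelihood of observed perturbation effects."""
    observed = {('A', 'B'), ('B', 'C')}
    return 2.0 * (len(edges & observed) - len(edges - observed))

# Exhaustively score every edge subset and keep the MAP network.
all_edges = list(edge_prior)
best = max(
    (frozenset(e for e, keep in zip(all_edges, bits) if keep)
     for bits in product((0, 1), repeat=len(all_edges))),
    key=lambda es: log_likelihood(es) + log_prior(es))
print(sorted(best))  # → [('A', 'B'), ('B', 'C')]
```

Real structure learning replaces the exhaustive enumeration with search or sampling, since the number of networks grows exponentially with the number of possible edges.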

    Fast algorithm for real-time rings reconstruction

    The GAP project is dedicated to studying the application of GPUs in several contexts in which real-time response is important for taking decisions. The definition of real time depends on the application under study, ranging from answer times of μs up to several hours for very compute-intensive tasks. During this conference we presented our work on low-level triggers [1] [2] and high-level triggers [3] in high-energy physics experiments, and specific applications for nuclear magnetic resonance (NMR) [4] [5] and cone-beam CT [6]. Apart from the study of dedicated solutions to decrease the latency due to data transport and preparation, the computing algorithms play an essential role in any GPU application. In this contribution, we show an original algorithm developed for trigger applications to accelerate ring reconstruction in a RICH detector when it is not possible to obtain seeds for the reconstruction from external trackers.
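The flavour of seedless ring finding can be illustrated with a Hough-style centre vote, a generic technique rather than the specific algorithm of this contribution: every hit votes for all candidate centres at the ring radius, and accumulator peaks give the centres. Radius, grid, and geometry below are arbitrary assumptions; in a trigger, the voting loop is what gets parallelized on the GPU.

```python
import numpy as np

def hough_ring_centre(hits, r, grid=64, extent=1.0):
    """Seedless ring finding by centre voting (illustrative sketch).

    Each hit (x, y) votes for all centres at distance r from it; the
    accumulator peak is returned as the estimated ring centre."""
    acc = np.zeros((grid, grid))
    thetas = np.linspace(0.0, 2 * np.pi, 180, endpoint=False)
    for x, y in hits:
        cx = x + r * np.cos(thetas)
        cy = y + r * np.sin(thetas)
        ix = ((cx + extent) / (2 * extent) * grid).astype(int)
        iy = ((cy + extent) / (2 * extent) * grid).astype(int)
        ok = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
        np.add.at(acc, (ix[ok], iy[ok]), 1)
    px, py = np.unravel_index(acc.argmax(), acc.shape)
    # Convert the peak cell back to coordinates (cell centre).
    return ((px + 0.5) / grid * 2 * extent - extent,
            (py + 0.5) / grid * 2 * extent - extent)

# Synthetic ring of radius 0.4 centred at (0.1, -0.2).
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2 * np.pi, 40)
hits = np.stack([0.1 + 0.4 * np.cos(phi), -0.2 + 0.4 * np.sin(phi)], axis=1)
print(hough_ring_centre(hits, r=0.4))
```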

    High Throughput Software for Powder Diffraction and its Application to Heterogeneous Catalysis

    In this thesis we investigate high-throughput computational methods for processing the large quantities of data collected from synchrotrons, and their application to the spectral analysis of powder diffraction data. We also present the main product of this PhD programme: a software package called 'EasyDD' developed by the author. This software was created to meet the increasing demand for data processing and analysis capabilities arising from modern detectors, which produce huge quantities of data. Modern detectors, coupled with the high-intensity X-ray sources available at synchrotrons, have led to a situation where datasets can be collected in ever shorter time scales and in ever larger numbers. Such large volumes of data pose a processing bottleneck that grows with current and future instrument development. EasyDD has achieved its objectives and made significant contributions to scientific research; it can also serve as a model for more mature attempts in the future. EasyDD is currently in use by a number of researchers in academic and research institutions to process high-energy diffraction data, including data collected by techniques such as Energy Dispersive Diffraction, Angle Dispersive Diffraction and Computer Aided Tomography. EasyDD has already been used in a number of published studies and is currently in use by the High Energy X-Ray Imaging Technology project. The software was also used by the author to process and analyse datasets collected from synchrotron radiation facilities. In this regard, the thesis presents novel scientific research involving the use of EasyDD to handle large diffraction datasets in the study of alumina-supported metal oxide catalyst bodies. These data were collected using Tomographic Energy Dispersive Diffraction Imaging and Computer Aided Tomography techniques. Comment: thesis, 202 pages, 95 figures, 6 tables