NNVA: Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation
Complex computational models are often designed to simulate real-world
physical phenomena in many scientific disciplines. However, these simulation
models tend to be computationally very expensive and involve a large number of
simulation input parameters which need to be analyzed and properly calibrated
before the models can be applied for real scientific studies. We propose a
visual analysis system to facilitate interactive exploratory analysis of
high-dimensional input parameter space for a complex yeast cell polarization
simulation. The proposed system assists the computational biologists who
designed the simulation model in visually calibrating the input parameters:
they modify parameter values and immediately visualize the predicted
simulation outcome, without needing to run the original expensive
simulation for every instance. Our proposed visual analysis system is driven by
a trained neural network-based surrogate model as the backend analysis
framework. Surrogate models are widely used in the field of simulation sciences
to efficiently analyze computationally expensive simulation models. In this
work, we demonstrate the advantage of using neural networks as surrogate models
for visual analysis by incorporating some of the recent advances in the field
of uncertainty quantification, interpretability and explainability of neural
network-based models. We utilize the trained network to perform interactive
parameter sensitivity analysis of the original simulation at multiple
levels-of-detail as well as recommend optimal parameter configurations using
the activation maximization framework of neural networks. We also facilitate
detailed analysis of the trained network to extract useful insights about the
simulation model that the network learned during training.
Comment: Published in IEEE Transactions on Visualization and Computer Graphics
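The "activation maximization" step mentioned above — searching the input-parameter space for configurations that maximize a predicted outcome — can be sketched as gradient ascent on the surrogate's output with respect to its inputs. The analytic surrogate below and the optimum `X_OPT` are toy stand-ins for the trained neural network, assumed purely for illustration.

```python
import numpy as np

# Hypothetical optimum; in the real system this is unknown and the
# surrogate is a trained neural network, not an analytic function.
X_OPT = np.array([0.3, -1.2, 0.7])

def surrogate(x):
    """Toy differentiable surrogate: predicted outcome, peaked at X_OPT."""
    return -np.sum((x - X_OPT) ** 2)

def surrogate_grad(x):
    """Analytic gradient of the toy surrogate w.r.t. the inputs."""
    return -2.0 * (x - X_OPT)

def activation_maximization(x0, lr=0.1, steps=200):
    """Gradient ascent on the surrogate output over the input parameters."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += lr * surrogate_grad(x)
    return x

x_best = activation_maximization([0.0, 0.0, 0.0])
```

With a neural-network surrogate, the same loop would use automatic differentiation to obtain the gradient of the predicted outcome with respect to the input parameters.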
A survey on utilization of data mining approaches for dermatological (skin) diseases prediction
Due to recent technology advances, large volumes of medical data are being obtained. These data contain valuable information, so data mining techniques can be used to extract useful patterns. This paper introduces data mining and its various techniques and surveys the available literature on medical data mining, focusing mainly on the application of data mining to skin diseases. A categorization has been provided based on the different data mining techniques, and the utility of the various methodologies is highlighted. Generally, association mining is suitable for extracting rules; it has been used especially in cancer diagnosis. Classification is a robust method in medical mining, and in this paper we summarize its different uses in dermatology. It is one of the most important methods for diagnosis of erythemato-squamous diseases, with approaches including Neural Networks, Genetic Algorithms and fuzzy classification. Clustering is a useful method in medical image mining: the purpose of clustering techniques is to find a structure for the given data by finding similarities between data points according to their characteristics. Clustering also has some applications in dermatology. Besides introducing the different mining methods, we investigate some challenges that exist in mining skin data.
A Spatio-Temporal Bayesian Network Classifier for Understanding Visual Field Deterioration
Progressive loss of the field of vision is characteristic of a number of eye diseases,
such as glaucoma, which is a leading cause of irreversible blindness in the world. Recently,
there has been an explosion in the amount of data being stored on patients who suffer from visual deterioration including field test data, retinal image data and patient demographic data. However, there has been relatively little work in modelling
the spatial and temporal relationships common to such data. In this paper we introduce a novel method for classifying Visual Field (VF) data that explicitly models these spatial and temporal relationships. We carry out an analysis of this
method and compare it to a number of classifiers from the machine learning and statistical communities. Results are very encouraging showing that our classifiers are comparable to existing statistical models whilst also facilitating the understanding of underlying spatial and temporal relationships within VF data. The results
reveal the potential of using such models for knowledge discovery within ophthalmic databases, such as networks reflecting the ‘nasal step’, an early indicator of the onset of glaucoma. The results outlined in this paper pave the way for a substantial program of study involving many other spatial and temporal datasets, including retinal image and clinical data.
A network inference method for large-scale unsupervised identification of novel drug-drug interactions
Characterizing interactions between drugs is important to avoid potentially
harmful combinations, to reduce off-target effects of treatments and to fight
antibiotic resistant pathogens, among others. Here we present a network
inference algorithm to predict uncharacterized drug-drug interactions. Our
algorithm takes, as its only input, sets of previously reported interactions,
and does not require any pharmacological or biochemical information about the
drugs, their targets or their mechanisms of action. Because the models we use
are abstract, our approach can deal with adverse interactions,
synergistic/antagonistic/suppressing interactions, or any other type of drug
interaction. We show that our method is able to accurately predict
interactions, both in exhaustive pairwise interaction data between small sets
of drugs, and in large-scale databases. We also demonstrate that our algorithm
can be used efficiently to discover interactions of new drugs as part of the
drug discovery process.
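As a minimal illustration of predicting interactions from reported interactions alone — a generic common-neighbors heuristic, not the authors' inference algorithm, which the abstract does not specify — unreported drug pairs can be scored by how many interaction partners they share. The drug names are hypothetical.

```python
from itertools import combinations

# Hypothetical set of previously reported interactions (the only input).
known = {("drugA", "drugB"), ("drugA", "drugC"),
         ("drugD", "drugB"), ("drugD", "drugC")}

# Build an adjacency map from the reported interactions.
neighbors = {}
for a, b in known:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

# Score every unreported pair by the number of shared interaction partners.
scores = {}
for a, b in combinations(sorted(neighbors), 2):
    if (a, b) not in known and (b, a) not in known:
        scores[(a, b)] = len(neighbors[a] & neighbors[b])

# Highest-scoring unreported pair is the top candidate interaction.
best = max(scores, key=scores.get)
```

Note that, like the abstract's method, this sketch uses no pharmacological or biochemical information, only the topology of previously reported interactions.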
Inferring Anomalies from Data using Bayesian Networks
Existing studies on data mining have largely focused on the design of measures and algorithms to identify outliers in large, high-dimensional categorical and numeric databases. However, little attention has been paid to the interestingness of the reported outliers. One way to ascertain the interestingness and usefulness of a reported outlier is to make use of domain knowledge. In this thesis, we present measures to discover outliers based on background knowledge represented by a Bayesian network. Using the causal relationships between attributes encoded in the Bayesian framework, we demonstrate that meaningful outliers, i.e., outliers which encode important or new information, are those which violate the causal relationships encoded in the model. Depending upon the nature of the data, several approaches are proposed to identify and explain anomalies using Bayesian knowledge. Outliers are often identified as data points which are "rare", "isolated", or "far away from their nearest neighbors". We show that these characteristics may not be an accurate way of describing interesting outliers. Through a critical analysis of several existing outlier detection techniques, we show why there is a mismatch between outliers as entities described by these characteristics and "real" outliers identified using the Bayesian approach. We show that the Bayesian approaches presented in this thesis have better accuracy in mining genuine outliers while keeping a low false positive rate, compared to traditional outlier detection techniques.
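The core idea above — flagging records that violate the causal relationships encoded in a Bayesian network — can be sketched by scoring each record's joint probability under the network and flagging the least likely ones. The two-variable network and its conditional probability tables below are invented for illustration.

```python
# Tiny Bayesian network: Smoker -> Cough (hypothetical CPTs for illustration).
p_smoker = {1: 0.3, 0: 0.7}
p_cough_given_smoker = {1: {1: 0.8, 0: 0.2},   # P(Cough | Smoker=1)
                        0: {1: 0.1, 0: 0.9}}   # P(Cough | Smoker=0)

def joint(smoker, cough):
    """Joint probability of a record under the network (chain rule)."""
    return p_smoker[smoker] * p_cough_given_smoker[smoker][cough]

records = [(1, 1), (0, 0), (1, 0), (0, 1)]

# Rank records from least to most probable; a low joint probability means
# the record violates the causal expectations encoded in the network.
ranked = sorted(records, key=lambda r: joint(*r))
most_anomalous = ranked[0]
```

Here the record (Smoker=1, Cough=0) scores lowest because it contradicts the strong causal link the network encodes, even though neither attribute value is rare on its own — which is exactly the distinction the thesis draws against "rare/isolated" definitions of outliers.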
Trust Strategies for the Semantic Web
Everyone agrees on the importance of enabling trust on the Semantic Web to ensure more efficient agent interaction. Current research on trust seems to focus on developing computational models, semantic representations, inference techniques, etc. However, little attention has been given to the plausible trust strategies or tactics that an agent can follow when interacting with other agents on the Semantic Web. In this paper we identify the five most common strategies of trust and discuss their envisaged costs and benefits. The aim is to provide some guidelines to help system developers appreciate the risks and gains involved with each trust strategy.
Detection of Extrasolar Planets by Gravitational Microlensing
Gravitational microlensing provides a unique window on the properties and
prevalence of extrasolar planetary systems because of its ability to find
low-mass planets at separations of a few AU. The early evidence from
microlensing indicates that the most common exoplanets yet detected are
the so-called "super-Earth" planets of ~10 Earth-masses at a separation of a
few AU from their host stars. The detection of two such planets indicates that
roughly one third of stars have such planets in the separation range 1.5-4 AU,
which is about an order of magnitude larger than the prevalence of gas-giant
planets at these separations. We review the basic physics of the microlensing
method, and show why this method allows the detection of Earth-mass planets at
separations of 2-3 AU with ground-based observations. We explore the conditions
that allow the detection of the planetary host stars and allow measurement of
planetary orbital parameters. Finally, we show that a low-cost, space-based
microlensing survey can provide a comprehensive statistical census of
extrasolar planetary systems with sensitivity down to 0.1 Earth-masses at
separations ranging from 0.5 AU to infinity.
Comment: 43 pages. Very similar to chapter 3 of Exoplanets: Detection,
Formation, Properties, Habitability, John Mason, ed., Springer (April 3, 2008).
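The basic scale of the microlensing method reviewed above is set by the angular Einstein radius, theta_E = sqrt((4GM/c^2) * (D_S - D_L) / (D_L * D_S)). A quick numerical check — with a solar-mass lens and illustrative distances of 4 kpc (lens) and 8 kpc (source), values assumed here for the example — recovers the milliarcsecond scale characteristic of Galactic microlensing.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec in metres

def einstein_radius_mas(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius in milliarcseconds."""
    theta_rad = math.sqrt(4 * G * mass_kg / C**2
                          * (d_source_m - d_lens_m) / (d_lens_m * d_source_m))
    return theta_rad * (180 / math.pi) * 3600 * 1000  # rad -> mas

# Solar-mass lens halfway to a bulge source (illustrative distances).
theta_e = einstein_radius_mas(M_SUN, 4 * KPC, 8 * KPC)
```

The result is about one milliarcsecond, far below direct imaging resolution, which is why microlensing is detected through the time-varying magnification of the source rather than resolved images.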