Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena
Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is
further complicated by many theoretical issues, such as the I-equivalence among
different structures. In this work, we focus on a specific subclass of BNs,
named Suppes-Bayes Causal Networks (SBCNs), which include specific structural
constraints based on Suppes' probabilistic causation to efficiently model
cumulative phenomena. Here we compare the performance, via extensive
simulations, of various state-of-the-art search strategies, such as local
search techniques and Genetic Algorithms, as well as of distinct regularization
methods. The assessment is performed on a large number of simulated datasets
from topologies with distinct levels of complexity, various sample sizes, and
different rates of errors in the data. Among the main results, we show that the
introduction of Suppes' constraints dramatically improves the inference
accuracy, by reducing the solution space and providing a temporal ordering on
the variables. We also report on trade-offs among different search techniques
that can be efficiently employed in distinct experimental settings. This
manuscript is an extended version of the paper "Structural Learning of
Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018
International Conference on Computational Science.
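The two Suppes conditions invoked above, temporal priority and probability raising, can be sketched as a simple filter over candidate arcs. The following minimal Python example is an illustration, not code from the paper; the function name and the toy dataset are hypothetical. For each ordered pair of binary events it checks whether P(cause) > P(effect) and P(effect | cause) > P(effect | not cause):

```python
import numpy as np

def suppes_allowed_arcs(data, eps=1e-9):
    """Return the boolean matrix of arcs i -> j compatible with Suppes'
    conditions on a binary (samples x events) matrix: temporal priority
    (P(i) > P(j)) and probability raising (P(j|i) > P(j|not i))."""
    n = data.shape[1]
    p = data.mean(axis=0)                      # marginal probabilities P(i)
    allowed = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            mask = data[:, i] == 1
            if not mask.any() or mask.all():   # conditionals undefined
                continue
            p_j_given_i = data[mask, j].mean()
            p_j_given_not_i = data[~mask, j].mean()
            if p[i] > p[j] + eps and p_j_given_i > p_j_given_not_i + eps:
                allowed[i, j] = True
    return allowed

# toy cumulative data: event 0 tends to precede events 1 and 2
data = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [0, 0, 0],
    [1, 1, 0],
])
print(suppes_allowed_arcs(data))
```

Arcs surviving this filter form the reduced solution space over which the search strategies compared in the paper operate; the temporal-priority condition also induces the ordering on the variables mentioned above.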
Parallel Implementation of Efficient Search Schemes for the Inference of Cancer Progression Models
The emergence and development of cancer are a consequence of the accumulation
over time of genomic mutations involving a specific set of genes, which
provides the cancer clones with a functional selective advantage. In this work,
we model the order of accumulation of such mutations during the progression,
which eventually leads to the disease, by means of probabilistic graphical
models, i.e., Bayesian Networks (BNs). We investigate how to perform the task
of learning the structure of such BNs, according to experimental evidence,
adopting a global optimization meta-heuristic. In particular, in this work we
rely on Genetic Algorithms, and to strongly reduce the execution time of the
inference -- which can also involve multiple repetitions to collect
statistically significant assessments of the data -- we distribute the
calculations using both multi-threading and a multi-node architecture. The
results show that our approach is characterized by good accuracy and
specificity; we also demonstrate its feasibility, thanks to an 84x reduction of
the overall execution time with respect to a traditional sequential
implementation.
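As a rough sketch of the scheme described above, the skeleton below runs a genetic algorithm over arc-selection bitstrings, with thread-based parallel fitness evaluation standing in for the paper's multi-threaded/multi-node setup. It is illustrative only: a toy arc-matching score replaces the data likelihood, and restricting arcs to a fixed node ordering (upper-triangular adjacency) is one simple way to guarantee acyclicity:

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for the likelihood score: reward arcs of a hidden target DAG,
# mildly penalize spurious arcs (the paper uses a data likelihood instead).
N = 5
TARGET = frozenset({(0, 1), (1, 2), (1, 3), (3, 4)})
ARCS = [(i, j) for i in range(N) for j in range(i + 1, N)]  # fixed order => acyclic

def fitness(genome):
    chosen = {a for a, bit in zip(ARCS, genome) if bit}
    return len(chosen & TARGET) - 0.1 * len(chosen - TARGET)

def evolve(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in ARCS] for _ in range(pop_size)]
    with ThreadPoolExecutor() as pool:          # parallel fitness evaluation
        for _ in range(generations):
            scores = list(pool.map(fitness, pop))
            ranked = [g for _, g in sorted(zip(scores, pop), key=lambda t: -t[0])]
            elite = ranked[: pop_size // 2]     # elitist selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, len(ARCS))
                child = a[:cut] + b[cut:]       # one-point crossover
                if rng.random() < 0.2:          # point mutation
                    k = rng.randrange(len(ARCS))
                    child[k] ^= 1
                children.append(child)
            pop = elite + children
    best = max(pop, key=fitness)
    return {a for a, bit in zip(ARCS, best) if bit}

print(evolve())
```

Because each fitness call is independent, the evaluation loop is exactly the part that distributes naturally over threads or cluster nodes, which is where the reported speed-up comes from.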
Multi-objective optimization to explicitly account for model complexity when learning Bayesian Networks
Bayesian Networks have been widely used in many fields over the last decades
to describe statistical dependencies among random variables. In general,
learning the structure of such models is a problem with considerable
theoretical interest that still poses many challenges. On the one hand, this is
a well-known NP-complete problem, which is made harder in practice by the huge
search space of possible solutions. On the other hand, the phenomenon of
I-equivalence, i.e., different graphical structures underpinning the same set
of statistical dependencies, may lead to multimodal fitness landscapes further
hindering maximum likelihood approaches to solve the task. Despite all these
difficulties, greedy search methods based on a likelihood score coupled with a
regularization term to account for model complexity have been shown to be
surprisingly effective in practice. In this paper, we consider the formulation
of the task of learning the structure of Bayesian Networks as an optimization
problem based on a likelihood score. Nevertheless, our approach does not adjust
this score by means of any of the complexity terms proposed in the literature;
instead, it accounts directly for the complexity of the discovered solutions by
exploiting a multi-objective optimization procedure. To this end, we adopt
NSGA-II and define the first objective function to be the likelihood of a
solution and the second to be the number of selected arcs. We thoroughly
analyze the behavior of our method on a wide set of simulated data, and we
discuss the performance considering the goodness of the inferred solutions both
in terms of their objective functions and with respect to the retrieved
structure. Our results show that NSGA-II can converge to solutions
characterized by better likelihood and fewer arcs than classic approaches,
although, paradoxically, these solutions frequently show a lower similarity to
the target network.
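The two objectives described above induce a Pareto dominance relation over candidate networks: one solution dominates another if its likelihood is no worse and it uses no more arcs, with at least one strict improvement. A minimal sketch of that relation and the resulting non-dominated front (the candidate scores below are made up for illustration, not taken from the paper):

```python
def dominates(a, b):
    """a, b are (log_likelihood, n_arcs) pairs; likelihood is maximized,
    the number of arcs is minimized."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Non-dominated subset of a list of distinct objective pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical candidate networks: (log-likelihood, number of arcs)
candidates = [(-120.0, 9), (-118.5, 11), (-121.0, 7), (-118.5, 9), (-125.0, 6)]
print(pareto_front(candidates))
```

NSGA-II ranks the population by fronts of exactly this kind (plus a crowding-distance tie-breaker), so the complexity penalty never has to be folded into a single weighted score.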
Single-channel analysis of a ClC-2-like chloride conductance in cultured rat cortical astrocytes
The single-channel behavior of the hyperpolarization-activated, ClC-2-like inwardly rectifying Cl− current (IClh) in long-term dibutyryl-cyclic-AMP-treated cultured rat cortical astrocytes was analyzed with the patch-clamp technique. In outside-out patches in symmetrical 144 mM Cl− solutions, openings of hyperpolarization-activated small-conductance Cl− channels revealed burst activity with two equidistant conductance levels of 3 and 6 pS. The unitary openings displayed slow activation kinetics. The probabilities of the closed and conducting states were consistent with a double-barrelled structure of the channel protein. These results suggest that the astrocytic ClC-2-like Cl− current IClh is mediated by a small-conductance Cl− channel, which shares the structural motif of the Cl− channel prototype ClC-0.
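The double-barrelled interpretation above makes a quantitative prediction: if the two equidistant conductance levels arise from two independent, identical protopores, the occupancies of the closed, 3 pS, and 6 pS levels should follow a binomial distribution in the single-protopore open probability. A small illustrative check (the per-protopore open probability of 0.3 is hypothetical, not a value from the study):

```python
from math import comb

def barrel_level_probs(p_open, n_pores=2):
    """Occupancy probabilities of the conductance levels for n_pores
    independent, identical protopores, each open with probability p_open
    (binomial model)."""
    return [comb(n_pores, k) * p_open**k * (1 - p_open)**(n_pores - k)
            for k in range(n_pores + 1)]

# hypothetical per-protopore open probability of 0.3
probs = barrel_level_probs(0.3)
print(probs)  # [P(closed), P(3 pS level), P(6 pS level)]
```

Agreement of measured level occupancies with such binomial ratios is the standard single-channel test for the double-barrelled architecture known from ClC-0.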
⟨x⟩_{u-d} from lattice QCD at nearly physical quark masses
We determine the second Mellin moment of the isovector quark parton
distribution function ⟨x⟩_{u-d} from lattice QCD with N_f=2 sea quark flavours,
employing the non-perturbatively improved Wilson-Sheikholeslami-Wohlert action
at a pseudoscalar mass of 157(6) MeV. The result is converted
non-perturbatively to the RI'-MOM scheme and then perturbatively to the MSbar
scheme at a scale mu = 2 GeV. As the quark mass is reduced, we find the lattice
prediction to approach the value extracted from experiments. Comment: 4 pages, 3 figures, v2: minor updates including journal ref
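In the standard convention (not spelled out in the abstract itself), the quantity being computed is the x-weighted integral of the isovector combination of unpolarized parton distributions,

\[
\langle x \rangle_{u-d} \;=\; \int_0^1 \mathrm{d}x \, x \left[ u(x) - d(x) + \bar{u}(x) - \bar{d}(x) \right],
\]

i.e., the momentum fraction carried by up quarks minus that carried by down quarks in the nucleon, which is what the lattice matrix elements are matched to after the scheme conversions described above.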
Acute pulmonary hypertension caused by tumor embolism: a report of two cases.
Acute pulmonary hypertension leading to right ventricular failure and circulatory collapse is usually caused by thromboembolic obstruction of the pulmonary circulation. However, in rare instances, other causes can be associated with a similar clinical presentation. We present and discuss the clinical histories of two patients with acute right ventricular failure due to an atypical cause of pulmonary hypertension, disseminated pulmonary tumor embolism
A lattice study of the strangeness content of the nucleon
We determine the quark contributions to the nucleon spin Delta s, Delta u and
Delta d as well as their contributions to the nucleon mass, the sigma-terms.
This is done by computing both the quark-line connected and disconnected
contributions to the respective matrix elements, using the non-perturbatively
improved Sheikholeslami-Wohlert Wilson fermionic action. We simulate n_F=2 mass
degenerate sea quarks with a pion mass of about 285 MeV and a lattice spacing a
= 0.073 fm. The renormalization of the matrix elements involves mixing between
contributions from different quark flavours. The pion-nucleon sigma-term is
extrapolated to physical quark masses exploiting the sea quark mass dependence
of the nucleon mass. We obtain the renormalized value sigma_{piN}=38(12) MeV at
the physical point and the strangeness fraction
f_{Ts}=sigma_s/m_N=0.012(14)(+10-3) at our larger than physical sea quark mass.
For the strangeness contribution to the nucleon spin we obtain in the MSbar
scheme at the renormalization scale of 2.71 GeV Delta s = -0.020(10)(2). Comment: 7 pages, 3 figures, Invited Talk at the 33rd Erice School on Nuclear
Physics, Erice, 16-24 September 2011, Italy
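For scale, the quoted strangeness fraction can be converted back into a sigma term. A quick illustrative calculation, using the physical nucleon mass of about 939 MeV for simplicity even though the paper's f_Ts is quoted at a heavier-than-physical sea quark mass:

```python
# Illustrative conversion of the quoted strangeness fraction into a sigma
# term: sigma_s = f_Ts * m_N. The physical nucleon mass is used here only
# for scale; the quoted f_Ts refers to a heavier-than-physical ensemble.
m_N = 939.0          # nucleon mass in MeV (physical value, for illustration)
f_Ts = 0.012         # central value of f_Ts = sigma_s / m_N from the abstract
err_stat = 0.014     # first (statistical) uncertainty on f_Ts

sigma_s = f_Ts * m_N
print(f"sigma_s ~ {sigma_s:.1f} +/- {err_stat * m_N:.1f} MeV (stat.)")
```

The exercise shows why the quoted f_Ts is compatible with zero: the statistical uncertainty alone exceeds the central value of the strangeness sigma term.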