CLUDE: An Efficient Algorithm for LU Decomposition Over a Sequence of Evolving Graphs
Session: Matrix Factorization, Clustering and Probabilistic Data
In many applications, entities and their relationships are
represented by graphs. Examples include the WWW (web
pages and hyperlinks) and bibliographic networks (authors
and co-authorship). A graph can be conveniently modeled
by a matrix from which various quantitative measures are
derived. Some example measures include PageRank and
SALSA (which measure nodes' importance), and Personalized
PageRank and Random Walk with Restart (which measure
proximities between nodes). To compute these measures,
linear systems of the form Ax = b, where A is a matrix
that captures a graph's structure, need to be solved. To
facilitate solving the linear system, the matrix A is often decomposed
into two triangular matrices (L and U). In a dynamic
world, the graph that models it changes with time, and so
does the matrix A that represents the graph. We consider
a sequence of evolving graphs and its associated sequence of
evolving matrices. We study how LU-decomposition should
be done over the sequence so that (1) the decomposition
is efficient and (2) the resulting LU matrices best preserve
the sparsity of the matrices A (i.e., the number of extra
non-zero entries introduced in L and U is minimized). We
propose a cluster-based algorithm CLUDE for solving the
problem. Through an experimental study, we show that
CLUDE is about an order of magnitude faster than the
traditional incremental update algorithm. The number of
extra non-zero entries introduced by CLUDE is also about
an order of magnitude fewer than that of the traditional
algorithm. CLUDE is thus an efficient algorithm for LU decomposition
that produces high-quality LU matrices over an
evolving matrix sequence.
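For orientation, the pipeline the abstract describes (decompose A into triangular factors, then solve Ax = b by two triangular solves, while watching fill-in) can be sketched as follows. This is a minimal textbook Doolittle LU without pivoting on a tiny hand-made graph matrix, not the CLUDE algorithm itself:

```python
# Minimal sketch (NOT the CLUDE algorithm): Doolittle LU decomposition
# without pivoting, then forward/backward substitution to solve Ax = b.

def lu_decompose(A):
    """Return (L, U) with A = L U, L unit lower triangular."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                             # forward solve: L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):                   # backward solve: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

def nnz(M, tol=1e-12):
    """Count non-zero entries -- the fill-in measure the paper minimizes."""
    return sum(1 for row in M for v in row if abs(v) > tol)

# Laplacian-like matrix of a tiny 3-node path graph (diagonally dominant,
# so pivot-free elimination is safe here).
A = [[3.0, -1.0, 0.0],
     [-1.0, 3.0, -1.0],
     [0.0, -1.0, 3.0]]
b = [2.0, 1.0, 2.0]
L, U = lu_decompose(A)
x = solve(L, U, b)        # -> [1.0, 1.0, 1.0]
fill = nnz(L) + nnz(U) - nnz(A)
```

On this tridiagonal A the factors introduce no extra non-zeros; on general graph matrices they can, which is exactly the sparsity-preservation problem the paper addresses over a whole sequence of evolving matrices.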
Controlling light-with-light without nonlinearity
According to Huygens' superposition principle, light beams traveling in a
linear medium will pass through one another without mutual disturbance. Indeed,
it is widely held that controlling light signals with light requires intense
laser fields to facilitate beam interactions in nonlinear media, where the
superposition principle can be broken. We demonstrate here that two coherent
beams of light of arbitrarily low intensity can interact on a metamaterial
layer of nanoscale thickness in such a way that one beam modulates the
intensity of the other. We show that the interference of beams can eliminate
the plasmonic Joule losses of light energy in the metamaterial or, in contrast,
can lead to almost total absorption of light. Applications of this phenomenon
may lie in ultrafast all-optical pulse-recovery devices, coherence filters and
THz-bandwidth light-by-light modulators.
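The underlying mechanism can be illustrated with a deliberately simplified model: a deeply subwavelength absorbing film dissipates power in proportion to the local field intensity |E1 + E2|^2, so the relative phase of a counter-propagating control beam switches the film between near-zero and fourfold absorption. This is a hedged toy calculation, not the paper's full metamaterial electrodynamics:

```python
import cmath

# Toy model (NOT the paper's full treatment): a thin absorber samples the
# local field of two equal-amplitude counter-propagating coherent beams.
# Dissipated power is taken proportional to |E1 + E2|^2 at the film.

def local_intensity(e1, e2, phase):
    """Field intensity at the film when beam 2 carries an extra phase (rad)."""
    return abs(e1 + e2 * cmath.exp(1j * phase)) ** 2

single_beam = local_intensity(1.0, 0.0, 0.0)        # one beam alone: 1.0
constructive = local_intensity(1.0, 1.0, 0.0)       # antinode at film: 4.0
destructive = local_intensity(1.0, 1.0, cmath.pi)   # node at film: ~0.0
```

The phase-0 case places a field antinode on the film (absorption enhanced beyond the sum of the two beams), while the phase-pi case places a node there (Joule losses eliminated) -- and note that nothing here depends on the beam intensities, consistent with the abstract's "arbitrarily low intensity" claim.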
How To Perform Meaningful Estimates of Genetic Effects
Although the genotype-phenotype map plays a central role both in Quantitative and Evolutionary Genetics, the formalization of a completely general and satisfactory model of genetic effects, particularly accounting for epistasis, remains a theoretical challenge. Here, we use a two-locus genetic system in simulated populations with epistasis to show the convenience of using a recently developed model, NOIA, to perform estimates of genetic effects and the decomposition of the genetic variance that are orthogonal even under deviations from the Hardy-Weinberg proportions. We develop the theory for how to use this model in interval mapping of quantitative trait loci using Haley-Knott regressions, and we analyze a real data set to illustrate the advantage of using this approach in practice. In this example, we show that departures from the Hardy-Weinberg proportions that are expected by sampling alone substantially alter the orthogonal estimates of genetic effects when other statistical models, like F2 or G2A, are used instead of NOIA. Finally, for the first time from real data, we provide estimates of functional genetic effects as sets of effects of natural allele substitutions in a particular genotype, which enriches the debate on the interpretation of genetic effects as implemented both in functional and in statistical models. We also discuss further implementations leading to a completely general genotype-phenotype map.
Fermion Condensates of massless QED_2 at Finite Density in non-trivial Topological Sectors
Vacuum expectation values of products of local bilinears are
computed in massless QED_2 at finite density. It is shown that chiral
condensates exhibit an oscillatory inhomogeneous behaviour depending on the
chemical potential. The use of a path-integral approach clarifies the
connection of this phenomenon with the topological structure of the theory.
Comment: 16 pages, no figures, to be published in Phys. Rev.
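For orientation only: in Schwinger-model-type analyses at chemical potential $\mu$, the oscillatory inhomogeneous condensate referred to above takes a schematic form like the following (a textbook-style illustration of the phenomenon, not an equation quoted from the paper):

```latex
\langle \bar{\psi}\psi(x) \rangle_{\mu}
  \;\propto\; \cos\!\left(2\mu x\right)\,
  \langle \bar{\psi}\psi \rangle_{\mu=0}
```

The spatial oscillation wavelength is thus set directly by the chemical potential, which is the sense in which the behaviour "depends on the chemical potential".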
Magnetoelectric Coupling in epsilon-Fe2O3
Nanoparticles of the ferrimagnetic epsilon-Fe2O3 oxide have been synthesized
by the sol-gel method. Here, we report measurements of the dielectric
permittivity as a function of temperature, frequency and magnetic field. It is
found that, coinciding with the transition from collinear ferrimagnetic
ordering to an incommensurate magnetic state occurring at about 100 K, there is
an abrupt change (about 30 %) of permittivity suggesting the existence of a
magnetoelectric coupling in this material. Indeed, magnetic field dependent
measurements at 100 K have revealed an increase of the permittivity by about
0.3 % in a 6 T field. Prospective advantages of epsilon-Fe2O3 as a multiferroic material
are discussed.
Comment: 17 pages, 4 figures, submitted to Nanotechnology
Relationship between site of oesophageal cancer and areca chewing and smoking in Taiwan
Among 309 male patients, those who had heavily consumed betel and tobacco were more likely than nonchewers (OR = 2.91; 95% CI = 1.36-6.25) and nonsmokers (OR = 2.49; 95% CI = 1.02-6.08) to develop cancer in the upper and middle third of the oesophagus, respectively; the effects of alcohol did not dominate in any third.
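For readers unfamiliar with the quantities quoted above, an odds ratio and its Wald 95% confidence interval come from a 2x2 exposure-by-outcome table. The sketch below uses made-up counts chosen only to show the mechanics; these are not the study's data:

```python
import math

# Hedged illustration with HYPOTHETICAL counts (not the study's data):
# odds ratio and Wald 95% CI from a 2x2 table.

def odds_ratio_ci(a, b, c, d):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 20, 25, 45)   # OR = (30*45)/(20*25) = 2.7
```

A CI whose lower bound exceeds 1 (as in both intervals quoted in the abstract) indicates a statistically significant positive association at the 5% level.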
Multiflavor Correlation Functions in non-Abelian Gauge Theories at Finite Density in two dimensions
We compute vacuum expectation values of products of fermion bilinears for
two-dimensional Quantum Chromodynamics at finite flavored fermion densities. We
introduce the chemical potential as an external charge distribution within the
path-integral approach and carefully analyse the contribution of different
topological sectors to fermion correlators. We show the existence of chiral
condensates exhibiting an oscillatory inhomogeneous behavior as a function of a
chemical potential matrix. This result is exact and goes in the same direction
as the behavior found in QCD_4 within the large-N approximation.
Comment: 28 pages, LaTeX (3 pages added and other minor changes); to appear in Phys. Rev.
Variable selection for large p small n regression models with incomplete data: Mapping QTL with epistases
Background: Identifying quantitative trait loci (QTL) for both additive and epistatic effects raises the statistical issue of selecting variables from a large number of candidates using a small number of observations. Missing trait and/or marker values prevent one from directly applying the classical model selection criteria such as Akaike's information criterion (AIC) and Bayesian information criterion (BIC).
Results: We propose a two-step Bayesian variable selection method which deals with the sparse parameter space and the small sample size issues. The regression coefficient priors are flexible enough to incorporate the characteristic of "large p small n" data. Specifically, sparseness and possible asymmetry of the significant coefficients are dealt with by developing a Gibbs sampling algorithm to stochastically search through low-dimensional subspaces for significant variables. The superior performance of the approach is demonstrated via simulation study. We also applied it to real QTL mapping datasets.
Conclusion: The two-step procedure coupled with Bayesian classification offers flexibility in modeling "large p small n" data, especially for the sparse and asymmetric parameter space. This approach can be extended to other settings characterized by high dimension and low sample size.
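The core "large p small n" difficulty, and the idea of stochastically searching low-dimensional subspaces rather than fitting all candidates at once, can be sketched with a toy BIC-guided indicator search on simulated data. This is a deliberately simplified hill-climb, not the authors' two-step Bayesian Gibbs sampler:

```python
import math
import random

# Toy sketch (NOT the authors' two-step Gibbs method): with p >> n a full
# regression is unidentifiable, so we stochastically flip inclusion
# indicators over low-dimensional subsets, keeping a flip only if it
# improves BIC. Simulated data; seeds and sizes are arbitrary choices.

random.seed(1)
n, p = 30, 60                                  # "large p small n"
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
true_beta = {3: 2.5, 17: -3.0}                 # only two real effects
y = [sum(true_beta[j] * X[i][j] for j in true_beta) + random.gauss(0, 0.1)
     for i in range(n)]

def rss(cols):
    """Residual sum of squares of OLS (with intercept) on the given columns."""
    Z = [[1.0] + [X[i][j] for j in cols] for i in range(n)]
    k = len(Z[0])
    A = [[sum(Z[i][r] * Z[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]                    # normal equations Z'Z b = Z'y
    v = [sum(Z[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):                       # elimination w/ partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (v[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(Z[i][c] * beta[c] for c in range(k))) ** 2
               for i in range(n))

def bic(cols):
    return n * math.log(rss(cols) / n) + (len(cols) + 1) * math.log(n)

model, best = set(), bic(set())
empty_bic = best
for _ in range(400):
    j = random.randrange(p)                    # propose flipping one indicator
    cand = model ^ {j}
    if len(cand) <= 5:                         # stay in low-dimensional subspaces
        score = bic(cand)
        if score < best:
            model, best = cand, score
```

The sorted-columns restriction (at most five included variables) is what keeps each least-squares fit well-posed despite p being twice n; the paper's Gibbs sampler plays the analogous role with priors instead of a hard cap, and additionally handles the missing trait/marker values that this sketch ignores.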