Spectral gene set enrichment (SGSE)
Motivation: Gene set testing is typically performed in a supervised context
to quantify the association between groups of genes and a clinical phenotype.
In many cases, however, a gene set-based interpretation of genomic data is
desired in the absence of a phenotype variable. Although methods exist for
unsupervised gene set testing, they predominantly compute enrichment relative
to clusters of the genomic variables with performance strongly dependent on the
clustering algorithm and number of clusters. Results: We propose a novel
method, spectral gene set enrichment (SGSE), for unsupervised competitive
testing of the association between gene sets and empirical data sources. SGSE
first computes the statistical association between gene sets and principal
components (PCs) using our principal component gene set enrichment (PCGSE)
method. The overall statistical association between each gene set and the
spectral structure of the data is then computed by combining the PC-level
p-values using the weighted Z-method with weights set to the PC variance scaled
by Tracy-Widom test p-values. Using simulated data, we show that the SGSE
algorithm can accurately recover spectral features from noisy data. To
illustrate the utility of our method on real data, we demonstrate the superior
performance of the SGSE method relative to standard cluster-based techniques
for testing the association between MSigDB gene sets and the variance structure
of microarray gene expression data. Availability:
http://cran.r-project.org/web/packages/PCGSE/index.html Contact:
[email protected] or [email protected]
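The p-value combination step described in the abstract can be sketched in a few lines. The following is an illustrative implementation of the weighted Z-method (Stouffer's method), not the PCGSE package's actual code; in particular, the choice to weight each PC by its variance times one minus the upper-tail Tracy-Widom p-value (so that statistically significant PCs receive more weight) is an assumption made for this sketch.

```python
import numpy as np
from scipy.stats import norm

def combine_pc_pvalues(pc_pvalues, pc_variances, tw_pvalues):
    """Combine per-PC gene set enrichment p-values via the weighted Z-method.

    Illustrative sketch: weights are taken as PC variance scaled by
    (1 - upper-tail Tracy-Widom p-value), an assumption based on the
    abstract's description, not the package's exact weighting.
    """
    p = np.asarray(pc_pvalues, dtype=float)
    weights = np.asarray(pc_variances, dtype=float) * \
        (1.0 - np.asarray(tw_pvalues, dtype=float))
    z = norm.isf(p)  # one-sided p-values -> Z-scores
    z_combined = np.dot(weights, z) / np.sqrt(np.sum(weights ** 2))
    return norm.sf(z_combined)  # combined one-sided p-value
```

With equal weights and all per-PC p-values at 0.5, the combined p-value is 0.5, as expected for the null; a single strongly significant, heavily weighted PC pulls the combined p-value toward significance.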
On the acceleration of wavefront applications using distributed many-core architectures
In this paper we investigate the use of distributed graphics processing unit (GPU)-based architectures to accelerate pipelined wavefront applications, a ubiquitous class of parallel algorithms used in the solution of a number of scientific and engineering applications. Specifically, we employ a recently developed port of the LU solver (from the NAS Parallel Benchmark suite) to investigate the performance of these algorithms on high-performance computing solutions from NVIDIA (Tesla C1060 and C2050) as well as on traditional clusters (AMD/InfiniBand and IBM BlueGene/P). Benchmark results are presented for problem classes A to C, and a recently developed performance model is used to provide projections for problem classes D and E, the latter of which represents a billion-cell problem. Our results demonstrate that while the theoretical performance of GPU solutions far exceeds that of many traditional technologies, the sustained application performance is currently comparable for scientific wavefront applications. Finally, a breakdown of the GPU solution is conducted, exposing PCIe overheads and decomposition constraints. A new k-blocking strategy is proposed to improve the future performance of this class of algorithms on GPU-based architectures.
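The dependency structure that makes these applications "wavefront" can be shown with a minimal 2D sketch: each cell depends on its north and west neighbours, so all cells on the same anti-diagonal are independent and can be computed in parallel (e.g., one GPU kernel launch per diagonal). The grid and update rule below are illustrative assumptions, not the LU solver's actual computation.

```python
import numpy as np

def wavefront_sweep(grid):
    """Minimal 2D wavefront sweep: cell (i, j) depends on (i-1, j) and (i, j-1).

    Iterates anti-diagonal by anti-diagonal (d = i + j) to make the
    dependency/parallelism structure explicit; cells on a diagonal are
    mutually independent. The update rule is purely illustrative.
    """
    n, m = grid.shape
    out = grid.astype(float)  # boundary row/column values are kept fixed
    for d in range(1, n + m - 1):  # sweep anti-diagonals in order
        for i in range(max(0, d - m + 1), min(n, d + 1)):
            j = d - i
            if i == 0 or j == 0:
                continue
            # combine the already-computed north and west neighbours
            out[i, j] = grid[i, j] + 0.5 * (out[i - 1, j] + out[i, j - 1])
    return out
```

In a GPU implementation, the inner loop over a diagonal maps to a parallel kernel, while the outer loop over diagonals is the serial pipeline the paper's k-blocking strategy targets.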
Business Cycle Models and Stylized Facts in Germany
The aim of this paper is to test to what extent a benchmark real and monetary business cycle model can account for some basic stylized facts, with a particular emphasis on monetary variables. We calibrate the model on German data using the method proposed by Cooley and Prescott (1995). First, we analyze the dynamic properties of the models and their impulse response functions, and propose a variance decomposition (for the monetary business cycle models). We find that even though money is not neutral in the short run, the effect of a monetary shock is only marginal compared to the productivity shock, i.e. the share of the variance of the monetary shock in the total variance of the forecast error is small and decreases rapidly. We simulate the models and compare the properties of the model economies with those of the observed data. The evidence suggests that the benchmark RBC model can account for some stylized facts in Germany. The general pattern of the relative volatilities of investment, output and consumption is replicated by the model. Nevertheless, the overall volatility is too high and the level of the relative volatilities is not well reproduced. The introduction of exogenous monetary shocks and a cash-in-advance constraint increases the relative volatilities and the cross-correlation of consumption. In general, the second-order moments of money (M1) and inflation are not well reproduced.
Keywords: business cycles; money; variance decomposition