On-Line Dependability Enhancement of Multiprocessor SoCs by Resource Management
This paper describes a new approach towards dependable design of homogeneous multi-processor SoCs, illustrated with an example satellite-navigation application. First, the NoC dependability is functionally verified via embedded software. Then the Xentium processor tiles are periodically verified via on-line self-testing techniques, using a new IIP Dependability Manager. Based on the Dependability Manager results, faulty tiles are electronically excluded and replaced by fault-free spare tiles via on-line resource management. This integrated approach enables fast electronic fault detection/diagnosis and repair, and hence high system availability. The dependability application runs in parallel with the actual application, resulting in a very dependable system. All parts have been verified by simulation.
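As a rough illustration of the resource-management step described above (and not the paper's actual NoC/Xentium implementation), the Python sketch below keeps a task-to-tile map, excludes a tile when a periodic self-test reports a fault, and migrates its task to a fault-free spare. All names and data structures are hypothetical.

```python
# Toy sketch of spare-tile replacement via on-line resource management
# (illustrative only; not the paper's implementation).
class ResourceManager:
    def __init__(self, active_tiles, spare_tiles):
        self.task_map = {tile: f"task_{i}" for i, tile in enumerate(active_tiles)}
        self.spares = list(spare_tiles)
        self.excluded = set()

    def report_self_test(self, tile, passed):
        """Called with the result of the periodic on-line self-test of a tile."""
        if passed or tile in self.excluded:
            return
        self.excluded.add(tile)              # electronically exclude the faulty tile
        if self.spares:
            spare = self.spares.pop(0)       # bring a fault-free spare on-line
            self.task_map[spare] = self.task_map.pop(tile)
        else:
            raise RuntimeError("no spare tiles left; availability degraded")

rm = ResourceManager(active_tiles=["T0", "T1", "T2"], spare_tiles=["S0"])
rm.report_self_test("T1", passed=False)
print(rm.task_map)   # the task formerly on T1 is now mapped to spare S0
```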
Parallel Algorithm for Solving Kepler's Equation on Graphics Processing Units: Application to Analysis of Doppler Exoplanet Searches
[Abridged] We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., chi^2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating N_sys > 1024 model planetary systems, each containing N_pl = 4 planets, and assuming N_obs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
Comment: 19 pages, to appear in New Astronomy
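To make the numerical ingredients concrete, the sketch below (plain Python/NumPy rather than the authors' CUDA code) shows a Newton iteration for Kepler's equation, E - e sin E = M, and compensated (Kahan) summation of a chi^2-like statistic in single precision; function names and tolerances are illustrative.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-8, max_iter=20):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E
    with Newton's method (illustrative sketch, not the paper's GPU kernel)."""
    E = M.copy()  # starting guess E ~ M works for moderate eccentricity
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def kahan_chi2(residuals, sigmas):
    """Accumulate chi^2 = sum((r/sigma)^2) with compensated (Kahan) summation,
    which limits single-precision round-off over many terms."""
    s = np.float32(0.0)
    c = np.float32(0.0)  # running compensation for lost low-order bits
    for r, sig in zip(residuals.astype(np.float32), sigmas.astype(np.float32)):
        term = (r / sig) ** 2
        y = term - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

# Example usage with made-up mean anomalies and residuals
M = np.linspace(0.0, 2.0 * np.pi, 256)   # e.g. N_obs = 256 observations
E = solve_kepler(M, e=0.3)
chi2 = kahan_chi2(np.random.randn(256), np.ones(256))
```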
A Bayesian palaeoenvironmental transfer function model for acidified lakes
A Bayesian approach to palaeoecological environmental reconstruction, deriving from the unimodal responses generally exhibited by organisms to an environmental gradient, is described. The approach uses Bayesian model selection to calculate a collection of probability-weighted, species-specific response curves (SRCs) for each taxon within a training set, with an explicit treatment for zero abundances. These SRCs are used to reconstruct the environmental variable from sub-fossilised assemblages. The approach enables a substantial increase in computational efficiency (several orders of magnitude) over existing Bayesian methodologies. The model is developed from the Surface Water Acidification Programme (SWAP) training set and is demonstrated to exhibit comparable predictive power to existing Weighted Averaging and Maximum Likelihood methodologies, though with improvements in bias; the additional explanatory power of the Bayesian approach lies in an explicit calculation of uncertainty for each individual reconstruction. The model is applied to reconstruct the Holocene acidification history of the Round Loch of Glenhead, including a reconstruction of recent recovery derived from sediment trap data. The Bayesian reconstructions display similar trends to conventional (Weighted Averaging Partial Least Squares) reconstructions but provide a better reconstruction of extreme pH and are more sensitive to small changes in diatom assemblages. The validity of the posteriors as an apparently meaningful representation of assemblage-specific uncertainty and the high computational efficiency of the approach open up the possibility of highly constrained multiproxy reconstructions.
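The general idea of reconstructing an environmental variable from unimodal species response curves can be sketched as follows; this is a deliberately simplified toy (Gaussian-shaped SRCs, Poisson errors, a flat prior), not the paper's probability-weighted model with explicit zero-abundance treatment, and all parameter values are made up.

```python
import numpy as np

def src(ph, optimum, tolerance, height):
    """Toy species response curve: expected abundance as a unimodal
    (Gaussian-shaped) function of pH. Per-taxon parameters would normally
    come from a training set; here they are hypothetical."""
    return height * np.exp(-0.5 * ((ph - optimum) / tolerance) ** 2)

def reconstruct_ph(counts, taxa_params, ph_grid=np.linspace(4.0, 8.0, 401)):
    """Evaluate, on a pH grid, the likelihood of the observed assemblage counts
    under each taxon's response curve and combine with a flat prior."""
    log_post = np.zeros_like(ph_grid)
    for count, (opt, tol, h) in zip(counts, taxa_params):
        mu = src(ph_grid, opt, tol, h) + 1e-9      # expected abundance at each pH
        log_post += count * np.log(mu) - mu        # Poisson log-likelihood (up to a constant)
    post = np.exp(log_post - log_post.max())
    post /= np.trapz(post, ph_grid)                # normalise to a density over pH
    mean_ph = np.trapz(ph_grid * post, ph_grid)    # point estimate; full posterior gives uncertainty
    return ph_grid, post, mean_ph

# Example: three hypothetical diatom taxa with acid, circumneutral and alkaline optima
grid, post, est = reconstruct_ph(counts=[40, 10, 2],
                                 taxa_params=[(5.0, 0.5, 30), (6.5, 0.6, 20), (7.5, 0.4, 15)])
print(round(est, 2))
```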
The Dutch European Commissioner (1958-2010)
The preceding chapters reviewed the ten Dutchmen who served from 1952 on the High Authority of the European Coal and Steel Community (ECSC) and, from 1958, on the Commission of the European Economic Community (EEC) and of the European Atomic Energy Community (Euratom). Although their political colours vary, at first glance they look much alike: by and large they are middle-aged men with long careers in The Hague politics. Does this first impression hold? And how do the characteristics of the Dutch European Commissioners compare with the profile of their colleagues from the other member states? This concluding chapter presents a portrait of 'the' Dutch member of the European Commission. The High Authority of the ECSC is left out of consideration, as is the Euratom Commission; for both bodies, too little data on the non-Dutch members was available for a thorough analysis. It can be noted, however, that Dirk Spierenburg and Hans Linthorst Homan, who both served on the High Authority, differed considerably from the other Dutch members of the European Commission with their predominantly civil-service and diplomatic backgrounds: they were never national politicians.
Fast Huffman decoding by exploiting data level parallelism
The frame rates and resolutions of digital videos keep rising, pushing the compression ratios of video coding standards to their limits and resulting in more complex, computationally demanding algorithms. Programmable solutions are gaining interest as a way to keep pace with evolving video coding standards by reducing the time-to-market of upcoming video products. However, to compete with hardwired solutions, parallelism needs to be exploited on as many levels as possible. In this paper the focus is on data level parallelism. Huffman coding is proven to be very efficient and is therefore commonly applied in many coding standards. However, due to its inherently sequential nature, parallelization of Huffman decoding is considered hard. The proposed fully flexible and programmable acceleration exploits the available data level parallelism in Huffman decoding. Our implementation achieves a decoding speed of 106 MBit/s while running on a 250 MHz processor, a speed-up of 24× compared to our sequential reference implementation.
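For context, a minimal sequential Huffman decoder is sketched below (illustrative only; it is not the paper's accelerated, data-parallel implementation). The loop makes the sequential dependency explicit: where each codeword starts depends on the lengths of all previously decoded codewords, which is exactly what makes data-level parallelization non-trivial.

```python
def huffman_decode(bits, code_table):
    """Decode a '0'/'1' string with a prefix-free code table (codeword -> symbol).
    Each symbol's starting bit depends on all earlier codeword lengths, so the
    loop cannot be split across data elements without extra work."""
    symbols, buf = [], ""
    for b in bits:
        buf += b
        if buf in code_table:          # prefix-free: the first match is a full symbol
            symbols.append(code_table[buf])
            buf = ""                   # the next symbol starts at the following bit
    return symbols

# Example with a tiny hypothetical prefix code
table = {"0": "a", "10": "b", "11": "c"}
print(huffman_decode("0101100", table))   # ['a', 'b', 'c', 'a', 'a']
```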
On the role of the cellular prion protein in the uptake and signaling of pathological aggregates in neurodegenerative diseases
Neurodegenerative disorders are associated with intra- or extra-cellular deposition of aggregates of misfolded insoluble proteins. These deposits, composed of tau, amyloid-β or α-synuclein, spread from cell to cell in a prion-like manner. Novel evidence suggests that the circulating soluble oligomeric species of these misfolded proteins could play a major role in pathology, while insoluble aggregates would represent their protective, less toxic counterparts. Recent convincing data support the proposition that the cellular prion protein, PrPC, acts as a toxicity-inducing receptor for amyloid-β oligomers. As a consequence, several studies have focused their investigations on the role played by PrPC in binding other protein aggregates, such as tau and α-synuclein, given its possible common role in mediating toxic signalling. The biological relevance of PrPC as a key ligand and potential mediator of toxicity for multiple proteinaceous aggregated species, prions or PrPSc included, could lead to relevant therapeutic implications. Here we describe the structure of PrPC and its proposed interplay with its pathological counterpart PrPSc, and then we recapitulate the most recent findings regarding the role of PrPC in the interaction with aggregated forms of other neurodegeneration-associated proteins.
Cortical thickness, surface area and volume measures in Parkinson's disease, multiple system atrophy and progressive supranuclear palsy
OBJECTIVE
Parkinson's disease (PD), Multiple System Atrophy (MSA) and Progressive Supranuclear Palsy (PSP) are neurodegenerative diseases that can be difficult to distinguish clinically. The objective of the current study was to use surface-based analysis techniques to assess cortical thickness, surface area and grey matter volume to identify unique morphological patterns of cortical atrophy in PD, MSA and PSP and to relate these patterns of change to disease duration and clinical features.
METHODS
High resolution 3D T1-weighted MRI volumes were acquired from 14 PD, 18 MSA and 14 PSP patients and 19 healthy control participants. Cortical thickness, surface area and volume analyses were carried out using the automated surface-based analysis package FreeSurfer (version 5.1.0). Measures of disease severity and duration were assessed for correlation with cortical morphometric changes in each clinical group.
RESULTS
Results show that in PSP, widespread cortical thinning and volume loss occur within the frontal lobe, particularly the superior frontal gyrus. PSP patients also displayed increased surface area in the pericalcarine region. In comparison, PD and MSA patients did not display significant changes in cortical morphology.
CONCLUSION
These results demonstrate that patients with clinically established PSP exhibit distinct patterns of cortical atrophy, particularly affecting the frontal lobe. These results could be used in the future to develop a clinically useful application of MRI to distinguish PSP patients from PD and MSA patients.
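As an illustration of the kind of group comparison such morphometric measures allow (not the exact statistical pipeline used in the study), the sketch below runs a two-sample t-test on hypothetical per-region cortical thickness values, such as might be exported from FreeSurfer's aparc statistics.

```python
import numpy as np
from scipy import stats

def compare_region(thickness_group_a, thickness_group_b):
    """Welch two-sample t-test on mean regional cortical thickness (mm)
    between two groups; purely illustrative, values below are made up."""
    t, p = stats.ttest_ind(thickness_group_a, thickness_group_b, equal_var=False)
    return t, p

# Hypothetical superior frontal gyrus thickness values for PSP patients and controls
psp_thickness = np.array([2.31, 2.22, 2.40, 2.18, 2.27])
ctl_thickness = np.array([2.61, 2.55, 2.70, 2.58, 2.66])
print(compare_region(psp_thickness, ctl_thickness))
```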