Using real options to select stable Middleware-induced software architectures
The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature. Scalability, openness, heterogeneity, and fault-tolerance are examples of such non-functional requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the 'coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, it is reported how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with some observations that could stimulate future research in the area of relating requirements to software architectures.
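The abstract's "option-based model" treats the flexibility to change an architecture as a financial option. A minimal sketch of that idea, using a generic one-period binomial valuation: the paper's actual model and all numbers below are not reproduced here; function names, values and probabilities are illustrative assumptions only.

```python
# Minimal real-options sketch (one-period binomial model). All parameter
# names and values are hypothetical, not those of the cited case study.

def option_value(v_up, v_down, investment, r, p):
    """Value the flexibility to adapt an architecture as a call option.

    v_up, v_down: value of the architecture after the change under a
                  favourable / unfavourable scenario
    investment:   cost of exercising the change (e.g. refactoring effort)
    r:            risk-free rate per period
    p:            risk-neutral probability of the favourable scenario
    """
    payoff_up = max(v_up - investment, 0.0)
    payoff_down = max(v_down - investment, 0.0)
    return (p * payoff_up + (1 - p) * payoff_down) / (1 + r)

# Compare two middleware-induced architectures under the same change
# (illustrative numbers):
corba_flex = option_value(v_up=120.0, v_down=60.0, investment=80.0, r=0.05, p=0.5)
j2ee_flex = option_value(v_up=150.0, v_down=70.0, investment=80.0, r=0.05, p=0.5)
```

Under this toy valuation, the architecture whose option value is higher is the more flexible, and hence the more stable, choice for the anticipated change.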
Algorithms for automatic parallelism, optimization, and optimal processor utilization
In this thesis we first investigate the reaching definitions optimization. This compiler analysis collects and stores information about where a variable is defined and how long that definition of the variable stays alive before it is redefined. We compare the old iterative solution to a new algorithm that uses the partialout concept. The partialout solution decreases execution time by eliminating the multiple passes required in the iterative solution. Currently, compilers that find a data dependence between two statements in a loop do not parallelize the loop. Next we investigate automatic parallelism for these loops by breaking the loop into a set of smaller loops, each of which contains no dependencies and thus can be executed in parallel. Finally, we introduce a set of algorithms for optimal processor utilization. The algorithms split the loop into a sequential series of parallel blocks, each block executing in parallel and utilizing the optimal number of processors.
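The iterative baseline the thesis compares against is the textbook fixed-point dataflow computation of reaching definitions. A minimal sketch, assuming a small CFG with illustrative GEN/KILL sets (the thesis's partialout algorithm itself is not reproduced here):

```python
# Classic iterative reaching-definitions analysis over a basic-block CFG.
# The CFG, GEN and KILL sets below are illustrative assumptions.

def reaching_definitions(blocks, preds, gen, kill):
    """Iterate IN/OUT sets to a fixed point.

    blocks: list of block ids in some order
    preds:  block id -> list of predecessor block ids
    gen, kill: block id -> set of definition ids
    """
    in_sets = {b: set() for b in blocks}
    out_sets = {b: set() for b in blocks}
    changed = True
    while changed:                      # the multiple passes the text refers to
        changed = False
        for b in blocks:
            new_in = set().union(*(out_sets[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (new_in - kill[b])
            if new_in != in_sets[b] or new_out != out_sets[b]:
                in_sets[b], out_sets[b] = new_in, new_out
                changed = True
    return in_sets, out_sets

# B0 defines x at d1; B1 redefines x at d2; B2 joins both paths.
blocks = ["B0", "B1", "B2"]
preds = {"B0": [], "B1": ["B0"], "B2": ["B0", "B1"]}
gen = {"B0": {"d1"}, "B1": {"d2"}, "B2": set()}
kill = {"B0": {"d2"}, "B1": {"d1"}, "B2": set()}
ins, outs = reaching_definitions(blocks, preds, gen, kill)
```

At the join block B2 both definitions of x reach, which is exactly the information a parallelizing compiler consults when deciding whether two loop statements are dependent.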
Radiative budget and cloud radiative effect over the Atlantic from ship-based observations
The aim of this study is to determine cloud-type resolved cloud radiative budgets and cloud radiative effects from surface measurements of broadband radiative fluxes over the Atlantic Ocean. Furthermore, based on simultaneous observations of the state of the cloudy atmosphere, a radiative closure study has been performed by means of the ECHAM5 single column model in order to identify the model's ability to realistically reproduce the effects of clouds on the climate system.
An extensive database of radiative and atmospheric measurements has been established along five meridional cruises of the German research icebreaker Polarstern. Besides pyranometer and pyrgeometer for downward broadband solar and thermal radiative fluxes, a sky imager and a microwave radiometer have been utilized to determine cloud fraction and cloud type on the one hand and temperature and humidity profiles as well as liquid water path for warm non-precipitating clouds on the other hand.
Averaged over all cruise tracks, we obtain a total net (solar + thermal) radiative flux of 144 W m−2 that is dominated by the solar component. In general, the solar contribution is large for cirrus clouds and small for stratus clouds. No significant meridional dependencies were found for the surface radiation budgets and cloud effects. The strongest surface longwave cloud effects were observed in the presence of low-level clouds. Optically thick clouds induce strong negative solar radiative effects at high solar altitudes. The mean surface net cloud radiative effect is −33 W m−2.
For the purpose of quickly estimating the mean surface longwave, shortwave and net cloud effects in moderate, subtropical and tropical climate regimes, a new parameterisation was created, considering the total cloud amount and the solar zenith angle.
The ECHAM5 single column model provides a surface net cloud effect that is 17 W m−2 more cooling than the radiation observations. This overestimation of solar cooling is mostly caused by the shortwave impact of convective clouds, for which it reaches up to 114 W m−2. Mean cloud radiative effects of cirrus and stratus clouds were simulated close to the observations.
Coupled Cluster Channels in the Homogeneous Electron Gas
We discuss diagrammatic modifications to the coupled cluster doubles (CCD) equations, wherein different groups of terms out of rings, ladders, crossed-rings and mosaics can be removed to form approximations to the coupled cluster method, of interest due to their similarity with various types of random phase approximations. The finite uniform electron gas is benchmarked for 14- and 54-electron systems at the complete basis set limit over a wide density range, and the performance of different flavours of CCD is determined. These results confirm that rings generally overcorrelate and ladders generally undercorrelate; mosaics-only CCD yields a result surprisingly close to CCD. We use a recently developed numerical analysis [J. J. Shepherd and A. Grüneis, Phys. Rev. Lett. 110, 226401 (2013)] to study the behaviours of these methods in the thermodynamic limit. We determine that the mosaics, on forming the Brueckner Hamiltonian, open a gap in the effective one-particle eigenvalues at the Fermi energy. Numerical evidence is presented which shows that methods based on this renormalisation have convergent energies in the thermodynamic limit, including mosaic-only CCD, which is just a renormalised MP2. All other methods including only a single channel, namely ladder-only CCD, ring-only CCD and crossed-ring-only CCD, appear to yield divergent energies; incorporation of mosaic terms prevents this from happening.
Comment: 9 pages, 4 figures, 1 table. Comments welcome: [email protected]
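The channel decomposition can be written schematically: each contraction of the doubles amplitudes in the CCD equations is assigned to one channel, and a channel-restricted approximation retains only a subset. This is a generic schematic under assumed notation (v for antisymmetrised two-electron integrals, Δ for orbital-energy denominators), not the paper's exact equations.

```latex
% Schematic CCD amplitude equation, split by contraction channel:
0 = v^{ab}_{ij}
  + \Delta^{ab}_{ij}\, t^{ab}_{ij}
  + R^{ab}_{ij}[t]   % ring (particle-hole) contractions
  + X^{ab}_{ij}[t]   % crossed-ring (exchange) contractions
  + L^{ab}_{ij}[t]   % ladder (particle-particle / hole-hole) contractions
  + M^{ab}_{ij}[t]   % mosaic contractions (renormalise the one-particle part)
```

Ring-only CCD, for example, keeps only the R term, while mosaic-only CCD keeps only M and so amounts to a renormalised MP2, as stated above.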
Inference of a mesoscopic population model from population spike trains
To understand how rich dynamics emerge in neural populations, we require models exhibiting a wide range of activity patterns while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscopic scale, using a mechanistic but low-dimensional and hence statistically tractable model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous 'pools' of neurons, and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. We illustrate this approach using a model of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived, and show that both single-neuron and connectivity parameters can be recovered from simulated data. In particular, our inference method extracts posterior correlations between model parameters, which define parameter subsets able to reproduce the data. We compute the Bayesian posterior for combinations of parameters using MCMC sampling and investigate how the approximations inherent to a mesoscopic population model impact the accuracy of the inferred single-neuron parameters.
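The MCMC step of such an inference pipeline can be illustrated on a drastically simplified stand-in: fitting the single rate parameter of one homogeneous Poisson 'pool' to simulated population spike counts with a Metropolis sampler. The paper's mesoscopic generalized integrate-and-fire model is far richer; this sketch only shows the Bayesian machinery on toy data.

```python
import math
import random

# Toy Metropolis MCMC: infer a single pool rate from simulated spike
# counts. A stand-in for the paper's model, for illustration only.

random.seed(0)
true_rate = 5.0   # assumed ground-truth spikes per time bin
counts = [max(0, round(random.gauss(true_rate, math.sqrt(true_rate))))
          for _ in range(200)]

def log_likelihood(rate, counts):
    if rate <= 0:
        return float("-inf")
    # Poisson log-likelihood up to an additive constant: sum(k*log r - r)
    return sum(k * math.log(rate) - rate for k in counts)

def metropolis(counts, n_samples=2000, step=0.2):
    rate = 1.0                             # deliberately poor starting point
    ll = log_likelihood(rate, counts)
    samples = []
    for _ in range(n_samples):
        proposal = rate + random.gauss(0.0, step)
        ll_new = log_likelihood(proposal, counts)
        # Accept/reject with a flat prior on rate > 0
        if math.log(random.random()) < ll_new - ll:
            rate, ll = proposal, ll_new
        samples.append(rate)
    return samples

samples = metropolis(counts)
posterior_mean = sum(samples[500:]) / len(samples[500:])   # discard burn-in
```

Retaining the whole chain, rather than a point estimate, is what exposes the posterior correlations between parameters that the abstract highlights.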
Teaching deep neural networks to localize sources in super-resolution microscopy by combining simulation-based learning and unsupervised learning
Single-molecule localization microscopy constructs super-resolution images by the sequential imaging and computational localization of sparsely activated fluorophores. Accurate and efficient fluorophore localization algorithms are key to the success of this computational microscopy method. We present a novel localization algorithm based on deep learning which significantly improves upon the state of the art. Our contributions are a novel network architecture for simultaneous detection and localization, and a new training algorithm which enables this deep network to solve the Bayesian inverse problem of detecting and localizing single molecules. Our network architecture uses temporal context from multiple sequentially imaged frames to detect and localize molecules. Our training algorithm combines simulation-based supervised learning with autoencoder-based unsupervised learning to make it more robust against mismatch in the generative model. We demonstrate the performance of our method on datasets imaged using a variety of point spread functions and fluorophore densities. While existing localization algorithms can achieve optimal localization accuracy in data with low fluorophore density, they are confounded by high densities. Our method significantly outperforms the state of the art at high densities and thus enables faster imaging than previous approaches. Our work also more generally shows how to train deep networks to solve challenging Bayesian inverse problems in biology and physics.
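The simulation half of simulation-based supervised learning amounts to a forward generative model: render sparse emitters through a point spread function and add shot noise, yielding (image, ground-truth position) training pairs. A minimal sketch with a Gaussian PSF; the PSF width, photon counts and frame size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Generate one (frame, ground-truth) training pair: sparse emitters
# rendered through a Gaussian PSF with Poisson shot noise.
# All parameters below are illustrative assumptions.

rng = np.random.default_rng(0)

def render_frame(positions, size=32, sigma=1.3, photons=1000.0, bg=10.0):
    """positions: array of (x, y) emitter coordinates in pixels."""
    yy, xx = np.mgrid[0:size, 0:size]
    frame = np.full((size, size), bg, dtype=float)   # uniform background
    for x, y in positions:
        psf = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
        frame += photons * psf / psf.sum()           # normalise to `photons`
    return rng.poisson(frame).astype(float)          # shot noise

# One training pair: the noisy frame and its ground-truth localizations.
truth = np.array([[10.2, 15.7], [21.4, 8.9]])
frame = render_frame(truth)
```

A network trained on such pairs learns the inverse mapping frame → positions; the autoencoder-based unsupervised term then compensates for any mismatch between this generative model and real microscope data.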