
    Using real options to select stable Middleware-induced software architectures

    The requirements that force decisions towards building distributed system architectures are usually of a non-functional nature; scalability, openness, heterogeneity, and fault tolerance are examples of such requirements. The current trend is to build distributed systems with middleware, which provides the application developer with primitives for managing the complexity of distribution and system resources, and for realising many of the non-functional requirements. As non-functional requirements evolve, the `coupling' between the middleware and the architecture becomes the focal point for understanding the stability of the distributed software system architecture in the face of change. It is hypothesised that the choice of a stable distributed software architecture depends on the choice of the underlying middleware and its flexibility in responding to future changes in non-functional requirements. Drawing on a case study that adequately represents a medium-size component-based distributed architecture, it is reported how a likely future change in scalability could impact the architectural structure of two versions, each induced with a distinct middleware: one with CORBA and the other with J2EE. An option-based model is derived to value the flexibility of the induced architectures and to guide the selection. The hypothesis is verified to be true for the given change. The paper concludes with observations that could stimulate future research on relating requirements to software architectures.
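    As a rough illustration of the kind of option-based valuation the abstract describes, the sketch below prices the flexibility to adapt an architecture as a one-period binomial call option. The function name, parameters, and all figures are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: valuing architectural flexibility as a real option
# with a one-period binomial model (assumed form, not the paper's model).

def real_option_value(value_now, up, down, adaptation_cost, risk_free_rate):
    """Value the option to adapt the architecture after one period.

    value_now: present value of the architecture's expected payoffs.
    up / down: multiplicative factors for the favourable / unfavourable
        evolution of the non-functional requirement (e.g. scalability demand).
    adaptation_cost: the 'exercise price' of refactoring for the change.
    """
    # Risk-neutral probability of the favourable outcome.
    p = ((1.0 + risk_free_rate) - down) / (up - down)
    payoff_up = max(value_now * up - adaptation_cost, 0.0)
    payoff_down = max(value_now * down - adaptation_cost, 0.0)
    return (p * payoff_up + (1.0 - p) * payoff_down) / (1.0 + risk_free_rate)

# Illustrative comparison of two middleware-induced architectures; all
# numbers are placeholders. The higher option value indicates the more
# flexible architecture under the anticipated change.
corba_option = real_option_value(100.0, 1.4, 0.8, 90.0, 0.05)
j2ee_option = real_option_value(100.0, 1.4, 0.8, 70.0, 0.05)
```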

    Algorithms for automatic parallelism, optimization, and optimal processor utilization

    In this thesis we first investigate the reaching-definitions optimization. This compiler analysis collects and stores information about where a variable is defined and how long that definition stays alive before the variable is redefined. We compare the classic iterative solution to a new algorithm that uses the partialout concept; the partialout solution decreases execution time by eliminating the multiple passes required by the iterative solution. Currently, compilers that find a data dependence between two statements in a loop do not parallelize the loop. We next investigate automatic parallelism for such loops by breaking the loop into a set of smaller loops, each of which contains no dependencies and can therefore be executed in parallel. Finally, we introduce a set of algorithms for optimal processor utilization: they split the loop into a sequential series of parallel blocks, each block executing in parallel on the largest number of processors it can usefully employ.
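    For context, here is a minimal sketch of the classic iterative reaching-definitions analysis that such work starts from; the single-pass partialout variant is not reproduced here, and the CFG representation below is an assumption.

```python
# Classic iterative reaching-definitions analysis over a control-flow
# graph. Representation (block ids, preds, gen/kill sets) is illustrative.

def reaching_definitions(blocks, preds, gen, kill):
    """blocks: ordered block ids; preds[b]: predecessors of b;
    gen[b] / kill[b]: definitions generated / killed in block b."""
    in_sets = {b: set() for b in blocks}
    out_sets = {b: set(gen[b]) for b in blocks}
    changed = True
    # Iterate to a fixed point; these repeated passes over the CFG are
    # what the partialout approach is designed to avoid.
    while changed:
        changed = False
        for b in blocks:
            in_b = (set().union(*(out_sets[p] for p in preds[b]))
                    if preds[b] else set())
            out_b = gen[b] | (in_b - kill[b])
            if out_b != out_sets[b] or in_b != in_sets[b]:
                in_sets[b], out_sets[b] = in_b, out_b
                changed = True
    return in_sets, out_sets
```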

    Music practices in programs for the academically talented and gifted music students in selected Illinois secondary schools--District VI


    Coupled Cluster Channels in the Homogeneous Electron Gas

    We discuss diagrammatic modifications to the coupled cluster doubles (CCD) equations, wherein different groups of terms among rings, ladders, crossed-rings and mosaics can be removed to form approximations to the coupled cluster method, of interest due to their similarity with various types of random phase approximations. The finite uniform electron gas is benchmarked for 14- and 54-electron systems at the complete-basis-set limit over a wide density range, and the performance of the different flavours of CCD is determined. These results confirm that rings generally overcorrelate and ladders generally undercorrelate; mosaic-only CCD yields a result surprisingly close to full CCD. We use a recently developed numerical analysis [J. J. Shepherd and A. Grüneis, Phys. Rev. Lett. 110, 226401 (2013)] to study the behaviour of these methods in the thermodynamic limit. We determine that the mosaics, on forming the Brueckner Hamiltonian, open a gap in the effective one-particle eigenvalues at the Fermi energy. Numerical evidence is presented which shows that methods based on this renormalisation have convergent energies in the thermodynamic limit, including mosaic-only CCD, which is just a renormalised MP2. All other methods including only a single channel, namely ladder-only CCD, ring-only CCD and crossed-ring-only CCD, appear to yield divergent energies; incorporation of mosaic terms prevents this from happening.
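    The statement that mosaic-only CCD is a renormalised MP2 can be made concrete; the formulas below are a standard-notation sketch (antisymmetrised integrals, orbital energies), not an excerpt from the paper.

```latex
% MP2 correlation energy with bare orbital energies \epsilon_p:
E_{\mathrm{MP2}} = \frac{1}{4}\sum_{ijab}
  \frac{|\langle ij \| ab \rangle|^2}{\epsilon_i + \epsilon_j - \epsilon_a - \epsilon_b}

% Mosaic terms dress the one-particle spectrum into Brueckner effective
% eigenvalues \tilde{\epsilon}_p, so mosaic-only CCD takes the same form:
E_{\mathrm{mCCD}} = \frac{1}{4}\sum_{ijab}
  \frac{|\langle ij \| ab \rangle|^2}
       {\tilde{\epsilon}_i + \tilde{\epsilon}_j - \tilde{\epsilon}_a - \tilde{\epsilon}_b}
```

    Because the dressed spectrum is gapped at the Fermi energy, the denominator stays bounded away from zero, consistent with the convergent thermodynamic-limit energies reported above.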

    Inference of a mesoscopic population model from population spike trains

    To understand how rich dynamics emerge in neural populations, we require models exhibiting a wide range of activity patterns while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscopic scale, using a mechanistic but low-dimensional and hence statistically tractable model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous `pools' of neurons and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood or to perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using a model of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived, and show that both single-neuron and connectivity parameters can be recovered from simulated data. In particular, our inference method extracts posterior correlations between model parameters, which define parameter subsets able to reproduce the data. We compute the Bayesian posterior for combinations of parameters using MCMC sampling and investigate how the approximations inherent to a mesoscopic population model affect the accuracy of the inferred single-neuron parameters.
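    As a minimal sketch of the MCMC step described above, the random-walk Metropolis sampler below assumes a `log_likelihood(theta, activity)` function for the mesoscopic model (not shown here) and a Gaussian proposal; both names and the step size are illustrative.

```python
import numpy as np

def metropolis_sampler(log_likelihood, activity, theta0,
                       n_steps=10_000, step=0.05):
    """Random-walk Metropolis over model parameters theta.

    log_likelihood(theta, activity): assumed to return the log-likelihood
    of the recorded population activity under the mesoscopic model.
    """
    rng = np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    log_p = log_likelihood(theta, activity)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        log_p_new = log_likelihood(proposal, activity)
        # Accept with probability min(1, p_new / p_old).
        if np.log(rng.uniform()) < log_p_new - log_p:
            theta, log_p = proposal, log_p_new
        samples.append(theta.copy())
    # Correlations between columns of the result expose the parameter
    # trade-offs (posterior correlations) mentioned in the abstract.
    return np.array(samples)
```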

    How biased are maximum entropy models?


    Teaching deep neural networks to localize sources in super-resolution microscopy by combining simulation-based learning and unsupervised learning

    Single-molecule localization microscopy constructs super-resolution images by the sequential imaging and computational localization of sparsely activated fluorophores. Accurate and efficient fluorophore localization algorithms are key to the success of this computational microscopy method. We present a novel localization algorithm based on deep learning which significantly improves upon the state of the art. Our contributions are a novel network architecture for simultaneous detection and localization, and a new training algorithm which enables this deep network to solve the Bayesian inverse problem of detecting and localizing single molecules. Our network architecture uses temporal context from multiple sequentially imaged frames to detect and localize molecules. Our training algorithm combines simulation-based supervised learning with autoencoder-based unsupervised learning to make it more robust against mismatch in the generative model. We demonstrate the performance of our method on datasets imaged using a variety of point spread functions and fluorophore densities. While existing localization algorithms can achieve optimal localization accuracy in data with low fluorophore density, they are confounded by high densities. Our method significantly outperforms the state of the art at high densities and thus enables faster imaging than previous approaches. Our work also more generally shows how to train deep networks to solve challenging Bayesian inverse problems in biology and physics.
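    Below is a schematic of the combined supervised/unsupervised objective described above, assuming a localization network `localizer` and a differentiable image-formation model `renderer`; both names, and the specific loss form, are assumptions rather than the paper's exact implementation.

```python
import torch.nn.functional as F

def combined_loss(localizer, renderer, sim_frames, sim_targets,
                  real_frames, weight=1.0):
    """Sketch of a joint simulation-based + unsupervised training objective.

    Supervised term: predictions on simulated frames are compared with
    their known ground-truth localizations. Autoencoder term: predictions
    on real frames are re-rendered through a differentiable image-formation
    model and compared with the input frames, so training can correct for
    mismatch between the simulator and the microscope.
    """
    supervised = F.mse_loss(localizer(sim_frames), sim_targets)
    pred_real = localizer(real_frames)
    reconstruction = F.mse_loss(renderer(pred_real), real_frames)
    return supervised + weight * reconstruction
```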