Centralized vs. decentralized computing: organizational considerations and management options
The long-standing debate over whether to centralize or decentralize computing is examined in terms of the fundamental organizational and economic factors at stake. The traditional debate is examined and found to focus predominantly on issues of efficiency vs. effectiveness, with solutions based on a rationalistic strategy of optimizing this tradeoff. A more behavioralistic assessment suggests that the driving issues in the debate are the politics of organization and resources, centering on the issue of control. The economics of computing deployment decisions is presented as an important issue, but one that often serves as a field of argument grounded in more political concerns. The current situation facing managers of computing, given the advent of small and comparatively inexpensive computers, is examined in detail, and a set of management options for dealing with this persistent issue is presented.
A Pattern Language for High-Performance Computing Resilience
High-performance computing (HPC) systems provide powerful capabilities for modeling, simulation, and data analytics for a broad class of computational problems. They enable extreme performance, on the order of a quadrillion floating-point operations per second, by aggregating the power of millions of compute, memory, networking, and storage components. With the rapidly growing scale and complexity of HPC systems in pursuit of even greater performance, ensuring their reliable operation in the face of system degradations and failures is a critical challenge. Fault events often lead scientific applications to produce incorrect results, or may even cause their untimely termination. The sheer number of components in modern extreme-scale HPC systems, together with the complex interactions and dependencies among the hardware and software components, the applications, and the physical environment, makes the design of practical solutions that support fault resilience a complex undertaking. To manage this complexity, we developed a methodology for designing HPC resilience solutions using design patterns. We codified, in the form of design patterns, the well-known techniques for handling faults, errors, and failures that have been devised, applied, and improved upon over the past three decades. In this paper, we present a pattern language to enable a structured approach to the development of HPC resilience solutions. The pattern language reveals the relations among the resilience patterns and provides the means to explore alternative techniques for handling a specific fault model that may have different efficiency and complexity characteristics. Using the pattern language enables the design and implementation of comprehensive resilience solutions as a set of interconnected resilience patterns that can be instantiated across layers of the system stack.
Comment: Proceedings of the 22nd European Conference on Pattern Languages of Programs
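The abstract does not enumerate the individual patterns, but checkpoint/restart is one of the well-known fault-handling techniques such a catalogue covers. The following is a minimal, hypothetical Python sketch of how a single resilience pattern might be instantiated at the application layer; the class name, interface, and checkpoint policy are illustrative assumptions, not the paper's pattern specifications.

```python
import os
import pickle

# Hypothetical checkpoint/restart pattern sketch; checkpoint/restart is a
# well-known resilience technique, but this interface is an assumption and
# not the paper's pattern specification.

class CheckpointRestart:
    """Periodically persists application state so a failed run can resume."""

    def __init__(self, path="state.ckpt", interval=10):
        self.path = path          # where checkpoints are written
        self.interval = interval  # checkpoint every N steps

    def load(self, default):
        """Resume from the latest checkpoint if one exists, else start fresh."""
        if os.path.exists(self.path):
            with open(self.path, "rb") as f:
                return pickle.load(f)
        return default

    def save(self, step, state):
        """Persist (step, state) at the configured interval."""
        if step % self.interval == 0:
            with open(self.path, "wb") as f:
                pickle.dump((step, state), f)

def run_simulation(total_steps=100):
    ckpt = CheckpointRestart()
    step, state = ckpt.load(default=(0, {"value": 0.0}))  # restart point, if any
    while step < total_steps:
        state["value"] += 1.0     # stand-in for one unit of real work
        step += 1
        ckpt.save(step, state)
    return state

print(run_simulation())
```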
An empirical learning-based validation procedure for simulation workflow
A simulation workflow is a top-level model for the design and control of a simulation process. It connects multiple simulation components, with time and interaction restrictions, to form a complete simulation system. Before the component models are constructed and evaluated, validating the upper-layer simulation workflow is of the utmost importance in a simulation system. However, methods dedicated to validating simulation workflows are very limited; many of the existing validation techniques are domain-dependent and rely on cumbersome questionnaire design and expert scoring. Therefore, this paper presents an empirical learning-based validation procedure that implements a semi-automated evaluation of simulation workflows. First, representative features of general simulation workflows and their relations to validation indices are proposed. The calculation of workflow credibility based on the Analytic Hierarchy Process (AHP) is then introduced. To make full use of historical data and enable more efficient validation, four learning algorithms, the back-propagation neural network (BPNN), extreme learning machine (ELM), evolving neo-fuzzy neuron (eNFN), and fast incremental Gaussian mixture model (FIGMN), are introduced to construct the empirical relation between workflow credibility and its features. A case study on a landing-process simulation workflow is established to test the feasibility of the proposed procedure. The experimental results also provide a useful overview of how state-of-the-art learning algorithms perform on the credibility evaluation of simulation models.
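As a rough illustration of the AHP-based credibility calculation, the sketch below derives index weights from a pairwise comparison matrix via its principal eigenvector and aggregates per-index scores into a single credibility value, with a standard consistency check. The index names, judgments, and scores are hypothetical; the paper's actual validation indices and aggregation details are not reproduced here.

```python
import numpy as np

# Illustrative AHP sketch: the indices, pairwise judgments, and scores below
# are assumptions, not the paper's validation hierarchy.

def ahp_weights(pairwise):
    """Principal-eigenvector weights of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def consistency_ratio(pairwise, weights):
    """CR = CI / RI; values below ~0.1 are conventionally acceptable."""
    n = pairwise.shape[0]
    lam = (pairwise @ weights / weights).mean()
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random-index table (partial)
    return ci / ri

# Hypothetical validation indices for a workflow (names are illustrative).
indices = ["timing consistency", "interaction coverage", "component fidelity"]
pairwise = np.array([[1.0, 3.0, 2.0],
                     [1/3, 1.0, 0.5],
                     [0.5, 2.0, 1.0]])
scores = np.array([0.9, 0.7, 0.8])        # per-index scores in [0, 1]

w = ahp_weights(pairwise)
credibility = float(w @ scores)           # weighted aggregation into one value
print(dict(zip(indices, w.round(3))))
print("credibility:", round(credibility, 3),
      "CR:", round(consistency_ratio(pairwise, w), 3))
```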
Challenges in Complex Systems Science
FuturICT foundations are social science, complex systems science, and ICT.
The main concerns and challenges in the science of complex systems in the
context of FuturICT are laid out in this paper with special emphasis on the
Complex Systems route to the Social Sciences. These include complex systems having:
many heterogeneous interacting parts; multiple scales; complicated transition
laws; unexpected or unpredicted emergence; sensitive dependence on initial
conditions; path-dependent dynamics; networked hierarchical connectivities;
interaction of autonomous agents; self-organisation; non-equilibrium dynamics;
combinatorial explosion; adaptivity to changing environments; co-evolving
subsystems; ill-defined boundaries; and multilevel dynamics. In this context,
science is seen as the process of abstracting the dynamics of systems from
data. This presents many challenges including: data gathering by large-scale
experiment, participatory sensing and social computation, managing huge
distributed dynamic and heterogeneous databases; moving from data to dynamical
models, going beyond correlations to cause-effect relationships, understanding
the relationship between simple and comprehensive models with appropriate
choices of variables, ensemble modeling and data assimilation, modeling systems
of systems of systems with many levels between micro and macro; and formulating
new approaches to prediction, forecasting, and risk, especially in systems that
can reflect on and change their behaviour in response to predictions, and
systems whose apparently predictable behaviour is disrupted by apparently
unpredictable rare or extreme events. These challenges are part of the FuturICT
agenda.
Dynamics of Rumor Spreading in Complex Networks
We derive the mean-field equations characterizing the dynamics of a rumor process that takes place on top of complex heterogeneous networks. These equations are solved numerically by means of a stochastic approach. First, we present analytical and Monte Carlo calculations for homogeneous networks and compare the results with those obtained by the numerical method. Then, we study the spreading process in detail for random scale-free networks. The time profiles for several quantities are numerically computed, which allow us to distinguish among different variants of rumor-spreading algorithms. Our conclusions are directed to possible applications in replicated database maintenance, peer-to-peer communication networks, and social spreading phenomena.
Comment: Final version to appear in PR
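As a point of reference for the Monte Carlo calculations mentioned above, the sketch below simulates a Maki-Thompson-style rumor process (ignorant/spreader/stifler) on a random network. The update rules, rates, and the Erdős–Rényi network used here are illustrative assumptions; the paper's exact algorithm variants and network ensembles may differ.

```python
import random
import networkx as nx

# Illustrative Monte Carlo sketch of a Maki-Thompson-style rumor model
# (ignorant / spreader / stifler). Rules, rates, and the network are
# assumptions, not the paper's exact algorithm variants.

def simulate_rumor(G, lam=1.0, alpha=1.0, seed=0):
    """Run one stochastic realization; return the final fraction of stiflers."""
    rng = random.Random(seed)
    state = {v: "I" for v in G}            # I: ignorant, S: spreader, R: stifler
    start = rng.choice(list(G))
    state[start] = "S"
    spreaders = {start}
    while spreaders:
        v = rng.choice(sorted(spreaders))
        nbrs = list(G.neighbors(v))
        if not nbrs:                       # isolated spreader simply gives up
            state[v] = "R"
            spreaders.discard(v)
            continue
        u = rng.choice(nbrs)
        if state[u] == "I" and rng.random() < lam:
            state[u] = "S"                 # the rumor is passed on
            spreaders.add(u)
        elif state[u] != "I" and rng.random() < alpha:
            state[v] = "R"                 # spreader meets spreader/stifler and stops
            spreaders.discard(v)
    return sum(1 for s in state.values() if s == "R") / len(state)

G = nx.erdos_renyi_graph(1000, 0.01, seed=1)   # stand-in homogeneous network
runs = [simulate_rumor(G, seed=k) for k in range(20)]
print("mean final stifler fraction:", sum(runs) / len(runs))
```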
Should Optimal Designers Worry About Consideration?
Consideration set formation using non-compensatory screening rules is a vital component of real purchasing decisions with decades of experimental validation. Marketers have recently developed statistical methods that can estimate quantitative choice models that include consideration set formation via non-compensatory screening rules. But is capturing consideration within models of choice important for design? This paper reports on a simulation study of vehicle portfolio design, in which households screen over vehicle body style, built to explore the importance of capturing consideration rules for optimal designers. We generate synthetic market share data, fit a variety of discrete choice models to the data, and then optimize design decisions using the estimated models. Model predictive power, design "error", and profitability relative to ideal profits are compared as the amount of market data available increases. We find that even when estimated compensatory models provide relatively good predictive accuracy, they can lead to sub-optimal design decisions when the population uses consideration behavior; convergence of compensatory models to non-compensatory behavior is likely to require unrealistic amounts of data; and modeling heterogeneity in non-compensatory screening is more valuable than heterogeneity in compensatory trade-offs. This supports the claim that designers should carefully identify consideration behaviors before optimizing product portfolios. We also find that higher model predictive power does not necessarily imply better design decisions; that is, different model forms can provide "descriptive" rather than "predictive" information that is useful for design.
Comment: 5 figures, 26 pages. In press at ASME Journal of Mechanical Design (as of 3/17/15)
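To make the distinction concrete, the sketch below contrasts a non-compensatory screening rule (a conjunctive screen over body style that forms the consideration set) with a compensatory logit-style choice among the vehicles that survive screening. The attribute names, screening rule, and utility weights are hypothetical and are not the models estimated in the paper.

```python
import math
import random

# Illustrative two-stage choice sketch: non-compensatory screening first,
# then a compensatory (logit-style) choice. All attributes, weights, and
# the screening rule are assumptions, not the paper's estimated models.

def consider(household, vehicle):
    """Conjunctive screen: reject any vehicle whose body style is unacceptable."""
    return vehicle["body_style"] in household["acceptable_styles"]

def utility(household, vehicle):
    """Compensatory part: a simple linear utility over price and fuel economy."""
    w = household["weights"]
    return w["price"] * (-vehicle["price"]) + w["mpg"] * vehicle["mpg"]

def choose(household, vehicles):
    """Screen first, then draw a choice with logit probabilities over the considered set."""
    considered = [v for v in vehicles if consider(household, v)]
    if not considered:
        return None                        # the household buys nothing
    expu = [math.exp(utility(household, v)) for v in considered]
    r, acc = random.random() * sum(expu), 0.0
    for v, e in zip(considered, expu):
        acc += e
        if r <= acc:
            return v
    return considered[-1]

vehicles = [
    {"name": "sedan A", "body_style": "sedan", "price": 25.0, "mpg": 32},
    {"name": "SUV B",   "body_style": "suv",   "price": 35.0, "mpg": 24},
]
household = {"acceptable_styles": {"sedan"},
             "weights": {"price": 0.1, "mpg": 0.05}}
print(choose(household, vehicles)["name"])   # only the sedan survives screening
```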