uFLIP: Understanding Flash IO Patterns
Does the advent of flash devices constitute a radical change for secondary
storage? How should database systems adapt to this new form of secondary
storage? Before we can answer these questions, we need to fully understand the
performance characteristics of flash devices. More specifically, we want to
establish what kind of IOs should be favored (or avoided) when designing
algorithms and architectures for flash-based systems. In this paper, we focus
on flash IO patterns, that capture relevant distribution of IOs in time and
space, and our goal is to quantify their performance. We define uFLIP, a
benchmark for measuring the response time of flash IO patterns. We also present
a benchmarking methodology which takes into account the particular
characteristics of flash devices. Finally, we present the results obtained by
measuring eleven flash devices, and derive a set of design hints that should
drive the development of flash-based systems on current devices.
Comment: CIDR 200
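The notion of an IO pattern — a distribution of IOs in time and space — can be illustrated with a minimal timing sketch. This is not the uFLIP benchmark itself: the block size, IO count, and use of an ordinary file rather than a raw flash device are simplifying assumptions.

```python
# Minimal sketch (not uFLIP): compare the mean response time of two IO
# patterns, sequential vs. random 4 KiB writes, against a scratch file.
import os
import random
import tempfile
import time

BLOCK = 4096            # IO granularity; a common flash page size
COUNT = 256             # number of IOs per pattern
buf = os.urandom(BLOCK)

def time_pattern(offsets, path):
    """Return the mean response time (seconds) of writing `buf` at each offset."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    start = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, buf)
    os.fsync(fd)        # force the writes to the device before stopping the clock
    os.close(fd)
    return (time.perf_counter() - start) / len(offsets)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "bench.dat")
    sequential = [i * BLOCK for i in range(COUNT)]
    shuffled = sequential[:]
    random.shuffle(shuffled)    # same locations, different order in time
    print("sequential:", time_pattern(sequential, path))
    print("random:    ", time_pattern(shuffled, path))
```

On real flash devices the gap between the two patterns is exactly what a benchmark like uFLIP sets out to quantify; a filesystem-level sketch like this only hints at it.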
Management as a system: creating value
Boston University School of Management publication from the 1990s about the MBA programs at BU, aimed at prospective MBA students
Experience with the Open Source based implementation for ATLAS Conditions Data Management System
Conditions Data in high energy physics experiments is frequently seen as
all data needed for reconstruction besides the event data itself. This
includes all sorts of slowly evolving data, such as detector alignment,
calibration and robustness, and data from the detector control system. In
addition, every Conditions Data Object is associated with a time interval of
validity and a version. Beyond that, it is often useful to tag collections of
Conditions Data Objects together. These issues have already been investigated,
and a data model has been proposed and used for different implementations
based on commercial DBMSs, both at CERN and for the BaBar experiment. The
special case of the complex ATLAS trigger, which requires online access to
calibration and alignment data, poses new challenges that have to be met with
a flexible and customizable solution more in line with Open Source components.
Motivated by the ATLAS challenges, we have developed an alternative
implementation based on an Open Source RDBMS. Several issues were investigated
and will be described in this paper:
- The best way to map the conditions data model onto the relational model,
considering the queries foreseen as most frequent.
- The clustering model best suited to address the scalability problem.
- Extensive tests were performed and will be described.
The very promising results from these tests are attracting attention from
the HEP community and driving further developments.
Comment: 8 pages, 4 figures, 3 tables, conference
Automatic Generation of Cognitive Theories using Genetic Programming
Cognitive neuroscience is the branch of neuroscience that studies the neural mechanisms underpinning cognition and develops theories explaining them. Within cognitive neuroscience, computational neuroscience focuses on modeling behavior, using theories expressed as computer programs. Up to now, computational theories have been formulated by neuroscientists. In this paper, we present a new approach to theory development in neuroscience: the automatic generation and testing of cognitive theories using genetic programming. Our approach evolves, from experimental data, cognitive theories that explain “the mental program” that subjects use to solve a specific task. As an example, we have focused on a typical neuroscience experiment, the delayed-match-to-sample (DMTS) task. The main goal of our approach is to develop a tool that neuroscientists can use to develop better cognitive theories.
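The core genetic-programming loop behind such an approach can be sketched in a few lines. The primitives, the toy fitting task, and the mutation-only scheme below are invented for illustration; the paper evolves programs that model subjects' behavior on the DMTS task, not arithmetic expressions.

```python
# Illustrative genetic-programming loop: evolve a small expression tree
# until it reproduces observed data. Trees are tuples (op, left, right)
# or terminals ('x' or a constant).
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMS = ['x', 0, 1, 2]

def rand_tree(depth=3):
    """Grow a random expression tree of bounded depth."""
    if depth <= 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, (int, float)):
        return t
    op, a, b = t
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(t, data):
    """Sum of squared errors against the observations (lower is better)."""
    return sum((evaluate(t, x) - y) ** 2 for x, y in data)

def mutate(t, depth=3):
    """Replace a random subtree with a freshly grown one."""
    if random.random() < 0.3 or not isinstance(t, tuple):
        return rand_tree(depth)
    op, a, b = t
    if random.random() < 0.5:
        return (op, mutate(a, depth - 1), b)
    return (op, a, mutate(b, depth - 1))

def evolve(data, pop_size=50, generations=100):
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, data))
        if fitness(pop[0], data) == 0:
            break
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return pop[0]

# Observations of a hidden "theory": y = x*x + 1 at a few points.
data = [(x, x * x + 1) for x in range(-3, 4)]
best = evolve(data)
print(best, fitness(best, data))
```

A full GP system would add crossover and richer primitives; the point here is only the generate-evaluate-select cycle that lets theories compete against experimental data.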
21st Century Simulation: Exploiting High Performance Computing and Data Analysis
This paper identifies, defines, and analyzes the limitations imposed on Modeling and Simulation by outmoded
paradigms in computer utilization and data analysis. The authors then discuss two emerging capabilities to
overcome these limitations: High Performance Parallel Computing and Advanced Data Analysis. First, parallel
computing, in supercomputers and Linux clusters, has proven effective by providing users an advantage in
computing power. This has been characterized as a ten-year lead over the use of single-processor computers.
Second, advanced data analysis techniques are both necessitated and enabled by this leap in computing power.
JFCOM's JESPP project is one of the few simulation initiatives to effectively embrace these concepts. The
challenges facing the defense analyst today have grown to include the need to consider operations among non-combatant
populations, to focus on impacts to civilian infrastructure, to differentiate combatants from non-combatants,
and to understand non-linear, asymmetric warfare. These requirements stretch both current
computational techniques and data analysis methodologies. In this paper, documented examples and potential
solutions will be advanced. The authors discuss the paths to successful implementation based on their experience.
Reviewed technologies include parallel computing, cluster computing, grid computing, data logging, OpsResearch,
database advances, data mining, evolutionary computing, genetic algorithms, and Monte Carlo sensitivity analyses.
The modeling and simulation community has significant potential to provide more opportunities for training and
analysis. Simulations must include increasingly sophisticated environments, better emulations of foes, and more
realistic civilian populations. Overcoming the implementation challenges will produce dramatically better insights
for trainees and analysts. High Performance Parallel Computing and Advanced Data Analysis promise increased
understanding of future vulnerabilities to help avoid unneeded mission failures and unacceptable personnel losses.
The authors set forth road maps for rapid prototyping and adoption of advanced capabilities. They discuss the
beneficial impact of embracing these technologies, as well as the risk mitigation required to ensure success.
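Among the reviewed techniques, Monte Carlo sensitivity analysis is the simplest to sketch: perturb each input of a model in turn and compare the spread each one induces in the output. The model, parameter values, and ±10% perturbation below are invented for illustration, and the independent runs are exactly the kind of embarrassingly parallel workload the paper's computing clusters target.

```python
# Toy Monte Carlo sensitivity analysis: for each input of a simple
# (hypothetical) outcome model, sample a +/-10% perturbation many times
# and report the standard deviation it induces in the output.
import random
import statistics

def model(speed, fuel, load):
    # hypothetical mission-outcome score; coefficients are invented
    return speed * 2.0 + fuel * 0.5 - load * 1.5

def sensitivity(param_index, n=10_000):
    base = [10.0, 50.0, 20.0]       # nominal speed, fuel, load
    outputs = []
    for _ in range(n):
        inputs = base[:]
        inputs[param_index] *= random.uniform(0.9, 1.1)   # +/-10% perturbation
        outputs.append(model(*inputs))
    return statistics.stdev(outputs)

for i, name in enumerate(["speed", "fuel", "load"]):
    print(f"{name}: output stdev = {sensitivity(i):.3f}")
```

Because each sample is independent, the inner loop distributes trivially across the processors of a cluster, which is what makes this technique a natural fit for the high-performance platforms the paper advocates.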
Tools for distributed application management
Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program, which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.
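The sensor/actuator interface described above can be sketched in miniature: the application exposes sensors that report state and actuators that change it, and the control program applies a policy over them. The names, the queue-length policy, and the single-process setting here are illustrative assumptions, not Meta's actual API.

```python
# Minimal sketch of a sensor/actuator management interface: a control
# step reads a sensor and, per its policy, fires an actuator.

class ManagedApp:
    def __init__(self):
        self.queue_len = 0
        self.workers = 1

    # sensor: report current application state
    def sense_queue_len(self):
        return self.queue_len

    # actuator: change application state
    def add_worker(self):
        self.workers += 1

def control_step(app, threshold=10):
    """One iteration of the control program's policy: react to backlog."""
    if app.sense_queue_len() > threshold:
        app.add_worker()

app = ManagedApp()
app.queue_len = 15       # simulate a backlog building up
control_step(app)
print(app.workers)       # → 2: a worker was added in response
```

In Meta the control program additionally runs as a soft real-time reactive program with access to a structural database of the application; this sketch captures only the sensor/actuator boundary it polices.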
Conservation science in NOAA's National Marine Sanctuaries: description and recent accomplishments
This report describes cases relating to the management of national marine sanctuaries in which certain scientific information was required so managers could make decisions that effectively protected trust resources. The cases presented represent only a fraction of the difficult issues that marine sanctuary managers deal with daily. They include, among others, problems related to wildlife disturbance, vessel routing, marine reserve placement, watershed management, oil spill response, and habitat restoration. Scientific approaches to address these problems vary significantly, and include literature surveys, data mining, field studies (monitoring, mapping, observations, and measurement), geospatial and biogeographic analysis, and modeling. In most cases there is also an element of expert consultation and collaboration among multiple partners, agencies with resource protection responsibilities, and other users and stakeholders. The resulting management responses may involve direct intervention (e.g., for spill response or habitat restoration issues), proposal of boundary alternatives for marine sanctuaries or reserves, changes in agency policy or regulations, making recommendations to other agencies with resource protection responsibilities, proposing changes to international or domestic shipping rules, or development of new education or outreach programs. (PDF contains 37 pages.)