Data Systems Dynamic Simulator
The Data System Dynamic Simulator (DSDS) is a discrete event simulation tool. It was developed for NASA for the specific purpose of evaluating candidate architectures for data systems of the Space Station era. DSDS provides three methods for meeting this requirement. First, the user has access to a library of standard pre-programmed elements. These elements represent tailorable components of NASA data systems and can be connected in any logical manner. Second, DSDS supports the development of additional elements, giving the more sophisticated DSDS user the option of extending the standard element set. Third, DSDS supports data stream simulation. "Data stream" is the name given to a technique that ignores packet boundaries but is sensitive to rate changes. Because rate changes are rare compared to packet arrivals in a typical NASA data system, data stream simulations require only a fraction of the CPU run time of packet-level simulations. Additionally, the data stream technique is considerably more accurate than another commonly used optimization technique.
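The rate-change idea described above can be illustrated with a small sketch. This is not DSDS code; the function and variable names are hypothetical. The point is that a simulator sensitive only to rate changes processes a handful of events where a packet-level simulation would process thousands, since volume between rate changes can be integrated analytically.

```python
# Illustrative sketch of the "data stream" technique: only rate-change events
# are processed; the data volume between events accrues at a constant rate.
# All names are hypothetical, not from DSDS itself.

def stream_volume(rate_changes, t_end):
    """rate_changes: sorted list of (time, rate in packets/sec)."""
    total = 0.0
    segments = zip(rate_changes, rate_changes[1:] + [(t_end, 0.0)])
    for (t0, rate), (t1, _) in segments:
        total += rate * (t1 - t0)   # volume accrued at a constant rate
    return total

# A link running at 100 pkt/s for 10 s, then 10 pkt/s for 10 s:
vol = stream_volume([(0.0, 100.0), (10.0, 10.0)], t_end=20.0)
# Two events processed instead of ~1100 individual packet arrivals.
```

Here two rate-change events stand in for roughly 1100 packet-arrival events, which is the source of the CPU-time saving the abstract claims.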
Science data systems
Shock tests on Mariner Venus 67 prototype data automation subsystems to evaluate multilayer laminate packaging
Developing and Enhancing Data Systems
This brief focuses on considerations for developing and enhancing data systems. The other two briefs focus on developing a coherent plan for effectively using data and on supporting the effective use of a data system.
A database system with amnesia
Big Data comes with huge challenges. Its volume and velocity make handling, curating, and analytical processing a costly affair. Even simply "looking at" the data within an a priori defined budget and with a guaranteed interactive response time might be impossible to achieve. Commonly applied scale-out approaches will soon hit the technology and monetary wall, if they have not done so already. Likewise, blindly rejecting data when the channels are full, or reducing the data resolution at the source, might lead to loss of valuable observations. An army of well-educated database administrators or full software stack architects might deal with these challenges, albeit at substantial cost. This calls for a mostly knobless DBMS with a fundamental change in database management. Data rotting has been proposed as a direction to find a solution [10, 11]. For the sake of storage management and responsiveness, it lets the DBMS semi-autonomously rot away data. Rotting is based on the system's own unwillingness to keep old data as easily accessible as fresh data. This paper sheds more light on the opportunities and potential impacts of this radical departure in data management. Specifically, we study the case where a DBMS selectively forgets tuples (by marking them inactive) under various amnesia scenarios and with different implementation strategies. Our ultimate goal is to use the findings of this study to morph an existing data management engine to serve demanding big data scientific applications with well-chosen built-in data amnesia.
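One amnesia strategy the abstract mentions, marking tuples inactive rather than deleting them, can be sketched as follows. This is an illustrative toy, not the paper's engine; the class name, the fixed freshness horizon, and the flag-based representation are all assumptions for the example.

```python
# Toy sketch of one amnesia scenario: tuples beyond a freshness horizon are
# marked inactive rather than deleted, so scans over "fresh" data stay cheap
# while old data remains physically present. Names are illustrative.

class AmnesicTable:
    def __init__(self, horizon):
        self.horizon = horizon          # how many recent tuples stay active
        self.rows = []                  # list of [tuple, active-flag]

    def insert(self, row):
        self.rows.append([row, True])
        # Rot: deactivate everything older than the horizon.
        for slot in self.rows[:-self.horizon]:
            slot[1] = False

    def scan(self):
        # Queries see only the tuples the system is still "willing" to serve.
        return [row for row, active in self.rows if active]

t = AmnesicTable(horizon=2)
for r in ("a", "b", "c"):
    t.insert(r)
print(t.scan())  # only the two freshest tuples remain active
```

A real implementation would tie the horizon to storage pressure or access recency rather than a fixed count, which is essentially what the "semi-autonomous" rotting in the abstract refers to.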
Industrial Data Systems, Inc.
Industrial Data Systems Corporation (IDS) is a Houston-based general engineering and services firm targeted toward the energy industry. Founded in 1985, the company has grown to annual revenues of $68 million. Together, they potentially will be able to fully meet client needs for both upstream and downstream engineering and services support in the oil/gas, refining, chemicals, and petrochemicals industries. Now they must make it work. (Contact author for a copy of the complete report.) Small Business Mgmt
Designing Traceability into Big Data Systems
Providing an appropriate level of accessibility and traceability to data or
process elements (so-called Items) in large volumes of data, often
Cloud-resident, is an essential requirement in the Big Data era.
Enterprise-wide data systems need to be designed from the outset to support
usage of such Items across the spectrum of business use rather than from any
specific application view. The design philosophy advocated in this paper is to
drive the design process using a so-called description-driven approach which
enriches models with meta-data and description and focuses the design process
on Item re-use, thereby promoting traceability. Details are given of the
description-driven design of big data systems at CERN, in health informatics
and in business process management. Evidence is presented that the approach
leads to design simplicity and consequent ease of management thanks to loose
typing and the adoption of a unified approach to Item management and usage.
Comment: 10 pages; 6 figures. In Proceedings of the 5th Annual International
Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore,
July 2015. arXiv admin note: text overlap with arXiv:1402.5764,
arXiv:1402.575
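The description-driven idea above can be made concrete with a minimal sketch: model elements ("Items") carry their own meta-data and provenance links, so traceability queries work uniformly across applications. The class and field names here are hypothetical, not CERN's actual schema.

```python
# Minimal sketch of a self-describing "Item": payload plus meta-data plus
# provenance links, so any Item can be traced back to its ancestors without
# application-specific code. All names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    payload: dict
    description: dict = field(default_factory=dict)   # self-describing meta-data
    derived_from: list = field(default_factory=list)  # provenance by item_id

def trace(items, item_id):
    """Walk provenance links and list every ancestor of an Item."""
    by_id = {i.item_id: i for i in items}
    seen, stack = [], [item_id]
    while stack:
        current = by_id[stack.pop()]
        for parent in current.derived_from:
            if parent not in seen:
                seen.append(parent)
                stack.append(parent)
    return seen

raw = Item("raw-1", {"v": 1})
agg = Item("agg-1", {"v": 2}, {"kind": "aggregate"}, derived_from=["raw-1"])
print(trace([raw, agg], "agg-1"))  # ['raw-1']
```

Because provenance lives in the Item rather than in any one application, the same `trace` works for every consumer, which is the loose typing and re-use benefit the abstract claims.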
Model Reduction for Aperiodically Sampled Data Systems
Two approaches to moment matching based model reduction of aperiodically
sampled data systems are given. The term "aperiodic sampling" is used in the
paper to indicate that the time between two consecutive sampling instants can
take its value from a pre-specified finite set of allowed sampling intervals.
Such systems can be represented by discrete-time linear switched (LS) state
space (SS) models. One of the approaches investigated in the paper is to apply
model reduction by moment matching on the linear time-invariant (LTI) plant
model, then compare the responses of the LS SS models acquired from the
original and reduced order LTI plants. The second approach is to apply a moment
matching based model reduction method on the LS SS model acquired from the
original LTI plant, and then compare the responses of the original and reduced
LS SS models. It is proven that for both methods, as long as the original LTI
plant is stable, the resulting reduced order LS SS model of the sampled data
system is quadratically stable. The results from the two approaches are compared
with numerical examples.
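The setup described above can be sketched numerically. This is not the paper's reduction algorithm; it only shows, with an assumed example plant, how sampling an LTI plant with intervals from a finite set yields one discrete-time mode per interval, i.e. a linear switched state-space model.

```python
# Sketch of the modeling step: zero-order-hold discretization of an LTI plant
# x' = Ax + Bu at each allowed sampling interval gives one (A_d, B_d) mode of
# a linear switched state-space model. The plant below is an assumed example.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable plant (eigenvalues -1, -2)
B = np.array([[0.0], [1.0]])

def zoh_discretize(A, B, h):
    """Exact ZOH discretization via the augmented matrix exponential."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * h)
    return Md[:n, :n], Md[:n, n:]          # (A_d, B_d)

# One discrete-time mode per allowed sampling interval:
modes = {h: zoh_discretize(A, B, h) for h in (0.1, 0.5)}
```

Since the plant is stable, each mode's `A_d = exp(A*h)` has spectral radius below one, consistent with the quadratic-stability result the abstract states for the reduced models.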
Ground data systems resource allocation process
The Ground Data Systems Resource Allocation Process at the Jet Propulsion Laboratory provides medium- and long-range planning for the use of Deep Space Network and Mission Control and Computing Center resources in support of NASA's deep space missions and Earth-based science. Resources consist of radio antenna complexes and associated data processing and control computer networks. A semi-automated system was developed that allows operations personnel to interactively generate, edit, and revise allocation plans spanning periods of up to ten years (as opposed to only two or three weeks under the manual system) based on the relative merit of mission events. It also enhances scientific data return. A software system known as the Resource Allocation and Planning Helper (RALPH) merges the conventional methods of operations research, rule-based knowledge engineering, and advanced database structures. RALPH employs a generic, highly modular architecture capable of solving a wide variety of scheduling and resource sequencing problems. The rule-based RALPH system has saved significant labor in resource allocation. Its successful use affirms the importance of establishing and applying event priorities based on scientific merit, and the benefit of continuity in planning provided by knowledge-based engineering. The RALPH system exhibits a strong potential for minimizing development cycles of resource and payload planning systems throughout NASA and the private sector
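The merit-based allocation principle described above can be sketched in a few lines. This is a hypothetical toy, not RALPH: the request format, the greedy strategy, and the event names are all assumptions, chosen only to show how event priorities drive which requests receive limited antenna time.

```python
# Toy sketch of priority-driven resource allocation: competing mission events
# request antenna hours, and capacity goes to the highest-merit events first.
# Greedy by priority; RALPH's actual rule-based method is far richer.

def allocate(requests, capacity):
    """requests: list of (event, priority, hours); capacity: hours available."""
    plan, used = [], 0
    for event, priority, hours in sorted(requests, key=lambda r: -r[1]):
        if used + hours <= capacity:   # grant the slot if it still fits
            plan.append(event)
            used += hours
    return plan

reqs = [("encounter", 9, 6), ("cruise-telemetry", 3, 4), ("calibration", 5, 3)]
print(allocate(reqs, capacity=10))  # highest-merit events that fit
```

Even this greedy toy shows the key property the abstract highlights: when demand exceeds capacity, scientific-merit priorities, not arrival order, decide what is scheduled.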
Sampled data systems and generating functions
Application of Z-transforms to sampled-data systems