Performance study of a COTS Distributed DBMS adapted for multilevel security
Multilevel secure database management system (MLS/DBMS) products
no longer enjoy direct commercial-off-the-shelf (COTS) support.
Meanwhile, existing users of these MLS/DBMS products continue to
rely on them to satisfy their multilevel security requirements.
This calls for a new approach to developing MLS/DBMS systems:
one that adapts the features of existing COTS database products
rather than depending on traditional custom-designed products to
provide continuing MLS support.
We advocate fragmentation as a good basis for implementing
multilevel security in the new approach because it is well
supported in some current COTS database management systems. We
implemented a prototype that utilises the inherent advantages of
the distribution scheme in distributed databases to control
access to single-level fragments. This is achieved by augmenting
the distribution module of the host distributed DBMS with MLS
code so that the clearance of the user making a request is
always compared with the classification of the node containing
the referenced fragments; requests to unauthorised nodes are
simply dropped.
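As a minimal sketch of this gatekeeping logic (the level lattice, names, and request shape below are assumptions for illustration, not the prototype's actual code), the check reduces to a dominance comparison between the requester's clearance and the node's classification:

```python
# Illustrative sketch only: the lattice, names, and request shape are
# assumptions, not the prototype's actual implementation.
from dataclasses import dataclass

# A simple totally ordered lattice of security levels.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

@dataclass
class Node:
    name: str
    classification: str  # every fragment stored on this node has this level

def dominates(clearance: str, classification: str) -> bool:
    """True if the user's clearance is at or above the node's level."""
    return LEVELS[clearance] >= LEVELS[classification]

def route_request(user_clearance: str, target: Node, subquery: str):
    """Forward a sub-query only if the user is cleared for the node;
    requests to unauthorised nodes are simply dropped."""
    if not dominates(user_clearance, target.classification):
        return None  # drop silently rather than signalling an error
    return f"forward {subquery!r} to {target.name}"
```

Here route_request("CONFIDENTIAL", Node("n2", "SECRET"), "SELECT ...") would return None, so the unauthorised node never sees the request.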
We used the prototype to run a series of experiments to
determine the relative performance of the tuple, attribute, and
element level fragmentation schemes. The experiments measured
the impact on the front end and the network when various
properties of each scheme, such as the number of tuples,
attributes, and security levels and the page size, were varied
for Selection and Join queries. We were particularly interested
in the relationship between performance degradation and changes
in these properties. The performance of each scheme was measured
in terms of its response time.
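To make the three granularities concrete, the toy decomposition below (assumed table shapes and level tags, not the prototype's code) shows how a single multilevel relation might split into single-level fragments under each scheme:

```python
# Toy decomposition of one multilevel relation under the three schemes
# (an illustration with assumed shapes, not the prototype's code).
# Source tuple t1: name is CONFIDENTIAL ("C"), salary is SECRET ("S").

# Tuple level: the whole tuple lives in one horizontal fragment,
# classified at a single level assigned to the tuple.
tuple_fragments = {
    "S": [{"tid": "t1", "name": "Alice", "salary": 90000}],
}

# Attribute level: each column carries one level, so the relation
# splits into vertical fragments rejoined on the tuple identifier.
attribute_fragments = {
    "C": [{"tid": "t1", "name": "Alice"}],
    "S": [{"tid": "t1", "salary": 90000}],
}

# Element level: every individual value may carry its own level,
# yielding the most fragments and the costliest reassembly.
element_fragments = {
    "C": [{"tid": "t1", "attr": "name", "value": "Alice"}],
    "S": [{"tid": "t1", "attr": "salary", "value": 90000}],
}
```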
The response times for the element level fragmentation scheme
increased as the number of tuples, attributes, and security
levels and the page size were increased; the increase was
markedly steeper for the number of security levels and the page
size than for the number of tuples and attributes. The response
times for the attribute level fragmentation scheme were the
lowest, indicating that its performance is superior to that of
the tuple and element level fragmentation schemes. In the
context of assurance, this research has also shown
that the distribution of fragments based on security level is a
more natural approach to implementing security in MLS/DBMS
systems, because a multilevel database is analogous to a
distributed database based on security level.
Overall, our study finds that the attribute level fragmentation
scheme demonstrates the best performance of the three, while the
element level scheme exhibits the worst performance degradation
relative to the tuple and attribute level schemes.
NASA space station automation: AI-based technology review
Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety by reducing the need for EVA, to increase crew productivity by reducing routine operations, to increase Space Station autonomy, and to augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.
Decentralized information flow control for databases
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 177-194).

Privacy and integrity concerns have been mounting in recent years as sensitive data such as medical records, social network records, and corporate and government secrets are increasingly stored in online systems. The rate of high-profile breaches has shown that current techniques are inadequate for protecting sensitive information. Many of these breaches involve databases that handle information for a multitude of individuals, yet databases do not provide practical tools to protect those individuals from each other, so that task is relegated to the application. This dissertation describes a system that improves security in a principled way by extending the database system and the application platform to support information flow control. Information flow control has been gaining traction as a practical way to protect information in the contexts of programming languages and operating systems. Recent research advocates the decentralized model for information flow control (DIFC), since it provides the expressiveness needed to protect data for many individuals with varied security concerns. However, although most applications implicated in breaches rely on relational databases, there have been no prior comprehensive attempts to extend DIFC to a database system. This dissertation introduces IFDB, a database management system that supports DIFC with minimal overhead. IFDB pioneers the Query by Label model, which gives applications a simple way to delineate constraints on the confidentiality and integrity of the data they obtain from the database. The dissertation also defines new abstractions for managing information flows in a database and proposes new ways to address covert channels. Finally, the IFDB implementation and case studies with real applications demonstrate that database support for DIFC improves security, is easy for developers to use, and has good performance.

by David Andrew Schultz. Ph.D.
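As a loose illustration of the Query by Label idea (a simplified sketch with invented names and a toy tag-set label model; IFDB's actual abstractions and interfaces differ in detail), a query executes under a session label and returns only rows whose labels can flow to it:

```python
# Toy Query-by-Label check: names and the label model are illustrative
# assumptions, not IFDB's actual implementation.
from typing import FrozenSet

Label = FrozenSet[str]  # a set of secrecy tags, e.g. {"alice_medical"}

def can_flow(src: Label, dst: Label) -> bool:
    """Standard DIFC 'can-flow-to' order for secrecy tags: information
    may flow only to contexts holding at least the same tags."""
    return src <= dst

def query_by_label(rows, session_label: Label):
    """Return only rows whose labels can flow to the session label;
    other rows are filtered out rather than raising an error."""
    return [row for row, label in rows if can_flow(label, session_label)]

rows = [
    ({"patient": "alice", "dx": "flu"},    frozenset({"alice_medical"})),
    ({"patient": "bob",   "dx": "asthma"}, frozenset({"bob_medical"})),
]
# A session holding only Alice's tag sees only Alice's record.
print(query_by_label(rows, frozenset({"alice_medical"})))
```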
Enhanced Encryption and Fine-Grained Authorization for Database Systems
The aim of this research is to enhance fine-grained authorization and encryption
so that database systems are equipped with the controls necessary to help
enterprises adhere to zero-trust security more effectively. For fine-grained
authorization, this thesis has extended database systems with three new
concepts: Row permissions, column masks and trusted contexts. Row
permissions and column masks provide data-centric security so the security
policy cannot be bypassed as with database views, for example. They also
coexist in harmony with the rest of the database core tenets so that enterprises
are not forced to compromise either security or database
functionality. Trusted
contexts provide applications in multitiered environments with a secure and
controlled manner to propagate user identities to the database and therefore
enable such applications to delegate the security policy to the database system
where it is enforced more effectively. Trusted contexts also protect against
application bypass so the application credentials cannot be abused to make
database changes outside the scope of the application’s business logic. For
encryption, this thesis has introduced a holistic database encryption solution to
address the limitations of traditional database encryption methods. It too coexists
in harmony with the rest of the database core tenets so that enterprises are not
forced to choose between security and performance as with column encryption,
for example. Lastly, row permissions, column masks, trusted contexts and holistic
database encryption have all been implemented in IBM DB2, where they are
relied upon by thousands of organizations around the world to protect
critical data and adhere to zero-trust security more effectively.
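To sketch the data-centric idea in miniature (a toy Python analogue of row permissions and column masks; DB2 expresses these as SQL objects, and nothing below is DB2 syntax), the policy is attached to the data and applied on every access path, so it cannot be bypassed the way an unconsulted view can:

```python
# Toy analogue of row permissions and column masks; illustrative only.
def row_permission(user, row):
    """Row permission: a predicate deciding whether a row is visible.
    Here, clerks see only rows from their own branch."""
    return user["role"] == "admin" or row["branch"] == user["branch"]

def column_mask(user, row):
    """Column mask: rewrites a sensitive column's value on output."""
    if user["role"] != "admin":
        row = dict(row, ssn="XXX-XX-" + row["ssn"][-4:])
    return row

def select(user, table):
    """Every read path applies the policy, unlike a view that an
    application can simply fail to use."""
    return [column_mask(user, r) for r in table if row_permission(user, r)]

table = [{"branch": "NY", "name": "Ann", "ssn": "123-45-6789"},
         {"branch": "SF", "name": "Raj", "ssn": "987-65-4321"}]
clerk = {"role": "clerk", "branch": "NY"}
print(select(clerk, table))  # one row, with the SSN masked
```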
The United States Marine Corps Data Collaboration Requirements: Retrieving and Integrating Data From Multiple Databases
The goal of this research is to develop an information sharing and database integration model and to suggest a framework that fully satisfies the United States Marine Corps' collaboration requirements as well as its information sharing and database integration needs. This research is exploratory and focuses on a single initiative: IT-21. The IT-21 initiative is set out in Technology for the United States Navy and Marine Corps, 2000-2035: Becoming a 21st Century Force, which states that Navy and Marine Corps information infrastructure will be based largely on commercial systems and services, and that the Department of the Navy must ensure these systems are seamlessly integrated and that information transported over the infrastructure is protected and secure. The Delphi Technique, a qualitative method, was used to develop a Holistic Model and to suggest a framework for information sharing and database integration. Data were collected primarily from mid-level to senior information officers, with a focus on Chief Information Officers. In addition, an extensive literature review was conducted to gain insight into known similarities and differences in Strategic Information Management, information sharing strategies, and database integration strategies. It is hoped that the Armed Forces and the Department of Defense will benefit from future development of the information sharing and database integration Holistic Model.
Attribute-Level Versioning: A Relational Mechanism for Version Storage and Retrieval
Data analysts today have at their disposal a seemingly endless supply of data repositories and, hence, datasets from which to draw. New datasets become available daily, making the choice of which dataset to use difficult. Furthermore, traditional data analysis has been conducted using structured data repositories such as relational database management systems (RDBMS). These systems, by their nature and design, prohibit duplication in indexed collections, forcing analysts to choose one value for each of the available attributes for an item in the collection. Often analysts discover two or more datasets with information about the same entity. When combining this data and transforming it into a form that is usable in an RDBMS, analysts are forced to deconflict the collisions and choose a single value for each duplicated attribute containing differing values. This deconfliction is the source of a considerable amount of guesswork and speculation on the part of the analyst in the absence of professional intuition. One must consider what is lost by discarding those alternative values. Are there relationships between the conflicting datasets that have meaning? Is each dataset presenting a different and valid view of the entity, or are the alternate values erroneous? If so, which values are erroneous? Is there historical significance to the variances? The analysis of modern datasets requires specialized algorithms and storage and retrieval mechanisms to identify, deconflict, and assimilate variances of attributes for each entity encountered. These variances, or versions of attribute values, contribute meaning to the evolution and analysis of the entity and its relationships to other entities. A new, distinct storage and retrieval mechanism will enable analysts to efficiently store, analyze, and retrieve the attribute versions without unnecessary complexity or additional alterations of the original or derived dataset schemas. This paper presents technologies and innovations that assist data analysts in discovering meaning within their data while preserving all of the original data for every entity in the RDBMS.
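As a rough sketch of what such a mechanism might look like (an assumed storage shape for illustration, not the paper's actual design), conflicting values are retained as versions keyed by entity and attribute rather than collapsed to a single value:

```python
# Illustrative version store: the schema is an assumption, not the
# paper's actual mechanism.
from collections import defaultdict

# (entity_id, attribute) -> list of (value, source, timestamp)
versions = defaultdict(list)

def put(entity, attr, value, source, ts):
    """Record a value without discarding earlier, conflicting ones."""
    versions[(entity, attr)].append((value, source, ts))

def get_all(entity, attr):
    """Retrieve every version so the analyst can study the variance."""
    return versions[(entity, attr)]

put("acme", "address", "12 Main St",     "dataset_a", "2021-01-03")
put("acme", "address", "12 Main Street", "dataset_b", "2021-02-10")
print(get_all("acme", "address"))  # both versions survive deconfliction
```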