6 research outputs found

    Enhanced Encryption and Fine-Grained Authorization for Database Systems

    The aim of this research is to enhance fine-grained authorization and encryption so that database systems are equipped with the controls necessary to help enterprises adhere to zero-trust security more effectively. For fine-grained authorization, this thesis has extended database systems with three new concepts: row permissions, column masks, and trusted contexts. Row permissions and column masks provide data-centric security, so the security policy cannot be bypassed as it can with database views, for example. They also coexist in harmony with the other core tenets of the database, so enterprises are not forced to compromise either security or database functionality. Trusted contexts give applications in multitiered environments a secure and controlled way to propagate user identities to the database, enabling such applications to delegate the security policy to the database system, where it is enforced more effectively. Trusted contexts also protect against application bypass, so the application credentials cannot be abused to make database changes outside the scope of the application's business logic. For encryption, this thesis has introduced a holistic database encryption solution that addresses the limitations of traditional database encryption methods. It too coexists in harmony with the other core tenets of the database, so enterprises are not forced to choose between security and performance as they are with column encryption, for example. Lastly, row permissions, column masks, trusted contexts, and holistic database encryption have all been implemented in IBM DB2, where they are relied upon by thousands of organizations around the world to protect critical data and adhere to zero-trust security more effectively.
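    The row-permission and column-mask concepts lend themselves to a short illustration. Below is a minimal sketch assuming a hypothetical payroll.employees table with an ssn column, hypothetical MANAGER and HR roles, and an already-open DB-API connection; the DDL follows the CREATE PERMISSION and CREATE MASK statements DB2 provides for these concepts.

```python
# Minimal sketch: installing a row permission and a column mask on a
# hypothetical payroll.employees table. The table, the role names, and
# the already-open DB-API connection are assumptions for illustration.

ROW_PERMISSION = """
CREATE PERMISSION payroll.mgr_rows ON payroll.employees
  FOR ROWS WHERE VERIFY_ROLE_FOR_USER(SESSION_USER, 'MANAGER') = 1
  ENFORCED FOR ALL ACCESS
  ENABLE
"""

COLUMN_MASK = """
CREATE MASK payroll.ssn_mask ON payroll.employees
  FOR COLUMN ssn RETURN
    CASE WHEN VERIFY_ROLE_FOR_USER(SESSION_USER, 'HR') = 1
         THEN ssn
         ELSE 'XXX-XX-' || SUBSTR(ssn, 8, 4)
    END
  ENABLE
"""

ACTIVATE = """
ALTER TABLE payroll.employees
  ACTIVATE ROW ACCESS CONTROL
  ACTIVATE COLUMN ACCESS CONTROL
"""

def install_policies(conn):
    """Install the data-centric policy. Because the database enforces it
    on every access path, it cannot be bypassed the way a view-based
    policy can."""
    cur = conn.cursor()
    for stmt in (ROW_PERMISSION, COLUMN_MASK, ACTIVATE):
        cur.execute(stmt)
    conn.commit()
```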

    Attribute-Level Versioning: A Relational Mechanism for Version Storage and Retrieval

    Data analysts today have at their disposal a seemingly endless supply of data repositories, and hence datasets, from which to draw. New datasets become available daily, making the choice of which dataset to use difficult. Furthermore, traditional data analysis has been conducted using structured data repositories such as relational database management systems (RDBMS). These systems, by their nature and design, prohibit duplication in indexed collections, forcing analysts to choose one value for each of the available attributes of an item in the collection. Analysts often discover two or more datasets with information about the same entity. When combining this data and transforming it into a form usable in an RDBMS, analysts are forced to resolve the collisions and choose a single value for each duplicated attribute that contains differing values. In the absence of professional intuition, this deconfliction is the source of a considerable amount of guesswork and speculation on the analyst's part. One must consider what is lost by discarding those alternative values. Are there relationships between the conflicting datasets that have meaning? Is each dataset presenting a different and valid view of the entity, or are the alternate values erroneous? If so, which values are erroneous? Is there historical significance to the variances? The analysis of modern datasets requires specialized algorithms and storage and retrieval mechanisms to identify, deconflict, and assimilate variances of attributes for each entity encountered. These variances, or versions of attribute values, contribute meaning to the evolution and analysis of the entity and its relationships to other entities. A new, distinct storage and retrieval mechanism will enable analysts to efficiently store, analyze, and retrieve attribute versions without unnecessary complexity or additional alterations of the original or derived dataset schemas. This paper presents technologies and innovations that help data analysts discover meaning within their data while preserving all of the original data for every entity in the RDBMS.
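    To make the idea concrete, here is a minimal sketch (not the paper's mechanism) of attribute-level versioning: each attribute keeps every observed value together with its source dataset and timestamp, so conflicting values are preserved for analysis rather than deconflicted away. All names are illustrative.

```python
# Illustrative sketch: keep every conflicting attribute value as a
# version with its provenance, instead of discarding all but one.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AttributeVersion:
    value: Any
    source: str        # dataset the value came from
    observed_at: str   # ISO date when the value was recorded

@dataclass
class Entity:
    entity_id: str
    attributes: dict[str, list[AttributeVersion]] = field(default_factory=dict)

    def add_version(self, name: str, version: AttributeVersion) -> None:
        """Record a value without overwriting earlier observations."""
        self.attributes.setdefault(name, []).append(version)

    def current(self, name: str) -> Any:
        """One possible read policy: the most recently observed value."""
        versions = self.attributes.get(name, [])
        return max(versions, key=lambda v: v.observed_at).value if versions else None

# Two datasets disagree about the same entity; both values survive.
e = Entity("person-42")
e.add_version("address", AttributeVersion("12 Oak St", "dataset_a", "2004-01-10"))
e.add_version("address", AttributeVersion("9 Elm Ave", "dataset_b", "2005-06-02"))
assert e.current("address") == "9 Elm Ave"
assert len(e.attributes["address"]) == 2   # history preserved for analysis
```

    The design choice worth noting is that deconfliction becomes a read-time policy (here, most-recent-wins) rather than an irreversible write-time decision.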

    User-Optimizer Communication using Abstract Plans in Sybase ASE

    Query optimizers are error prone, due both to their nature and to the increased search space that modern query processing requires them to manage. This paper introduces the Sybase Abstract Plan (AP) language, a novel technology that brings together a set of proven techniques to mitigate mistaken optimizer decisions. The AP language is a two-way user-optimizer communication mechanism based on a physical-level relational algebra. AP expressions are used both by the optimizer to describe the plan it selected and by the user to direct the optimizer's choices. APs are not textually part of the query; they are persistent objects stored in the system catalogs. APs yield important performance gains by eliminating all optimizer errors.
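    As a rough illustration of the two-way exchange the abstract describes, the sketch below models capturing a plan expression keyed by query text and handing it back on a later compilation. The PlanStore class and the plan text shown are schematic stand-ins, not Sybase ASE syntax or internals.

```python
# Conceptual sketch: the optimizer "writes" the plan it chose, the user
# "reads" it, and a stored plan directs later compilations.
from __future__ import annotations

class PlanStore:
    """Stand-in for the system catalogs that persist abstract plans."""

    def __init__(self) -> None:
        self._plans: dict[str, str] = {}

    def capture(self, query: str, plan: str) -> None:
        # Optimizer -> user: record the plan selected for this query.
        self._plans[query] = plan

    def lookup(self, query: str) -> str | None:
        # User -> optimizer: a stored plan constrains the next compilation.
        return self._plans.get(query)

store = PlanStore()
query = "SELECT * FROM authors WHERE au_id = ?"
store.capture(query, "(i_scan au_id_idx authors)")  # schematic plan text

hint = store.lookup(query)
print(f"compiling with plan: {hint}" if hint else "full optimization")
```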

    Flexibility in Data Management

    With the ongoing expansion of information technology, new fields of application requiring data management emerge virtually every day. In our knowledge culture, increasing amounts of data and a workforce organized in more creativity-oriented ways also radically change traditional fields of application and call established assumptions about data management into question. For instance, investigative analytics and agile software development move towards a very agile and flexible handling of data. As the primary facilitators of data management, database systems have to reflect and support these developments. However, traditional database management technology, in particular relational database systems, is built on assumptions of relatively stable application domains. The need to model all data up front in a prescriptive database schema earned relational database management systems a reputation among developers for being inflexible, dated, and cumbersome to work with. Nevertheless, relational systems still dominate the database market. They are a proven, standardized, and interoperable technology, well known in IT departments with a workforce of experienced and trained developers and administrators. This thesis aims at resolving the growing contradiction between the popularity and omnipresence of relational systems in companies and their increasingly poor reputation among developers. It adapts relational database technology towards more agility and flexibility. We envision a descriptive, schema-comes-second relational database system, which is entity-oriented instead of schema-oriented; descriptive rather than prescriptive. The thesis provides four main contributions: (1) a flexible relational data model, which frees relational data management from having a prescriptive schema; (2) autonomous physical entity domains, which partition self-descriptive data according to their schema properties for better query performance; (3) a freely adjustable storage engine, which allows adapting the physical data layout to the properties of the data and of the workload; and (4) a self-managed indexing infrastructure, which autonomously collects and adapts index information in the presence of dynamic workloads and evolving schemas. The flexible relational data model is the thesis's central contribution. It describes the functional appearance of the descriptive, schema-comes-second relational database system. The other three contributions improve components in the architecture of database management systems to increase the query performance and manageability of descriptive, schema-comes-second relational database systems. We are confident that these four contributions can help pave the way to a more flexible future for relational database management technology.
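    The first contribution, the schema-comes-second data model, can be illustrated with a small sketch: rows are inserted with no up-front schema, and a descriptive schema is derived afterwards from the attributes the data actually uses. The FlexibleTable class below is a toy model of the idea under these assumptions, not the thesis's system.

```python
# Toy model of "schema-comes-second": accept any attribute set at
# insert time, then describe the schema the data actually exhibits.
from collections import defaultdict

class FlexibleTable:
    def __init__(self) -> None:
        self.rows: list[dict] = []

    def insert(self, entity: dict) -> None:
        # No prescriptive schema check: any attribute set is accepted.
        self.rows.append(entity)

    def describe(self) -> dict:
        """Derive a descriptive schema: attribute name -> observed types."""
        schema = defaultdict(set)
        for row in self.rows:
            for name, value in row.items():
                schema[name].add(type(value).__name__)
        return dict(schema)

t = FlexibleTable()
t.insert({"id": 1, "name": "Ada"})
t.insert({"id": 2, "name": "Alan", "email": "alan@example.org"})
print(t.describe())  # {'id': {'int'}, 'name': {'str'}, 'email': {'str'}}
```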

    Life Sciences Program Tasks and Bibliography

    This document includes information on all peer-reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division, during fiscal year 1995. Additionally, this inaugural edition of the Task Book includes information for FY 1994 programs. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.

    Metodolog铆a de implantaci贸n de modelos de gesti贸n de la informaci贸n dentro de los sistemas de planificaci贸n de recursos empresariales. Aplicaci贸n en la peque帽a y mediana empresa

    The Next Generation of Manufacturing Systems (SGSF) seeks to respond to the requirements of new business models, in contexts of intelligence, agility, and adaptability in a global and virtual environment. Enterprise Resource Planning (ERP) with support for product data management (PDM) and product lifecycle management (PLM) provides business management solutions based on a coherent use of information technologies for implementation in CIM (Computer-Integrated Manufacturing) systems, with a high degree of adaptability to the desired organizational structure. In general, such implementations have long been under way in large companies, while their extension to SMEs remains limited (almost nonexistent). This doctoral thesis defines and develops a new implementation methodology for the automatic generation of information in the business processes of companies whose requirements are adapted to the needs of the SGSF, within enterprise resource planning (ERP) systems, taking the influence of the human factor into account. The validity of the theoretical model behind this methodology has been verified by implementing it in an SME in the engineering sector. To establish the state of the art of this topic, a specific methodology based on the Shewhart/Deming continuous improvement cycle was designed and applied, using the bibliographic search and analysis tools available online with access to the corresponding databases.