432 research outputs found

    Architectural Principles for Database Systems on Storage-Class Memory

    Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: it is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM's scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and operates directly on it, eliminating the need for explicit I/O. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure with fine-grained, cheap micro-logging at the data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery.
Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea, as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at CPU cache-line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache-line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, the first building block required for a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data.
To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
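The missing-persistence-primitive failure scenario described in this abstract can be illustrated with a toy model (a hypothetical sketch for intuition only, not the dissertation's actual testing framework or API): stores land in a volatile cache and reach simulated SCM only when explicitly flushed, so a simulated power failure exposes a missing flush as a torn, inconsistent state after recovery.

```python
# Toy model of SCM persistence: stores go to a volatile CPU cache and
# reach persistent memory only when explicitly flushed. A simulated
# power failure drops everything still sitting in the cache, which is
# how a missing cache-line flush becomes a recovery-time bug.
# All names here are illustrative, not from the dissertation.

class ToySCM:
    def __init__(self):
        self.cache = {}   # volatile: lost on power failure
        self.scm = {}     # persistent: survives power failure

    def store(self, addr, value):
        self.cache[addr] = value          # like a CPU store: volatile

    def flush(self, addr):
        if addr in self.cache:            # like a flush + fence: persist one line
            self.scm[addr] = self.cache[addr]

    def power_failure(self):
        self.cache.clear()                # unflushed data is gone

    def read_after_recovery(self, addr):
        return self.scm.get(addr)

# Correct protocol: persist the payload BEFORE the commit flag, flushing both.
mem = ToySCM()
mem.store("payload", 42)
mem.flush("payload")                      # payload is durable first
mem.store("valid", True)
mem.flush("valid")                        # then the commit flag
mem.power_failure()
assert mem.read_after_recovery("valid") and mem.read_after_recovery("payload") == 42

# Buggy protocol: the payload flush is missing, so after a crash the
# flag says "valid" while the payload was never persisted.
buggy = ToySCM()
buggy.store("payload", 42)                # missing flush here!
buggy.store("valid", True)
buggy.flush("valid")
buggy.power_failure()
assert buggy.read_after_recovery("valid") and buggy.read_after_recovery("payload") is None
```

On real x86 hardware the flush-and-fence step corresponds to instructions such as CLWB or CLFLUSHOPT followed by SFENCE; the testing framework described above automates the search for exactly the kind of bug the second half of this sketch exhibits.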

    Annales Mathematicae et Informaticae (44.)


    A national cybersecurity management framework for developing countries

    Abstract: Please refer to full text to view abstract. D.Phil. (Computer Science)

    Contributions to the deadlock problem in multithreaded software applications observed as Resource Allocation Systems

    From the point of view of competition for serially reusable shared resources, a concurrent system composed of sequential processes is said to be deadlocked if it contains a set of processes that are indefinitely waiting for the release of resources held by members of that same set of processes. In reasonably complex or distributed systems, establishing a resource-allocation policy that is deadlock-free can be a very difficult problem to solve efficiently. In this regard, formal models, and Petri nets in particular, have established themselves as fruitful tools for abstracting the resource-allocation problem in this kind of system, in order to tackle it analytically and provide efficient methods for the correct construction or correction of these systems. In particular, the structural theory of Petri nets stands out as a powerful ally for dealing with the state-explosion problem inherent to such models. In this fertile context, a series of works has flourished advocating a design methodology oriented toward the structural study, and corresponding physical correction, of the resource-allocation problem in families of systems that are highly significant in certain application contexts, such as Flexible Manufacturing Systems. The resulting classes of Petri net models assume certain restrictions, with physical meaning in the application context for which they are intended, that considerably alleviate the complexity of the problem. This thesis attempts to bring this kind of methodological approach to the design of deadlock-free multithreaded software applications.
To that end, it is shown how those restrictions inherited from the world of Flexible Manufacturing Systems prove too severe to capture the versatility inherent in software systems with respect to the interaction of processes with shared resources. In particular, two fundamental modeling needs stand out that hinder the mere adoption of earlier approaches conceived through the prism of other domains: (1) the need to support the nesting of non-unrollable loops inside processes, and (2) the possible sharing of resources that are not available at system startup but are created or declared by a running process. As a result, a set of basic requirements is identified for defining a class of models oriented toward the study of multithreaded software systems, and a class of Petri nets, called PC2R, is presented that satisfies this list of requirements while remaining faithful to the design philosophy of earlier subclasses aimed at other application contexts. Along with the revision and integration of earlier results into the new conceptual framework, the thesis addresses the study of properties inherent to the resulting systems and their deep relationship with other types of models, the development of efficient results and algorithms for the structural analysis of liveness in the new class, and the revision and proposal of methods for resolving deadlock problems adapted to the physical particularities of the application domain.
Likewise, the computational complexity of certain aspects of the resource-allocation problem in the new context is studied, as is the transfer of the aforementioned results to the domain of multithreaded software engineering, where the new class of nets makes it possible to tackle problems that are intractable under the theoretical framework and tools provided for previously exploited subclasses.
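The deadlock condition underlying this thesis (a set of processes each waiting indefinitely on resources held by other members of the same set) can be checked, in the simplest single-unit-resource case, by searching for a cycle in the wait-for graph. The following is a minimal illustrative sketch of that classical view; the thesis itself works with Petri net structural analysis, not this graph algorithm.

```python
# Deadlock detection via cycle search in a wait-for graph.
# An edge p -> q means process p is waiting for a resource held by q.
# A cycle is a set of processes indefinitely waiting on each other.
# (Simplified single-unit-resource view, not the thesis's Petri net machinery.)

def find_deadlock(wait_for):
    """Return a list of processes forming a cycle, or None if deadlock-free."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}
    stack = []

    def dfs(p):
        color[p] = GRAY
        stack.append(p)
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:       # back edge: cycle found
                return stack[stack.index(q):]
            if color.get(q, WHITE) == WHITE:
                color.setdefault(q, WHITE)
                cycle = dfs(q)
                if cycle:
                    return cycle
        color[p] = BLACK
        stack.pop()
        return None

    for p in list(wait_for):
        if color[p] == WHITE:
            cycle = dfs(p)
            if cycle:
                return cycle
    return None

# T1 waits on T2's resource, T2 on T3's, T3 on T1's: circular wait, deadlock.
assert find_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}) == ["T1", "T2", "T3"]
# A simple chain with no cycle is deadlock-free.
assert find_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": []}) is None
```

This cycle check only detects a deadlock after it has occurred; the structural Petri net techniques the thesis develops aim instead at proving liveness, i.e. ruling such cycles out by construction.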

    Revelation of Yin-Yang Balance in Microbial Cell Factories by Data Mining, Flux Modeling, and Metabolic Engineering

    The long-held assumption of never-ending rapid growth in biotechnology, and especially in synthetic biology, has recently been questioned due to a lack of substantial return on investment. One of the main reasons for failures in synthetic biology and metabolic engineering is the metabolic burden that results in resource losses. Metabolic burden is defined as the portion of a host cell's resources, either energy molecules (e.g., NADH, NADPH, and ATP) or carbon building blocks (e.g., amino acids), that is used to maintain the engineered components (e.g., pathways). As a result, the effectiveness of synthetic biology tools depends heavily on the cell's capacity to bear the metabolic burden. Although genetic modifications can effectively engineer cells and redirect carbon fluxes toward diverse products, an insufficient ATP supply limits the cell's ability to support diverse microbial activities, including product synthesis. Here, I employ an ancient Chinese philosophy (Yin-Yang) to describe two contrary forces that are interconnected and interdependent, where Yin represents energy metabolism in the form of ATP, and Yang represents carbon metabolism. To decipher the Yin-Yang balance and its implications for microbial cell factories, this dissertation applied metabolic engineering, flux analysis, and data mining tools to reveal cell physiological responses under different genetic and environmental conditions. First, a combined approach of FBA and 13C-MFA was employed to investigate several engineered isobutanol-producing strains and examine their carbon and energy metabolism. The results indicated that isobutanol overproduction strongly competed for biomass building blocks, and thus the addition of nutrients (yeast extract) to support cell growth is essential for a high isobutanol yield. Based on the analysis of isobutanol production, the 'Yin-Yang' theory has been proposed to illustrate the importance of carbon and energy balance in engineered strains.
The effects of metabolic burden and respiration efficiency (P/O ratio) on biofuel production were determined by FBA simulation. The discovery of an 'energy cliff' explained failures in bioprocess scale-ups. The simulation also predicted that fatty acid production is more sensitive to changes in the P/O ratio than alcohol production. Based on that prediction, fatty-acid-producing strains were engineered with the insertion of Vitreoscilla hemoglobin (VHb) to overcome the intracellular energy limitation by improving oxygen uptake and respiration efficiency. The results confirmed our hypothesis and revealed different levels of trade-off between the burden and the benefit of the various introduced genetic components. On the computational side, a series of tools has been developed to accelerate the application of fluxomics research. MicrobesFlux has been rebuilt, upgraded, and moved to a commercial server. A platform for fluxomics studies, as well as an open-source 13C-MFA tool (WUFlux), has been developed. Further, a computational platform has been developed that integrates machine learning, logic programming, and constraint programming; it gives fast predictions of microbial central metabolism with decent accuracy. Lastly, a framework has been built that integrates Big Data technology and text mining to interpret concepts and technology trends based on a literature survey. Case studies have been performed, and informative results have been obtained through this Big Data framework within five minutes. In summary, 13C-MFA and flux balance analysis are key tools for quantifying cell energy and carbon metabolism (i.e., the Yin-Yang balance), leading to the rational design of robust, high-producing microbial cell factories. Developing advanced computational tools will facilitate the application of fluxomics research and literature analysis.
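The sensitivity to respiration efficiency described in this abstract can be illustrated with textbook stoichiometry (a back-of-the-envelope sketch, not the dissertation's FBA model): most of the ATP from fully respired glucose comes from oxidative phosphorylation, so the total yield depends almost linearly on the P/O ratio, while the substrate-level contribution is a small fixed floor.

```python
# Theoretical ATP yield per glucose under complete aerobic respiration,
# as a function of the P/O ratio (ATP made per electron pair delivered
# to the respiratory chain). Textbook stoichiometry:
#   substrate-level ATP: 4   (2 from glycolysis + 2 TCA-cycle GTP)
#   reduced cofactors:   10 NADH + 2 FADH2
# Simplifying assumption (flagged): FADH2, which enters the chain
# downstream of complex I, is credited 0.6x the NADH P/O value.

def atp_per_glucose(po_nadh):
    substrate_level = 4.0
    nadh, fadh2 = 10.0, 2.0
    po_fadh2 = 0.6 * po_nadh
    return substrate_level + nadh * po_nadh + fadh2 * po_fadh2

for po in (1.0, 1.5, 2.0, 2.5):
    print(f"P/O = {po:.1f}  ->  {atp_per_glucose(po):.1f} ATP/glucose")
# The yield roughly doubles across this P/O range, which is why
# energy-expensive products (e.g., fatty acids, which also consume
# NADPH) are predicted to be more sensitive to respiration efficiency
# than alcohols.
```

This is exactly the kind of parameter sweep an FBA model performs at genome scale; the toy function only captures the central trend that motivates engineering respiration (e.g., via VHb) rather than carbon flux alone.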

    Connected Attribute Filtering Based on Contour Smoothness


    Navigating agency problems in corporate law: A Comparative study through the lens of law and economics

    This interdisciplinary research explores agency problems within corporate law through the lens of comparative law and law and economics. By combining these distinct yet complementary perspectives, this complex entity can be more easily understood. The approach departs from traditional company law strategies and instead unites an economic evaluation of legal norms with a comparative examination of various jurisdictions, thus offering a novel view on how to better solve agency problems. To discern and combine effective legal strategies from diverse jurisdictions, this research also capitalises on comparative law methodology. Building on the strengths of different legal cultures, it creates a framework grounded in law and economics, a tertium comparationis, that can be used to assess and address the second and third agency problems. After analysing the second agency problem, this research proposes the integration of shareholder costs within transaction cost theory. It recommends enhancing cost-efficiency in corporate governance, facilitated by bolstering fiduciary duties, promoting transparency, and endorsing proactive dispute resolution. A proposal emerging from this research introduces a mechanism for prosocial investors to voice their interests in the company, safeguard economic rights, and empower minority shareholders. In addition to the second agency problem, the third agency problem is explored, redefining a company's purpose to mirror the public interest and balance socio-economic prosperity with environmental sustainability. The research introduces stakeholder costs into transaction cost theory, underlining the long-term value gained by incorporating stakeholder rights as monitoring costs. Here, emphasis is placed on the importance of aligning businesses with the provisional nine planetary boundaries and establishing a robust social foundation, as inspired by the Doughnut Economics model.
The research suggests regulatory mechanisms to categorise businesses based on their environmental impact and advocates for all companies, irrespective of size or sector, to adhere to high sustainability standards. In conclusion, the research combines comparative law methodology with law and economics, and proposes legal strategies that address agency problems, promote efficiency, and advocate for environmental sustainability. It exemplifies the potential of this combined approach to reshape the corporate landscape to better reflect the public interest while upholding the principles of the Doughnut Economics model.

    Unmanned Aircraft Systems in the Cyber Domain

    Unmanned Aircraft Systems are an integral part of the US national critical infrastructure. The authors have endeavored to bring a breadth and quality of information to the reader that is unparalleled in the unclassified sphere. This textbook will fully immerse and engage the reader / student in the cyber-security considerations of this rapidly emerging technology that we know as unmanned aircraft systems (UAS). The first edition covered National Airspace (NAS) policy issues, information security (INFOSEC), UAS vulnerabilities in key systems (Sense and Avoid / SCADA), navigation and collision avoidance systems, stealth design, intelligence, surveillance and reconnaissance (ISR) platforms; weapons systems security; electronic warfare considerations; data-links, jamming, operational vulnerabilities and still-emerging political scenarios that affect US military / commercial decisions. This second edition discusses state-of-the-art technology issues facing US UAS designers. It focuses on counter unmanned aircraft systems (C-UAS), especially research designed to mitigate and terminate threats by SWARMS. Topics include high-altitude platforms (HAPS) for wireless communications; C-UAS and large-scale threats; acoustic countermeasures against SWARMS and building an Identify Friend or Foe (IFF) acoustic library; updates to the legal / regulatory landscape; UAS proliferation along the Chinese New Silk Road Sea / Land routes; and ethics in this new age of autonomous systems and artificial intelligence (AI).