
    TANDEM: taming failures in next-generation datacenters with emerging memory

    The explosive growth of online services, reaching unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery, minimizing service interruptions. To be recoverable, applications must take additional measures during failure-free execution to maintain a recoverable state of data and computation logic. These precautionary measures, however, have severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice. Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. Incorporating these technologies into datacenter architecture nevertheless presents significant challenges: their architectural attributes, which differ significantly from those of traditional memory devices, introduce new semantic challenges for implementing recovery, complicating correctness and programmability. Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges.

    When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability). This thesis addresses these challenges through the following approaches: (a) defining precise consistency models that formally specify correct end-to-end semantics in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery.

    We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data rarely persists in any specific order, jeopardizing recovery and correctness. Recovery therefore needs primitives that explicitly control the order of updates to NVM, known as persistency models. We outline the precise specification of a novel persistency model, Release Persistency (RP), which provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitectural mechanism, lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal performance overhead.

    We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers and offers a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) does not work with RPC-style protocols, mandating one-sided transaction protocols. Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes). Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in a DKVS. Our experimental implementation demonstrates that Pandora achieves fast recovery and high availability while causing minimal disruption to services. Finally, we introduce a novel targeted litmus-testing framework, DART, to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, eliminating any intervention from programmers.
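    The ordering hazard that motivates persistency models can be made concrete with a toy simulation. The sketch below is illustrative only (the NVMSim, store, and persist_barrier names are hypothetical; this is not the thesis's LRP mechanism): it shows how unordered write-back can leave a pointer durable while the data it references is not.

        import random

        # Toy simulation (hypothetical names, not the thesis's LRP mechanism):
        # cached writes reach NVM in arbitrary order unless an ordering
        # primitive intervenes.
        class NVMSim:
            def __init__(self):
                self.nvm = {}     # durable contents
                self.dirty = {}   # cached writes not yet persisted

            def store(self, addr, val):
                self.dirty[addr] = val

            def persist_barrier(self):
                # Drain all pending writes before any later write may persist,
                # analogous to the ordering a persistency model prescribes.
                self.nvm.update(self.dirty)
                self.dirty.clear()

            def crash(self):
                # On failure, an arbitrary subset of dirty lines happens to
                # have been written back already.
                self.nvm.update({a: v for a, v in self.dirty.items()
                                 if random.random() < 0.5})
                self.dirty.clear()
                return self.nvm

        m = NVMSim()
        m.store("node.payload", 42)    # (1) initialize a new node
        m.store("list.head", "node")   # (2) publish it
        print(m.crash())
        # Recovery may observe list.head == "node" without node.payload: a
        # torn state. A persist_barrier() between (1) and (2) rules this out.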

    2023-2024 Catalog

    The 2023-2024 Governors State University Undergraduate and Graduate Catalog is a comprehensive listing of current information regarding: Degree Requirements, Course Offerings, and Undergraduate and Graduate Rules and Regulations.

    Cybersecurity: Past, Present and Future

    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of the new cyberspace is called cybersecurity. Cyberspace has given rise to new technologies and environments such as cloud computing, smart devices, the IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented intelligence and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study with great potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Efficient Security Protocols for Constrained Devices

    During the last decades, more and more devices have been connected to the Internet. Today, there are more devices connected to the Internet than humans. An increasingly common type of device is the cyber-physical device: a device that interacts with its environment. Sensors that measure their environment and actuators that alter the physical environment are both cyber-physical devices. Devices connected to the Internet risk being compromised by threat actors such as hackers. Cyber-physical devices have become a preferred target for threat actors, since an intrusion that disrupts or destroys a cyber-physical system can have severe consequences. Cyber attacks against power and energy infrastructure have caused significant disruptions in recent years. Many cyber-physical devices are categorized as constrained devices. A constrained device is characterized by one or more of the following limitations: limited memory, a less powerful CPU, or a limited communication interface. Many constrained devices are also powered by a battery or energy harvesting, which limits the available energy budget. Devices must be efficient to make the most of the limited resources. Mitigating cyber attacks is a complex task, requiring technical and organizational measures. Constrained cyber-physical devices require efficient security mechanisms to avoid overloading the system's limited resources.

    In this thesis, we present research on efficient security protocols for constrained cyber-physical devices. We have implemented and evaluated two state-of-the-art protocols, OSCORE and Group OSCORE. These protocols allow end-to-end protection of CoAP messages in the presence of untrusted proxies. Next, we have performed a formal protocol verification of WirelessHART, a protocol for communications in an industrial control systems setting. In our work, we present a novel attack against the protocol. We have developed a novel architecture for industrial control systems utilizing the Digital Twin concept. Using a state synchronization protocol, we propagate state changes between the digital and physical twins. The Digital Twin can then monitor and manage devices. We have also designed a protocol for secure ownership transfer of constrained wireless devices. Our protocol allows the owner of a wireless sensor network to transfer control of the devices to a new owner. With a formal protocol verification, we can guarantee the security of both the old and new owners. Lastly, we have developed an efficient Private Stream Aggregation (PSA) protocol. PSA allows devices to send encrypted measurements to an aggregator. The aggregator can combine the encrypted measurements and calculate the decrypted sum of the measurements. No party learns an individual measurement except the device that generated it.
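    The core PSA property, that the aggregator learns only the sum, can be illustrated with a toy additive-masking scheme. The sketch below is not the thesis's protocol, merely a minimal example of the idea under the assumption of a trusted dealer handing out masks that cancel.

        import secrets

        # Toy Private Stream Aggregation sketch (illustrative only, not the
        # thesis protocol): devices add masks that sum to zero, so the
        # aggregator learns the total but no individual measurement.
        MOD = 2**64

        def make_masks(n):
            # Trusted dealer (an assumption) issues n masks summing to 0 mod MOD.
            masks = [secrets.randbelow(MOD) for _ in range(n - 1)]
            masks.append((-sum(masks)) % MOD)
            return masks

        measurements = [17, 4, 25]          # private device readings
        masks = make_masks(len(measurements))
        ciphertexts = [(m + k) % MOD for m, k in zip(measurements, masks)]

        # Each ciphertext alone is uniformly random; summing them cancels
        # the masks and reveals only the aggregate.
        assert sum(ciphertexts) % MOD == sum(measurements)
        print(sum(ciphertexts) % MOD)       # 46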

    Digital Twins and Blockchain for IoT Management

    We live in a data-driven world powered by sensors collecting data anywhere, at any time. This advancement is possible thanks to the Internet of Things (IoT). IoT embeds common physical objects with heterogeneous sensing, actuating, and communication capabilities to collect data from the environment and from people. These objects, generally known as things, exchange data with other things, entities, computational processes, and systems over the internet. Consequently, a web of devices and computational processes emerges, involving billions of entities collecting, processing, and sharing data. As a result, we now have an internet of entities/things that process and produce data, an ever-growing volume that can easily exceed petabytes. There is therefore a need for novel management approaches to handle this previously unheard-of number of IoT devices, processes, and data streams. This dissertation focuses on solutions for IoT management using decentralized technologies. A massive number of IoT devices interact with software and hardware components and are owned by different people, so decentralized management is needed. Blockchain is a capable and promising distributed ledger technology with features to support decentralized systems with large numbers of devices. People should not have to interact with these devices or data streams directly, so access to these components must be abstracted. Digital twins are software artifacts that can abstract an object, a process, or a system to enable communication between the physical and digital worlds. Fog/edge computing is an alternative to the cloud that provides services with lower latency. This research uses blockchain technology, digital twins, and fog/edge computing for IoT management. The systems developed in this dissertation enable configuration, self-management, zero-trust management, and data-streaming view provisioning from a fog/edge layer. In this way, the massive number of things and the data they produce are managed through services distributed across nearby nodes, providing access and configuration security and privacy protection.
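    As a rough illustration of the twin abstraction (a hypothetical DigitalTwin class, not the dissertation's implementation), a twin can mirror the device's last reported state and track desired configuration, so management code interacts with the twin rather than the device:

        from dataclasses import dataclass, field

        # Minimal digital-twin sketch (hypothetical names): the twin holds
        # the last reported device state and the desired configuration.
        @dataclass
        class DigitalTwin:
            device_id: str
            reported: dict = field(default_factory=dict)  # last state from device
            desired: dict = field(default_factory=dict)   # target configuration

            def on_telemetry(self, update: dict):
                self.reported.update(update)

            def pending_changes(self) -> dict:
                # Configuration entries the device has not yet applied.
                return {k: v for k, v in self.desired.items()
                        if self.reported.get(k) != v}

        twin = DigitalTwin("sensor-42")
        twin.desired["sample_rate_hz"] = 10
        twin.on_telemetry({"sample_rate_hz": 1, "battery": 0.93})
        print(twin.pending_changes())  # {'sample_rate_hz': 10}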

    Multi-site multi-synchronous support for SQLite augmented for Local-First Software

    At one end of the spectrum, locally installed applications allow users to work offline and at their convenience but do not support collaboration among multiple users or devices. At the other end, cloud-based applications offer collaboration and data synchronization but require an uninterrupted internet connection. To address these limitations, Local-First software was introduced to merge the advantages of both approaches: users can work with applications offline and synchronize data with peers when they are back online. This approach offers an appealing solution for users requiring both offline capabilities and collaboration [1]. Conflict-free Replicated Data Types (CRDTs) have emerged as a promising technology for implementing Local-First software, since they address the challenges of replica convergence and multi-synchronous data access [1]. They are abstract data types with well-defined interfaces that can be replicated across multiple sites and allow a replica to be modified without coordination with other replicas [2]. The replicas converge to the same state once they have received and applied the same set of updates. Integrated with relational databases (RDBs), CRDTs can facilitate multi-synchronous data access and extend RDBs with Local-First characteristics; this integration is known as Conflict-free Replicated Relations (CRRs) [3]. SynQLite is an implementation that augments SQLite databases with CRR to turn them into Local-First software [4]. It aims to extend existing SQLite-based applications with multi-synchronous access, enabling offline usage and seamless collaboration across multiple users and devices. However, the existing implementation has a critical limitation: it cannot manage state when multiple replicas or sites frequently connect and disconnect, a vital requirement for real-world applications and a challenge that must be addressed for SynQLite to be more widely adopted. The goal of this Master's thesis is to extend the existing version of SynQLite with multi-site multi-synchronous support. This was achieved by implementing new features and resolving existing bugs on top of the existing codebase. Our implementation was rigorously evaluated through various tests and experiments, and the results demonstrate that we successfully achieved our goal.
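    The convergence property that CRRs build on can be seen in the smallest state-based CRDT, a grow-only counter. The sketch below is textbook CRDT material, not SynQLite code:

        # Grow-only counter CRDT: each site increments only its own slot;
        # merge takes the pointwise maximum, which is commutative,
        # associative, and idempotent, so replicas converge regardless of
        # message order or repetition.
        class GCounter:
            def __init__(self, site_id, n_sites):
                self.site = site_id
                self.counts = [0] * n_sites

            def increment(self):
                self.counts[self.site] += 1

            def merge(self, other):
                self.counts = [max(a, b)
                               for a, b in zip(self.counts, other.counts)]

            def value(self):
                return sum(self.counts)

        a, b = GCounter(0, 2), GCounter(1, 2)
        a.increment(); b.increment(); b.increment()  # concurrent, offline edits
        a.merge(b); b.merge(a)                       # exchange states later
        assert a.value() == b.value() == 3           # both sites converge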

    Modular Collaborative Program Analysis

    With our world increasingly relying on computers, it is important to ensure the quality, correctness, security, and performance of software systems. Static analysis, which computes properties of computer programs without executing them, has been an important method to achieve this for decades. However, static analysis faces major challenges from increasingly complex programming languages and software systems and from growing, sometimes conflicting demands for soundness, precision, and scalability. To cope with these challenges, static analyses for complex problems must be built from small, independent, yet collaborating modules that can be developed in isolation and combined in a plug-and-play manner. So far, no generic architecture exists to implement and combine a broad range of dissimilar static analyses. The goal of this thesis is thus to design such an architecture and implement it as a generic framework for developing modular, collaborative static analyses. We use several diverse case-study analyses from which we systematically derive requirements to guide the design of the framework. Based on this, we propose a blackboard-architecture style of analysis collaboration, which we implement in the OPAL framework. We also develop a formal model of our architecture's core concepts and show how it enables freely composing analyses while retaining their soundness guarantees. We showcase and evaluate our architecture using the case-study analyses, each of which shows how important and complex problems of static analysis can be addressed in a modular, collaborative implementation style. In particular, we show how a modular architecture for the construction of call graphs ensures consistent soundness across different algorithms. We show how modular analyses of different aspects of immutability mutually benefit each other. Finally, we show how the analysis of method purity can benefit from collaborating with other complex analyses and from exchanging analysis implementations that exhibit different characteristics. Each of these case studies improves over the respective state of the art in terms of soundness, precision, and/or scalability, and shows how our architecture enables experimenting with and fine-tuning trade-offs between these qualities.
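    A minimal sketch of blackboard-style collaboration (hypothetical names; not OPAL's actual API) shows the pattern: independent analyses read and refine facts on a shared board, and a scheduler re-runs them until a fixpoint is reached.

        # Blackboard fixpoint sketch: each analysis maps the current board
        # to refined {key: value} facts; iteration stops when nothing changes.
        def run_to_fixpoint(analyses, board):
            changed = True
            while changed:
                changed = False
                for analysis in analyses:
                    for key, value in analysis(board).items():
                        if board.get(key) != value:
                            board[key] = value
                            changed = True
            return board

        # Two tiny collaborating "analyses" over a call graph {caller: callees}.
        CALLS = {"main": ["f"], "f": ["g"], "g": []}

        def reachable(board):
            # Propagate reachability from main, one hop per iteration.
            out = {("reachable", "main"): True}
            for caller, callees in CALLS.items():
                if board.get(("reachable", caller)):
                    for c in callees:
                        out[("reachable", c)] = True
            return out

        def purity(board):
            # Classify leaf methods as pure, but only once known reachable,
            # i.e., this analysis consumes the other analysis's results.
            return {("pure", m): True
                    for m, callees in CALLS.items()
                    if not callees and board.get(("reachable", m))}

        print(run_to_fixpoint([reachable, purity], {}))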

    D4.2 Intelligent D-Band wireless systems and networks initial designs

    This deliverable presents the results of the ARIADNE project's Task 4.2: Machine Learning based network intelligence. It describes work conducted on various aspects of network management to deliver system-level, qualitative solutions that leverage diverse machine learning techniques. The chapters present system-level, simulation, and algorithmic models based on multi-agent reinforcement learning, deep reinforcement learning, learning automata for complex event forecasting, a system-level model for proactive handovers and resource allocation, model-driven deep learning-based channel estimation and feedback, as well as strategies for the deployment of machine learning based solutions. In short, D4.2 provides results on promising AI- and ML-based methods, along with their limitations and potential, as investigated in the ARIADNE project.
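    As a generic illustration of reinforcement-learning-based network decisions (a toy tabular Q-learning example with synthetic rewards, not one of the deliverable's models), an agent can learn handover choices from observed link quality:

        import random

        # Tabular Q-learning over two states (user near/far) and two links.
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
        q = {(s, a): 0.0 for s in ("near", "far") for a in (0, 1)}

        def link_quality(state, action):
            # Hypothetical reward: link 0 is better when near, link 1 when far.
            base = 1.0 if (state == "near") == (action == 0) else 0.2
            return base + random.uniform(-0.05, 0.05)

        state = "near"
        for _ in range(2000):
            action = (random.choice((0, 1)) if random.random() < EPS
                      else max((0, 1), key=lambda a: q[(state, a)]))
            reward = link_quality(state, action)
            next_state = random.choice(("near", "far"))   # user mobility
            best_next = max(q[(next_state, a)] for a in (0, 1))
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                           - q[(state, action)])
            state = next_state

        # After training, link 0 dominates in "near" and link 1 in "far".
        print({k: round(v, 2) for k, v in q.items()})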

    Development of traceability solution for furniture components

    Dual-degree Master's thesis with UTFPR - Universidade Tecnológica Federal do Paraná. In the contemporary context, characterized by intensified global competition and the constant evolution of the globalization landscape, it has become imperative for industries, including Small and Medium Enterprises (SMEs), to improve their operational processes, often through digital technological adaptation. The present study falls within the scope of the project named "Wood Work 4.0," which aims to bring innovation to the wood furniture manufacturing industry through process optimization and the adoption of digital technologies. The project received funding from the European Union Development Fund, in collaboration with the North 2020 Regional Program, and was carried out at the Carpintaria Mofreita company, located in Macedo de Cavaleiros, Portugal. This study introduces a software architecture that supports the traceability of projects in the wood furniture industry and, at the same time, employs a system to identify and manage material leftovers, aiming for more efficient waste management. The architecture integrates the Fiware platform, which specializes in Internet of Things (IoT) systems, with an Application Programming Interface (API) created specifically to manage information about users, projects, and associated media files. The leftover-identification system employs image processing techniques to extract the geometric characteristics of the materials, and these data are integrated into the company's database. In this way, it was possible to develop an architecture that allows not only capturing project information but also managing it effectively. For leftover identification, the system was able to establish, with a satisfactory degree of accuracy, the dimensions of the materials, enabling these data to be inserted into the company's database for resource management and optimization.
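    A minimal sketch of the dimension-extraction step (illustrative OpenCV 4.x contour analysis on a synthetic frame, with an assumed pixels-per-millimetre calibration constant; not the project's actual pipeline):

        import cv2
        import numpy as np

        PX_PER_MM = 2.0  # hypothetical camera-calibration constant

        # Synthetic stand-in for a segmented camera frame: one white panel.
        frame = np.zeros((480, 640), dtype=np.uint8)
        cv2.rectangle(frame, (100, 150), (400, 330), 255, thickness=-1)

        # Find the largest blob and fit a rotation-tolerant bounding box.
        contours, _ = cv2.findContours(frame, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        panel = max(contours, key=cv2.contourArea)
        (_, _), (w_px, h_px), _ = cv2.minAreaRect(panel)

        # Convert pixel extents to physical dimensions via the calibration.
        width_mm, height_mm = sorted((w_px / PX_PER_MM, h_px / PX_PER_MM))
        print(f"leftover: {width_mm:.0f} mm x {height_mm:.0f} mm")
        # These dimensions would then be stored in the company database.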