Self-tuning Schedulers for Legacy Real-Time Applications
We present an approach for the adaptive scheduling of soft real-time legacy applications, for which no timing information is exposed to the system. Our strategy combines two techniques: 1) a real-time monitor that observes the sequence of events generated by the application to infer its activation period, and 2) a feedback mechanism that adapts the scheduling parameters to ensure timely execution of the application. Through a thorough experimental evaluation of an implementation of our approach, we demonstrate its performance and efficiency.
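The two techniques above can be illustrated with a minimal sketch: a period estimator over observed event timestamps, and a simple proportional feedback law that moves a CPU-time budget toward the observed demand. Function names, the margin/gain constants, and the event trace are illustrative assumptions, not the paper's actual controller.

```python
from statistics import median

def infer_period(timestamps):
    """Estimate the activation period as the median inter-arrival time
    of the events observed by the monitor."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return median(gaps)

def adapt_budget(budget, observed_runtime, period, margin=1.1, gain=0.5):
    """Feedback step: move the CPU-time budget toward the observed
    runtime plus a safety margin, clamped to the task's period."""
    target = min(observed_runtime * margin, period)
    return budget + gain * (target - budget)

# Hypothetical event trace: activations roughly every 33 ms (a 30 Hz task)
events = [0.000, 0.033, 0.067, 0.100, 0.133, 0.166]
period = infer_period(events)
budget = 0.005
for runtime in [0.010, 0.012, 0.011]:   # measured execution times
    budget = adapt_budget(budget, runtime, period)
```

After a few iterations the budget settles near the observed demand, without the application ever declaring its timing parameters.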
Asynchronous replication of metadata across multi-master servers in distributed data storage systems
In recent years, scientific applications have become increasingly data intensive. The growing size of the data they generate necessitates collaboration and data sharing among the nation's education and research institutions. To address this, distributed storage systems spanning multiple institutions over wide area networks have been developed. One important feature of distributed storage systems is a global unified namespace across all participating institutions, which enables easy data sharing without knowledge of the actual physical location of the data. This feature depends on the "location metadata" of all data sets in the system being available to all participating institutions, which introduces new challenges. In this thesis, we study different metadata server layouts in terms of high availability, scalability, and performance. A central metadata server is a single point of failure, leading to low availability; ensuring high availability requires replication of metadata servers. A synchronously replicated layout, however, introduces synchronization overhead that degrades the performance of data operations. We propose an asynchronously replicated multi-master metadata server layout that ensures high availability and scalability while providing better performance. We discuss the implications of asynchronously replicated multi-master metadata servers for metadata consistency and conflict resolution. Further, we design and implement our own asynchronous multi-master replication tool, deploy it in PetaShare, a state-wide distributed data storage system, and compare the performance of all three metadata server layouts: a central metadata server, synchronously replicated multi-master metadata servers, and asynchronously replicated multi-master metadata servers.
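The core idea of asynchronous multi-master replication with conflict resolution can be sketched as follows. Each master accepts writes locally, logs them, and ships the log to peers later; concurrent conflicting writes are resolved here with last-writer-wins on a (timestamp, server-id) pair. The class, the resolution rule, and the paths are illustrative assumptions, not the thesis's actual tool.

```python
class MetadataServer:
    """Toy multi-master metadata server with asynchronous replication.
    Conflicts are resolved last-writer-wins on (timestamp, server id);
    the real system's resolution policy may differ."""

    def __init__(self, server_id):
        self.server_id = server_id
        self.store = {}   # logical path -> (physical location, ts, origin)
        self.log = []     # local updates not yet shipped to peers

    def put(self, path, location, ts):
        entry = (location, ts, self.server_id)
        self.store[path] = entry
        self.log.append((path, entry))     # accepted locally, no sync wait

    def replicate_to(self, peer):
        """Asynchronously ship the buffered update log to a peer."""
        for path, entry in self.log:
            peer.apply_remote(path, entry)
        self.log.clear()

    def apply_remote(self, path, entry):
        current = self.store.get(path)
        # last-writer-wins: keep the entry with the newer (ts, origin)
        if current is None or (entry[1], entry[2]) > (current[1], current[2]):
            self.store[path] = entry

a, b = MetadataServer("A"), MetadataServer("B")
a.put("/data/run1", "siteA:/disk1", ts=1)
b.put("/data/run1", "siteB:/disk7", ts=2)   # concurrent conflicting write
a.replicate_to(b)
b.replicate_to(a)
# both replicas converge to the later write
```

Because writes commit locally before replication, data operations avoid the synchronization overhead of the synchronous layout, at the cost of a window of divergence that the conflict-resolution rule must close.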
A Decentralized Dynamic PKI based on Blockchain
The central role of the certificate authority (CA) in traditional public key infrastructure (PKI) makes it fragile and prone to compromise and operational failure. Maintaining CAs and revocation lists is demanding, especially in loosely connected, large systems. Log-based PKIs have been proposed as a remedy, but they do not solve the problem effectively. We provide a general model and a solution for a decentralized, dynamic PKI based on a blockchain and a web-of-trust model, in which the traditional CA and digital certificates are removed and everything is instead registered on the blockchain. Registration, revocation, and updates of public keys are based on a consensus mechanism among a certain number of entities that are already part of the system. Any node in the system can act as an auditor and initiate the revocation procedure once it detects malicious activity. Revocation lists are no longer required, as any node can efficiently verify public keys through witnesses.
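The consensus-gated registration and revocation described above can be sketched minimally: an operation on the key registry takes effect only once a threshold of existing, non-revoked members endorses it. The class name, the flat endorsement rule, and the identities are illustrative assumptions; the paper's actual on-chain mechanism is richer.

```python
class KeyRegistry:
    """Sketch of a CA-less key registry: register/revoke operations are
    applied only after `threshold` live members endorse them (a stand-in
    for the blockchain consensus among existing entities)."""

    def __init__(self, genesis_members, threshold=2):
        self.keys = dict(genesis_members)   # identity -> public key
        self.revoked = set()
        self.threshold = threshold
        self.pending = {}                   # (op, identity, key) -> endorsers

    def endorse(self, endorser, op, identity, key=None):
        if endorser not in self.keys or endorser in self.revoked:
            return False                    # only live members may vote
        votes = self.pending.setdefault((op, identity, key), set())
        votes.add(endorser)
        if len(votes) >= self.threshold:
            if op == "register":
                self.keys[identity] = key
            elif op == "revoke":
                self.revoked.add(identity)
            del self.pending[(op, identity, key)]
            return True
        return False

genesis = {"alice": "pkA", "bob": "pkB", "carol": "pkC"}
reg = KeyRegistry(genesis, threshold=2)
reg.endorse("alice", "register", "dave", "pkD")   # 1 of 2 endorsements
reg.endorse("bob", "register", "dave", "pkD")     # threshold reached
reg.endorse("carol", "revoke", "dave")            # any auditor may initiate
reg.endorse("alice", "revoke", "dave")            # dave's key is now revoked
```

Verification then needs no revocation list: a relying party checks the registry state (with witnesses, in the full system) rather than consulting a CA.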
Mysticeti: Low-Latency DAG Consensus with Fast Commit Path
We introduce Mysticeti-C, a Byzantine consensus protocol with low latency and high resource efficiency. It leverages a DAG based on threshold clocks and incorporates innovations in pipelining and multiple leaders to reduce latency in the steady state and under crash failures. Mysticeti-FPC incorporates a fast commit path with even lower latency. We prove the safety and liveness of the protocols in a Byzantine context. We evaluate Mysticeti and compare it with state-of-the-art consensus and fast-path protocols to demonstrate its low latency and resource efficiency, as well as its more graceful degradation under crash failures. Mysticeti is the first Byzantine protocol to achieve a WAN latency of 0.5 s for consensus commit at a throughput of over 50k TPS, matching the state of the art.
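The threshold-clock rule underlying the DAG can be sketched simply: a validator advances its round only after receiving blocks from a Byzantine quorum (2f+1 of n = 3f+1) of validators in the current round. This is a simplified illustration of the rule, not Mysticeti's actual implementation; class and method names are assumptions.

```python
def quorum(n):
    """Byzantine quorum: 2f+1 out of n = 3f+1 validators."""
    f = (n - 1) // 3
    return 2 * f + 1

class ThresholdClock:
    """A validator may advance to round r+1 only after seeing blocks
    from a quorum of distinct authors in round r -- the threshold-clock
    rule that paces the DAG (details simplified)."""

    def __init__(self, n_validators):
        self.n = n_validators
        self.round = 0
        self.seen = set()   # distinct authors seen in the current round

    def on_block(self, author, rnd):
        if rnd == self.round:
            self.seen.add(author)
            if len(self.seen) >= quorum(self.n):
                self.round += 1    # threshold met: tick the clock
                self.seen = set()
        return self.round

clock = ThresholdClock(4)          # n = 4, so f = 1 and quorum = 3
clock.on_block("v0", 0)
clock.on_block("v1", 0)
clock.on_block("v2", 0)            # third distinct author: advance to round 1
```

Because rounds advance as soon as a quorum is seen rather than on a timer, the clock tolerates f crashed validators while keeping steady-state latency low.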
The Easiest Way of Turning your Relational Database into a Blockchain -- and the Cost of Doing So
Blockchain systems essentially consist of two levels: the network level is responsible for distributing an ordered stream of transactions to all nodes of the network in exactly the same way, even in the presence of a certain number of malicious parties (Byzantine fault tolerance). At the node level, each node then receives this ordered stream of transactions and executes it within some sort of transaction processing system, typically to alter some kind of state. This clear separation into two levels, together with drastically different application requirements, has led to the materialization of the network level in the form of so-called blockchain frameworks. While providing all the "blockchain features", these frameworks leave the node-level backend flexible, or even leave it to be implemented according to the specific needs of the application.

In this paper, we present how to integrate a highly versatile transaction processing system, namely a relational DBMS, into such a blockchain framework. As the framework, we use the popular Tendermint Core, now part of the Ignite/Cosmos ecosystem, which can run both public and permissioned networks, and combine it with relational DBMSs as the backend. This results in a "relational blockchain", which is able to run deterministic SQL on a fully replicated relational database. Apart from presenting the integration and its pitfalls, we carefully evaluate the performance implications of such combinations, in particular the throughput and latency overhead caused by the blockchain layer on top of the DBMS. As a result, we give recommendations on how to run such a system combination efficiently in practice.
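The node-level idea can be sketched with SQLite standing in for the relational DBMS: every replica applies the same ordered stream of SQL statements to its local database, so all replicas end in identical state. In the real system Tendermint delivers the stream through its application interface; here we simply feed statements in order, and the schema and statements are illustrative assumptions. The SQL must be deterministic (no `random()`, no current time) for replicas to agree.

```python
import sqlite3

class RelationalNode:
    """Node-level backend sketch: applies an ordered, replicated stream
    of deterministic SQL statements to a local SQLite database."""

    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")

    def deliver_tx(self, sql, params=()):
        """Execute one transaction from the ordered stream."""
        self.db.execute(sql, params)
        self.db.commit()

    def query(self, sql):
        return self.db.execute(sql).fetchall()

# The same ordered stream, as the network level would deliver it
stream = [
    ("INSERT INTO accounts VALUES (?, ?)", ("alice", 100)),
    ("INSERT INTO accounts VALUES (?, ?)", ("bob", 50)),
    ("UPDATE accounts SET balance = balance - 30 WHERE name = ?", ("alice",)),
]
n1, n2 = RelationalNode(), RelationalNode()
for sql, params in stream:
    n1.deliver_tx(sql, params)     # every node executes the stream
    n2.deliver_tx(sql, params)     # in exactly the same order
```

The overhead the paper measures comes precisely from inserting the ordering/consensus layer in front of `deliver_tx`, rather than letting clients hit the DBMS directly.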
Fine-Grained Access Control on Android Components
The pervasiveness of Android devices in today’s interconnected world emphasizes the importance of mobile security in protecting user privacy and digital assets. Android’s current security model primarily enforces application-level mechanisms, which fail to address component-level (e.g., Activity, Service, and Content Provider) security concerns. Consequently, third-party code may exploit an application’s permissions, and features such as mobile device management (MDM) or bring-your-own-device (BYOD) support face limitations in their implementation. To address these concerns, we propose a novel context-aware access control mechanism for Android components that enforces layered security at multiple Exception Levels (ELs), including EL0, EL1, and EL3. This approach effectively restricts component privileges and controls resource access as needed. Our solution comprises Flasa at EL0, which extends SELinux policies to inter-component interactions and SQLite content control; Compac, spanning EL0 and EL1, which enforces component-level permission controls through Android runtime and kernel modifications; and TzNfc, which leverages TrustZone technology to secure third-party services and limit system privileges via a Trusted Execution Environment (TEE). Our evaluations demonstrate the effectiveness of the proposed solution in containing component privileges, controlling inter-component interactions, and protecting component-level resource access. Complementing Android’s existing security architecture, this enhanced solution provides a more comprehensive approach to Android security, benefiting users, developers, and the broader mobile ecosystem.
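The component-level permission idea (as in the Compac layer) can be illustrated with a toy policy check: each component is granted only a subset of the host application's permissions, so third-party code embedded in one component cannot exercise permissions meant for others. The policy table, component names, and function are illustrative assumptions; the real enforcement lives in the Android runtime and kernel, not in application code.

```python
# Permissions the host application holds (application-level grant)
APP_PERMISSIONS = {"INTERNET", "READ_CONTACTS", "CAMERA"}

# Per-component grants: each component may use only a subset of the
# app's permissions. (Illustrative policy table.)
COMPONENT_POLICY = {
    "MainActivity":     {"INTERNET", "CAMERA"},
    "AdLibService":     {"INTERNET"},          # third-party code: network only
    "ContactsProvider": {"READ_CONTACTS"},
}

def check_component_permission(component, permission):
    """Allow a permission only if the component's policy grants it AND
    the application itself holds it; unknown components get nothing."""
    granted = COMPONENT_POLICY.get(component, set()) & APP_PERMISSIONS
    return permission in granted
```

Under Android's stock model, `AdLibService` would inherit `READ_CONTACTS` from the app; the component-level check denies it, which is the containment property the evaluation demonstrates.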