
    Advances in Information Security and Privacy

    With the recent pandemic emergency, many people have been spending their days working remotely and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has increased dramatically, along with a significant increase in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. It does so by presenting both surveys of specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.

    Challenges in Cybersecurity and Privacy - the European Research Landscape

    Cybersecurity and privacy issues are becoming an important barrier to the development of a trusted and dependable global digital society. Cyber-criminals are continuously shifting their cyber-attacks, especially toward cyber-physical systems and the IoT, since these present additional vulnerabilities due to their constrained capabilities, their unattended nature, and the use of potentially untrustworthy components. Likewise, identity theft, fraud, personal data leakages, and other related cyber-crimes are continuously evolving, causing significant damage and privacy problems for European citizens in both virtual and physical scenarios. In this context, new holistic approaches, methodologies, techniques, and tools are needed to cope with these issues and mitigate cyberattacks, employing novel cyber-situational awareness frameworks, risk analysis and modeling, threat intelligence systems, cyber-threat information sharing methods, and advanced big-data analysis techniques, as well as exploiting the benefits of the latest technologies such as SDN/NFV and Cloud systems. In addition, novel privacy-preserving techniques, crypto-privacy mechanisms, identity and eID management systems, trust services, and recommendations are needed to protect citizens' privacy while maintaining usability. The European Commission is addressing the challenge through different means, including the Horizon 2020 Research and Innovation programme, thereby financing innovative projects that can cope with the growing cyberthreat landscape. This book introduces several cybersecurity and privacy research challenges and how they are being addressed in the scope of 15 European research projects. Each chapter is dedicated to a different funded European research project that addresses digital security and privacy aspects, risks, threats, and cybersecurity issues from a different perspective. Each chapter includes the project's overview and objectives, the particular challenges it covers, research achievements on security and privacy, and the techniques, outcomes, and evaluations accomplished in the scope of the EU project. The book is the result of a collaborative effort among related ongoing European research projects in the fields of privacy, security, and cybersecurity, and it is intended to explain how these projects meet the main cybersecurity and privacy challenges faced in Europe. The EU projects analyzed in the book are: ANASTACIA, SAINT, YAKSHA, FORTIKA, CYBECO, SISSDEN, CIPSEC, CS-AWARE, RED-Alert, Truessec.eu, ARIES, LIGHTest, CREDENTIAL, FutureTrust, and LEPS. Challenges in Cybersecurity and Privacy - the European Research Landscape is ideal for personnel in the computer and communication industries, as well as academic staff and master's/research students in computer science and communication networks interested in learning about cybersecurity and privacy aspects.

    Scaling Distributed Ledgers and Privacy-Preserving Applications

    This thesis proposes techniques that aim to make blockchain technologies and smart contract platforms practical by improving their scalability, latency, and privacy. It starts by presenting the design and implementation of Chainspace, a distributed ledger that supports user-defined smart contracts and executes user-supplied transactions on their objects. The correct execution of smart contract transactions is publicly verifiable. Chainspace scales by sharding state, and it relies on Byzantine Fault Tolerance (BFT) to remain secure against subsets of nodes trying to compromise its integrity or availability. This thesis also introduces a family of replay attacks against sharded distributed ledgers that target cross-shard consensus protocols; these attacks allow an attacker with network access only to double-spend resources with minimal effort. We then build Byzcuit, a new cross-shard consensus protocol that is immune to those attacks and tailored to run at the heart of Chainspace. Next, we propose FastPay, a high-integrity settlement system for pre-funded payments that can be used as a financial side-infrastructure for Chainspace to support low-latency retail payments. This settlement system is based on Byzantine Consistent Broadcast as its core primitive, foregoing the expense of full atomic commit channels (consensus). The resulting system has extremely low latency for both confirmation and payment finality. Finally, this thesis proposes Coconut, a selective disclosure credential scheme supporting distributed threshold issuance, public and private attributes, re-randomization, and multiple unlinkable selective attribute revelations. It ensures authenticity and availability even when a subset of credential-issuing authorities is malicious or offline, and it natively integrates with Chainspace to enable a number of scalable privacy-preserving applications.
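    To make the role of Byzantine Consistent Broadcast concrete, the sketch below shows the quorum step a FastPay-style client might perform: with n = 3f + 1 authorities, a transfer order becomes final once 2f + 1 distinct, verified votes are collected, with no full consensus round required. The names and structures here are illustrative assumptions for this listing, not the thesis's implementation (which also handles signatures, sequence numbers, and certificates).

        # Toy Python sketch of the quorum step in Byzantine Consistent
        # Broadcast, the primitive FastPay builds on. Illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class QuorumTracker:
            num_authorities: int           # n = 3f + 1 authorities
            votes: set = field(default_factory=set)

            @property
            def quorum(self) -> int:
                # Any two sets of 2f + 1 votes overlap in at least f + 1
                # authorities, one of which is honest, so two conflicting
                # transfer orders can never both gather a quorum.
                f = (self.num_authorities - 1) // 3
                return 2 * f + 1

            def add_vote(self, authority_id: str) -> bool:
                """Record a vote whose signature has already been checked;
                return True once a quorum certificate can be formed."""
                self.votes.add(authority_id)
                return len(self.votes) >= self.quorum

        # A client broadcasts a transfer order to all authorities and
        # finalizes as soon as 2f + 1 distinct votes arrive.
        tracker = QuorumTracker(num_authorities=4)   # tolerates f = 1
        for auth in ("a0", "a1", "a2"):
            final = tracker.add_vote(auth)
        print("certificate formed:", final)          # True after 3 of 4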

    Compositional strategies for pervasive performance

    I have defined the term 'pervasive performance' to apply to an emerging form of artistic cultural production which blends aspects of theatre, site-specific art, and game play to create an immersive participatory experience. Pervasive: because the parameters of the performance extend beyond the conventional time frame of a theatre performance, so that 'showtime' extends beyond hours into days, weeks, even months. Pervasive: because the performance extends from the stage or screen so that the performance arena becomes the real world of the daily lives of its audience. A central feature of pervasive performance is the overlapping (or erasure) of boundaries between media and their attendant conventions. Observers become participants, or players, and the 'play' is itself a world of play where reality blends with the fictional. A 'mixed' reality performance space is established because the performance space extends into private homes, into the public domain of the streets outside, and into the virtual world of internet hubs and social networks. The diegetic landscape of the performance as the world of the play is present in three places simultaneously: manifest reality, the hi-tech networked 'virtual' space, and the virtual playground of the imagination. Other terms for cultural practices which overlap with this form include 'Pervasive Gaming', 'Multimedia Interactive Theatre Experience' and 'Augmented Reality Game'. Each indicates a slight variation on the spectrum from computer game to theatre performance, though all denote a form of play which extends into the daily lives of its participants. This extension takes place on both spatial and temporal axes, often using the pervasiveness of communicative technologies, such as mobile phones and internet hubs, to telematically transmit the performance 'text'. Using my own practice, compositional analysis, and first-hand observations of performance works by Blast Theory, this study explores the design problem inherent in a participative artwork which needs to balance the end-participant's desire for plot-driven narrative or action with the freedom to make autonomous choices in the world of the performance. Questioning the mutuality of these two different dramaturgical challenges, I assess the compositional structures and implications of agency in pervasive narrative. Challenging representations and embodiments of locality and identity, the pervasive performance form operates through politically charged processes, thus contaminating discourse (Giannachi, 2007, p.49) and preventing a positivist critical analysis. This study aims to uncover these processes, the compositional structures they might inhabit, and the extent to which this form can be considered interactive. Whether a dramaturgy of pervasive performance implies a process of control, or whether its interactivity presents a real possibility for freedom, will be explored in this thesis.

    VeritasDB: High Throughput Key-Value Store with Integrity

    While businesses shift their databases to the cloud, they continue to depend on them to operate correctly. Alarmingly, cloud services constantly face threats from exploits in the privileged computing layers (e.g. OS, hypervisor) and attacks from rogue datacenter administrators, which tamper with the database's storage and cause it to produce incorrect results. Although integrity verification of outsourced storage and file systems is a well-studied problem, prior techniques impose prohibitive overheads (up to 30x in throughput) and place additional responsibility on clients. We present VeritasDB, a key-value store that guarantees data integrity to the client in the presence of exploits or implementation bugs in the database server. VeritasDB is implemented as a network proxy that mediates communication between the unmodified client(s) and the unmodified database server, which can be any off-the-shelf database engine (e.g., Redis, RocksDB, Apache Cassandra). The proxy transforms each client request before forwarding it to the server and checks the correctness of the server's response before forwarding it to the client. To ensure the proxy is trusted, we use the protections of modern trusted hardware platforms, such as Intel SGX, to host the proxy's code and trusted state, thus completely eliminating trust in the cloud provider. To maintain high performance while scaling to large databases, we design an authenticated Merkle B+-tree that leverages features of SGX (a modest amount of protected RAM, direct access to large unprotected RAM, and CPU parallelism) to implement several novel optimizations based on caching, concurrency, and compression. On standard YCSB and Visa transaction workloads, we observe an average overhead of 2.8x in throughput and 2.5x in latency compared to the (insecure) system with no integrity checks; using CPU parallelism, we bring the throughput overhead down to 1.05x.
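    The heart of such a proxy is recomputing an authenticated root from each response the untrusted server returns. The simplified Python sketch below illustrates this with a plain binary Merkle tree; the actual system uses an authenticated Merkle B+-tree with SGX-specific optimizations, so the function names and proof format here are assumptions for illustration only.

        # Toy integrity check in the style of a verifying proxy: recompute
        # the Merkle root from the server-supplied (value, proof) pair and
        # compare it to the root kept in trusted memory.
        import hashlib

        def h(data: bytes) -> bytes:
            return hashlib.sha256(data).digest()

        def leaf_hash(key: bytes, value: bytes) -> bytes:
            # Bind key and value together so the server cannot swap
            # values between keys.
            return h(b"leaf|" + key + b"|" + value)

        def verify_get(trusted_root: bytes, key: bytes, value: bytes,
                       proof: list) -> bool:
            """`proof` is a list of (sibling_hash, side) pairs ordered
            from leaf to root; `side` says where the sibling sits."""
            node = leaf_hash(key, value)
            for sibling, side in proof:
                node = h(sibling + node) if side == "L" else h(node + sibling)
            return node == trusted_root

        # Two-leaf example: the proxy stores only the root; any tampered
        # value or proof makes the recomputed root mismatch.
        k1, v1, k2, v2 = b"alice", b"100", b"bob", b"42"
        root = h(leaf_hash(k1, v1) + leaf_hash(k2, v2))
        assert verify_get(root, k2, v2, [(leaf_hash(k1, v1), "L")])
        assert not verify_get(root, k2, b"9999", [(leaf_hash(k1, v1), "L")])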

    Witnessing Stories: The Transformative Impact of Witnessing Performed Lived Experience Within the Context of Family Life Education

    This study examined how theatre and performances based on lived experience can be used as a tool within family life education (FLE) and foster transformative learning in the witness. FLE is a type of adult education that aims to provide individuals and families with the knowledge, skills, and resources to assist them in living healthy lives and addressing the challenges that affect families throughout the lifespan. Transformation can be initiated in FLE programs when the learning methodologies encourage personal, critical reflection and emotional engagement. Therefore, the goal of this research was to explore the elements that could be included in performances based on lived experience to evoke emotion and foster personal and critical reflection in the witness. This qualitative research utilized a grounded theory approach to collect and systematically analyze data in order to construct theory grounded in the data. Specifically, this study involved presenting a performance piece based on the lived experience of being a child in a family struggling with alcoholism, and collecting data from the witnesses to gain insight into the human experience of witnessing this piece and into the elements of the performance that witnesses identified as important and useful in fostering transformation. The results of this study generated recommendations for the creation of performances to be used within adult learning, applications to the context of FLE, and possibilities for future research.

    Universally Scalable Concurrent Data Structures

    The increase in the number of cores in processors has been an important trend over the past decade. In order to use such architectures efficiently, modern software must be scalable: performance should increase proportionally to the number of allotted cores. While some software is inherently parallel, with threads seldom having to coordinate, a large fraction of software systems are based on shared state, to which access must be coordinated. This shared state generally comes in the form of a concurrent data structure. It is thus essential for these concurrent data structures to be correct, fast, and scalable, regardless of the scenario (i.e., different workloads, processors, memory units, and programming abstractions). Nevertheless, few or no generic approaches exist that result in concurrent data structures which scale across a large spectrum of environments. This dissertation introduces a set of generic methods that allow building fast and scalable concurrent data structures, irrespective of the deployment environment. We start by identifying a set of sufficient conditions for concurrent search data structures to scale and perform well regardless of the workloads and processors they are running on. We introduce 'asynchronized concurrency', a paradigm consisting of four complementary programming patterns, which calls for the design of concurrent search data structures to resemble that of their sequential counterparts. Next, we show that there is virtually no practical situation in which one should seek a 'theoretically wait-free' algorithm at the expense of a state-of-the-art blocking algorithm in the case of search data structures: blocking algorithms are simple, fast, and can be made 'practically wait-free'. We then focus on the memory unit, and provide a method yielding fast concurrent data structures even when the memory is non-volatile and structures must be recoverable in case of a transient failure. We start by introducing a generic technique that allows us to avoid expensive writes to non-volatile memory by using a fast software cache. We also study memory management, and propose a solution tailored to concurrent data structures that uses coarse-grained memory management to avoid logging. Moreover, we argue for the use of lock-free algorithms in this non-volatile context, and show how optimizing them lets us avoid expensive logging operations. Together, the techniques we propose enable us to avoid any form of logging in the common case, thus significantly improving concurrent data structure performance when using non-volatile RAM. Finally, we go beyond basic interfaces and look at scalable partitioned data structures implemented through a transactional interface. We present multiversion timestamp locking (MVTL), a new class of multiversion concurrency control algorithms for serializable transactions. The key idea behind MVTL is simple and novel: lock individual time points instead of locking objects or versions. We provide several MVTL-based algorithms that address limitations of current concurrency-control schemes. In short, by spanning workloads, processors, storage abstractions, and system sizes, this dissertation takes a step towards concurrent data structures that are universally scalable.
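    To make the MVTL idea concrete, the toy Python sketch below models locking individual time points on a per-object timeline: a writer locks the single timestamp it writes at, while a reader locks the span between the version it observed and its own timestamp, so the two conflict exactly when one could invalidate the other's serialization order. The class and method names are illustrative assumptions, not the dissertation's algorithms.

        # Toy model of multiversion timestamp locking (MVTL): lock time
        # points on an object's timeline rather than the object itself.
        import threading

        class TimelineLock:
            def __init__(self):
                self._owner = {}               # timestamp -> transaction id
                self._mutex = threading.Lock()

            def lock_point(self, ts: int, txn: str) -> bool:
                """A writer locks the single point it writes at."""
                with self._mutex:
                    return self._owner.setdefault(ts, txn) == txn

            def lock_range(self, start: int, end: int, txn: str) -> bool:
                """A reader locks every point in (start, end], i.e. from
                the version it read up to its own timestamp, so no writer
                can later insert a version the read missed."""
                with self._mutex:
                    span = range(start + 1, end + 1)
                    if any(self._owner.get(t, txn) != txn for t in span):
                        return False           # conflict: abort or retry
                    for t in span:
                        self._owner[t] = txn
                    return True

        # A writer at timestamp 5 and a reader covering (2, 7] collide on
        # the shared point 5, which is exactly the conflict MVTL detects.
        tl = TimelineLock()
        assert tl.lock_point(5, "T1")
        assert not tl.lock_range(2, 7, "T2")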