
    Area and Energy Optimized QCA-Based Shuffle-Exchange Network with Multicast and Broadcast Configuration

    In any large-scale processing system, fast interconnection networks are employed between the processing modules and embedded systems. This study deals with the optimized design and implementation of a Switching Element (SE) which operates in four modes, accepting two inputs and delivering two outputs. The Shuffle-Exchange Network (SEN) can be used as a single-stage as well as a multi-stage network. The SEN is used as an interconnection architecture implemented with exclusive input-output paths and a simple design. The SE acts as a building block for the Multi-stage Shuffle-Exchange Network (M-SEN), with facilities to perform unicast and multicast operations on the inputs. An 8x8 M-SEN model is also implemented, which works in three modes of communication, termed the "One-to-One", "One-to-Many" and "One-to-All" M-SEN configurations. All the QCA circuits have been implemented and simulated using the CAD tool QCADesigner. The proposed QCA-based M-SEN design is better in terms of area occupied by 14.63%, average energy dissipation by 22.75%, and cell count with a reduction of 84 cells when compared to the reference M-SEN architecture. The optimization of the design in terms of cell count and area results in lower energy dissipation, and hence the design can be used in future-generation complex networks and communication systems.
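    As an aside to illustrate the routing principle behind a shuffle-exchange network (a generic sketch, not the paper's QCA implementation; all names below are invented for illustration), the following Python model routes a message through an 8x8 network by applying the perfect-shuffle permutation at each stage and letting each 2x2 switching element pass or exchange its lines according to the destination address:

        # Illustrative 8x8 shuffle-exchange routing model (not the paper's QCA design).
        # The perfect shuffle rotates the 3-bit line index left by one position; an
        # "exchange" inside a 2x2 switching element flips the least-significant bit.

        N_BITS = 3                  # 8 = 2**3 inputs and outputs
        SIZE = 1 << N_BITS

        def shuffle(i):
            """Perfect shuffle: left-rotate the N_BITS-bit index by one bit."""
            return ((i << 1) | (i >> (N_BITS - 1))) & (SIZE - 1)

        def route(src, dst):
            """Destination-tag routing through N_BITS shuffle-exchange stages."""
            path, cur = [src], src
            for stage in range(N_BITS):
                cur = shuffle(cur)
                want = (dst >> (N_BITS - 1 - stage)) & 1   # destination bit for this stage
                if (cur & 1) != want:
                    cur ^= 1                               # exchange inside the switching element
                path.append(cur)
            assert cur == dst
            return path

        print(route(0b011, 0b101))   # line 3 reaches line 5 after three stages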

    Resilient and Trustworthy Dynamic Data-driven Application Systems (DDDAS) Services for Crisis Management Environments

    Future crisis management systems need resilient and trustworthy infrastructures to quickly develop reliable applications and processes, and to ensure end-to-end security, trust, and privacy. Due to the multiplicity and diversity of involved actors, the volumes of data, and the heterogeneity of shared information, crisis management systems tend to be highly vulnerable and subject to unforeseen incidents. As a result, the dependability of crisis management systems can be at risk. This paper presents a cloud-based resilient and trustworthy infrastructure (known as rDaaS) to quickly develop secure crisis management systems. The rDaaS integrates the Dynamic Data-Driven Application Systems (DDDAS) paradigm into a service-oriented architecture over cloud technology and provides a set of resilient DDDAS-as-a-Service (rDaaS) components to build secure and trusted adaptable crisis processes. The rDaaS also ensures resilience and security by obfuscating the execution environment and applying Behavior Software Encryption and Moving Target Defense. A simulation environment for a nuclear plant crisis management case study illustrates how to build resilient and trusted crisis response processes.

    Security Analysis of System Behaviour - From "Security by Design" to "Security at Runtime" -

    The Internet today provides the environment for novel applications and processes which may evolve way beyond pre-planned scope and purpose. Security analysis is growing in complexity with the increase in functionality, connectivity, and dynamics of current electronic business processes. Technical processes within critical infrastructures also have to cope with these developments. To tackle the complexity of the security analysis, the application of models is becoming standard practice. However, model-based support for security analysis is not only needed in pre-operational phases but also during process execution, in order to provide situational security awareness at runtime. This cumulative thesis provides three major contributions to modelling methodology. Firstly, this thesis provides an approach for model-based analysis and verification of security and safety properties in order to support fault prevention and fault removal in system design or redesign. Furthermore, some construction principles for the design of well-behaved scalable systems are given. The second topic is the analysis of the exposure of vulnerabilities in the software components of networked systems to exploitation by internal or external threats. This kind of fault forecasting allows the security assessment of alternative system configurations and security policies. Validation and deployment of security policies that minimise the attack surface can now improve fault tolerance and mitigate the impact of successful attacks. Thirdly, the approach is extended to runtime applicability. An observing system monitors an event stream from the observed system with the aim of detecting faults - deviations from the specified behaviour or security compliance violations - at runtime. Furthermore, knowledge about the expected behaviour given by an operational model is used to predict faults in the near future. Building on this, a holistic security management strategy is proposed. The architecture of the observing system is described, and the applicability of model-based security analysis at runtime is demonstrated using processes from several industrial scenarios. The results of this cumulative thesis are provided by 19 selected peer-reviewed papers.
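    To make the idea of model-based fault detection at runtime concrete, here is a minimal sketch (an illustration of the general technique, not the thesis's tooling; the process model below is invented): the expected behaviour is captured as a finite-state operational model, and an observer consumes the event stream, flagging any event that has no transition from the current model state as a deviation:

        # Minimal runtime monitor (illustrative only). The operational model is a
        # finite-state machine over event names; any event without a transition from
        # the current state is reported as a deviation from the specified behaviour.

        MODEL = {                      # hypothetical model of a simple order process
            "idle":       {"order_received": "validating"},
            "validating": {"order_valid": "processing", "order_invalid": "idle"},
            "processing": {"shipped": "idle"},
        }

        def monitor(events, start="idle"):
            """Replay the event stream against MODEL and collect detected deviations."""
            state, deviations = start, []
            for event in events:
                nxt = MODEL.get(state, {}).get(event)
                if nxt is None:
                    deviations.append(f"unexpected '{event}' in state '{state}'")
                else:
                    state = nxt
            return deviations

        stream = ["order_received", "shipped", "order_valid", "shipped"]
        print(monitor(stream))   # flags the 'shipped' event that arrives while still validating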

    Towards Network-Accelerated Databases

    Over the last years, data processing systems have seen substantial changes, notably moving towards the disaggregation of resources. This shift separates compute and storage resources into distinct servers for better resource utilization, as they can now be scaled independently based on demand. This development is crucial for cloud-native Database Management Systems (DBMS), which mainly build on such disaggregated structures. This thesis examines two significant hardware trends in disaggregated architectures for DBMSs: modern networks and heterogeneous computing. Modern networks such as Remote Direct Memory Access (RDMA) are critical for efficient, high-throughput, low-latency data transfer, but present challenges for achieving optimal performance for DBMSs. The reason is that RDMA comes with a low-level interface and a plethora of performance-critical aspects to consider. To address this challenge, this thesis introduces a high-level programming interface, the Data Flow Interface, specifically targeting the needs of data-intensive processing systems. In addition, this thesis highlights the emerging trend toward programmable network devices that offer data processing capabilities in the network. This trend is especially interesting for distributed DBMSs, as they have to transfer large amounts of data over the network due to the disaggregated architecture, and typical distributed data processing operations such as joins also have to shuffle data between compute nodes. In the thesis, in-network processing devices are evaluated with typical DBMS operations to investigate their benefits and potential shortcomings. Another trend in the data center is the increasing heterogeneity of computing units such as GPUs and FPGAs, owing to their fast processing capabilities. Incorporating these heterogeneous devices into disaggregated architectures with fast networks has many merits: specialized compute units can be exposed as network-attached disaggregated accelerator pools and thus provide flexible and scalable high-performance data processing. This integration of heterogeneous compute units and fast RDMA-capable networks is, however, non-trivial, since RDMA is typically not directly supported on devices other than CPUs and is therefore hard to integrate efficiently. The challenge of achieving efficient communication between different types of compute devices is addressed by proposing a network-driven communication scheme that leverages a programmable switch to carry out the network communication on behalf of the compute devices.
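    As background for the shuffle step mentioned above (a generic illustration, not the thesis's Data Flow Interface or switch-based scheme), a distributed join typically hash-partitions both relations on the join key so that matching tuples land on the same compute node; the sketch below shows only that partitioning step:

        # Generic hash-partition shuffle for a distributed join (illustrative only;
        # the node count and tuple layout are assumptions, not the thesis's interface).

        from collections import defaultdict

        NUM_NODES = 4

        def shuffle_partition(tuples, key_index=0, num_nodes=NUM_NODES):
            """Assign each tuple to a destination node by hashing its join key.

            Tuples with equal keys map to the same node, so each node can join its
            local partitions of both relations after the shuffle.
            """
            partitions = defaultdict(list)
            for t in tuples:
                partitions[hash(t[key_index]) % num_nodes].append(t)
            return partitions

        r = [(1, "a"), (2, "b"), (3, "c")]
        s = [(1, "x"), (3, "y")]
        print(shuffle_partition(r))
        print(shuffle_partition(s))   # keys 1 and 3 land on the same nodes as in r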

    Smart Street Lights and Mobile Citizen Apps for Resilient Communication in a Digital City

    Currently, nearly four billion people live in urban areas. Since this trend is increasing, natural disasters or terrorist attacks in such areas affect an increasing number of people. While information and communication technology is crucial for the operation of urban infrastructures and the well-being of their inhabitants, current technology is quite vulnerable to disruptions of various kinds. In future smart cities, a more resilient urban infrastructure is imperative to handle the increasing number of hazardous situations. We present a novel resilient communication approach based on smart street lights as part of the public infrastructure. It supports people in their everyday life and adapts its functionality to the challenges of emergency situations. Our approach relies on various environmental sensors and in-situ processing for automatic situation assessment, and a range of communication mechanisms (e.g., public WiFi hotspot functionality and mesh networking) for maintaining a communication network. Furthermore, resilience is achieved not only through infrastructure deployed by a digital city's municipality, but also by integrating citizens through software that runs on their mobile devices (e.g., smartphones and tablets). Web-based zero-installation and platform-agnostic apps can switch to device-to-device communication and continue to benefit people even during a disaster situation. Our approach, featuring a covert channel for professional responders and the zero-installation app, is evaluated through a prototype implementation based on a commercially available street light. Comment: 2019 IEEE Global Humanitarian Technology Conference (GHTC).

    Applying Secure Multi-party Computation in Practice

    In this work, we present solutions for technical difficulties in deploying secure multi-party computation in real-world applications. We first give a brief overview of the current state of the art, point out several shortcomings, and address them. The main contribution of this work is an end-to-end process description of deploying secure multi-party computation for the first large-scale registry-based statistical study on linked databases. Involving large stakeholders like government institutions also introduces non-technical requirements, such as signing contracts and negotiating with the Data Protection Agency.
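    For readers unfamiliar with the underlying technique, the sketch below shows the additive secret sharing that many secure multi-party computation frameworks build on (a generic illustration; the paper concerns the deployment process, and its concrete protocol suite is not reproduced here):

        # Additive secret sharing over a prime field (illustrative only). A value is
        # split into random shares that reveal nothing individually, yet the parties
        # can add their shares locally to compute a sum without revealing the inputs.

        import secrets

        PRIME = 2**61 - 1   # field modulus chosen for the sketch

        def share(value, n_parties=3):
            """Split value into n_parties random shares that sum to value mod PRIME."""
            shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % PRIME)
            return shares

        def reconstruct(shares):
            return sum(shares) % PRIME

        a, b = 42, 100
        sa, sb = share(a), share(b)
        sum_shares = [(x + y) % PRIME for x, y in zip(sa, sb)]   # each party adds locally
        print(reconstruct(sum_shares))   # 142, computed without exposing a or b to any single party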

    DeMMon Decentralized Management and Monitoring Framework

    The centralized model proposed by the Cloud computing paradigm mismatches the decentralized nature of mobile and IoT applications, given that most of the data production and consumption is performed by end-user devices outside of the Data Center (DC). As the number of these devices grows, and given the need to transport data to and from DCs for computation, application providers incur additional infrastructure costs, and end-users incur delays when performing operations. These reasons have led us into a post-cloud era, where a new computing paradigm arose: Edge Computing. Edge Computing takes into account the broad spectrum of devices residing outside of the DC, closer to the clients, as potential targets for computations, potentially reducing infrastructure costs, improving the quality of service (QoS) for end-users and allowing new interaction paradigms between users and applications. Managing and monitoring the execution of these devices raises new challenges previously unaddressed by Cloud computing, given the scale of these systems and the devices' (potentially) unreliable data connections and heterogeneous computational power. The study of the state of the art has revealed that existing resource monitoring and management solutions require manual configuration and have centralized components, which we believe do not scale for larger systems. In this work, we address these limitations by presenting a novel Decentralized Management and Monitoring ("DeMMon") system, targeted at edge settings. DeMMon provides primitives to ease the development of tools that manage the computational resources supporting edge-enabled applications, decomposed into components, through decentralized actions, taking advantage of partial knowledge of the system. Our solution was evaluated to assess its benefits regarding information dissemination and monitoring capabilities across a set of realistic emulated scenarios of up to 750 nodes with variable failure rates. The results show the validity of our approach and that it can outperform state-of-the-art solutions regarding scalability and reliability.
    The centralized computing model used by the Cloud computing paradigm has limitations in the context of Internet of Things and mobile applications. In this kind of application, data is produced and consumed mostly by devices at the edge of the network. Transporting this data to and from the data centers therefore imposes an excessive load on the network infrastructures that connect the devices to the data centers, increasing response latency and reducing the quality of service for users. To overcome these limitations, the Edge Computing paradigm emerged; it proposes carrying out computations, and potentially storing data, on devices outside the data centers, closer to the clients, reducing costs and opening up a new range of possibilities for performing distributed computations closer to the devices that produce and consume the data. However, managing and supervising the execution of these devices raises obstacles not addressed by Cloud computing, such as the scale of these systems, or the variability in the connectivity and in the computational capacity of the devices that compose them.
    The study of the literature reveals that popular tools for managing and monitoring applications and devices have limitations to their scalability, such as centralized points of failure, or require the manual configuration of each device. In this dissertation, we propose a new decentralized monitoring and information dissemination solution. This solution offers operations that allow collecting information about the state of the system, to be used by (likewise decentralized) solutions that manage applications specialized to run at the edge of the network. Our solution was evaluated in emulated networks of various sizes with a maximum of 750 nodes, in the context of information dissemination and monitoring. Our results show that our system manages to be more robust while being more scalable when compared to the state of the art.
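    To illustrate the kind of decentralized dissemination over partial views that such a framework relies on (a generic push-gossip sketch, not DeMMon's actual protocol; the fanout, round count and view size are assumptions), each node forwards a newly seen monitoring update to a few random peers from its partial view:

        # Generic push-gossip dissemination (illustrative; parameters are assumptions,
        # not DeMMon's). Each node only knows a small partial view of the membership,
        # yet an update reaches almost every node within a few rounds.

        import random

        def gossip(views, source, fanout=3, rounds=6):
            """Spread one update from `source`; return the set of informed nodes."""
            informed = {source}
            for _ in range(rounds):
                for node in list(informed):
                    peers = random.sample(views[node], min(fanout, len(views[node])))
                    informed.update(peers)
            return informed

        nodes = list(range(100))
        # Each node holds a small random partial view instead of full membership.
        views = {n: random.sample([m for m in nodes if m != n], 8) for n in nodes}
        print(len(gossip(views, source=0)), "of", len(nodes), "nodes informed")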

    A Tale of Two Direction Codes in Rat Retrosplenial Cortex: Uncovering the Neural Basis of Spatial Orientation in Complex Space

    Get PDF
    Head direction (HD) cells become active only when a rat faces one particular direction and stay inactive when it faces others, producing a unimodal activity distribution. Working together in a network, HD cells are considered the neural basis supporting a sense of direction. The retrosplenial cortex (RSC) is part of the HD circuit and contains neurons that express multiple spatial signals, including a pattern of bipolar directional tuning, as recently reported in rats exploring a rotationally symmetric two-compartment space. This suggests an unexplored mechanism of the neural compass. In this thesis, I investigated whether the association between the two-way firing symmetry and the twofold environment symmetry reveals a general environment-symmetry-encoding property of these RSC neurons. I recorded RSC neurons in environments having onefold, twofold and fourfold symmetry. The current study showed that RSC HD cells maintained a consistent global signal, whereas other RSC directional cells showed multi-fold symmetric firing patterns that reflected environment symmetry, not just globally (across all sub-compartments) but also locally (within each sub-compartment). The analyses also showed that the pattern was independent of egocentric boundary vector coding but represented an allocentric spatial code. This means that these RSC cells use environmental cues to organise multiple singular tuning curves, which are sometimes combined to form a multidirectional pattern, likely via an interaction with the global HD signal. Thus, both local and global environment symmetry are encoded by local firing patterns in subspaces. Interestingly, this suggests cognitive mapping and abstraction of space beyond immediate perceptual bounds in RSC. The data generated from this study provide important insights for the modelling of direction computation. Taken together, I discuss how having two types of direction codes in RSC may help us to orient more accurately and flexibly in complex and ambiguous space.
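    As one way to make the notion of multi-fold symmetric tuning quantitative (a standard circular-statistics sketch of my own, not the analysis pipeline used in the thesis), the n-fold symmetry of a directional tuning curve can be scored by multiplying each firing direction by n and taking the weighted mean resultant length; a bipolar (two-way symmetric) cell scores highest at n = 2:

        # Score n-fold symmetry of a directional tuning curve (illustrative only):
        # multiply each direction by n and compute the weighted mean resultant length.
        # A bipolar tuning curve (two peaks 180 degrees apart) scores highest at n = 2.

        import numpy as np

        def nfold_symmetry(angles_rad, weights, n):
            """Weighted mean resultant length of n * angle (0 = none, 1 = perfect)."""
            vectors = weights * np.exp(1j * n * angles_rad)
            return float(np.abs(vectors.sum()) / weights.sum())

        bins = np.deg2rad(np.arange(0, 360, 6))               # 6-degree direction bins
        rate = np.exp(np.cos(2 * (bins - np.deg2rad(30))))    # synthetic bipolar tuning curve
        for n in (1, 2, 4):
            print(n, round(nfold_symmetry(bins, rate, n), 3)) # n = 2 gives the largest score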

    PiCo: A Domain-Specific Language for Data Analytics Pipelines

    In the world of Big Data analytics, there is a series of tools aiming at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models, for which only informal (and often confusing) semantics is generally provided, all share a common underlying model, namely, the Dataflow model. Using this model as a starting point, it is possible to categorize and analyze almost all aspects of Big Data analytics tools from a high-level perspective. This analysis can be considered a first step toward a formal model to be exploited in the design of a (new) framework for Big Data analytics. By putting clear separations between all levels of abstraction (i.e., from the runtime to the user API), it is easier for a programmer or software designer to avoid mixing low-level with high-level aspects, as we are often used to seeing in state-of-the-art Big Data analytics frameworks. From the user-level perspective, we think that a clearer and simpler semantics is preferable, together with a strong separation of concerns. For this reason, we use the Dataflow model as a starting point to build a programming environment with a simplified programming model implemented as a Domain-Specific Language, on top of a stack of layers that builds a prototypical framework for Big Data analytics. The contribution of this thesis is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm, Google Dataflow), thus making it easier to understand high-level data-processing applications written in such frameworks. As a result of this analysis, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit in each level. Second, we propose a programming environment based on this layered model in the form of a Domain-Specific Language (DSL) for processing data collections, called PiCo (Pipeline Composition). The main entity of this programming model is the Pipeline, basically a DAG-composition of processing elements. This model is intended to give the user a unique interface for both stream and batch processing, completely hiding data management and focusing only on operations, which are represented by Pipeline stages. Our DSL will be built on top of the FastFlow library, exploiting both shared and distributed parallelism, and implemented in C++11/14 with the aim of porting C++ into the Big Data world.
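    To convey the pipeline-composition idea in a few lines (a toy Python rendering only; PiCo itself is a C++11/14 DSL on top of FastFlow, and none of the names below belong to its API), a Pipeline can be modelled as a chain of stages that is applied lazily, so the same composition works on a finite batch or an unbounded stream:

        # Toy pipeline composition (illustrative; not PiCo's C++ API). Stages form a
        # linear DAG and are applied lazily to any iterable, so one pipeline serves
        # both a bounded batch and an unbounded stream of items.

        class Pipeline:
            def __init__(self, stages=None):
                self.stages = stages or []

            def map(self, fn):
                return Pipeline(self.stages + [lambda items: (fn(x) for x in items)])

            def filter(self, pred):
                return Pipeline(self.stages + [lambda items: (x for x in items if pred(x))])

            def run(self, source):
                items = source
                for stage in self.stages:
                    items = stage(items)
                return items

        p = Pipeline().map(str.strip).filter(bool).map(str.upper)
        print(list(p.run(["  hello ", "", " world "])))   # ['HELLO', 'WORLD']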

    Profiling Large-scale Live Video Streaming and Distributed Applications

    Today, distributed applications run at data centre and Internet scales, from intensive data analysis, such as MapReduce, to the dynamic demands of a worldwide audience, such as YouTube. The network is essential to these applications at both scales. To provide adequate support, we must understand the full requirements of the applications, which are revealed by their workloads. In this thesis, we study distributed system applications at different scales to enrich this understanding. Large-scale Internet applications have been studied for years, such as social networking services (SNS), video on demand (VoD), and content delivery networks (CDN). An emerging type of video broadcasting on the Internet featuring crowdsourced live video streaming has garnered attention, allowing platforms such as Twitch to attract over 1 million concurrent users globally. To better understand Twitch, we collected real-time popularity data combined with metadata about the contents and found that the broadcasters, rather than the content, drive its popularity. Unlike YouTube and Netflix, where content can be cached, video streaming on Twitch is generated instantly and needs to be delivered to users immediately to enable real-time interaction. Thus, we performed a large-scale measurement of Twitch's content location, revealing the global footprint of its infrastructure as well as discovering the dynamic stream hosting and client redirection strategies that helped Twitch serve millions of users at scale. We next consider applications that run inside the data centre. Distributed computing applications heavily rely on the network due to data transmission needs and the scheduling of resources and tasks. One successful application, called Hadoop, has been widely deployed for Big Data processing. However, little work has been devoted to understanding its network. We found that Hadoop's behaviour is limited by the hardware resources and the processing jobs presented. Thus, after characterising the Hadoop traffic on our testbed with a set of benchmark jobs, we built a simulator to reproduce Hadoop's job traffic. With the simulator, users can investigate the connections between Hadoop traffic and network performance without additional hardware cost. Different network components can be added to investigate the performance, such as network topologies, queue policies, and transport layer protocols. In this thesis, we extended the knowledge of networking by investigating two widely used applications in the data centre and at Internet scale. We (i) studied the most popular live video streaming platform, Twitch, as a new type of Internet-scale distributed application, revealing that broadcaster factors drive the popularity of such a platform; (ii) discovered the footprint of Twitch's streaming infrastructure and the dynamic stream hosting and client redirection strategies, providing an in-depth example of video streaming delivery occurring at Internet scale; (iii) investigated the traffic generated by a distributed application by characterising the traffic of Hadoop under various parameters; and (iv) with such knowledge, built a simulation tool so users can efficiently investigate the performance of different network components under a distributed application.
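    As a small illustration of the kind of popularity analysis described above (a toy example with made-up numbers, not the thesis's Twitch dataset or metrics), one can aggregate concurrent-viewer samples per broadcaster and check how concentrated viewership is among the top channels:

        # Toy popularity-concentration analysis (illustrative; the sample data is
        # invented, not the thesis's measurements): aggregate viewers per broadcaster
        # and report the share of total viewership captured by the top-k channels.

        from collections import Counter

        def top_k_share(samples, k=2):
            """samples: iterable of (broadcaster, concurrent_viewers) observations."""
            per_broadcaster = Counter()
            for broadcaster, viewers in samples:
                per_broadcaster[broadcaster] += viewers
            total = sum(per_broadcaster.values())
            top = per_broadcaster.most_common(k)
            return sum(v for _, v in top) / total, top

        samples = [("streamerA", 50_000), ("streamerB", 20_000),
                   ("streamerC", 1_000), ("streamerA", 60_000), ("streamerD", 500)]
        share, top = top_k_share(samples)
        print(f"top-2 broadcasters hold {share:.0%} of observed viewership: {top}")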