
    Securing critical utility systems & network infrastructures

    Master's thesis (Tese de mestrado), Segurança Informática, Universidade de Lisboa, Faculdade de Ciências, 2009. Modern critical utility IT infrastructures are supported by numerous complex systems. These systems allow real-time management and information collection, which is the basis of efficient service management operations. The use of commercial off-the-shelf (COTS) hardware and software in SCADA systems brought major financial advantages in purchasing and developing the technical solutions that support utility services. On the other hand, the generalized use of COTS hardware and software in SCADA systems exposed critical infrastructures to the security problems of a corporate IT infrastructure. A significant challenge for IT operations teams is therefore the effective management of the systems and network components that comprise critical utility IT infrastructures, even when these organisations adopt standards and best practices for management, operations, and configuration. This research project proposes to survey integrated management platforms that can address the specific security constraints of a SCADA infrastructure supported by COTS software. Additionally, it proposes to investigate techniques for building operational profiles of the operating systems that support critical utility IT infrastructures. Finally, the project will describe how strategic management decisions impact the tactical operations management of an IT environment, and will investigate the technical management elements that are desirable in support of operational management.
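    The thesis does not prescribe a concrete tool for building such operational profiles, so the following is only a minimal sketch of the idea: sample basic OS-level metrics, record a baseline, and flag deviations. The metric set, the psutil dependency, and the 3-sigma threshold are illustrative assumptions, not the project's method.

```python
# Minimal sketch (assumed tooling): profile a host OS by sampling a few
# resource metrics, recording a baseline, and flagging later deviations.
# The metric set, psutil dependency, and 3-sigma rule are illustrative only.
import statistics
import time

import psutil  # third-party: pip install psutil


def sample_metrics() -> dict:
    """Take one snapshot of basic OS-level resource usage."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1.0),
        "mem_percent": psutil.virtual_memory().percent,
        "num_processes": len(psutil.pids()),
    }


def build_baseline(samples: int = 30) -> dict:
    """Record mean and standard deviation per metric over a quiet window."""
    history = [sample_metrics() for _ in range(samples)]
    baseline = {}
    for key in history[0]:
        values = [snapshot[key] for snapshot in history]
        baseline[key] = (statistics.mean(values), statistics.pstdev(values))
    return baseline


def deviations(baseline: dict, snapshot: dict, k: float = 3.0) -> list:
    """Return the metrics falling outside mean +/- k * stdev of the baseline."""
    flagged = []
    for key, (mean, stdev) in baseline.items():
        if abs(snapshot[key] - mean) > k * max(stdev, 1e-9):
            flagged.append(key)
    return flagged


if __name__ == "__main__":
    profile = build_baseline(samples=5)  # short window, just for the demo
    time.sleep(1)
    print("deviating metrics:", deviations(profile, sample_metrics()))
```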

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, such a design should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone that leads to systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. Comment: Submitted on June 15 to the Proceedings of the IEEE Special Issue on Adaptive and Scalable Communication Networks.
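    As a rough illustration of the bandit-style, model-free adaptation the paper argues for, the sketch below implements a discounted epsilon-greedy policy for choosing which fog node a device offloads to; the discounting crudely tracks nonstationarity. All names, the reward model, and the parameters are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: discounted epsilon-greedy selection of a fog node under
# bandit feedback. Discounting old observations gives crude adaptation to
# nonstationary conditions. Names, reward model, and parameters are invented.
import random


class DiscountedEpsilonGreedy:
    def __init__(self, n_arms: int, epsilon: float = 0.1, gamma: float = 0.95):
        self.epsilon = epsilon        # exploration probability
        self.gamma = gamma            # discount factor for old observations
        self.counts = [0.0] * n_arms  # discounted number of pulls per arm
        self.values = [0.0] * n_arms  # discounted sum of rewards per arm

    def _estimate(self, arm: int) -> float:
        if self.counts[arm] == 0:
            return float("-inf")
        return self.values[arm] / self.counts[arm]

    def select(self) -> int:
        if random.random() < self.epsilon or all(c == 0 for c in self.counts):
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=self._estimate)

    def update(self, arm: int, reward: float) -> None:
        # Age every arm, then credit the fresh observation to the chosen arm.
        self.counts = [self.gamma * c for c in self.counts]
        self.values = [self.gamma * v for v in self.values]
        self.counts[arm] += 1.0
        self.values[arm] += reward


if __name__ == "__main__":
    # Toy run: three fog nodes whose latency slowly drifts over time.
    policy = DiscountedEpsilonGreedy(n_arms=3)
    base_latency = [20.0, 40.0, 30.0]
    for t in range(2000):
        arm = policy.select()
        latency_ms = random.gauss(base_latency[arm] + 0.01 * t * arm, 5.0)
        policy.update(arm, reward=-latency_ms)  # lower latency = higher reward
    print("currently preferred fog node:", max(range(3), key=policy._estimate))
```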

    Timed Automata Models for Principled Composition of Middleware

    Middleware for Distributed Real-time and Embedded (DRE) systems has grown more and more complex in recent years due to the varying functional and temporal requirements of complex real-time applications. To enable DRE middleware to be configured and customized to meet the demands of different applications, a body of ongoing research has focused on applying model-driven development techniques to developing QoS-enabled middleware. While current approaches for modeling middleware focus on easing the task of assembling, deploying, and configuring middleware and middleware-based applications, a more formal basis for correct middleware composition and configuration in the context of individual applications is needed. While the modeling community has used more abstract, application-level formal models to uncover certain flaws in system design, a more fundamental and lower-level set of models is needed to uncover the subtler safety and timing errors introduced by interference between application computations, particularly in the face of alternative concurrency strategies in the middleware layer. In this research, we have examined how detailed formal models of lower-level middleware building blocks provide an appropriate level of abstraction both for modeling and for synthesizing a variety of kinds of middleware from these building blocks. When combined with model checking techniques, these formal models can help developers compose correct combinations of middleware mechanisms and configure those mechanisms for each particular application.
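    To make the building-block idea concrete, here is a deliberately simplified, discrete-time sketch of checking a timing property on one such block, a one-place request buffer with a dispatch deadline, by exhaustive state exploration. Real timed-automata tools work with symbolic clock zones; the integer clocks, constants, and the model itself are illustrative assumptions only.

```python
# Hypothetical sketch: a digitized (integer-clock) stand-in for a timed-automaton
# model of one middleware building block: a one-place request buffer that must
# dispatch within DEADLINE ticks of accepting a request. Bounded breadth-first
# exploration checks whether a deadline violation is reachable. Real tools such
# as UPPAAL use symbolic clock zones; the model and constants here are invented.
from collections import deque

DEADLINE = 4       # ticks allowed between accepting and dispatching a request
DISPATCH_MIN = 2   # dispatching becomes enabled only after this many ticks


def successors(state):
    """Enumerate next states of (location, clock)."""
    location, clock = state
    if location == "idle":
        return [("busy", 0)]                 # accept a request, reset the clock
    nxt = []
    if clock + 1 <= DEADLINE:
        nxt.append(("busy", clock + 1))      # let one tick pass (invariant holds)
    if clock >= DISPATCH_MIN:
        nxt.append(("idle", 0))              # dispatch and return to idle
    return nxt


def deadline_violation_reachable() -> bool:
    """BFS over the digitized state space, looking for a stuck 'busy' state."""
    seen, frontier = set(), deque([("idle", 0)])
    while frontier:
        state = frontier.popleft()
        if state in seen:
            continue
        seen.add(state)
        location, clock = state
        stuck = location == "busy" and clock + 1 > DEADLINE and clock < DISPATCH_MIN
        if stuck:
            return True                      # cannot wait, cannot dispatch in time
        frontier.extend(successors(state))
    return False


if __name__ == "__main__":
    print("deadline violation reachable:", deadline_violation_reachable())
```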

    Towards enabling cross-layer information sharing to improve today's content delivery systems

    Content is omnipresent, and without content the Internet would not be what it is today. End users consume content throughout the day: checking the latest news on Twitter in the morning, streaming music in the background while working, streaming movies or playing online games in the evening, and using apps (e.g., sleep trackers) even while they sleep at night. All of these kinds of content place very specific and different requirements on the transport: online gaming often requires a low-latency connection but needs little throughput, whereas video streaming requires high throughput but performs quite poorly under packet loss. Yet all content is transferred opaquely over the same transport, adhering to a strict separation of network layers. Even a modern transport protocol such as Multi-Path TCP, which is capable of utilizing multiple paths, cannot take these requirements or the needs of the content into account for its path selection. In this work we challenge the layer separation and show that sharing information across the layers is beneficial for consuming web and video content. To this end, we created an event-based simulator for evaluating how applications can make informed decisions about which interfaces to use for delivering different content, based on a set of pre-defined policies that encode the (performance) requirements or needs of that content. Our policies achieve speedups of a factor of two in 20% of our cases, show benefits in more than 50%, and create no overhead in any of the cases. For video content we created a full streaming system that allows an even finer-grained information exchange between the transport and the application. Our streaming system, called VOXEL, enables applications to select dynamically, at frame granularity, which video data to transfer based on the current network conditions. VOXEL drastically reduces video stalls in the 90th percentile by up to 97% while not sacrificing the stream's visual fidelity. We confirmed our performance improvements in a real-user study in which 84% of the participants clearly preferred watching videos streamed with VOXEL over the state of the art.
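    A minimal sketch of the policy idea follows: content types carry requirement profiles, and an interface is chosen by how few of those requirements it violates given current measurements. The policy table, interface statistics, and scoring rule are invented for illustration and are not the thesis's actual policy language.

```python
# Hypothetical sketch: pick a network interface per content type by matching a
# content profile against measured interface characteristics. The policy table,
# interface stats, and scoring rule are illustrative, not the thesis's design.
from dataclasses import dataclass


@dataclass
class InterfaceStats:
    name: str
    rtt_ms: float           # measured round-trip time
    throughput_mbps: float  # measured available throughput
    loss_pct: float         # measured packet loss


# Per-content requirements: what matters and how much is "enough".
POLICIES = {
    "gaming": {"max_rtt_ms": 40,  "min_throughput_mbps": 2,  "max_loss_pct": 2.0},
    "video":  {"max_rtt_ms": 200, "min_throughput_mbps": 10, "max_loss_pct": 0.5},
    "web":    {"max_rtt_ms": 100, "min_throughput_mbps": 5,  "max_loss_pct": 2.0},
}


def select_interface(content: str, interfaces: list) -> str:
    """Return the interface that violates the fewest policy requirements."""
    policy = POLICIES[content]

    def violations(iface: InterfaceStats) -> int:
        return sum([
            iface.rtt_ms > policy["max_rtt_ms"],
            iface.throughput_mbps < policy["min_throughput_mbps"],
            iface.loss_pct > policy["max_loss_pct"],
        ])

    return min(interfaces, key=violations).name


if __name__ == "__main__":
    ifaces = [
        InterfaceStats("wifi", rtt_ms=25, throughput_mbps=50, loss_pct=0.8),
        InterfaceStats("lte",  rtt_ms=60, throughput_mbps=20, loss_pct=0.1),
    ]
    for content in ("gaming", "video", "web"):
        print(content, "->", select_interface(content, ifaces))
```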

    Transition Pathways towards Design Principles of Self-Sovereign Identity

    Society's accelerating digital transformation during the COVID-19 pandemic highlighted clearly that the Internet lacks a secure, efficient, and privacy-oriented model for identity. Self-sovereign identity (SSI) aims to address core weaknesses of siloed and federated approaches to digital identity management from both users' and service providers' perspectives. SSI emerged as a niche concept in libertarian communities, and was initially strongly associated with blockchain technology. Later, when businesses and governments began to invest, it quickly evolved towards a mainstream concept. To investigate this evolution and its effects on SSI, we conduct design science research rooted in the theory of technological transition pathways. Our study identifies nine core design principles of SSI as deployed in relevant applications, and discusses associated competing political and socio-technical forces in this space. Our results shed light on SSI's key characteristics, its development pathway, and tensions in the transition between regimes of digital identity management.

    Operating System Contribution to Composable Timing Behaviour in High-Integrity Real-Time Systems

    The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material, and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming progressively less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
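    The essence of limited-preemptive (preemption-threshold) scheduling can be shown in a few lines: a newly released task preempts the running one only if its priority exceeds the running task's preemption threshold. The sketch below is a generic illustration of that rule, not the ORK+ or TiCOS implementation.

```python
# Hypothetical sketch: the core admission test of preemption-threshold
# (limited-preemptive) scheduling. A newly released task preempts the running
# task only if its priority is strictly higher than the running task's
# preemption threshold. Generic illustration only, not the thesis's kernels.
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    priority: int              # base priority (higher number = more urgent)
    preemption_threshold: int  # priority the task runs at once dispatched


def should_preempt(running: Task, released: Task) -> bool:
    """Limited-preemptive rule: compare against the running task's threshold."""
    return released.priority > running.preemption_threshold


if __name__ == "__main__":
    control_loop = Task("control_loop", priority=5, preemption_threshold=8)
    logger = Task("logger", priority=6, preemption_threshold=6)
    alarm = Task("alarm", priority=9, preemption_threshold=9)

    # The logger would preempt under fully preemptive scheduling (6 > 5),
    # but not here, because the control loop runs at threshold 8.
    print(should_preempt(control_loop, logger))  # False
    print(should_preempt(control_loop, alarm))   # True (9 > 8)
```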

    Coordinating Resource Use in Open Distributed Systems

    In an open distributed system, computational resources are peer-owned and distributed over time and space. The system is open to interactions with its environment, and resources can dynamically join or leave the system, or be discovered at runtime. This dynamicity creates opportunities to carry out computations without statically owned resources, harnessing the collective compute power of the resources connected by the Internet. However, realizing this potential requires efficient and scalable resource discovery, coordination, and control, which present challenges in a dynamic, open environment. In this thesis, I present an approach that addresses these challenges by separating the functionality concerns of concurrent computations from those of coordinating their resource use, with the purpose of reducing programming complexity and aiding the development of correct, efficient, and resource-aware concurrent programs. As a first step towards effectively coordinating distributed resources, I developed DREAM, a Distributed Resource Estimation and Allocation Model, which enables computations to reason about the future availability of resources. I then developed a fine-grained resource coordination scheme for distributed computations. The coordination scheme integrates DREAM-based resource reasoning into a distributed scheduler for deciding and enforcing fine-grained resource-use schedules for distributed computations. To control the overhead caused by the coordination, I implemented a tuner that explicitly balances the overhead of the control mechanisms against the extent of control exercised. The effectiveness and performance of the resource coordination approach have been evaluated using a number of case studies. Experimental results show that the approach can effectively schedule computations to support various types of coordination objectives, such as ensuring Quality-of-Service, power-efficient execution, and dynamic load balancing. The overhead caused by the coordination mechanism is relatively modest and adjustable through the tuner. In addition, the coordination mechanism does not add extra programming complexity to computations.
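    One simple way to reason about future resource availability, in the spirit of (but not identical to) DREAM, is slot-based reservation: a node tracks per-time-slot reservations and admits a computation only if every slot in its requested window still has capacity. The sketch below uses invented names and a single CPU-share dimension.

```python
# Hypothetical sketch: slot-based reasoning about future resource availability.
# A node keeps per-time-slot CPU reservations; a computation is admitted only
# if every slot in its requested window still has capacity. This illustrates
# the general idea of reserving future resource use, not DREAM's actual model.
from collections import defaultdict


class SlottedReservations:
    def __init__(self, capacity: float):
        self.capacity = capacity            # CPU share available per slot
        self.reserved = defaultdict(float)  # slot index -> reserved share

    def can_admit(self, start: int, length: int, share: float) -> bool:
        return all(self.reserved[t] + share <= self.capacity
                   for t in range(start, start + length))

    def admit(self, start: int, length: int, share: float) -> bool:
        if not self.can_admit(start, length, share):
            return False
        for t in range(start, start + length):
            self.reserved[t] += share
        return True


if __name__ == "__main__":
    node = SlottedReservations(capacity=1.0)
    print(node.admit(start=0, length=5, share=0.6))  # True
    print(node.admit(start=3, length=4, share=0.5))  # False: slots 3-4 overflow
    print(node.admit(start=5, length=4, share=0.5))  # True
```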

    Accurate and Resource-Efficient Monitoring for Future Networks

    Monitoring functionality is a key component of any network management system. It is essential for profiling network resource usage, detecting attacks, and capturing the performance of a multitude of services using the network. Traditional monitoring solutions operate on long timescales, producing periodic reports which are mostly used for manual and infrequent network management tasks. However, these practices have recently been questioned by the advent of Software Defined Networking (SDN). By empowering management applications with the right tools to perform automatic, frequent, and fine-grained network reconfigurations, SDN has made these applications more dependent than before on the accuracy and timeliness of monitoring reports. As a result, monitoring systems are required to collect considerable amounts of heterogeneous measurement data, process them in real-time, and expose the resulting knowledge on short timescales to network decision-making processes. Satisfying these requirements is extremely challenging given today’s larger network scales, massive and dynamic traffic volumes, and the stringent constraints on time availability and hardware resources. This PhD thesis tackles this important challenge by investigating how an accurate and resource-efficient monitoring function can be realised in the context of future, software-defined networks. Novel monitoring methodologies, designs, and frameworks are provided in this thesis, which scale with increasing network sizes and automatically adjust to changes in the operating conditions. These achieve the goal of efficient measurement collection and reporting, lightweight measurement-data processing, and timely delivery of monitoring knowledge.
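    A small example of the kind of self-adjusting collection such a monitoring function needs: shrink the polling interval when a counter is volatile and grow it when the counter is stable, so measurement cost follows traffic dynamics. The update rule, thresholds, and bounds below are illustrative assumptions, not the thesis's algorithms.

```python
# Hypothetical sketch: adapt a polling interval to measurement dynamics so that
# stable counters are polled less often and volatile ones more often. The
# thresholds, bounds, and update rule are illustrative assumptions only.
def next_interval(current_s: float, prev_value: float, new_value: float,
                  low: float = 0.02, high: float = 0.15,
                  min_s: float = 1.0, max_s: float = 60.0) -> float:
    """Shrink the interval when relative change is large, grow it when small."""
    change = abs(new_value - prev_value) / max(abs(prev_value), 1e-9)
    if change > high:
        current_s /= 2.0   # counter is volatile: poll twice as often
    elif change < low:
        current_s *= 1.5   # counter is stable: back off
    return min(max(current_s, min_s), max_s)


if __name__ == "__main__":
    interval, prev = 10.0, 100.0
    for value in (101.0, 150.0, 151.0, 152.0):  # one burst, then stable
        interval = next_interval(interval, prev, value)
        prev = value
        print(f"value={value:>6} next poll in {interval:.1f}s")
```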

    Addressing Complexity and Intelligence in Systems Dependability Evaluation

    Engineering and computing systems are increasingly complex, intelligent, and open-adaptive. When it comes to the dependability evaluation of such systems, certain challenges are posed by the characteristics of “complexity” and “intelligence”. The first aspect of complexity is the dependability modelling of large systems with many interconnected components and dynamic behaviours such as priority, sequencing, and repair. To address this, the thesis proposes a novel hierarchical solution to dynamic fault tree analysis using semi-Markov processes. A second aspect of complexity is the environmental conditions that may impact dependability, and their modelling. For instance, weather and logistics can influence maintenance actions and hence the dependability of an offshore wind farm. The thesis proposes a semi-Markov-based maintenance model called the “Butterfly Maintenance Model” (BMM) to model this complexity and accommodate it in dependability evaluation. A third aspect of complexity is the open nature of systems of systems, such as swarms of drones, which makes complete design-time dependability analysis infeasible. To address this aspect, the thesis proposes a dynamic dependability evaluation method using fault trees and Markov models at runtime. The challenge of “intelligence” arises because Machine Learning (ML) components do not exhibit programmed behaviour; their behaviour is learned from data. However, in traditional dependability analysis, systems are assumed to be programmed or designed. When a system has learned from data, a distributional shift of operational data from the training data may cause the ML component to behave incorrectly, e.g., to misclassify objects. To address this, a new approach called SafeML is developed that uses statistical distance measures to monitor the performance of ML against such distributional shifts. The thesis develops the proposed models and evaluates them on case studies, highlighting improvements to the state of the art, limitations, and future work.
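    The core of a SafeML-style check can be sketched with a single statistical distance: compare each feature of an operational batch against the training data with a two-sample Kolmogorov-Smirnov test and flag features that have drifted. SafeML itself combines several distance measures; the threshold and per-feature treatment here are simplifications for illustration.

```python
# Minimal sketch of a distributional-shift check in the spirit of SafeML:
# compare each feature of the operational batch against the training data with
# a two-sample Kolmogorov-Smirnov test and flag features whose distributions
# differ. The alpha threshold and single distance measure are simplifications.
import numpy as np
from scipy.stats import ks_2samp


def shifted_features(train: np.ndarray, operational: np.ndarray,
                     alpha: float = 0.01) -> list:
    """Return (index, KS statistic) for features whose distribution differs."""
    flagged = []
    for j in range(train.shape[1]):
        statistic, p_value = ks_2samp(train[:, j], operational[:, j])
        if p_value < alpha:
            flagged.append((j, statistic))
    return flagged


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(2000, 3))
    ops = rng.normal(0.0, 1.0, size=(500, 3))
    ops[:, 2] += 0.8                      # inject a shift in feature 2
    print(shifted_features(train, ops))   # expect feature 2 to be flagged
```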