1,866 research outputs found
Recommended from our members
A survey on online monitoring approaches of computer-based systems
This report surveys forms of online data collection that are in current use, as well as being the subject of research to adapt them to changing technology and demands, and that can be used as inputs to the assessment of dependability and resilience, although they are not primarily intended for this use.
Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) Filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits.
A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
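The bounded fault-isolation flow described in this abstract can be illustrated with a short sketch. This is not the dissertation's implementation: the region names and the health-metric probe are hypothetical stand-ins, and one call to the probe models one partial reconfiguration. The point is that a divide-and-conquer search over suspect regions, guided only by a runtime health metric and needing no test vectors, isolates a fault in a bounded number of reconfigurations.

```python
# Illustrative sketch (not the dissertation's implementation): isolating a
# faulty reconfigurable region by observing a health metric. Region names
# ("PRR0".."PRR7") and the health probe are hypothetical stand-ins.

def isolate_faulty_region(regions, healthy_without):
    """Binary-search for the single faulty region.

    `healthy_without(subset)` models one partial reconfiguration: the
    subset is taken out of service (its functions moved onto spare slack)
    and the runtime health metric is read. True means the system is now
    healthy, i.e. the fault lies inside `subset`.
    """
    suspects = list(regions)
    reconfigs = 0
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        reconfigs += 1                      # one reconfiguration of the fabric
        if healthy_without(half):           # fault is inside `half`
            suspects = half
        else:                               # fault is in the other half
            suspects = suspects[len(half):]
    return suspects[0], reconfigs

# Toy model: region "PRR5" is faulty.
faulty = "PRR5"
regions = [f"PRR{i}" for i in range(8)]
region_found, n = isolate_faulty_region(
    regions, lambda subset: faulty in subset)
print(region_found, n)  # bounded: at most ceil(log2(8)) = 3 reconfigurations
```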
Hardware accelerated real-time Linux video anonymizer
Master's dissertation in Industrial Electronics and Computers Engineering.
Embedded Systems are currently present in a wide range of everyday equipment, from TV-boxes,
televisions and routers to the indispensable smartphone.
The Linux Operating System, with its "one-size-fits-all" distribution philosophy, has become a viable alternative, providing extensive hardware support, debugging techniques, network communication protocols, among other functionalities, which have become the standard set of requirements in most modern embedded systems.
This operating system is appealing due to its open-source philosophy, which provides the user with a vast set of software libraries that enable faster development in a given domain and ease the integration of complex software.
Machine Learning algorithms are developed to execute tasks autonomously, i.e., without human supervision, and are present in the most varied technologies, from the image-focus system on a smartphone to the lane-boundary detection system of an autonomous driving system.
When compiled for embedded systems platforms, these algorithms demand processing effort and resources, such as memory footprint, that in most cases far exceed what is available to the system's application, making it necessary to implement the components that need greater processing power as hardware elements to ensure that timing requirements are satisfied.
This dissertation therefore proposes the creation of a video anonymization system that acquires, processes, and manipulates frames in order to guarantee anonymity, even during transmission.
Its implementation includes Object Detection techniques, making use of the combination
of hardware acceleration technologies: parallelization and execution in specialized hardware.
An implementation is then proposed that is constrained both in time and in resource consumption at the hardware and software levels.
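The anonymization step the abstract describes, manipulating frames so identities cannot be recovered even in transmission, can be sketched in miniature. The object detector is assumed here (bounding boxes are supplied directly), frames are plain 2D grayscale arrays rather than real video buffers, and the hardware-accelerated pipeline is out of scope.

```python
# Minimal sketch of the anonymization step only: pixelating detected
# regions of a frame. The object detector is assumed (boxes are given);
# frames are plain 2D grayscale lists, not real video buffers.

def pixelate_region(frame, box, block=2):
    """Replace each `block`x`block` tile inside `box` with its mean value.

    `box` is (row, col, height, width). Mutates `frame` in place.
    """
    r0, c0, h, w = box
    for r in range(r0, r0 + h, block):
        for c in range(c0, c0 + w, block):
            tile = [frame[i][j]
                    for i in range(r, min(r + block, r0 + h))
                    for j in range(c, min(c + block, c0 + w))]
            mean = sum(tile) // len(tile)
            for i in range(r, min(r + block, r0 + h)):
                for j in range(c, min(c + block, c0 + w)):
                    frame[i][j] = mean

def anonymize(frame, detections, block=2):
    """Pixelate every detected region (e.g. faces, license plates)."""
    for box in detections:
        pixelate_region(frame, box, block)
    return frame

frame = [[10 * r + c for c in range(6)] for r in range(6)]
anonymize(frame, [(2, 2, 2, 2)])        # one hypothetical detected face
print(frame[2][2], frame[2][3], frame[3][2], frame[3][3])  # all become 27
```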
Adaptive Traffic Fingerprinting for Darknet Threat Intelligence
Darknet technology such as Tor has been used by various threat actors for
organising illegal activities and data exfiltration. As such, there is a case
for organisations to block such traffic, or to try and identify when it is used
and for what purposes. However, anonymity in cyberspace has always been a
domain of conflicting interests. While it gives enough power to nefarious
actors to masquerade their illegal activities, it is also the cornerstone to
facilitate freedom of speech and privacy. We present a proof of concept for a
novel algorithm that could form the fundamental pillar of a darknet-capable
Cyber Threat Intelligence platform. The solution can reduce anonymity of users
of Tor, and considers the existing visibility of network traffic before
optionally initiating targeted or widespread BGP interception. In combination
with server HTTP response manipulation, the algorithm attempts to reduce the
candidate data set to eliminate client-side traffic that is most unlikely to be
responsible for server-side connections of interest. Our test results show that
MITM-manipulated server responses lead to the expected changes received by the Tor
client. Using simulation data generated by the Shadow network simulator, we show that the
detection scheme is effective, with a false positive rate of 0.001, while the sensitivity for
detecting non-targets was 0.016 ± 0.127. Our algorithm could assist collaborating
organisations willing to share their threat intelligence or cooperate during investigations.
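The candidate-elimination idea, reducing the client set by correlating MITM-manipulated server responses with observed client-side traffic, can be sketched roughly as follows. This illustrates only the elimination principle, not the paper's algorithm; the timing model and all names are hypothetical.

```python
# Illustrative sketch of candidate elimination (not the paper's algorithm):
# after each MITM-manipulated server response at a known time, drop clients
# whose observed Tor-bound traffic shows no corresponding change.

def eliminate_candidates(candidates, change_times, event_times, window=2):
    """Keep only clients whose traffic changed near every manipulation event.

    `change_times[client]` lists the times at which that client's observed
    traffic showed an unusual change; `event_times` are the times of the
    manipulated server responses; `window` is the allowed timing slack.
    """
    return {
        client for client in candidates
        if all(any(abs(t - e) <= window for t in change_times[client])
               for e in event_times)
    }

# Toy data: only c1 and c3 show changes near both events (t=10 and t=30).
candidates = {"c1", "c2", "c3"}
change_times = {"c1": [10, 31], "c2": [10], "c3": [11, 30]}
event_times = [10, 30]
survivors = sorted(eliminate_candidates(candidates, change_times, event_times))
print(survivors)  # ['c1', 'c3']
```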
Programming Persistent Memory
Beginning and experienced programmers will use this comprehensive guide to persistent memory programming. You will understand how persistent memory brings together several new software/hardware requirements, and offers great promise for better performance and faster application startup times—a huge leap forward in byte-addressable capacity compared with current DRAM offerings. This revolutionary new technology gives applications significant performance and capacity improvements over existing technologies. It requires a new way of thinking and developing, which makes this highly disruptive to the IT/computing industry. The full spectrum of industry sectors that will benefit from this technology includes, but is not limited to, in-memory and traditional databases, AI, analytics, HPC, virtualization, and big data. Programming Persistent Memory describes the technology and why it is exciting the industry. It covers the operating system and hardware requirements as well as how to create development environments using emulated or real persistent memory hardware. The book explains fundamental concepts; provides an introduction to persistent memory programming APIs for C, C++, JavaScript, and other languages; discusses RDMA with persistent memory; reviews security features; and presents many examples. Source code and examples that you can run on your own systems are included.
What You’ll Learn:
- Understand what persistent memory is, what it does, and the value it brings to the industry
- Become familiar with the operating system and hardware requirements to use persistent memory
- Know the fundamentals of persistent memory programming: why it is different from current programming methods, and what developers need to keep in mind when programming for persistence
- Look at persistent memory application development by example using the Persistent Memory Development Kit (PMDK)
- Design and optimize data structures for persistent memory
- Study how real-world applications are modified to leverage persistent memory
- Utilize the tools available for persistent memory programming, application performance profiling, and debugging
Who This Book Is For:
C, C++, Java, and Python developers, but also useful to software, cloud, and hardware architects across a broad spectrum of sectors, including cloud service providers, independent software vendors, high performance compute, artificial intelligence, data analytics, big data, etc.
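The core programming model the book teaches, ordinary byte-addressable stores made durable by an explicit flush, can be approximated in miniature. PMDK itself is a C library; the sketch below is only a conceptual analogy using a memory-mapped file, similar in spirit to the emulated persistent-memory development environments the book describes. On real persistent memory the flush would be a cache-line flush, e.g. PMDK's pmem_persist().

```python
# Conceptual analogy only (the book's examples use PMDK in C): byte-
# addressable persistence via a memory-mapped file with an explicit flush,
# as when developing against emulated persistent memory.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem_pool")
with open(path, "wb") as f:             # create a small "pool" file
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)    # map the pool into the address space
    pm[0:5] = b"hello"                  # store through ordinary byte writes
    pm.flush()                          # make the stores durable
    pm.close()

with open(path, "rb") as f:             # "restart": reopen and read back
    data = f.read(5)
print(data)  # b'hello'
```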
Embedding runtime verification post-deployment for real-time health management of safety-critical systems
As cyber-physical systems increase in both complexity and criticality, formal methods have gained traction for design-time verification of safety properties.
A lightweight formal method, runtime verification (RV), embeds checks necessary for safety-critical system health management; however, these techniques have been slow to appear in practice despite repeated calls by both industry and academia to leverage them.
Additionally, the state-of-the-art in RV lacks a best practice approach when a deployed system requires increased flexibility due to a change in mission, or in response to an emergent condition not accounted for at design time.
Human-robot interaction necessitates stringent safety guarantees to protect humans sharing the workspace, particularly in hazardous environments.
For example, Robonaut2 (R2) developed an emergent fault while deployed to the International Space Station.
Possibly inaccurate actuator readings trigger the R2 safety system, preventing further motion of a joint until a ground-control operator determines the root cause and initiates proper corrective action.
Operator time is scarce and expensive; when waiting, R2 is an obstacle instead of an asset.
We adapt the Realizable, Responsive, Unobtrusive Unit (R2U2) RV framework for resource-constrained environments.
We retrofit the R2 motor controller, embedding R2U2 within the remaining resources of the Field-Programmable Gate Array (FPGA) controlling the joint actuator.
We add online, stream-based, real-time system health monitoring in a provably unobtrusive way that does not interfere with the control of the joint.
We design and embed formal temporal logic specifications that disambiguate the emergent faults and enable automated corrective actions.
We overview the challenges and techniques for formally specifying behaviors of an existing command and data bus.
We present our specification debugging, validation, and refinement steps.
We demonstrate success in the Robonaut2 case study, then detail effective techniques and lessons learned from adding RV with real-time fault disambiguation under the constraints of a deployed system.
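The stream-based, real-time monitoring style described above can be illustrated with a small sketch. This is not R2U2 itself: it is a toy monitor for one bounded-response property of the kind such temporal logic specifications express ("every command is acknowledged within N steps"), with the trace encoding and the property chosen purely for illustration.

```python
# Illustrative stream-based monitor (not R2U2): checks the bounded-response
# property "every `cmd` is followed by an `ack` within n steps" over a
# command/telemetry stream, emitting a verdict at every timestep.

def monitor(stream, n=3):
    """Yield (time, ok) per step; ok=False once a cmd goes unacknowledged."""
    pending = []                      # timesteps of unacknowledged commands
    for t, (cmd, ack) in enumerate(stream):
        if ack:
            pending.clear()           # an ack clears outstanding commands
        if cmd:
            pending.append(t)
        # violation: the oldest pending command is at least n steps old
        ok = not (pending and t - pending[0] >= n)
        yield t, ok

# (cmd, ack) per timestep: a command at t=1 acknowledged at t=3 is fine;
# a command at t=5 that is never acknowledged trips the monitor at t=8.
trace = [(0, 0), (1, 0), (0, 0), (0, 1), (0, 0),
         (1, 0), (0, 0), (0, 0), (0, 0)]
verdicts = list(monitor(trace, n=3))
print(verdicts[3], verdicts[8])  # (3, True) (8, False)
```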
Improving System Reliability for Cyber-Physical Systems
Cyber-physical systems (CPS) are systems featuring a tight combination of, and coordination between, the system's computational and physical elements. Cyber-physical systems range from critical infrastructure, such as the power grid and transportation systems, to health and biomedical devices. System reliability, i.e., the ability of a system to perform its intended function under a given set of environmental and operational conditions for a given period of time, is a fundamental requirement of cyber-physical systems. An unreliable system often leads to disruption of service, financial cost and even loss of human life. An important and prevalent type of cyber-physical system meets the following criteria: processing large amounts of data; employing software as a system component; running online continuously; and having an operator in the loop because of human judgment and an accountability requirement for safety-critical systems. This thesis aims to improve system reliability for this type of cyber-physical system. To do so, I present a system evaluation approach entitled automated online evaluation (AOE), a data-centric runtime monitoring and reliability evaluation approach that works in parallel with the cyber-physical system to conduct automated evaluation continuously along the workflow of the system, using computational intelligence and self-tuning techniques, and to provide operator-in-the-loop feedback on reliability improvement. For example, abnormal input and output data at or between the multiple stages of the system can be detected and flagged through data quality analysis. As a result, alerts can be sent to the operator-in-the-loop. The operator can then take actions and make changes to the system based on the alerts in order to achieve minimal system downtime and increased system reliability.
One technique used by the approach is data quality analysis using computational intelligence, which evaluates data quality in an automated and efficient way in order to make sure the running system performs reliably as expected. Another technique is self-tuning, which automatically self-manages and self-configures the evaluation system to ensure that it adapts itself based on changes in the system and feedback from the operator. To implement the proposed approach, I further present a system architecture called the autonomic reliability improvement system (ARIS). This thesis investigates three hypotheses. First, I claim that automated online evaluation empowered by data quality analysis using computational intelligence can effectively improve system reliability for cyber-physical systems in the domain of interest indicated above. To prove this hypothesis, a prototype system needs to be developed and deployed in various cyber-physical systems, while certain reliability metrics are required to measure the system reliability improvement quantitatively. Second, I claim that self-tuning can effectively self-manage and self-configure the evaluation system based on changes in the system and feedback from the operator-in-the-loop to improve system reliability. Third, I claim that the approach is efficient: it should not have a large impact on overall system performance, and should introduce only minimal extra overhead to the cyber-physical system. Performance metrics should be used to measure the efficiency and added overhead quantitatively. Additionally, in order to conduct efficient and cost-effective automated online evaluation for data-intensive CPS, which require large volumes of data and devote much of their processing time to I/O and data manipulation, this thesis presents COBRA, a cloud-based reliability assurance framework.
COBRA provides automated multi-stage runtime reliability evaluation along the CPS workflow using data relocation services, a cloud data store, data quality analysis, and process scheduling with self-tuning to achieve scalability, elasticity and efficiency. Finally, in order to provide a generic way to compare and benchmark system reliability for CPS and to extend the approach described above, this thesis presents FARE, a reliability benchmark framework that employs a CPS reliability model and a set of methods and metrics for evaluation environment selection, failure analysis, and reliability estimation. The main contributions of this thesis include validation of the above hypotheses and empirical studies of the ARIS automated online evaluation system, the COBRA cloud-based reliability assurance framework for data-intensive CPS, and the FARE framework for benchmarking reliability of cyber-physical systems. This work has advanced the state of the art in CPS reliability research, expanded the body of knowledge in this field, and provided useful studies for further research.
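The data-quality-analysis idea at the heart of AOE, flagging abnormal data at or between pipeline stages and surfacing alerts to the operator-in-the-loop, can be sketched simply. The statistic used here (a sliding-window z-score) and the thresholds are illustrative choices, not the thesis's method.

```python
# Minimal sketch of the data-quality idea (statistic and thresholds are
# hypothetical): flag abnormal readings at a pipeline stage with a
# sliding-window z-score, producing alerts for the operator-in-the-loop.
from collections import deque
from statistics import mean, stdev

def quality_alerts(samples, window=10, z_max=3.0):
    """Return (index, value) alerts for samples far outside the recent window."""
    recent = deque(maxlen=window)
    alerts = []
    for i, x in enumerate(samples):
        if len(recent) >= 3:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > z_max:
                alerts.append((i, x))   # surface to the operator; keep the
                continue                # anomaly out of the baseline window
        recent.append(x)
    return alerts

# Toy stage output: one abnormal reading at index 5 is flagged.
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 55.0, 10.1, 9.8]
alerts = quality_alerts(stream)
print(alerts)  # [(5, 55.0)]
```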