
    Computer programs: Special applications. A compilation

    Computer programs covering technological developments are compiled in three areas: management techniques, measurement and testing programs, and navigation and tracking programs. Machine requirements, programming language, and the reporting source are included for each program to aid the dissemination of information.

    Extensible Performance-Aware Runtime Integrity Measurement

    Today's interconnected world consists of a broad set of online activities, including banking, shopping, managing health records, and social media, while relying heavily on servers to manage extensive sets of data. However, stealthy rootkit attacks on this infrastructure have placed these servers at risk. Security researchers have proposed using an existing x86 CPU mode called System Management Mode (SMM) to search for rootkits from a hardware-protected, isolated, and privileged location. SMM has broad visibility into operating system resources, including memory regions and CPU registers. However, using SMM for runtime integrity measurement mechanisms (SMM-RIMMs) would significantly expand the amount of CPU time spent away from operating system and hypervisor (host software) control, resulting in potentially serious system impacts. To be a candidate for production use, SMM RIMMs would need to be resilient, performant, and extensible. We developed the EPA-RIMM architecture guided by the principles of extensibility, performance awareness, and effectiveness. EPA-RIMM incorporates a security check description mechanism that allows dynamic changes to the set of resources to be monitored. It minimizes system performance impacts by decomposing security checks into shorter tasks that can be independently scheduled over time. We present a performance methodology for SMM to quantify system impacts, as well as a simulator that allows for the evaluation of different methods of scheduling security inspections. Our SMM-based EPA-RIMM prototype leverages insights from the performance methodology to detect host software rootkits with reduced system impact. EPA-RIMM demonstrates that SMM-based rootkit detection can be made performance-efficient and effective, providing a new tool for defense.
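    The abstract's central idea is decomposing long-running integrity checks into short tasks scheduled over time so that no single SMM entry monopolizes the CPU. A minimal sketch of that decomposition-and-scheduling pattern (all names and the chunk size are illustrative assumptions, not EPA-RIMM's actual interface):

```python
# Hypothetical sketch of check decomposition: one integrity check (e.g.
# hashing a kernel text region) is split into fixed-size tasks, and a
# bounded batch of tasks runs per time slot to cap per-entry latency.
from collections import deque

CHUNK = 4096  # bytes measured per task (assumed granularity)

def decompose(check_name, start, length, chunk=CHUNK):
    """Split one integrity check into independently schedulable tasks."""
    return [(check_name, off, min(chunk, start + length - off))
            for off in range(start, start + length, chunk)]

def schedule(tasks, budget_tasks_per_slot):
    """Round-robin the task queue, running a bounded batch per time slot."""
    queue, slots = deque(tasks), []
    while queue:
        n = min(budget_tasks_per_slot, len(queue))
        slots.append([queue.popleft() for _ in range(n)])
    return slots

tasks = decompose("kernel_text_hash", 0x1000, 5 * 4096)
slots = schedule(tasks, budget_tasks_per_slot=2)  # 5 tasks -> 3 slots
```

    Spreading the batches across slots is what trades detection latency for bounded per-entry system impact, which is the scheduling question the paper's simulator evaluates.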

    High tech automated bottling process for small to medium scale enterprises using PLC, scada and basic industry 4.0 concepts

    The automation of industrial processes has been one of the greatest innovations in the industrial sector. It allows faster and more accurate operation of production processes while producing more output than old manual production techniques. In the beverage industry, this innovation was also well embraced, especially to improve bottling processes. However, it has been shown that continuous optimization of automation techniques, following current trends in automation, is the only way industrial companies will survive in a very competitive market. This is more challenging for small to medium scale enterprises (SMEs), which are not always keen on adopting new technologies for fear of overspending their limited revenues. By doing so, SMEs expose themselves to limited growth and a vulnerable life cycle in this fast-growing automation world. The main contribution of this study was to develop practical and affordable applications that optimize the bottling process of an SME beverage plant by combining its existing production resources with basic principles of the current trend in automation, Industry 4.0 (I40). This research enabled the small beverage plant to achieve a higher production rate, better delivery times, and easy access to plant information through production forecasting using linear regression, predictive maintenance using a speed-vibration sensor, and decentralization of production monitoring via cloud applications. The plant's existing Siemens S7-1200 programmable logic controller (PLC) and ZENON supervisory control and data acquisition (SCADA) system were used to program the optimized process with very few additional resources. This study also opens the door for SMEs in general to use I40 in their production processes with available means and at limited cost.
    School of Computing, M.Tech (Engineering, Electrical)
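    The production-forecast component the abstract mentions is a linear regression over historical output. A minimal sketch of that idea, with entirely made-up daily bottle counts (the plant's actual data and tooling are not described in the abstract):

```python
# Illustrative least-squares line fit for production forecasting:
# fit y = a*x + b to past daily bottle counts, then extrapolate.
def fit_line(xs, ys):
    """Ordinary least squares for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx  # slope, intercept

days = [1, 2, 3, 4, 5]                      # hypothetical history
bottles = [1000, 1030, 1070, 1090, 1120]    # hypothetical daily output
a, b = fit_line(days, bottles)
forecast_day6 = a * 6 + b                   # next-day production estimate
```

    In a real deployment the history would come from the SCADA historian, and the forecast would feed planning dashboards rather than a script.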

    Hardware Mechanisms for Efficient Memory System Security

    The security of a computer system hinges on the trustworthiness of the operating system and the hardware, as applications rely on them to protect code and data. As a result, multiple protections for safeguarding the hardware and OS from attacks are being continuously proposed and deployed. These defenses, however, are far from ideal as they only provide partial protection, require complex hardware and software stacks, or incur high overheads. This dissertation presents hardware mechanisms for efficiently providing strong protections against an array of attacks on the memory hardware and the operating system’s code and data. In the first part of this dissertation, we analyze and optimize protections targeted at defending memory hardware from physical attacks. We begin by showing that, contrary to popular belief, current DDR3 and DDR4 memory systems that employ memory scrambling are still susceptible to cold boot attacks (where the DRAM is frozen to give it sufficient retention time and is then re-read by an attacker after reboot to extract sensitive data). We then describe how memory scramblers in modern memory controllers can be transparently replaced by strong stream ciphers without impacting performance. We also demonstrate how the large storage overheads associated with authenticated memory encryption schemes (which enable tamper-proof storage in off-chip memories) can be reduced by leveraging compact integer encodings and error-correcting code (ECC) DRAMs – without forgoing the error detection and correction capabilities of ECC DRAMs. The second part of this dissertation presents Neverland: a low-overhead, hardware-assisted, memory protection scheme that safeguards the operating system from rootkits and kernel-mode malware. 
Once the system is done booting, Neverland’s hardware takes away the operating system’s ability to overwrite certain configuration registers, as well as portions of its own physical address space that contain kernel code and security-critical data. Furthermore, it prohibits the CPU from fetching privileged code from any memory region lying outside the physical addresses assigned to the OS kernel and drivers. This combination of protections makes it extremely hard for an attacker to tamper with the kernel or introduce new privileged code into the system – even in the presence of software vulnerabilities. Neverland enables operating systems to reduce their attack surface without having to rely on complex integrity monitoring software or hardware. The hardware mechanisms we present in this dissertation provide building blocks for constructing a secure computing base while incurring lower overheads than existing protections.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147604/1/salessaf_1.pd
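    The memory-scrambling result above rests on XORing each memory line with a keystream derived from a secret key and the line's physical address; the dissertation's point is that a weak fixed scrambler can be swapped for a strong keyed function without changing this structure. A conceptual sketch, using keyed BLAKE2 from Python's standard library as a stand-in for a hardware stream cipher (not the actual cipher or interface from the dissertation):

```python
# Conceptual address-based memory scrambling: each line is XORed with a
# keystream derived from (secret key, physical address). XOR is involutive,
# so the same operation scrambles and descrambles.
import hashlib

def keystream(key: bytes, addr: int, length: int) -> bytes:
    # Keyed BLAKE2b as a stand-in keyed PRF over the line address.
    return hashlib.blake2b(addr.to_bytes(8, "little"), key=key,
                           digest_size=length).digest()

def scramble(key: bytes, addr: int, line: bytes) -> bytes:
    ks = keystream(key, addr, len(line))
    return bytes(a ^ b for a, b in zip(line, ks))

key = b"secret-dram-key!"      # hypothetical per-boot key
line = b"sensitive data.."     # one 16-byte memory line
ct = scramble(key, 0x1000, line)
assert scramble(key, 0x1000, ct) == line  # descrambling round-trips
```

    With a strong keyed function, a cold-boot attacker who dumps the frozen DRAM sees only the scrambled bytes and cannot recover plaintext without the key, which never leaves the memory controller.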

    Advanced information processing system for advanced launch system: Avionics architecture synthesis

    The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low Earth orbit at one tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for ALS. The AIPS-for-ALS architecture synthesis process, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture, is described.

    The evolution of energy requirements of smartphones based on user behaviour and implications of the COVID-19 era

    Smartphones have evolved to become frequent companions to humans. The common problem shared by Android smartphone users was, and continues to be, saving battery and avoiding the need for recharging tools. A significant number of studies have been performed in the general field of "saving energy in smartphones". During the state of global lockdown, the use of smartphone devices skyrocketed, and many governments implemented location-tracking applications for their citizens as a means of ensuring that the imposed governmental restrictions were being adhered to. Since smartphones are battery-powered, the opportunity to conserve electricity and ensure that the handset does not have to be charged as often, or that it does not die and impede location tracking during this period of crisis, is of vital significance, impacting not only the reliability of tracking but also the usability of the mobile itself. While there are methods to reduce the battery drain of mobile app use, they are not fully utilized by users. At the same time, the manuscript demonstrates the growing prevalence of mobile applications in daily life, as well as disproportionately increasing phone functionality, which results in a dependency on smartphone use and the need for energy to recharge and operate these smartphones.

    Investigating Real Work Situations in Translation Agencies. Work Content and Its Components

    The aim of this article is to analyze work content and its components in translation agencies. In the conceptual part of the article, we refer to concepts taken from the sociology of work and translation studies. In the analytical part, we use data produced by Stelmach in her study carried out in a small translation agency in Poland using the technique of self-observation (Stelmach 2000). The aim of Stelmach’s study was to record and analyze all the activities that form part of the production process of a translation service. Because the observation was continual in time, it provided a complete list of all the activities carried out by the permanent staff occupying two internal jobs. Stelmach’s approach was quantitative and not focused on specific, translation-related work organization. In this article, we reinterpret these activities as the content of the work in translation-related internal positions, and compare it with Gouadec’s model of translation service provision process (Gouadec 2002, 2005a, 2005b, 2007). The data analyzed show the importance of outsourcing in the everyday activity of translation agencies and (partly as a consequence of this outsourcing) the magnitude and importance of the management activities carried out by the staff.

    Fault Tolerant Electronic System Design

    Due to technology scaling, which brings reduced transistor size, higher density, lower voltage, and more aggressive clock frequencies, VLSI devices may become more sensitive to soft errors. Especially for devices used in safety- and mission-critical applications, dependability and reliability are becoming increasingly important constraints during the development of systems built on or around them. Other phenomena (e.g., aging and wear-out effects) also have negative impacts on the reliability of modern circuits. Recent research shows that even at sea level, radiation particles can still induce soft errors in electronic systems. On one hand, processor-based systems are commonly used in a wide variety of applications, including safety-critical and high-availability missions, e.g., in the automotive, biomedical, and aerospace domains. In these fields, an error may produce catastrophic consequences. Thus, dependability is a primary target that must be achieved while taking into account tight constraints in terms of cost, performance, power, and time to market. With standards and regulations (e.g., ISO-26262, DO-254, IEC-61508) clearly specifying the targets to be achieved and the methods to prove their achievement, techniques working at the system level are particularly attractive. On the other hand, Field Programmable Gate Array (FPGA) devices are becoming more and more attractive, including in safety- and mission-critical applications, due to the high performance, low power consumption, and flexibility for reconfiguration they provide. Two types of FPGAs are commonly used, distinguished by their configuration memory cell technology: SRAM-based and Flash-based FPGAs.
For SRAM-based FPGAs, the SRAM cells of the configuration memory are highly susceptible to radiation-induced effects, which can lead to system failure; for Flash-based FPGAs, even though their non-volatile configuration memory cells are almost immune to Single Event Upsets induced by energetic particles, the floating-gate switches and the logic cells in the configuration tiles can still suffer from Single Event Effects when hit by a highly charged particle. Analysis and mitigation techniques for Single Event Effects on FPGAs are therefore becoming increasingly important in the design flow, especially when reliability is one of the main requirements.
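    A classic mitigation for the Single Event Upsets described above is triple modular redundancy (TMR): the logic is triplicated and the three outputs are voted bitwise, so a single upset copy is outvoted. A minimal sketch of the voter (the abstract does not name TMR specifically; this is offered as the standard textbook example of such a mitigation):

```python
# Bitwise majority voter for triple modular redundancy: for each bit,
# the output follows whichever value at least two of the three copies agree on.
def tmr_vote(a: int, b: int, c: int) -> int:
    """Majority of three redundant outputs, computed per bit."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011_0010
upset = good ^ (1 << 4)            # a single-event upset flips one bit
assert tmr_vote(good, good, upset) == good  # the faulty copy is outvoted
```

    In an FPGA design the voter itself is a hardened or triplicated block, and configuration-memory scrubbing repairs the upset copy before a second fault can accumulate.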

    Resilience of an embedded architecture using hardware redundancy

    In the last decade, the dominance of the general computing systems market has been overtaken by embedded systems, with billions of units manufactured every year. Embedded systems appear in contexts where continuous operation is of utmost importance and where failure can have profound consequences. Nowadays, radiation poses a serious threat to the reliable operation of safety-critical systems. Fault avoidance techniques, such as radiation hardening, have been commonly used in space applications. However, these components are expensive, lag behind commercial components in performance, and do not provide 100% fault elimination. Without fault-tolerant mechanisms, many of these faults can become errors at the application or system level, which, in turn, can result in catastrophic failures. In this work we study the concepts of fault tolerance and dependability and extend these concepts, providing our own definition of resilience. We analyse the physics of radiation-induced faults, the damage mechanisms of particles, and the process that leads to computing failures. We provide extensive taxonomies of 1) existing fault-tolerant techniques and 2) the effects of radiation in state-of-the-art electronics, analysing and comparing their characteristics. We propose a detailed model of faults and provide a classification of the different types of faults at various levels. We introduce an algorithm of fault tolerance and define the system states and actions necessary to implement it. We introduce novel hardware and system software techniques that provide a more efficient combination of reliability, performance and power consumption than existing techniques. We propose a new element of the system, called the syndrome, that is the core of a resilient architecture whose software and hardware can adapt to reliable and unreliable environments. We implement a software simulator and disassembler and introduce a testing framework in combination with ERA’s assembler and commercial hardware simulators.
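    The fault-to-error-to-failure chain described above is commonly studied with software fault injection, as in the simulator this abstract mentions. A toy sketch of the classification step (the computation and bit positions are invented for illustration; the thesis's simulator is far more detailed):

```python
# Inject a single bit flip into a value mid-computation and classify the
# outcome: "masked" if the result is unchanged, "error" if the fault
# propagates to the output.
def computation(x: int) -> int:
    return (x & 0x0F) * 3  # only the low nibble is live; high bits are dead

def inject(x: int, bit: int) -> int:
    return x ^ (1 << bit)  # model a single-event upset as one bit flip

def classify(x: int, bit: int) -> str:
    return "masked" if computation(inject(x, bit)) == computation(x) else "error"

assert classify(0b0000_0001, 7) == "masked"  # flip in an unused high bit
assert classify(0b0000_0001, 0) == "error"   # flip in a live bit propagates
```

    Sweeping such injections over all bits and program points yields the fault-classification statistics that motivate where hardware redundancy is actually needed.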