215 research outputs found

    Dynamic Binary Translation for Embedded Systems with Scratchpad Memory

    Get PDF
    Embedded software development has recently changed with advances in computing. Rather than fully co-designing software and hardware to perform a relatively simple task, nowadays embedded and mobile devices are designed as a platform where multiple applications can be run, new applications can be added, and existing applications can be updated. In this scenario, traditional constraints in embedded systems design (i.e., performance, memory and energy consumption, and real-time guarantees) are more difficult to address. New concerns (e.g., security) have become important and increase software complexity as well. In general-purpose systems, Dynamic Binary Translation (DBT) has been used to address these issues with services such as Just-In-Time (JIT) compilation, dynamic optimization, virtualization, power management and code security. In embedded systems, however, DBT is not usually employed due to performance, memory and power overhead. This dissertation presents StrataX, a low-overhead DBT framework for embedded systems. StrataX addresses the challenges faced by DBT in embedded systems using novel techniques. To reduce DBT overhead, StrataX loads code from NAND-Flash storage and translates it into a Scratchpad Memory (SPM), a software-managed on-chip SRAM with limited capacity. SPM has access latency similar to a hardware cache but consumes less power and chip area. StrataX manages SPM as a software instruction cache, and employs victim compression and pinning to reduce retranslation cost and capture frequently executed code in the SPM. To prevent performance loss due to excessive code expansion, StrataX minimizes the amount of code inserted by DBT to maintain control of program execution. When a hardware instruction cache is available, StrataX dynamically partitions translated code between the SPM and main memory. With these techniques, StrataX has low performance overhead relative to native execution for MiBench programs. Further, it simplifies embedded software and hardware design by operating transparently to applications without any special hardware support. StrataX achieves sufficiently low overhead to make it feasible to use DBT in embedded systems to address important design goals and requirements.
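    The SPM management strategy described above can be pictured as a software instruction cache with pinning and a compressed victim buffer. The following Python sketch is purely illustrative (the class and method names are invented here, and real translated code lives in on-chip SRAM rather than a dictionary); it only shows how pinning and victim compression reduce retranslation cost on eviction.

        import zlib

        class SpmCodeCache:
            """Sketch of a software-managed instruction cache in scratchpad memory (SPM),
            with pinning and a compressed victim cache.  Names and sizes are illustrative,
            not StrataX's actual data structures."""

            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.used = 0
                self.blocks = {}        # block address -> translated code bytes (in SPM)
                self.pinned = set()     # hot blocks that must never be evicted
                self.victims = {}       # block address -> zlib-compressed code (in DRAM)

            def lookup(self, addr, translate):
                # Hit in SPM: run directly.
                if addr in self.blocks:
                    return self.blocks[addr]
                # Hit in the victim cache: decompress instead of retranslating.
                if addr in self.victims:
                    code = zlib.decompress(self.victims.pop(addr))
                else:
                    code = translate(addr)          # expensive: invoke the translator
                self._insert(addr, code)
                return code

            def pin(self, addr):
                # Frequently executed blocks are pinned so they stay resident in SPM.
                self.pinned.add(addr)

            def _insert(self, addr, code):
                # Evict unpinned blocks (FIFO here for brevity) until the block fits,
                # compressing each victim so it can be restored cheaply later.
                while self.used + len(code) > self.capacity:
                    victim = next((a for a in self.blocks if a not in self.pinned), None)
                    if victim is None:
                        break           # everything resident is pinned
                    evicted = self.blocks.pop(victim)
                    self.used -= len(evicted)
                    self.victims[victim] = zlib.compress(evicted)
                self.blocks[addr] = code
                self.used += len(code)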

    On-Disk Data Processing: Issues and Future Directions

    Get PDF
    In this paper, we present a survey of "on-disk" data processing (ODDP). ODDP, which is a form of near-data processing, refers to the computing arrangement where the secondary storage drives have data processing capability. Proposed ODDP schemes vary widely in terms of their data processing capability, target applications, architecture and the kind of storage drive employed. Some ODDP schemes provide only a specific but heavily used operation such as sort, whereas others provide a full range of operations. Recently, with the advent of Solid State Drives, powerful and extensive ODDP solutions have been proposed. In this paper, we present a thorough review of the architectures developed for different on-disk processing approaches along with current and future challenges, and identify the directions in which ODDP can evolve.
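    The basic arrangement is easy to picture: instead of shipping raw data to the host, the host pushes an operation to the drive and receives only the result. The toy sketch below (the SmartDrive class and its API are hypothetical, not any particular ODDP scheme) contrasts host-side filtering with an on-disk scan.

        # Toy illustration of the near-data-processing idea behind ODDP: the host pushes
        # a predicate to the drive, and only matching records cross the storage interface.
        class SmartDrive:
            def __init__(self, records):
                self.records = records          # data resident on the drive

            def read_all(self):
                # Conventional path: ship every record to the host.
                return list(self.records)

            def scan(self, predicate):
                # ODDP path: evaluate the predicate on the drive's own processor
                # and return only the qualifying records.
                return [r for r in self.records if predicate(r)]

        drive = SmartDrive([{"id": i, "temp": 20 + i % 50} for i in range(100_000)])

        # Host-side filtering moves 100,000 records; on-disk filtering moves only the hits.
        host_filtered = [r for r in drive.read_all() if r["temp"] > 65]
        disk_filtered = drive.scan(lambda r: r["temp"] > 65)
        assert host_filtered == disk_filtered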

    Generic on-board-computer hardware and software development for nanosatellite applications

    Get PDF
    This study outlines the results obtained from the development of a generic nanosatellite on-board computer (OBC). The nanosatellite OBC is a non-mission-specific design and as such it must be adaptable to changing mission requirements in order to be suitable for varying nanosatellite missions. Focus is placed on the commercial-off-the-shelf (COTS) principle, where commercial components are used and evaluated for their potential performance in nanosatellite applications. The OBC design is prototyped and subjected to tests to evaluate its performance and its ability to survive in space.

    Physically Dense Server Architectures.

    Full text link
    Distributed, in-memory key-value stores have emerged as one of today's most important data center workloads. Being critical for the scalability of modern web services, vast resources are dedicated to key-value stores in order to ensure that quality of service guarantees are met. These resources include: many server racks to store terabytes of key-value data, the power necessary to run all of the machines, networking equipment and bandwidth, and the data center warehouses used to house the racks. There is, however, a mismatch between the key-value store software and the commodity servers on which it is run, leading to inefficient use of resources. The primary cause of inefficiency is the overhead incurred from processing individual network packets, which typically carry small payloads and require minimal compute resources. Thus, one of the key challenges as we enter the exascale era is how to best adjust to the paradigm shift from compute-centric to storage-centric data centers. This dissertation presents a hardware/software solution that addresses the inefficiency issues present in the modern data centers on which key-value stores are currently deployed. First, it proposes two physical server designs, both of which use 3D-stacking technology and low-power CPUs to improve density and efficiency. The first 3D architecture---Mercury---consists of stacks of low-power CPUs with 3D-stacked DRAM. The second architecture---Iridium---replaces DRAM with 3D NAND Flash to improve density. The second portion of this dissertation proposes an enhanced version of the Mercury server design---called KeyVault---that incorporates integrated, zero-copy network interfaces along with an integrated switching fabric. In order to utilize the integrated networking hardware, as well as reduce the response time of requests, a custom networking protocol is proposed. Unlike prior works on accelerating key-value stores---e.g., by completely bypassing the CPU and OS when processing requests---this work only bypasses the CPU and OS when placing network payloads into a process' memory. The insight behind this is that because most of the overhead comes from processing packets in the OS kernel---and not the request processing itself---direct placement of a packet's payload is sufficient to provide higher throughput and lower latency than prior approaches.
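    To make the workload concrete, the sketch below is a minimal UDP key-value server; the GET/PUT wire format is invented for illustration. It shows the conventional kernel-socket path whose per-packet cost dominates for small requests: in a KeyVault-style design the NIC would place each payload directly into the process's memory rather than traversing this path.

        import socket

        store = {}
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9000))

        while True:
            packet, addr = sock.recvfrom(1500)      # one small request per packet
            op, _, rest = packet.partition(b" ")
            if op == b"GET":
                sock.sendto(store.get(rest, b""), addr)
            elif op == b"PUT":
                key, _, value = rest.partition(b" ")
                store[key] = value
                sock.sendto(b"OK", addr)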

    Reification: A Process to Configure Java Realtime Processors

    Get PDF
    Real-time systems impose stringent requirements on both the processor and the software application. The primary concerns are speed and the predictability of execution times. In all real-time applications the developer must identify and calculate the worst-case execution times (WCET) of their software. In almost all cases the processor's design complexity impacts the analysis when calculating the WCET. Design features which impact this analysis include caches and instruction pipelining. With both caches and pipelining, the time taken for a particular instruction can vary depending on cache and pipeline contents. When calculating the WCET the developer must therefore ignore the speed advantages of these enhancements and use the unenhanced instruction timings. This investigation concerns a Java processor targeted to run within an FPGA environment (a Java soft chip) supporting Java real-time applications. The investigation focuses on a simple processor design that allows simple analysis of WCET. The processor design has no cache and no instruction pipeline enhancements, yet achieves higher performance than existing designs with these enhancements. The investigation centers on a process that translates Java bytecodes and folds these translated codes into a modified Harvard Micro Controller (HMC). The modifications include better alignment with the application code and take advantage of the FPGA's parallel capability. A prototyped ontology is used in which the top-level categories defined by Sowa are expanded to support the process. The proposed HMC and process are used to produce the investigation results. Performance testing using the Sobel edge detection algorithm is used to compare the results with the only Java processor claiming real-time abilities.
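    A minimal illustration of why the simple design eases WCET analysis: with no cache and no pipeline, every instruction has a fixed cycle cost, so the WCET of a straight-line block is a sum and a branch contributes the maximum of its two paths. The opcode names and cycle counts below are invented for the example.

        # Fixed worst-case cycle cost per bytecode (example values only).
        CYCLES = {"iload": 2, "iadd": 1, "imul": 4, "istore": 2, "if_icmpgt": 3}

        def block_wcet(instructions):
            # With deterministic timings, a block's WCET is a simple sum.
            return sum(CYCLES[op] for op in instructions)

        then_path = ["iload", "iload", "imul", "istore"]
        else_path = ["iload", "iadd", "istore"]

        # A conditional contributes the slower of its two paths.
        wcet = block_wcet(["iload", "iload", "if_icmpgt"]) + max(block_wcet(then_path),
                                                                 block_wcet(else_path))
        print(wcet)   # deterministic: no timing depends on cache or pipeline state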

    Compare multimedia frameworks in mobile platforms

    Get PDF
    Multimedia functionality is currently one of the most important features in mobile devices. Many modern mobile platforms handle multimedia requirements through a centralized software stack called a multimedia framework. A multimedia framework belongs to the middleware layer of the mobile operating system. It can be considered a bridge that connects the operating system kernel and hardware drivers with UI applications. It supplies high-level APIs that offer simple and easy solutions for complicated multimedia tasks to UI application developers. A multimedia framework also manages and utilizes low-level system software and hardware in an efficient manner, offering a centralized solution between high-level demands and low-level system resources. In this M.Sc. thesis project we have studied, analyzed and compared the open source GStreamer, Android Stagefright and Microsoft Silverlight Media Framework from several perspectives. Some of the comparison perspectives are architecture, supported use cases, extensibility, implementation language and programming language support (bindings), developer support, and legal status. One of the main contributions of this thesis is a detailed clarification of the strengths and weaknesses of each framework. Furthermore, the thesis should serve as decision-making guidance when one needs to select a multimedia framework for a project. Moreover, to give a concrete impression of the three multimedia frameworks, a basic media player implementation is demonstrated with source code in the thesis.
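    As a taste of what such high-level APIs look like, the sketch below plays a media file with GStreamer's standard playbin element through the PyGObject bindings; it is not the player from the thesis, and the file URI is a placeholder.

        import gi
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst

        Gst.init(None)

        # playbin is GStreamer's high-level playback element: it builds the demuxer,
        # decoder and sink pipeline internally, which is exactly the kind of "simple API
        # over complicated multimedia tasks" a framework is meant to provide.
        player = Gst.ElementFactory.make("playbin", "player")
        player.set_property("uri", "file:///tmp/example.ogg")   # placeholder URI
        player.set_state(Gst.State.PLAYING)

        # Block until playback finishes or an error is posted on the pipeline bus.
        bus = player.get_bus()
        msg = bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                                     Gst.MessageType.EOS | Gst.MessageType.ERROR)
        player.set_state(Gst.State.NULL)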

    Secure portable execution and storage environments: A capability to improve security for remote working

    Get PDF
    Remote working is a practice that provides economic benefits to both the employing organisation and the individual. However, evidence suggests that organisations implementing remote working have limited appreciation of the security risks, particularly those impacting upon the confidentiality and integrity of information and also on the integrity and availability of the remote worker’s computing environment. Other research suggests that an organisation that does appreciate these risks may veto remote working, resulting in a loss of economic benefits. With the implementation of high speed broadband, remote working is forecast to grow, and therefore it is appropriate that improved approaches to managing security risks are researched. This research explores the use of secure portable execution and storage environments (secure PESEs) to improve information security for the remote work categories of telework, and mobile and deployed working. This thesis with publication makes an original contribution to improving remote work information security through the development of a body of knowledge (consisting of design models and design instantiations) and the assertion of a nascent design theory. The research was conducted using design science research (DSR), a paradigm where the research philosophies are grounded in design and construction. Following an assessment of both the remote work information security issues and threats, and preparation of a set of functional requirements, a secure PESE concept was defined. The concept is represented by a set of attributes that encompass the security properties of preserving the confidentiality, integrity and availability of the computing environment and data. A computing environment that conforms to the concept is considered to be a secure PESE, the implementation of which consists of a highly portable device utilising secure storage and an uploadable (onto a PC) secure execution environment. The secure storage and execution environment combine to address the information security risks in the remote work location. A research gap was identified, as no existing ‘secure PESE’-like device fully conformed to the concept, enabling a research problem and objectives to be defined. Novel secure storage and execution environments were developed and used to construct a secure PESE suitable for commercial remote work and a high-assurance secure PESE suitable for security-critical remote work. The commercial secure PESE was trialled with an existing telework team looking to improve security, and the high-assurance secure PESE was trialled within an organisation that had previously vetoed remote working due to the sensitivity of the data it processed. An evaluation of the research findings found that the objectives had been satisfied. Using DSR evaluation frameworks it was determined that the body of knowledge had improved an area of study, with sufficient evidence generated to assert a nascent design theory for secure PESEs. The thesis highlights the limitations of the research, and opportunities for future work are also identified. This thesis presents ten published papers coupled with additional doctoral research (that was not published), which postulates the research argument that ‘secure PESEs can be used to manage information security risks within the remote work environment’.

    Introductory Computer Forensics

    Get PDF
    INTERPOL (International Police) built cybercrime programs to keep up with emerging cyber threats, and aims to coordinate and assist international operations for fighting crimes involving computers. Although significant international efforts are being made in dealing with cybercrime and cyber-terrorism, finding effective, cooperative, and collaborative ways to deal with complicated cases that span multiple jurisdictions has proven difficult in practice.

    Automated test suite–a validation package for mobile chipsets

    Get PDF
    With the diminishing size of transistors, it is now possible to incorporate complex systems on a single die. The chipsets used in mobile phone handsets are a good example of such complex systems. The design, validation, hardware development and software development of such complex chipsets is an intricate task which also consumes a lot of time. With increasing competition, the time-to-market factor plays a crucial role in the development of a product. There is a constant need for an automated platform which would help designers at various stages in the development process of a complex product with minimum effort. Automated Test Suite (ATS) is an automated diagnostic test system that enables users to automate validation procedures for any ADI DBB (Analog Devices digital baseband) chipset and H/W platform in a user-friendly environment. This software follows the Host-Target model, ensuring easy implementation of test cases so that the user can concentrate on the testing module only. It provides good modularity and reusability with a simple structure. ATS has several features:
    • Communication between host and target via RS232 or USB.
    • Usable with ANVIL evaluation boards.
    • Remote execution of test routines on the target from the host.
    • H/W platform testing, which can be extended to performance and characterization testing.
    • Backwards and forwards extendable to other chipset families and H/W platforms.
    • A ‘Help’ feature for all tests.
    • Script-based testing that ensures customizable test routine development.
    • A test result log with HTML-based details and a summary of test results.
    • A GUI-based test tool that gives a user-friendly interface.
    Debug tools have been developed for some of the hardware modules (LED, GPIO). This project was implemented in three phases. The first phase covers formulating the ATS architecture and providing a sample implementation for the LEMANS (AD6900 MSP 500) DBB. The second phase adds the new DIONE (AD6722 MSP 430) DBB platform to the existing ATS. The third phase concentrates on designing and implementing a new ATS architecture for efficient performance and reduced ATS development time.
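    As a rough illustration of the Host-Target model, the sketch below shows a hypothetical host-side runner that sends a test command over a serial link and appends the target's reply to an HTML log; the port name, command strings and reply format are invented, not ATS's actual protocol.

        import serial   # pyserial

        def run_test(port_name, test_name):
            # Send one test command to the target and read back a single-line reply.
            with serial.Serial(port_name, 115200, timeout=5) as port:
                port.write((test_name + "\n").encode())
                reply = port.readline().decode().strip()   # e.g. "PASS" or "FAIL: ..."
            return reply

        def log_result(html_path, test_name, reply):
            # Append one row to an HTML results table.
            with open(html_path, "a") as log:
                log.write(f"<tr><td>{test_name}</td><td>{reply}</td></tr>\n")

        result = run_test("/dev/ttyUSB0", "LED_BLINK_TEST")   # hypothetical port and test name
        log_result("ats_results.html", "LED_BLINK_TEST", result)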

    Pyramic array: An FPGA based platform for many-channel audio acquisition

    Get PDF
    Array processing of audio data has many interesting applications: acoustic beamforming, source separation, indoor localization, room geometry estimation, etc. Recent advances in MEMS have produced tiny microphones, either analog or with an integrated digital converter. This opens the door to creating arrays with a massive number of microphones; we dub such an array many-channel, by analogy to many-core processors. Microphone array techniques present compelling applications for robotic implementations. Those techniques can allow robots to listen to their environment and infer clues from it. Such features might enable capabilities such as natural interaction with humans, interpreting spoken commands or the localization of victims during search and rescue tasks. However, under noisy conditions robotic implementations of microphone arrays might lose precision when localizing sound sources. For practical applications, human hearing still outperforms microphone arrays; Daniel Kish is an example of how humans are able to perform echolocation efficiently to recognize their environment, even in noisy and reverberant conditions. For ubiquitous computing, another limitation of acoustic localization algorithms lies in their ability to perform real-time Digital Signal Processing (DSP) operations. Tradeoffs between size, weight, cost and power consumption further constrain the design of acoustic sensors for practical applications. This work presents the design and operation of a large microphone array for DSP applications in realistic environments. To address these problems this project introduces the Pyramic sound capture system designed at LAP in EPFL. Pyramic is custom hardware with 48 microphones distributed along the edges of a tetrahedron. The microphone array interacts with a Terasic DE1-SoC board from the Altera Cyclone V family, which combines a Hard Processor System (HPS) and a Field Programmable Gate Array (FPGA) on the same die. The HPS part integrates a dual-core ARM Cortex-A9 processor which, combined with the FPGA fabric, is well suited to processing multichannel microphone signals. This thesis explains the implementation of the Pyramic array. Moreover, FPGA-based hardware accelerators have been designed to implement SPI master communication with the array and a parallel 48-channel FIR filter cascade on the audio data for delay-and-sum beamforming applications. Additionally, the configuration of the HPS part allows the Pyramic array to be controlled through a Linux-based OS. The main purpose of the project is to develop a flexible platform on which real-time echolocation algorithms can be implemented. The effectiveness of the Pyramic array design is illustrated by testing the recorded data with offline direction-of-arrival algorithms developed at LCAV in EPFL.
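    The operation the FIR filter cascade ultimately supports is delay-and-sum beamforming: align each channel by its steering delay, then sum. The NumPy sketch below shows that operation offline; it is not the FPGA implementation, and the microphone positions, sampling rate and steering direction are example values.

        import numpy as np

        FS = 48_000          # sampling rate in Hz (assumed)
        C = 343.0            # speed of sound in m/s

        def delay_and_sum(signals, mic_positions, direction):
            """signals: (num_mics, num_samples); mic_positions: (num_mics, 3) in metres;
            direction: unit vector pointing from the array toward the source."""
            # Relative arrival delays (in samples) of a plane wave: microphones farther
            # from the source hear the wavefront later and must be advanced accordingly.
            delays = -(mic_positions @ direction) / C * FS
            delays -= delays.min()                       # make all shifts non-negative
            num_mics, num_samples = signals.shape
            output = np.zeros(num_samples)
            for m in range(num_mics):
                shift = int(round(delays[m]))
                # Advance each channel so the wavefront lines up, then sum.
                output[:num_samples - shift] += signals[m, shift:]
            return output / num_mics

        # Example: 48 channels of random data, steering toward the x axis.
        rng = np.random.default_rng(0)
        mics = rng.uniform(-0.1, 0.1, size=(48, 3))
        sigs = rng.standard_normal((48, 4800))
        beam = delay_and_sum(sigs, mics, np.array([1.0, 0.0, 0.0]))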