670 research outputs found

    Intelligent intrusion detection in low power IoTs

    Get PDF
    Security and privacy of data are among the prime concerns in today’s Internet of Things (IoT). Conventional security techniques, such as signature-based malware detection and regular updates of a signature database, are not feasible solutions because they cannot effectively secure systems with such limited resources. Programming languages that permit direct memory access through pointers often result in applications with memory-related errors, which may lead to unpredictable failures and security vulnerabilities. Furthermore, energy-efficient IoT devices running on batteries cannot afford to implement cryptographic algorithms, as such techniques have a significant impact on system power consumption. Therefore, in order to operate an IoT system securely, it must be able to detect and prevent intrusions before the network (i.e., sensor nodes and base station) is destabilised by attackers. In this article, we present an intrusion detection and prevention mechanism that implements an intelligent security architecture using random neural networks (RNNs). The application’s source code is also instrumented at compile time to detect out-of-bound memory accesses. This is based on creating tags coupled with each memory allocation and then placing additional tag-checking instructions for each access made to the memory. To validate the feasibility of the proposed security solution, it is implemented for an existing IoT system and its functionality is demonstrated in practice by successfully detecting the presence of any suspicious sensor node within the system’s operating range and anomalous activity in the base station with an accuracy of 97.23%. Overall, the proposed security solution incurs minimal performance overhead.
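
    As a rough illustration of the tag-based instrumentation described above, the following Python sketch couples a tag with each (simulated) allocation and checks every access against it; the class, method names and address values are invented for illustration and are not the paper's actual compile-time pass.

        # Conceptual sketch (not the paper's instrumentation): each allocation
        # receives a tag, and every access is checked against the bounds of the
        # allocation it claims to belong to, mimicking compile-time-inserted
        # tag-checking instructions that flag out-of-bound accesses at run time.

        class OutOfBoundAccess(Exception):
            pass

        class TaggedMemory:
            def __init__(self):
                self._next_tag = 0
                self._allocations = {}  # tag -> (base, size)

            def allocate(self, base, size):
                """Record an allocation and return the tag coupled with it."""
                tag = self._next_tag
                self._next_tag += 1
                self._allocations[tag] = (base, size)
                return tag

            def checked_access(self, tag, address):
                """Tag-checking step inserted before every memory access."""
                base, size = self._allocations[tag]
                if not (base <= address < base + size):
                    raise OutOfBoundAccess(
                        f"address {address:#x} outside allocation tag={tag} "
                        f"[{base:#x}, {base + size:#x})")
                return address  # access is safe; proceed with the real load/store

        # Usage: an access one element past the end of a 16-byte buffer is caught.
        mem = TaggedMemory()
        buf = mem.allocate(base=0x1000, size=16)
        mem.checked_access(buf, 0x100F)      # in bounds
        try:
            mem.checked_access(buf, 0x1010)  # one past the end
        except OutOfBoundAccess as err:
            print("detected:", err)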

    Optimisation of flow chemistry: tools and algorithms

    Get PDF
    The coupling of flow chemistry with automated laboratory equipment has become increasingly common and is used to support the efficient manufacturing of chemicals. A variety of reactors and analytical techniques have been used in such configurations to investigate and optimise the processing conditions of different reactions. However, the integrated reactors used thus far have been constrained to single-phase mixing, greatly limiting the scope of reactions for such studies. This thesis presents the development and integration of a millilitre-scale CSTR, the fReactor, that is able to process multiphase flows, thus broadening the range of reactions that can be investigated in this way. Following a thorough review of the literature covering the uses of flow chemistry and lab-scale reactor technology, insights are given on the design of a temperature-controlled version of the fReactor with an accuracy of ±0.3 °C, capable of cutting waiting times by 44% compared to the previous reactor. A demonstration of its use is provided in which the product of a multiphasic reaction is analysed automatically under different reaction conditions according to a sampling plan. Metamodeling and cross-validation techniques are applied to these results, and single- and multi-objective optimisations are carried out over the response surface models of different metrics to illustrate the trade-offs between them. The use of such techniques reduced the error incurred by common least-squares polynomial fitting by over 12%. Additionally, the fReactor is demonstrated as a tool for synchrotron X-ray diffraction by successfully assessing the change in polymorph caused by solvent switching, this being the first synchrotron experiment using this sort of device. The remainder of the thesis focuses on applying the same metamodeling and cross-validation techniques to the optimisation of the design of a miniaturised continuous oscillatory baffled reactor. However, rather than being used with physical experimentation, these techniques are used in conjunction with computational fluid dynamics. This reactor shows a better residence time distribution than its CSTR counterparts. Notably, baffle offsetting in a plate design of the reactor is identified as a key parameter in providing a narrow residence time distribution and good mixing. Under this configuration it is possible to reduce the RTD variance by 45% and increase the mixing efficiency by 60% compared to the best-performing opposing-baffles geometry.
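
    The metamodeling-plus-cross-validation workflow described above can be sketched as follows; the reaction data, variable ranges and quadratic surface below are assumptions for illustration, not the thesis's actual dataset or models.

        # Illustrative sketch only: fits a quadratic response-surface metamodel to
        # hypothetical (temperature, residence time) -> yield data, scores it by
        # k-fold cross-validation, and optimises predicted yield over the surface.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.uniform([40.0, 1.0], [100.0, 10.0], size=(30, 2))  # T (degC), tau (min)
        y = (90 - 0.02 * (X[:, 0] - 80) ** 2 - 0.5 * (X[:, 1] - 6) ** 2
             + rng.normal(0, 1.0, 30))                             # noisy yield (%)

        surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        cv_r2 = cross_val_score(surface, X, y, cv=5, scoring="r2")
        print(f"cross-validated R^2: {cv_r2.mean():.3f}")

        surface.fit(X, y)
        # Optimise over a grid of candidate conditions on the fitted metamodel.
        T, tau = np.meshgrid(np.linspace(40, 100, 61), np.linspace(1, 10, 46))
        cand = np.column_stack([T.ravel(), tau.ravel()])
        pred = surface.predict(cand)
        best = cand[pred.argmax()]
        print(f"predicted optimum: T = {best[0]:.1f} degC, tau = {best[1]:.1f} min")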

    Technical Design Report for the PANDA Micro Vertex Detector

    Get PDF
    This document illustrates the technical layout and the expected performance of the Micro Vertex Detector (MVD) of the PANDA experiment. The MVD will detect charged particles as close as possible to the interaction zone. Design criteria and the optimisation process as well as the technical solutions chosen are discussed and the results of this process are subjected to extensive Monte Carlo physics studies. The route towards realisation of the detector is outlined

    Gallium arsenide design methodology and testing of a systolic floating point processing element

    Get PDF
    Thesis (M.E.Sc.) -- University of Adelaide, Dept. of Electrical and Electronic Engineering, 199

    Model-based design of correct controllers for dynamically reconfigurable architectures

    Get PDF
    Dynamically reconfigurable hardware has been identified as a promising solution for the design of energy-efficient embedded systems. However, its adoption is limited by the costly design effort, including verification and validation, which is even more complex than for systems that are not dynamically reconfigurable. In this paper, we propose a tool-supported formal method to automatically design a correct-by-construction control of the reconfiguration. By representing system behaviors with automata, we exploit automated algorithms to synthesize controllers that safely enforce reconfiguration strategies formulated as properties to be satisfied by the control. We design generic modeling patterns for a class of reconfigurable architectures, taking into account both the hardware architecture and the applications, as well as relevant control objectives. We validate our approach on two case studies implemented on FPGAs.
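
    A minimal sketch of safety-oriented discrete controller synthesis in the spirit of the automata-based approach above is given below; the automaton, state names and the forbidden-state property are invented for illustration, and the code does not reproduce the paper's actual tools.

        # Controllable transitions may be disabled by the controller; uncontrollable
        # ones may not.  We compute the set of "losing" states from which an
        # uncontrollable move leads into the losing set, then cut controllable
        # transitions that would enter that set (backward fixpoint for safety).

        def synthesize(states, transitions, forbidden):
            """transitions: list of (src, label, dst, controllable)."""
            losing = set(forbidden)
            changed = True
            while changed:
                changed = False
                for src in states:
                    if src in losing:
                        continue
                    # losing if some uncontrollable transition reaches the losing set
                    if any(dst in losing and not ctrl
                           for (s, _, dst, ctrl) in transitions if s == src):
                        losing.add(src)
                        changed = True
            # keep every transition except controllable ones entering the losing set
            allowed = [(s, a, d, c) for (s, a, d, c) in transitions
                       if not (c and d in losing)]
            return losing, allowed

        # Tiny reconfiguration example: 'both_on' violates a power budget.
        states = {"idle", "acc_on", "both_on"}
        transitions = [
            ("idle", "start_acc", "acc_on", True),    # controllable request
            ("acc_on", "start_sw", "both_on", True),  # controllable request
            ("acc_on", "overheat", "idle", False),    # uncontrollable event
        ]
        losing, allowed = synthesize(states, transitions, forbidden={"both_on"})
        print("losing states:", losing)
        print("allowed transitions:", [(s, a, d) for s, a, d, _ in allowed])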

    Datapath and memory co-optimization for FPGA-based computation

    No full text
    With the large resource densities available on modern FPGAs, it is often the available memory bandwidth that limits the parallelism (and therefore the performance) that can be achieved. For this reason, the focus of this thesis is the development of an integrated scheduling and memory optimisation methodology that allows high levels of parallelism to be exploited in FPGA-based designs. A manual translation from C to hardware is first investigated as a case study, exposing a number of potential optimisation techniques that have not been exploited in existing work. An existing outer-loop pipelining approach, originally developed for VLIW processors, is extended and adapted for application to FPGAs. The outer-loop pipelining methodology is first developed to use a fixed memory subsystem design and then extended to automate the optimisation of the memory subsystem. This approach allocates arrays to physical memories and selects the set of data reuse structures to implement so as to match the available and required memory bandwidths as the pipelining search progresses. The final extension to this work is to include the partitioning of data from a single array across multiple physical memories, increasing the number of memory ports through which data may be accessed. The facility for loop unrolling is also added to increase the potential for parallelism and exploit the additional bandwidth that partitioning can provide. We describe our approach, based on formal methodologies, and present the results achieved when these methods are applied to a number of benchmarks. These results show the advantages of both extending pipelining to levels above the innermost loop and co-optimising the datapath and memory subsystem.
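
    The bandwidth-matching idea behind array partitioning and unrolling can be sketched roughly as follows; the access pattern, port counts and cyclic partitioning scheme are illustrative assumptions, not the thesis's actual optimisation algorithm.

        # Sketch: given the array accesses issued by one unrolled loop iteration and
        # the number of ports per physical memory, find the smallest cyclic partition
        # factor (number of banks) for which no bank is over-subscribed in a cycle.

        def bank_of(index, banks):
            """Cyclic partitioning: element i of the array lives in bank i mod banks."""
            return index % banks

        def conflict_free(access_indices, banks, ports_per_bank):
            """True if no bank receives more accesses than it has ports."""
            per_bank = {}
            for i in access_indices:
                b = bank_of(i, banks)
                per_bank[b] = per_bank.get(b, 0) + 1
            return all(n <= ports_per_bank for n in per_bank.values())

        def min_partition_factor(access_pattern, unroll, ports_per_bank, max_banks=64):
            """Smallest bank count so the unrolled iteration has no port conflicts.
            access_pattern(j) returns the array indices touched by unrolled copy j."""
            accesses = [i for j in range(unroll) for i in access_pattern(j)]
            for banks in range(1, max_banks + 1):
                if conflict_free(accesses, banks, ports_per_bank):
                    return banks
            return None

        # Example: each unrolled copy j reads a[2j] and a[2j+1]; unroll by 4 with
        # dual-ported RAMs -> four banks are needed for a conflict-free cycle.
        pattern = lambda j: [2 * j, 2 * j + 1]
        print(min_partition_factor(pattern, unroll=4, ports_per_bank=2))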

    A survey of emerging architectural techniques for improving cache energy consumption

    Get PDF
    The search goes on for another groundbreaking phenomenon to reduce the ever-increasing disparity between CPU performance and storage. There have been encouraging breakthroughs in enhancing CPU performance through fabrication technologies and changes in chip design, but storage has not seen comparable gains, resulting in a material negative impact on system performance. A great deal of research effort has been put into finding techniques that can improve the energy efficiency of cache architectures. This work is a survey of energy-saving techniques, grouped according to whether they save dynamic energy, leakage energy, or both. The aim of this work is to compile a quick reference guide to energy-saving techniques from 2013 to 2016 for engineers, researchers and students.

    Modeling and Mapping of Optimized Schedules for Embedded Signal Processing Systems

    Get PDF
    The demand for Digital Signal Processing (DSP) in embedded systems has been increasing rapidly due to the proliferation of multimedia- and communication-intensive devices such as pervasive tablets and smart phones. Efficient implementation of embedded DSP systems requires integration of diverse hardware and software components, as well as dynamic workload distribution across heterogeneous computational resources. The former implies increased complexity of application modeling and analysis, but also brings enhanced potential for achieving improved energy consumption, cost or performance. The latter results from the increased use of dynamic behavior in embedded DSP applications. Furthermore, parallel programming is highly relevant in many embedded DSP areas due to the development and use of Multiprocessor System-On-Chip (MPSoC) technology. The need for efficient cooperation among different devices supporting diverse parallel embedded computations motivates high-level modeling that expresses dynamic signal processing behaviors and supports efficient task scheduling and hardware mapping. Starting with dynamic modeling, this thesis develops a systematic design methodology that supports functional simulation and hardware mapping of dynamic reconfiguration based on Parameterized Synchronous Dataflow (PSDF) graphs. By building on the DIF (Dataflow Interchange Format), which is a design language and associated software package for developing and experimenting with dataflow-based design techniques for signal processing systems, we have developed a novel tool for functional simulation of PSDF specifications. This simulation tool allows designers to model applications in PSDF and simulate their functionality, including use of the dynamic parameter reconfiguration capabilities offered by PSDF. With the help of this simulation tool, our design methodology helps to map PSDF specifications into efficient implementations on field programmable gate arrays (FPGAs). Furthermore, valid schedules can be derived from the PSDF models at runtime to adapt hardware configurations based on changing data characteristics or operational requirements. Under certain conditions, efficient quasi-static schedules can be applied to reduce overhead and enhance predictability in the scheduling process. Motivated by the fact that scheduling is critical to performance and to efficient use of dynamic reconfiguration, we have focused on a methodology for schedule design, which complements the emphasis on automated schedule construction in the existing literature on dataflow-based design and implementation. In particular, we have proposed a dataflow-based schedule design framework called the dataflow schedule graph (DSG), which provides a graphical framework for schedule construction based on dataflow semantics, and can also be used as an intermediate representation target for automated schedule generation. Our approach to applying the DSG in this thesis emphasizes schedule construction as a design process rather than an outcome of the synthesis process. Our approach employs dataflow graphs for representing both application models and schedules that are derived from them. By providing a dataflow-integrated framework for unambiguously representing, analyzing, manipulating, and interchanging schedules, the DSG facilitates effective codesign of dataflow-based application models and schedules for execution of these models. 
    As multicore processors are deployed in an increasing variety of embedded image processing systems, effective utilization of resources such as multiprocessor system-on-chip (MPSoC) devices, and effective handling of implementation concerns such as memory management and I/O, become critical to developing efficient embedded implementations. However, the diversity and complexity of applications and architectures in embedded image processing systems make the mapping of applications onto MPSoCs difficult. We help to address this challenge through a structured design methodology that is built upon the DSG modeling framework. We refer to this methodology as the DEIPS methodology (DSG-based design and implementation of Embedded Image Processing Systems). The DEIPS methodology provides a unified framework for joint consideration of DSG structures and the application graphs from which they are derived, which allows designers to integrate considerations of parallelization and resource constraints together with the application modeling process. We demonstrate the DEIPS methodology through case studies on practical embedded image processing systems.
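
    The scheduling machinery underlying PSDF/DSG-style design builds on static dataflow analysis; the following sketch solves the balance equations of a small synchronous dataflow graph for its repetitions vector and derives one valid single-processor schedule. The three-actor graph is an invented example, and the code does not reproduce the DIF tool or the DSG representation itself.

        # Minimal synchronous-dataflow scheduling sketch: compute the repetitions
        # vector from the balance equations, then simulate token counts to emit
        # one valid single-processor schedule.
        from fractions import Fraction
        from math import lcm

        # Graph edges as (producer, prod_rate, consumer, cons_rate).
        edges = [("src", 2, "fir", 3), ("fir", 1, "sink", 2)]
        actors = ["src", "fir", "sink"]

        # Balance equations: rep[p] * prod == rep[c] * cons along every edge.
        rep = {actors[0]: Fraction(1)}
        for p, prod, c, cons in edges:        # assumes actors are listed so that
            rep[c] = rep[p] * prod / cons     # each producer is solved first
        den = lcm(*(r.denominator for r in rep.values()))
        rep = {a: int(r * den) for a, r in rep.items()}
        print("repetitions vector:", rep)     # {'src': 3, 'fir': 2, 'sink': 1}

        # Generate a schedule by firing any actor that has enough input tokens.
        tokens = {(p, c): 0 for p, _, c, _ in edges}
        remaining, schedule = dict(rep), []
        while any(remaining.values()):
            for a in actors:
                ready = remaining[a] > 0 and all(
                    tokens[(p, c)] >= cons for p, _, c, cons in edges if c == a)
                if ready:
                    for p, prod, c, cons in edges:
                        if p == a:
                            tokens[(p, c)] += prod
                        if c == a:
                            tokens[(p, c)] -= cons
                    remaining[a] -= 1
                    schedule.append(a)
        print("one valid schedule:", schedule)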

    A Digital Microfluidics Platform for Loop-Mediated Isothermal Amplification of DNA

    Get PDF
    Digital Microfluidics (DMF) is an innovative technology for liquid manipulation at the microliter to picoliter scale, with tremendous potential for application in biosensing. DMF allows maneuvering single droplets over an electrode array by means of electrowetting-on-dielectric (EWOD), which changes the contact angle of a droplet over a dielectric. Each droplet can thus be considered a microreactor, with unparalleled potential to perform chemical and biological reactions. Several aspects inherent to DMF platforms, such as multiplexed assay capability and ease of integration, make them promising for lab-on-chip and point-of-care (PoC) applications, e.g. DNA amplification assays or disease detection. DNA detection strategies for PoC have been profiting from the recent development of isothermal amplification schemes, of which Loop-mediated Isothermal Amplification (LAMP) is a major methodology, allowing a 10⁹-fold amplification in one hour. Here, I demonstrate for the first time the effective coupling of DMF and LAMP, resulting in a DMF device capable of performing LAMP reactions. This novel DMF platform has been developed and characterised, and allows successful amplification of a c-Myc gene fragment by LAMP. Precise temperature control is achieved by using a transparent heating element connected to a closed-loop feedback control system. The platform is able to amplify just 0.5 ng/μL of target DNA in only 45 minutes, at a device temperature of 65 °C and a reaction volume of 1.62 μL, one of the lowest volumes ever reported. Moreover, electrophoretic analysis indicates that the amplification efficiency of the on-chip LAMP is considerably higher than that of the bench-top reaction.
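
    The closed-loop temperature control mentioned above can be illustrated with a simple proportional-integral sketch holding a simulated chip at the 65 °C set-point; the control law, thermal constants and gains are assumptions for illustration, not the parameters or control strategy of the actual device.

        # Illustrative PI temperature control loop over a first-order thermal model.
        SETPOINT = 65.0            # degC, LAMP reaction temperature
        AMBIENT = 22.0             # degC
        DT = 0.1                   # s, control period
        KP, KI = 8.0, 0.4          # controller gains (illustrative)
        TAU, GAIN = 30.0, 60.0     # plant: time constant (s), degC rise at full power

        temp, integral = AMBIENT, 0.0
        for step in range(6000):                   # 10 minutes at 0.1 s steps
            error = SETPOINT - temp
            u = KP * error + KI * integral
            power = max(0.0, min(1.0, u))          # heater drive clamped to 0..1
            if u == power:                         # simple anti-windup: integrate
                integral += error * DT             # only while unsaturated
            # first-order thermal response of the heated chip
            temp += DT / TAU * (AMBIENT + GAIN * power - temp)
            if step % 600 == 0:                    # report once per simulated minute
                print(f"t = {step * DT:5.1f} s  T = {temp:5.2f} degC  power = {power:.2f}")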