
    Multi-objective Optimisation of Digital Circuits based on Cell Mapping in an Industrial EDA Flow

    Modern electronic design automation (EDA) tools can handle the complexity of state-of-the-art electronic systems by decomposing them into smaller blocks or cells, introducing different levels of abstraction and staged design flows. However, throughout each independently optimised design step, overheads and inefficiencies can accumulate in the resulting overall design. Performing design-specific optimisation from a more global viewpoint requires more time due to the larger search space, but has the potential to provide solutions with improved performance. In this work, a fully-automated, multi-objective (MO) EDA flow is introduced to address this issue. It specifically tunes drive strength mapping, preceding physical implementation, through multi-objective population-based search algorithms. Designs are evaluated with respect to their power, performance and area (PPA). The proposed approach is aimed at digital circuit optimisation at the block level, where it is capable of expanding the design space and offers a set of trade-off solutions for different case-specific utilisation. We have applied the proposed MOEDA framework to ISCAS-85 and EPFL benchmark circuits using a commercial 65 nm standard cell library. The experimental results demonstrate how the MOEDA flow enhances the solutions initially generated by the standard digital flow, and how a significant improvement in PPA metrics is achieved simultaneously.

    Multi-objective digital circuit block optimisation based on cell mapping in an industrial electronic design automation flow

    Modern electronic design automation (EDA) tools can handle the complexity of state-of-the-art electronic systems by decomposing them into smaller blocks or cells, introducing different levels of abstraction and staged design flows. However, throughout each independently optimised design step, overheads and inefficiencies can accumulate in the resulting overall design. Performing design-specific optimisation from a more global viewpoint requires more time due to the larger search space but has the potential to provide solutions with improved performance. In this work, a fully-automated, multi-objective (MO) EDA flow is introduced to address this issue. It specifically tunes drive strength mapping, prior to physical implementation, through MO population-based search algorithms. Designs are evaluated with respect to their power, performance and area (PPA). The proposed approach is aimed at digital circuit optimisation at the block level, where it is capable of expanding the design space and offers a set of trade-off solutions for different case-specific utilisation. We have applied the proposed multi-objective electronic design automation flow (MOEDA) framework to ISCAS-85 and EPFL benchmark circuits by using a commercial 65 nm standard cell library. The experimental results demonstrate how the MOEDA flow enhances the solutions initially generated by the standard digital flow and how a significant improvement in PPA metrics is simultaneously achieved.
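    To make the drive-strength search concrete, the sketch below shows one plausible shape of such a tuning loop in Python. It is illustrative only, not the authors' implementation: evaluate_ppa is a hypothetical stand-in for the commercial synthesis and analysis flow, and each genome assigns one drive-strength index per mapped cell.

```python
import random

def evaluate_ppa(genome):
    """Hypothetical stand-in for the commercial flow's PPA reports."""
    power = sum(genome) + random.random()                         # stronger drives burn more power
    delay = sum(1.0 / (g + 1) for g in genome) + random.random()  # stronger drives switch faster
    area = 0.5 * sum(genome)
    return (power, delay, area)

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop, scores):
    """Keep only genomes whose scores no other genome dominates."""
    return [g for i, g in enumerate(pop)
            if not any(dominates(scores[j], scores[i]) for j in range(len(pop)) if j != i)]

def mutate(parent, drive_levels, rate=0.1):
    """Re-map each cell's drive strength with probability `rate`."""
    return [random.randrange(drive_levels) if random.random() < rate else d for d in parent]

def moeda_search(n_cells=20, drive_levels=4, pop_size=30, generations=50):
    pop = [[random.randrange(drive_levels) for _ in range(n_cells)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [evaluate_ppa(g) for g in pop]
        front = pareto_front(pop, scores)
        pop = [mutate(random.choice(front), drive_levels) for _ in range(pop_size)]
    scores = [evaluate_ppa(g) for g in pop]
    return pareto_front(pop, scores)

if __name__ == "__main__":
    front = moeda_search()
    print(f"{len(front)} trade-off solutions on the final Pareto front")
```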

    Multi-objective Digital VLSI Design Optimisation

    Over the past 50 years, the complexity and density of modern VLSI designs have increased exponentially, recently reaching a stage that allows heterogeneous, many-core systems and numerous functions to be integrated into a tiny silicon die. These advancements have revealed intrinsic physical limits of process technologies in advanced silicon technology nodes. Designers and EDA vendors have to handle these challenges, which may otherwise result in inferior design quality, even failures, and lower design yields under time-to-market pressure. Many design objectives and constraints emerge during the design process and often need to be dealt with simultaneously. Multi-objective evolutionary algorithms are flexible at handling multiple variable components and factors in uncertain environments. The VLSI design process involves a large number of available parameters, both from designs and from EDA tools, providing many potential optimisation avenues where evolutionary algorithms can excel. This PhD work investigates the application of evolutionary techniques for digital VLSI design optimisation. Automated multi-objective optimisation frameworks, compatible with industrial design flows and foundry technologies, are proposed to improve solution performance, expand the feasible design space, and handle complex physical floorplan constraints by tuning designs at the gate level. Methodologies for enriching standard cell libraries with additional drive strengths are also introduced to cooperate with the multi-objective optimisation frameworks, e.g., through subsequent hill-climbing, providing a richer pool of solutions optimised for different trade-offs. The experiments of this thesis demonstrate that multi-objective evolutionary algorithms, derived from biological inspiration, can assist the digital VLSI design process, in an industrial design context, to search more efficiently for well-balanced trade-off solutions as well as optimised design-space coverage. The expanded drive granularity of standard cells can push the performance of silicon technologies by offering improved solutions for critical objectives. The achieved optimisation results deliver better trade-offs among power, performance and area metrics than standard EDA tools alone, not only for single circuit solutions but across the entire design space produced by the standard tools.
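    As a rough illustration of the hill-climbing refinement mentioned above, the sketch below greedily re-maps one cell's drive strength at a time against a scalarised PPA cost. The cost function and all numbers are placeholders for real EDA tool reports, not anything from the thesis.

```python
import random

def ppa_cost(genome, weights=(1.0, 1.0, 1.0)):
    """Hypothetical scalarised PPA cost; a real flow would query tool reports."""
    power = sum(genome)
    delay = sum(1.0 / (g + 1) for g in genome)
    area = 0.5 * sum(genome)
    return weights[0] * power + weights[1] * delay + weights[2] * area

def hill_climb(genome, drive_levels=8, steps=500):
    """Greedy local search: try a single-cell drive change, keep it if cost drops."""
    best = ppa_cost(genome)
    for _ in range(steps):
        i = random.randrange(len(genome))
        old = genome[i]
        genome[i] = random.randrange(drive_levels)
        cost = ppa_cost(genome)
        if cost < best:
            best = cost          # accept the improving move
        else:
            genome[i] = old      # revert the worsening move
    return genome, best

if __name__ == "__main__":
    start = [random.randrange(8) for _ in range(32)]
    solution, cost = hill_climb(start)
    print(f"final scalarised PPA cost: {cost:.2f}")
```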

    Wearable fusion system for assessment of motor function in lesion-symptom mapping studies

    Lesion-symptom mapping studies are a critical component of addressing the relationship between brain and behaviour. Recent developments have yielded significant improvements in the imaging and detection of lesion profiles, but the quantification of motor outcomes is still largely performed by subjective and low-resolution standard clinical rating scales. This mismatch means that lesion-symptom mapping studies are limited in scope by scores which lack the necessary accuracy to fully quantify the subcomponents of motor function. The first study aimed to develop a new automated system for assessing motor function which addressed the limitations inherent in the clinical rating scales. A wearable fusion system was designed that included the attachment of inertial sensors to record the kinematics of the upper extremity. This was combined with the novel application of mechanomyographic sensors in this field, to enable the quantification of hand/wrist function. Novel outputs were developed for this system which aimed to combine the validity of the clinical rating scales with the high accuracy of measurements possible with a wearable sensor system. This was achieved by developing a classification model trained on a series of kinematic and myographic measures to classify the clinical rating scale. These classified scores were combined with a series of fine-grained clinical features derived from higher-order sensor metrics. The developed automated system graded the upper-extremity tasks of the Fugl-Meyer Assessment with a mean accuracy of 75% for gross motor tasks and 66% for the wrist/hand tasks. This accuracy increased to 85% and 74% when distinguishing between healthy and impaired function for each of these tasks. Several clinical features were computed to describe the subcomponents of upper-extremity motor function. This fine-grained clinical feature set offers a novel means to complement the low-resolution but well-validated standardised clinical rating scales. A second study utilised the fine-grained clinical feature set from the first study in a large-scale region-of-interest lesion-symptom mapping study. Statistically significant regions of motor dysfunction were found in the corticospinal tract and the internal capsule, which are consistent with other motor-based lesion-symptom mapping studies. In addition, the cortico-ponto-cerebellar tract was found to be statistically significant when testing with a clinical feature of hand/wrist motor function. This is a novel finding, potentially because prior studies were limited to quantifying this subcomponent of motor function using standard clinical rating scales. These results indicate the validity and potential of the clinical feature set to provide a more detailed picture of motor dysfunction in lesion-symptom mapping studies.
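    A minimal sketch of the classification step, assuming scikit-learn and synthetic placeholder data (none of the feature counts or labels come from the study): each row fuses inertial and mechanomyographic features for one task repetition, and the labels stand in for ordinal clinical item scores.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 120 task repetitions, 12 fused kinematic/myographic features.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))        # sensor-derived feature matrix (synthetic)
y = rng.integers(0, 3, size=120)      # ordinal clinical item scores 0/1/2 (synthetic)

# Train and evaluate a classifier that grades each repetition on the clinical scale.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated grading accuracy: {scores.mean():.2f}")
```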

    Separation logic for high-level synthesis

    High-level synthesis (HLS) promises a significant shortening of the digital hardware design cycle by raising the abstraction level of the design entry to high-level languages such as C/C++. However, applications using dynamic, pointer-based data structures remain difficult to implement well, yet such constructs are widely used in software. Automated optimisations that leverage the memory bandwidth of dedicated hardware implementations by distributing the application data over separate on-chip memories and parallelise the implementation are often ineffective in the presence of dynamic data structures, due to the lack of an automated analysis that disambiguates pointer-based memory accesses. This thesis takes a step towards closing this gap. We explore recent advances in separation logic, a rigorous mathematical framework that enables formal reasoning about the memory access of heap-manipulating programs. We develop a static analysis that automatically splits heap-allocated data structures into provably disjoint regions. Our algorithm focuses on dynamic data structures accessed in loops and is accompanied by automated source-to-source transformations which enable loop parallelisation and physical memory partitioning by off-the-shelf HLS tools. We then extend the scope of our technique to pointer-based memory-intensive implementations that require access to an off-chip memory. The extended HLS design aid generates parallel on-chip multi-cache architectures. It uses the disjointness property of memory accesses to support non-overlapping memory regions by private caches. It also identifies regions which are shared after parallelisation and which are supported by parallel caches with a coherency mechanism and synchronisation, resulting in automatically specialised memory systems. We show up to 15x acceleration from heap partitioning, parallelisation and the insertion of the custom cache system in demonstrably practical applications.
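    The effect of the source-to-source transformation can be pictured with a toy example in Python (not the thesis' separation-logic analysis): heap-allocated nodes are split into provably disjoint regions, so one loop becomes several independent loops that an HLS tool could map to separate on-chip memories.

```python
def build_list(n):
    """Stand-in for a heap-allocated, pointer-based data structure."""
    return list(range(n))

def partition_heap(nodes, k=2):
    """Round-robin split into k non-overlapping regions (cyclic partitioning)."""
    return [nodes[i::k] for i in range(k)]

def process(region):
    """Per-region work that touches only its own nodes."""
    return sum(node * node for node in region)

nodes = build_list(16)
regions = partition_heap(nodes, k=2)
# Because the regions are disjoint, these calls could run in parallel on
# separate physical memories with no coherence conflicts.
print([process(r) for r in regions])
```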

    Studies on Spinal Fusion from Computational Modelling to ‘Smart’ Implants

    Low back pain, the worldwide leading cause of disability, is commonly treated with lumbar interbody fusion surgery to address degeneration, instability, deformity, and trauma of the spine. Following fusion surgery, nearly 20% of patients experience complications requiring reoperation, while 1 in 3 do not experience a meaningful improvement in pain. Implant subsidence and pseudarthrosis in particular present a multifaceted challenge in the management of a patient’s painful symptoms. Given the diversity of fusion approaches, materials, and instrumentation, further inputs are required across the treatment spectrum to prevent and manage complications. This thesis comprises biomechanical studies on lumbar spinal fusion that provide new insights into spinal fusion surgery from preoperative planning to postoperative monitoring. A computational model, using the finite element method, is developed to quantify the biomechanical impact of temporal ossification on the spine, examining how the fusion mass stiffness affects loads on the implant and subsequent subsidence risk, while bony growth into the endplates affects load distribution among the surrounding spinal structures. The computational modelling approach is extended to provide biomechanical inputs to surgical decisions regarding posterior fixation. Where a patient is not clinically predisposed to subsidence or pseudarthrosis, the results suggest unilateral fixation is a more economical choice than bilateral fixation to stabilise the joint. While finite element modelling can inform pre-surgical planning, effective postoperative monitoring currently remains a clinical challenge: periodic radiological follow-up to assess bony fusion is subjective and unreliable. This thesis describes the development of a ‘smart’ interbody cage capable of taking direct measurements from the implant for monitoring fusion progression and complication risk. Biomechanical testing of the ‘smart’ implant demonstrated its ability to distinguish between graft and endplate stiffness states. The device is prepared for wireless operation through investigation of sensor optimisation and telemetry. The results show that near-field communication is a feasible approach for wireless power and data transfer in this setting, although further architectural optimisation is required, and that a combination of strain and pressure sensors will be more mechanically and clinically informative. Further work in computational modelling of the spine and ‘smart’ implants will enable personalised healthcare for low back pain, and the results presented in this thesis are a step in this direction.

    Advances on Mechanics, Design Engineering and Manufacturing III

    This open access book gathers contributions presented at the International Joint Conference on Mechanics, Design Engineering and Advanced Manufacturing (JCM 2020), held as a web conference on June 2–4, 2020. It reports on cutting-edge topics in product design and manufacturing, such as industrial methods for integrated product and process design; innovative design; and computer-aided design. Further topics covered include virtual simulation and reverse engineering; additive manufacturing; product manufacturing; engineering methods in medicine and education; representation techniques; and nautical, aeronautics and aerospace design and modeling. The book is organized into four main parts, reflecting the focus and primary themes of the conference. The contributions presented here not only provide researchers, engineers and experts in a range of industrial engineering subfields with extensive information to support their daily work; they are also intended to stimulate new research directions, advanced applications of the methods discussed and future interdisciplinary collaborations.

    Genetic Improvement of Software for Energy Efficiency in Noisy and Fragmented Eco-Systems

    Software has made its way into every aspect of our daily life. Users of smart devices expect almost continuous availability and uninterrupted service. However, such devices operate on restricted energy resources. As the energy efficiency of software is a relatively new concern for software practitioners, there is a lack of knowledge and tools to support the development of energy-efficient software. Optimising the energy consumption of software requires measuring or estimating its energy use and then optimising it. Generalised models of energy behaviour suffer from heterogeneous and fragmented eco-systems (i.e. diverse hardware and operating systems). The nature of such optimisation environments favours in-vivo optimisation, which provides the ground truth for the energy behaviour of an application on a given platform. One key challenge in in-vivo energy optimisation is noisy energy readings, because complete isolation of the effects of software optimisation is simply infeasible, owing to random and systematic noise from the platform. In this dissertation we explore in-vivo optimisation using Genetic Improvement of Software (GI) for energy efficiency in noisy and fragmented eco-systems. First, we document expected and unexpected technical challenges and their solutions when conducting energy optimisation experiments; these can serve as guidelines for software practitioners conducting energy-related experiments. Second, we demonstrate the technical feasibility of in-vivo energy optimisation using GI on smart devices, implementing a new approach for mitigating noisy readings based on simple code rewrites. Third, we propose a new conceptual framework to determine the minimum number of samples required to show significant differences between software variants competing in tournaments. We demonstrate that the number of samples can vary drastically between different platforms as well as from one point in time to another within a single platform; it is crucial to take these observations into consideration when optimising in the wild or across several devices in a controlled environment. Finally, we implement a new validation approach for energy optimisation experiments. Through experiments, we demonstrate that current validation approaches can mislead software practitioners into drawing wrong conclusions; our approach outperforms them in terms of specificity and sensitivity in distinguishing differences between validation solutions. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
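    One plausible reading of the sample-size framework, sketched with SciPy and synthetic noise: grow both samples until a rank test separates two variants. The noise model, the thresholds, and the uncorrected repeated testing are all simplifying assumptions for illustration.

```python
import random
from scipy.stats import mannwhitneyu

def measure_energy(variant_bias):
    """Hypothetical noisy energy reading (joules) for one run of a variant."""
    return random.gauss(mu=100.0 + variant_bias, sigma=5.0)

def samples_needed(bias_a=0.0, bias_b=3.0, alpha=0.05, start=5, max_n=200):
    """Grow both samples until a Mann-Whitney U test separates the variants."""
    a = [measure_energy(bias_a) for _ in range(start)]
    b = [measure_energy(bias_b) for _ in range(start)]
    while len(a) < max_n:
        _, p = mannwhitneyu(a, b, alternative="two-sided")
        if p < alpha:
            return len(a)
        a.append(measure_energy(bias_a))
        b.append(measure_energy(bias_b))
    return None  # too noisy to separate within the budget

if __name__ == "__main__":
    n = samples_needed()
    print(f"significant at n = {n} samples per variant" if n else "no separation")
```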

    Navigating the Landscape for Real-time Localisation and Mapping for Robotics, Virtual and Augmented Reality

    Visual understanding of 3D environments in real-time, at low power, is a huge computational challenge. Often referred to as SLAM (Simultaneous Localisation and Mapping), it is central to applications spanning domestic and industrial robotics, autonomous vehicles, and virtual and augmented reality. This paper describes the results of a major research effort to assemble the algorithms, architectures, tools, and systems software needed to enable delivery of SLAM by supporting application specialists in selecting and configuring the appropriate algorithm, hardware, and compilation pathway to meet their performance, accuracy, and energy consumption goals. The major contributions we present are (1) tools and methodology for systematic quantitative evaluation of SLAM algorithms, (2) automated, machine-learning-guided exploration of the algorithmic and implementation design space with respect to multiple objectives, (3) end-to-end simulation tools to enable optimisation of heterogeneous, accelerated architectures for the specific algorithmic requirements of the various SLAM algorithmic approaches, and (4) tools for delivering, where appropriate, accelerated, adaptive SLAM solutions in a managed, JIT-compiled, adaptive runtime context.
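    Contribution (2) can be caricatured with a toy design-space exploration in Python: pick the lowest-energy SLAM configuration that still meets an accuracy budget. The knobs, metrics, and numbers are invented for illustration, and a random search stands in for the machine-learning-guided one.

```python
import random

def benchmark(config):
    """Toy stand-in for quantitative SLAM evaluation (invented numbers)."""
    ate = 0.05 + 0.02 * config["downsample"] + random.uniform(0.0, 0.01)        # trajectory error (m)
    energy = 5.0 / (1 + config["downsample"]) + 0.5 * config["features"] / 100  # J per frame
    return {"ate_m": ate, "energy_j": energy}

def explore(n=50, ate_budget=0.10):
    """Random search: lowest-energy configuration that meets the accuracy goal."""
    best = None
    for _ in range(n):
        cfg = {"downsample": random.choice([0, 1, 2]),
               "features": random.choice([100, 300, 500])}
        result = benchmark(cfg)
        if result["ate_m"] <= ate_budget and (best is None or result["energy_j"] < best[1]["energy_j"]):
            best = (cfg, result)
    return best

if __name__ == "__main__":
    best = explore()
    if best:
        print("chosen config:", best[0], "metrics:", best[1])
```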