
    A Survey of Prediction and Classification Techniques in Multicore Processor Systems

    In multicore processor systems, the ability to accurately predict the future provides new optimization opportunities that could not otherwise be exploited. For example, an oracle able to predict a certain application's behavior running on a smart phone could direct the power manager to switch to appropriate dynamic voltage and frequency scaling (DVFS) modes that would guarantee minimum levels of desired performance while reducing energy consumption and thereby prolonging battery life. Using predictions enables systems to become proactive rather than continuing to operate in a reactive manner. This prediction-based proactive approach has become increasingly popular in the design and optimization of integrated circuits and of multicore processor systems. Prediction has evolved from simple forecasting to sophisticated machine-learning-based prediction and classification that learns from existing data, employs data mining, and predicts future behavior. This can be exploited by novel optimization techniques that span all layers of the computing stack. In this survey paper, we discuss the most popular prediction and classification techniques in the general context of computing systems, with emphasis on multicore processors. The paper is far from comprehensive, but it will help readers interested in employing prediction in the optimization of multicore processor systems.
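
    As a concrete illustration of the proactive approach the survey motivates, here is a minimal Python sketch of a history-table phase predictor feeding a DVFS decision; the bucket scheme, table order, and frequency levels are illustrative assumptions, not a technique taken from the survey.

```python
# Minimal sketch of a table-based phase predictor for proactive DVFS.
# All names and thresholds below are illustrative assumptions.
from collections import defaultdict

class PhasePredictor:
    """Predict the next interval's CPU utilization bucket from the last
    two observed buckets (a simple second-order Markov table)."""
    def __init__(self, buckets=10):
        self.buckets = buckets
        self.table = defaultdict(lambda: defaultdict(int))
        self.history = (0, 0)

    def _bucket(self, utilization):
        return min(int(utilization * self.buckets), self.buckets - 1)

    def observe(self, utilization):
        b = self._bucket(utilization)
        self.table[self.history][b] += 1      # learn the observed transition
        self.history = (self.history[1], b)   # slide the history window

    def predict(self):
        counts = self.table[self.history]
        if not counts:                        # unseen context: reuse last bucket
            return self.history[1] / self.buckets
        return max(counts, key=counts.get) / self.buckets

# A proactive governor would map the prediction to a frequency level:
def pick_frequency(predicted_util, levels=(0.8e9, 1.4e9, 2.0e9)):
    if predicted_util < 0.3: return levels[0]
    if predicted_util < 0.7: return levels[1]
    return levels[2]
```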

    Recent developments in structural sensitivity analysis

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response, and sensitivity of vibration and buckling eigenproblems. These developments are assessed from the standpoint of computational cost, accuracy, and ease of implementation. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of method has important implications for accuracy and implementation. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in the sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issues of differentiability and continuity associated with the occurrence of repeated eigenvalues.
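
    The discretize-then-differentiate approach admits a compact worked example: differentiating the discrete equilibrium equations K(p)u = f with respect to a design parameter p (with f independent of p) gives K du/dp = -(dK/dp)u. A minimal NumPy sketch, using a made-up two-spring system, follows.

```python
# Static-response sensitivity by discretize-then-differentiate.
# The two-spring example system is made up for illustration.
import numpy as np

def static_sensitivity(K, dK_dp, f):
    """Return u and du/dp for K(p) u = f, assuming f is independent of p.
    Differentiating K u = f yields K du/dp = -(dK/dp) u."""
    u = np.linalg.solve(K, f)
    du_dp = np.linalg.solve(K, -dK_dp @ u)
    return u, du_dp

# Two springs in series with stiffnesses p and 2p, here p = 3.0:
p = 3.0
K = np.array([[p + 2*p, -2*p], [-2*p, 2*p]])
dK_dp = np.array([[3.0, -2.0], [-2.0, 2.0]])
f = np.array([0.0, 1.0])
u, du_dp = static_sensitivity(K, dK_dp, f)
# Analytic check: K scales linearly with p here, so du/dp = -u/p.
assert np.allclose(du_dp, -u / p)
```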

    Breast cancer diagnosis: a survey of pre-processing, segmentation, feature extraction and classification

    Machine learning methods have attracted interest in medicine for many years and have achieved successful results in various fields of medical science. This paper examines the effects of using machine learning algorithms in the diagnosis and classification of breast cancer from mammography imaging data. Cancer diagnosis is the identification of images as cancerous or non-cancerous, and it involves image preprocessing, feature extraction, classification, and performance analysis. This article reviews 93 references from recent years in the field of image processing and tries to identify an effective way to diagnose and classify breast cancer. Based on the results of this review, it can be concluded that most of today's successful methods rely on deep learning. Finding a new method therefore requires an overview of existing deep learning approaches so that a comparison and case study can be made.
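
    As a minimal sketch of the four stages listed above (preprocessing, feature extraction, classification, performance analysis), here is a classical scikit-learn pipeline on synthetic data; each stage is a generic stand-in, not any specific method among the 93 surveyed references.

```python
# Generic four-stage diagnosis pipeline on synthetic placeholder data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder for flattened mammogram patches
y = rng.integers(0, 2, size=200)      # 1 = cancer, 0 = non-cancer (synthetic labels)

pipeline = Pipeline([
    ("preprocess", StandardScaler()),    # intensity-normalisation stand-in
    ("features", PCA(n_components=16)),  # feature-extraction stand-in
    ("classify", SVC(kernel="rbf")),     # classifier stand-in
])
scores = cross_val_score(pipeline, X, y, cv=5)   # performance-analysis stage
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```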

    Reduced-order modeling of power electronics components and systems

    This dissertation addresses the seemingly inevitable compromise between modeling fidelity and simulation speed in power electronics. Higher-order effects are considered at the component and system levels. Order-reduction techniques are applied to provide insight into accurate, computationally efficient component-level simulations (via reduced-order physics-based models) and system-level simulations (via multiresolution simulation). Proposed high-order models, verified with hardware measurements, are, in turn, used to verify the accuracy of final reduced-order models for both small- and large-signal excitations. At the component level, dynamic high-fidelity magnetic equivalent circuits are introduced for laminated and solid magnetic cores. Automated linear and nonlinear order-reduction techniques are introduced for linear magnetic systems, saturated systems, systems with relative motion, and multiple-winding systems, to extract the desired essential system dynamics. Finite-element models of magnetic components incorporating relative motion are set forth and then reduced. At the system level, a framework for multiresolution simulation of switching converters is developed. Multiresolution simulation provides an alternative method to analyze power converters by providing an appropriate amount of detail based on the time scale and phenomenon being considered. A detailed full-order converter model is built based upon high-order component models and accurate switching transitions. Efficient order-reduction techniques are used to extract several lower-order models for the desired resolution of the simulation. This simulation framework is extended to higher-order converters, converters with nonlinear elements, and closed-loop systems. The resulting rapid-to-integrate component models and flexible simulation frameworks could form the computational core of future virtual prototyping design and analysis environments for energy processing units.
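
    Projection-based order reduction of the kind the dissertation automates can be illustrated with a toy example. The sketch below reduces a linear system x' = Ax + Bu with a POD (SVD-of-snapshots) basis; this is a generic stand-in for the idea, not the dissertation's own reduction algorithm.

```python
# Projection-based model order reduction via a POD basis (toy example).
import numpy as np

def pod_reduce(A, B, snapshots, r):
    """Project (A, B) onto the r dominant left singular vectors of the
    snapshot matrix (POD basis), returning the reduced system."""
    Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = Phi[:, :r]                          # n x r orthonormal basis
    return Phi.T @ A @ Phi, Phi.T @ B, Phi

# Stiff diagonal test system: the projection truncates the fast modes.
n, r = 50, 4
A = -np.diag(np.linspace(1.0, 500.0, n))
B = np.ones((n, 1))
x, dt, snaps = np.zeros(n), 1e-3, []          # forward-Euler snapshot run
for _ in range(200):
    x = x + dt * (A @ x + B[:, 0])            # step input u = 1
    snaps.append(x.copy())
Ar, Br, Phi = pod_reduce(A, B, np.array(snaps).T, r)
print(Ar.shape, Br.shape)                     # (4, 4) (4, 1)
```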

    Non-Uniform Planar Slicing for Robot-Based Additive Manufacturing

    Planar slicing algorithms with constant layer thickness are widely implemented for geometry processing in Additive Manufacturing (AM). Since the build direction is fixed, a staircase effect is produced, degrading the final surface finish. Also, support structures are required for overhanging portions. To overcome these limits, AM is combined with manipulators and working tables with multiple degrees of freedom. This is called Robot-Based Additive Manufacturing (RBAM), and it aims to increase the manufacturing flexibility of traditional printers by enabling the deposition of material in multiple directions. In particular, the deposition direction is changed at each layer, requiring non-uniform-thickness slicing. The total number of layers, the volume of the support structures, and the manufacturing time are reduced, while the surface finish and mechanical performance of the final product are increased. This paper presents an algorithm for non-uniform planar slicing developed in Rhinoceros and Grasshopper. It processes the input geometry and uses parameters to capture manufacturing limits. It mostly targets curved geometries to remove the need for support structures, while also increasing part quality.
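
    A minimal sketch of the underlying idea follows, assuming the classical cusp-height criterion for choosing a non-uniform layer thickness; the 2D profile and tolerances are illustrative, and the paper's actual algorithm operates on full geometry in Rhinoceros/Grasshopper.

```python
# Adaptive (non-uniform thickness) slicing driven by a cusp-height
# tolerance; the profile and parameters are illustrative assumptions.
import numpy as np

def adaptive_layers(profile, z_max, cusp_max, t_min, t_max, dz=1e-3):
    """Choose layer heights so the staircase (cusp) error stays below
    cusp_max. profile(z) gives the surface radius at height z; a layer
    of thickness t leaves a cusp of roughly t*|cos(theta)|, where theta
    is the angle between the surface normal and the build direction."""
    layers, z = [0.0], 0.0
    while z < z_max:
        slope = (profile(z + dz) - profile(z)) / dz     # dr/dz
        cos_theta = abs(slope) / np.hypot(1.0, slope)   # normal vs. build axis
        t = t_max if cos_theta < 1e-9 else cusp_max / cos_theta
        t = min(max(t, t_min), t_max)                   # respect machine limits
        z = min(z + t, z_max)
        layers.append(z)
    return layers

# Hemispherical cap: thick layers on the vertical wall, thin near the top.
dome = lambda z: np.sqrt(max(25.0 - z**2, 0.0))
print(len(adaptive_layers(dome, 5.0, cusp_max=0.02, t_min=0.1, t_max=1.0)))
```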

    Optimal design of mesostructured materials under uncertainty

    The main objective of topology optimization is to fulfill the objective function with the minimum amount of material. This reduces the overall cost of the structure and at the same time reduces the assembly, manufacturing, and maintenance costs because of the reduced number of parts in the final structure. The concept of reliability analysis can be incorporated into the deterministic topology optimization method; this incorporated scheme is referred to as Reliability-based Topology Optimization (RBTO). In RBTO, the statistical nature of constraints and design problems is defined in the objective function and probabilistic constraint. The probabilistic constraint can specify the required reliability level of the system. In practical applications, however, finding a global optimum in the presence of uncertainty is a difficult and computationally intensive task, since for every possible design a full stochastic analysis has to be performed to estimate various statistical parameters. Efficient methodologies are therefore required for the solution of the stochastic part and the optimization part of the design process. This research will explore a reliability-based synthesis method that estimates all the statistical parameters and finds the optimum while being less computationally intensive. The efficiency of the proposed method is achieved by combining topology optimization with stochastic approximation, which utilizes a sampling technique such as Latin Hypercube Sampling (LHS) and surrogate modeling techniques such as Local Regression and Classification using Artificial Neural Networks (ANN). Local regression is comparatively less computationally intensive and produces good results in cases with low probabilities of failure, whereas classification is particularly useful in cases where the probability of failure has to be estimated with disjoint failure domains. Because classification using ANN is comparatively more computationally demanding than local regression, classification is only used when local regression fails to give the desired level of goodness of fit. Nevertheless, classification is an indispensable tool for estimating the probability of failure when the failure domain is discontinuous. Representative examples will be demonstrated where the method is used to design customized meso-scale truss structures and a macro-scale hydrogen storage tank. The final deliverable from this research will be a less computationally intensive and robust RBTO procedure that can be used for the design of truss structures with variable design parameters and force and boundary conditions.
    M.S.
    Committee Chair: Choi, Seung-Kyum; Committee Member: Muhanna, Rafi; Committee Member: Rosen, Davi
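
    A minimal sketch of the stochastic side of such a flow follows, combining Latin Hypercube Sampling with an ANN classification surrogate to estimate a probability of failure; the limit-state function, bounds, and network size are placeholders, not a truss or tank model from the thesis.

```python
# LHS training of an ANN classification surrogate for probability of
# failure; the limit-state function below is a made-up placeholder.
import numpy as np
from scipy.stats import qmc
from sklearn.neural_network import MLPClassifier

def limit_state(x):                       # g(x) <= 0 means failure (placeholder)
    return 1.5 - x[:, 0]**2 - 0.5 * x[:, 1]

sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(200), [-2, -2], [2, 2])   # 200 LHS training points
y = (limit_state(X) <= 0).astype(int)                  # 1 = failed sample

# Classification surrogate (ANN), useful when the failure domain is disjoint.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# Cheap Monte Carlo on the surrogate instead of 10^5 full stochastic analyses.
rng = np.random.default_rng(1)
X_mc = rng.uniform(-2, 2, size=(100_000, 2))
pf = clf.predict(X_mc).mean()
print(f"estimated probability of failure: {pf:.3f}")
```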

    Data-efficient machine learning for design and optimisation of complex systems

    Manufacturability Analysis of Thermally-Enhanced Polymer Composite Heat Exchangers

    Thermally-enhanced polymer composite heat exchangers are an attractive alternative for applications such as the use of seawater as a cooling medium and other corrosive environments that traditionally use expensive exotic metallic alloys, but a number of manufacturing challenges exist. The goal of this thesis is to develop an understanding of the manufacturing feasibility, in particular mold filling and fiber orientation, of utilizing thermally-enhanced polymer composites and injection molding to manufacture polymer heat exchangers. To best predict mold filling feasibility, this thesis proposes developing an explicit construction of the boundary, represented as a surface in the parameter space, that separates the feasible and infeasible design space. The feasibility boundary for injection molding in terms of the design parameters is quite complex due to the highly nonlinear process physics, which, consequently, makes molding simulation computationally intensive and time consuming. This thesis presents a new approach for the explicit construction of a moldability-based feasibility boundary based on intelligent Design of Experiments and adaptive control techniques to minimize the number of computational experiments needed to build an accurate model of the feasibility boundary. Additionally, to improve the flexibility of the mold filling prediction framework with respect to changes in overall heat exchanger design, a model simplification approach is presented to predict mold filling for general finned-plate designs by determining an equivalent flat plate representation and utilizing a developed flat plate mold filling metamodel to estimate mold filling. Finally, a fiber orientation measurement methodology is presented for experimentally determining fiber orientation behavior for sample heat exchanger geometries; it develops both a local and a global understanding of the fiber orientation behavior and compares the thesis findings to simulation predictions. The work presented in this thesis significantly advances the understanding of manufacturability considerations for utilizing thermally-enhanced polymer composites in heat exchanger applications and is useful in design exploration, optimization, and decision-making approaches.
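
    As a hedged illustration of constructing an explicit feasibility boundary with few expensive runs, the sketch below bisects along one design axis; fills(), the parameter names, and the bounds are hypothetical stand-ins for a real mold-filling simulation.

```python
# Explicit construction of a 2-D feasibility boundary by bisection along
# one axis; fills() is a hypothetical stand-in for an expensive
# mold-filling simulation.
import numpy as np

def fills(thickness_mm, flow_length_mm):
    """Placeholder moldability check: short shots occur when the
    flow-length-to-thickness ratio is too high."""
    return flow_length_mm / thickness_mm < 120.0

def boundary_flow_length(thickness_mm, lo=10.0, hi=500.0, tol=0.5):
    """Bisect on flow length to locate the feasible/infeasible boundary,
    spending O(log((hi - lo)/tol)) simulations instead of a dense sweep."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fills(thickness_mm, mid) else (lo, mid)
    return 0.5 * (lo + hi)

# Explicit boundary: maximum moldable flow length as a function of thickness.
for t in np.linspace(0.5, 3.0, 6):
    print(f"t = {t:.1f} mm -> max flow length ~ {boundary_flow_length(t):.0f} mm")
```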

    SYSTEM-ON-A-CHIP (SOC)-BASED HARDWARE ACCELERATION FOR HUMAN ACTION RECOGNITION WITH CORE COMPONENTS

    Today, the implementation of machine vision algorithms on embedded platforms or in portable systems is growing rapidly due to the demand for machine vision in daily human life. Among the applications of machine vision, human action and activity recognition has become an active research area, and market demand for integrated smart security systems is growing rapidly. Among the available approaches, embedded vision is in the top tier; however, current embedded platforms may not be able to fully exploit the potential performance of machine vision algorithms, especially in terms of low power consumption. Complex algorithms can impose immense computation and communication demands, especially action recognition algorithms, which require various stages of preprocessing, processing, and machine learning blocks that need to operate concurrently. The market demands embedded platforms that operate with a power consumption of only a few watts. Attempts have been made to improve the performance of traditional embedded approaches by adding more powerful processors; this solution may solve the computation problem but increases power consumption. System-on-a-chip field-programmable gate arrays (SoC-FPGAs) have emerged as a major architectural approach for improving power efficiency while increasing computational performance. In a SoC-FPGA, an embedded processor and an FPGA serving as an accelerator are fabricated on the same die to simultaneously improve power consumption and performance. Still, current SoC-FPGA-based vision implementations either shy away from supporting complex and adaptive vision algorithms or operate at very limited resolutions due to the immense communication and computation demands. The aim of this research is to develop a SoC-based hardware acceleration workflow for the realization of advanced vision algorithms. Hardware acceleration can improve performance for highly complex mathematical calculations or repeated functions. The performance of a SoC system can thus be improved by using hardware acceleration to accelerate the element that incurs the highest performance overhead. The outcome of this research could be used for the implementation of various vision algorithms, such as face recognition, object detection, or object tracking, on embedded platforms. The contributions of SoC-based hardware acceleration for hardware-software co-design platforms include the following: (1) development of frameworks for complex human action recognition in both 2D and 3D; (2) realization of a framework with four main implemented IPs, namely, foreground and background subtraction (foreground probability), human detection, 2D/3D point-of-interest detection and feature extraction, and OS-ELM as a machine learning algorithm for action identification; (3) use of an FPGA-based hardware acceleration method to resolve system bottlenecks and improve system performance; and (4) measurement and analysis of system specifications, such as the acceleration factor, power consumption, and resource utilization. Experimental results show that the proposed SoC-based hardware acceleration approach provides better performance in terms of acceleration factor, resource utilization, and power consumption than recent works. In addition, a comparison of the accuracy of the framework running on the proposed embedded platform (SoC-FPGA) with the accuracy of other PC-based frameworks shows that the proposed approach outperforms most other approaches.
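
    A generic NumPy reference sketch of OS-ELM, the learning block named above, follows; it implements the standard batch-initialization and recursive least-squares update equations and is a software model, not the thesis's hardware IP (the layer size and sigmoid activation are assumptions).

```python
# Generic OS-ELM (online sequential extreme learning machine) reference
# model; a software sketch, not the thesis's hardware implementation.
import numpy as np

class OSELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)           # fixed random biases

    def _h(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid hidden layer

    def init_batch(self, X, T):
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T                 # initial least-squares solve

    def update(self, X, T):
        """Recursive least-squares update for a new chunk, with no
        retraining on past data, which suits streaming recognition."""
        H = self._h(X)
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta

# Usage: learn y = sin(x) online from streaming chunks.
rng = np.random.default_rng(1)
X0 = rng.uniform(-3, 3, size=(64, 1))
net = OSELM(1, 40)
net.init_batch(X0, np.sin(X0))
for _ in range(20):
    Xc = rng.uniform(-3, 3, size=(8, 1))
    net.update(Xc, np.sin(Xc))
```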