
    Accuracy-Guaranteed Fixed-Point Optimization in Hardware Synthesis and Processor Customization

    Fixed-point arithmetic is broadly preferred to floating-point in hardware development due to the reduced hardware complexity of fixed-point circuits. In hardware implementation, the bitwidth allocated to the data elements has a significant impact on efficiency metrics for the circuits, including area usage, speed, and power consumption. Fixed-point word-length optimization (WLO) is a well-known research area that aims to optimize fixed-point computational circuits by adjusting the bitwidths allocated to their internal and output signals. A fixed-point number is composed of an integer part and a fractional part. There is a minimum number of bits for the integer part that guarantees overflow and underflow avoidance in each signal; this value depends on the range of values that the signal may take. The fractional word-length determines the amount of finite-precision error introduced into the computations, so there is a trade-off between accuracy and hardware cost in fractional word-length selection. Allocating the fractional word-length requires two important procedures: finite-precision error modeling and fractional word-length selection. Existing work on WLO has focused on hardwired circuits as the target implementation platform.
    In this thesis, we introduce new methodologies, techniques, and algorithms to improve the hardware realization of fixed-point computations in hardwired circuits and customizable processors. The thesis proposes an enhanced error modeling approach based on affine arithmetic that addresses some shortcomings of existing methods and improves their accuracy. The thesis also introduces an acceleration technique and two semi-analytical fractional bitwidth selection algorithms for WLO in hardwired circuit design. While the first algorithm follows a progressive search strategy, the second uses a tree-shaped search method for fractional width optimization. The two algorithms offer different trade-offs between time complexity and cost efficiency: the first has polynomial complexity and achieves results comparable to existing heuristic approaches, while the second has exponential complexity but achieves near-optimal results compared to an exhaustive search. The thesis further proposes a method to combine word-length optimization with application-specific processor customization. The supported datatype word-length, the size of the register files, and the architecture of the functional units are the main objectives to be optimized. A new optimization algorithm is developed to find the best combination of word-lengths and other customizable parameters in the proposed method. Accuracy requirements, defined as a worst-case error bound, must be met by any solution. To facilitate evaluation and implementation of the selected solutions, a new processor design environment, called PolyCuSP, was developed. PolyCuSP supports the customization flexibility needed to realize and evaluate the solutions given by the optimization algorithm, enables rapid design-space exploration, and can model different instruction-set architectures for effective comparisons.
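    As a rough illustration of the fractional word-length trade-off described above, the sketch below propagates worst-case quantization error through a small weighted-sum datapath using simple interval-style bounds. The datapath, word-lengths, and rounding model are assumptions chosen for illustration; the thesis's affine-arithmetic error model, which also tracks error correlations, is not reproduced here.

    # Worst-case quantization-error bound for a small fixed-point datapath
    # (interval-style sketch; simpler than the affine-arithmetic modeling
    # the thesis builds on).

    def quant_step(frac_bits: int, rounding: bool = True) -> float:
        """Worst-case error added when a value is quantized to `frac_bits`
        fractional bits (rounding halves the truncation bound)."""
        lsb = 2.0 ** (-frac_bits)
        return lsb / 2.0 if rounding else lsb

    def weighted_sum_error(coeffs, input_errors, out_frac_bits):
        """Bound on |error| of y = sum(c_i * x_i) when each x_i carries a
        worst-case error and y is re-quantized to `out_frac_bits` bits."""
        propagated = sum(abs(c) * e for c, e in zip(coeffs, input_errors))
        return propagated + quant_step(out_frac_bits)

    # Example: inputs quantized to 8 and 10 fractional bits, output kept at 12.
    e_inputs = [quant_step(8), quant_step(10)]
    bound = weighted_sum_error([0.75, -1.5], e_inputs, out_frac_bits=12)
    print(f"worst-case output error <= {bound:.3e}")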

    WCET-Aware Scratchpad Memory Management for Hard Real-Time Systems

    Cyber-physical systems and hard real-time systems have strict timing constraints that specify deadlines by which tasks must finish their execution. Missing a deadline can cause unexpected outcomes or endanger human lives in safety-critical applications, such as automotive or aeronautical systems. It is therefore of utmost importance to obtain and optimize a safe upper bound on each task's execution time, the worst-case execution time (WCET), to guarantee the absence of any missed deadline. Unfortunately, conventional microarchitectural components, such as caches and branch predictors, are optimized only for average-case performance and often make WCET analysis complicated and pessimistic. Caches in particular have a large impact on worst-case performance due to the expensive off-chip memory accesses involved in cache miss handling. In this regard, software-controlled scratchpad memories (SPMs) have become a promising alternative to caches. An SPM is a raw SRAM controlled only by data movement instructions executed explicitly at runtime, and such explicit control facilitates static analyses that obtain safe and tight upper bounds on WCETs. SPM management techniques, used in compilers targeting an SPM-based processor, determine how to use a given SPM space by deciding where to insert data movement instructions and what operations to perform at those program locations. This dissertation presents several management techniques for program code and stack data which aim to optimize the WCET of a given program. The proposed code management techniques include optimal allocation algorithms and a polynomial-time heuristic for allocating functions to the SPM space, with or without the use of an abstraction of SPM regions, and a heuristic for splitting functions into smaller partitions. The proposed stack data management technique, on the other hand, finds an optimal set of program locations at which to evict and restore stack frames to avoid stack overflows when the call stack resides in a size-limited SPM. In the evaluation, the WCETs of various benchmarks, including real-world automotive applications, are statically calculated for SPMs and caches in several different memory configurations.
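    To make the allocation problem concrete, the following sketch packs functions into a fixed-size SPM greedily by estimated WCET benefit per byte. The function sizes, benefit figures, and the greedy rule are assumptions for illustration; the dissertation's optimal allocation algorithms and region abstraction are more involved.

    # Greedy function-to-SPM allocation by WCET benefit per byte
    # (illustrative sketch; not the optimal/ILP algorithms of the dissertation).

    def allocate_to_spm(functions, spm_size):
        """functions: list of (name, code_size_bytes, wcet_benefit_cycles).
        Returns the functions placed in the SPM and the total benefit."""
        ranked = sorted(functions, key=lambda f: f[2] / f[1], reverse=True)
        placed, used, benefit = [], 0, 0
        for name, size, gain in ranked:
            if used + size <= spm_size:
                placed.append(name)
                used += size
                benefit += gain
        return placed, benefit

    funcs = [("fir_filter", 512, 12000), ("crc32", 256, 3000), ("parse_msg", 1024, 9000)]
    print(allocate_to_spm(funcs, spm_size=1024))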

    Optimal state estimation and control of space systems under severe uncertainty

    This thesis presents novel methods and algorithms for state estimation and optimal control under generalised models of uncertainty. Tracking, scheduling, conjunction assessment, as well as trajectory design and analysis, are typically carried out either considering the nominal scenario only or under assumptions and approximations of the underlying uncertainty that keep the computation tractable. However, neglecting uncertainty or not quantifying it properly may result in lengthy design iterations, mission failures, inaccurate estimation of the satellite state, and poorly assessed risk metrics. To overcome these challenges, this thesis proposes approaches to incorporate proper uncertainty treatment in state estimation, navigation and tracking, and trajectory design. First, epistemic uncertainty is introduced as a generalised model to describe partial probabilistic models, ignorance, scarce or conflicting information and, overall, a larger umbrella of uncertainty structures. Then, new formulations for state estimation, optimal control, and scheduling under mixed aleatory and epistemic uncertainties are proposed to generalise and robustify their current deterministic or purely aleatory counterparts. Practical solution approaches are developed to solve such problems numerically and efficiently. Specifically, a polynomial reinitialisation approach for efficient uncertainty propagation is developed to mitigate the stochastic dimensionality in multi-segment problems. For state estimation and navigation, two robust filtering approaches are presented: a generalisation of particle filtering to epistemic uncertainty that exploits precomputation of samples, and a sequential filtering approach employing a combination of variational inference and importance sampling. For optimal control under uncertainty, direct shooting-like transcriptions with a tunable high-fidelity polynomial representation of the dynamical flow are developed. Uncertainty quantification, orbit determination, and navigation analysis are incorporated in the main optimisation loop to design trajectories that are simultaneously optimal and robust.
    The methods developed in this thesis are finally applied to a variety of novel test cases, ranging from LEO to deep-space missions and from trajectory design to space traffic management. The epistemic state estimation is employed in the robust assessment of debris conjunctions and incorporated in a robust Bayesian framework capable of autonomous decision-making. An optimisation-based scheduling method is presented to efficiently allocate resources to heterogeneous ground stations and fuse information coming from different sensors; it is applied to the optimal tracking of a satellite in highly perturbed very-low Earth orbit and of a low-resource deep-space spacecraft. The optimal control methods are applied to the robust optimisation of an interplanetary low-thrust trajectory to Apophis and to the robust redesign of a leg of the Europa Clipper tour with an initial infeasibility on the probability of impact with Jupiter's moon.
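    For background on the filtering part, the sketch below is a plain bootstrap particle filter for a one-dimensional random walk. The dynamics, noise levels, and particle count are placeholder assumptions; the thesis's generalisation to epistemic uncertainty, sample precomputation, and variational steps are not shown.

    # Minimal bootstrap particle filter for a 1-D random walk with Gaussian
    # measurement noise (background sketch only; the thesis extends such
    # filters to mixed aleatory/epistemic uncertainty).
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter(measurements, n_particles=500, q=0.1, r=0.5):
        particles = rng.normal(0.0, 1.0, n_particles)        # initial prior
        weights = np.full(n_particles, 1.0 / n_particles)
        estimates = []
        for z in measurements:
            particles += rng.normal(0.0, q, n_particles)              # propagate
            weights *= np.exp(-0.5 * ((z - particles) / r) ** 2)      # likelihood
            weights += 1e-300                                         # numerical floor
            weights /= weights.sum()
            estimates.append(np.sum(weights * particles))             # posterior mean
            idx = rng.choice(n_particles, n_particles, p=weights)     # resample
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
        return estimates

    true_x = np.cumsum(rng.normal(0, 0.1, 50))
    z = true_x + rng.normal(0, 0.5, 50)
    print(particle_filter(z)[-1], true_x[-1])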

    A compiler level intermediate representation based binary analysis system and its applications

    Analyzing and optimizing programs from their executables has received a lot of attention recently in the research community. There has been a tremendous amount of activity in executable-level research targeting varied applications such as security vulnerability analysis, untrusted code analysis, malware analysis, program testing, and binary optimizations. The vision of this dissertation is to advance the field of static analysis of executables and bridge the gap between source-level analysis and executable analysis. The main thesis of this work is scalable static binary rewriting and analysis using a compiler-level intermediate representation, without relying on the presence of metadata such as debug or symbolic information. Despite a significant overlap in the overall goals of source-code methods and executable-level techniques, several sophisticated transformations that are well understood and implemented in source-level infrastructures have yet to become available in executable frameworks. A standalone executable without any metadata is less amenable to analysis than source code. Nonetheless, we believe that one of the prime reasons behind the limitations of existing executable frameworks is that they define their own intermediate representations (IRs), which are significantly more constrained than the IRs used in compilers. The intermediate representations used in existing binary frameworks lack high-level features such as an abstract stack, variables, and symbols, and are even machine dependent in some cases. This severely limits the application of well-understood compiler transformations to executables and necessitates new research to make them applicable.
    In the first part of this dissertation, we present techniques to convert binaries to the same high-level intermediate representation that compilers use. We propose methods to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for promoting variables to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable to an abstract stack. The proposed methods are practical since they do not employ symbolic, relocation, or debug information, which are usually absent in deployed executables. We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as its IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source code, including several real-world programs.
    In the next part of this work, we demonstrate that several well-known source-level analysis frameworks, such as symbolic analysis, have limited effectiveness in the executable domain since executables typically lack higher-level semantics such as program variables. The IR should have a precise memory abstraction for an analysis to reason effectively about memory operations. Our work on recovering a compiler-level representation addresses this limitation by recovering several kinds of higher-level semantic information from executables. We then propose methods to handle scenarios in which such semantics cannot be recovered. First, we propose a hybrid static-dynamic mechanism for recovering a precise and correct memory model in executables in the presence of executable-specific artifacts such as indirect control transfers. Next, the enhanced memory model is employed to define a novel symbolic analysis framework for executables that can perform the same types of program analysis as source-level tools. Existing frameworks fail to simultaneously maintain a correct representation and a precise memory model, and they ignore memory-allocated variables when defining symbolic analysis mechanisms. We show that our framework is robust and efficient, and that it significantly improves the performance of traditional analyses such as global value numbering, alias analysis, and dependence analysis for executables. Finally, the underlying representation and analysis framework is employed in two separate applications. First, the framework is extended to define a novel static analysis framework, DemandFlow, for identifying information flow security violations in program executables. Unlike existing static vulnerability detection methods for executables, DemandFlow analyzes memory locations in addition to symbols, improving the precision of the analysis. DemandFlow uses a novel demand-driven mechanism to identify and precisely analyze only those program locations and memory accesses that are relevant to a vulnerability, enhancing scalability. DemandFlow uncovers six previously undiscovered format string and directory traversal vulnerabilities in popular FTP and Internet Relay Chat clients. Second, the framework is extended to implement a platform-specific optimization for embedded processors. Several embedded systems provide the facility of locking one or more lines in the cache. We devise the first method in the literature that employs instruction cache locking as a mechanism for improving the average-case run time of general embedded applications. We demonstrate that the optimal solution for instruction cache locking can be obtained in polynomial time. Since our scheme is implemented inside a binary framework, it addresses portability concerns by enabling the implementation of cache locking at deployment time, when all the details of the memory hierarchy are available.
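    The cache-locking application at the end of the abstract can be pictured with a small sketch: pick which memory lines to lock into a limited number of lockable cache lines by estimated miss-cost savings. The greedy selection and the cost figures are illustrative assumptions only and deliberately ignore how locking shrinks the capacity left for unlocked contents; they do not represent the dissertation's optimal polynomial-time method.

    # Pick memory lines to lock into a limited number of cache lines by
    # estimated miss-cost savings (illustrative only: it ignores the effect
    # of locking on the remaining, unlocked cache capacity).

    def choose_locked_lines(profile, lockable_lines, miss_penalty_cycles=40):
        """profile: dict mapping memory line address -> predicted miss count."""
        ranked = sorted(profile.items(), key=lambda kv: kv[1], reverse=True)
        chosen = [addr for addr, _ in ranked[:lockable_lines]]
        saved = sum(m for _, m in ranked[:lockable_lines]) * miss_penalty_cycles
        return chosen, saved

    profile = {0x4000: 120, 0x4040: 75, 0x4080: 300, 0x40C0: 10}
    print(choose_locked_lines(profile, lockable_lines=2))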

    Block-level test scheduling under power dissipation constraints

    As device technologies such as VLSI and multichip modules (MCMs) mature, and larger and denser memory ICs are implemented in high-performance digital systems, power dissipation becomes a critical factor and can no longer be ignored, either in normal operation of the system or under test conditions. A major consideration in test scheduling is the fact that the heat dissipated during test application is significantly higher than during normal operation (sometimes 100-200% higher). Test scheduling is strongly related to test concurrency, a design property that strongly impacts testability and power dissipation. To satisfy high fault-coverage goals with reduced test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel to the greatest extent possible. Some theoretical analysis of this problem has been carried out, but only at the IC level. The problem was basically described as compatible test clustering, where the compatibility among tests is determined by test resource and power dissipation conflicts at the same time. From an implementation point of view, this problem was identified as NP-complete. In this thesis, an efficient scheme for overlaying the block tests, called the extended tree growing technique, is proposed together with classical scheduling algorithms to search for power-constrained block-test scheduling (PTS) profiles in polynomial time. Classical algorithms such as list-based scheduling and distribution-graph-based scheduling are employed to tackle the PTS problem at a high level. This approach exploits test parallelism under power constraints by overlaying the block-test intervals of compatible subcircuits so that as many of them as possible are tested concurrently, while the maximum accumulated power dissipation is balanced and does not exceed the given limit. The test scheduling discipline assumed here is partitioned testing with run to completion. A constant additive model is employed for power dissipation analysis and estimation throughout the algorithm.
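    The list-scheduling side of the approach can be illustrated with a short sketch that starts block tests greedily whenever their power, added to that of the tests already running, stays under the limit, using the constant additive power model mentioned above. Test names, durations, and power figures are invented for illustration; the extended tree growing technique itself is not reproduced.

    # Power-constrained list scheduling of block tests under a constant
    # additive power model (illustrative sketch of the general idea).

    def schedule_tests(tests, p_max):
        """tests: list of (name, duration, power). Returns {name: start_time}."""
        assert all(p <= p_max for _, _, p in tests), "each test must fit alone"
        pending = sorted(tests, key=lambda t: t[2], reverse=True)  # high power first
        running = []          # list of (end_time, power)
        start_times = {}
        t = 0.0
        while pending:
            power_now = sum(p for _, p in running)
            # start every pending test that still fits under the power ceiling
            for test in list(pending):
                name, dur, power = test
                if power_now + power <= p_max:
                    start_times[name] = t
                    running.append((t + dur, power))
                    power_now += power
                    pending.remove(test)
            # advance time to the next test completion
            t = min(end for end, _ in running)
            running = [(end, p) for end, p in running if end > t]
        return start_times

    tests = [("ram_bist", 4.0, 30.0), ("alu_test", 2.0, 25.0), ("io_scan", 3.0, 20.0)]
    print(schedule_tests(tests, p_max=50.0))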

    Measuring aberrations in lithographic projection systems with phase wheel targets

    A significant factor in the degradation of nanolithographic image fidelity is optical wavefront aberration. Aerial image sensitivity to aberrations is currently much greater than in earlier lithographic technologies, a consequence of increased resolution requirements. Optical wavefront tolerances are dictated by the dimensional tolerances of the features printed, which require lens designs with a high degree of aberration correction. In order to increase lithographic resolution, lens numerical aperture (NA) must continue to increase and imaging wavelength must decrease. Not only do aberration magnitudes scale inversely with wavelength, but high-order aberrations increase at a rate proportional to NA² or greater, as do aberrations across the image field. Achieving lithographic-quality diffraction-limited performance from an optical system, where the relatively low image contrast is further reduced by aberrations, requires the development of highly accurate in situ aberration measurement. In this work, phase wheel targets are used to generate an optical image which can then be used to both describe and monitor aberrations in lithographic projection systems. The use of lithographic images is critical in this approach, since it ensures that optical system measurements are obtained during the system's standard operation. A mathematical framework is developed that translates image errors into the Zernike polynomial representation commonly used in the description of optical aberrations. The wavefront is decomposed into a set of orthogonal basis functions, and coefficients for the set are estimated from image-based measurements. A solution is deduced from multiple image measurements by using a combination of different image sets. Correlations between aberrations and phase wheel image characteristics are modeled based on physical simulation and statistical analysis. The approach uses a well-developed rigorous simulation tool to model significant aspects of the lithography process and assess how aberrations affect the final image. The aberration impact on resulting image shapes is then examined, and approximations are identified so that the aberration computation can be cast in a fast compact-model form. Wavefront reconstruction examples are presented together with corresponding numerical results. A detailed analysis is given along with empirical measurements and a discussion of measurement capabilities. Finally, the impact of systematic errors in exposure tool parameters is measurable from empirical data and can be removed in the calibration stage of the wavefront analysis.
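    The reconstruction step described above reduces, at its core, to a linear least-squares fit of Zernike coefficients to image-derived measurements. The sketch below shows such a fit with a tiny hand-written basis (defocus and two astigmatism terms) on synthetic phase samples; fitting direct phase samples rather than phase wheel image responses is a simplifying assumption.

    # Least-squares estimation of a few Zernike coefficients from noisy
    # wavefront samples (illustrative; the thesis derives sensitivities from
    # phase wheel image measurements rather than direct phase samples).
    import numpy as np

    def zernike_basis(rho, theta):
        """Columns: defocus and two astigmatism terms (unnormalised forms)."""
        return np.column_stack([
            2.0 * rho**2 - 1.0,            # defocus
            rho**2 * np.cos(2.0 * theta),  # astigmatism 0/90
            rho**2 * np.sin(2.0 * theta),  # astigmatism 45
        ])

    rng = np.random.default_rng(1)
    rho = rng.uniform(0, 1, 200)
    theta = rng.uniform(0, 2 * np.pi, 200)
    true_coeffs = np.array([0.05, -0.02, 0.01])        # waves
    A = zernike_basis(rho, theta)
    w = A @ true_coeffs + rng.normal(0, 0.005, 200)    # noisy samples
    est, *_ = np.linalg.lstsq(A, w, rcond=None)
    print(est)  # close to true_coeffs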

    Contribution to the verification of timed automata (determinization, quantitative verification and reachability in networks of automata)

    This thesis is about the verification of timed automata, a well-established model for real-time systems. The document is structured in three parts. The first part is dedicated to the determinization of timed automata, a problem which has no solution in general. We propose an approximate method (over-approximation, under-approximation, or a mix of the two) based on the construction of a safety game. This method improves on existing approaches by combining their respective advantages. We then apply this determinization approach to the generation of conformance tests. In the second part, we take into account quantitative aspects of real-time systems through a notion of frequency of accepting states along executions of timed automata. More precisely, the frequency of a run is the proportion of time elapsed in accepting states. We then study the set of frequencies of runs of a timed automaton in order to decide, for example, the emptiness of threshold languages. We prove that the bounds of the set of frequencies are computable for two classes of timed automata. On the one hand, the bounds can be computed in logarithmic space by a non-deterministic procedure for one-clock timed automata. On the other hand, they can be computed in polynomial space for timed automata with several clocks that have no cycle forcing the convergence of clocks. Finally, we study the reachability problem in networks of timed automata communicating through FIFO channels. We first consider discrete-time timed automata and characterize the network topologies for which reachability is decidable. This characterization is then extended to dense-time timed automata.
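    The frequency notion used in the second part has a direct computational reading: for a finite run given as (location, dwell time) pairs, it is the fraction of total elapsed time spent in accepting locations. A minimal sketch with an invented run:

    # Frequency of a timed run: proportion of elapsed time spent in
    # accepting locations (toy run invented for illustration).

    def run_frequency(run, accepting):
        """run: list of (location, dwell_time); accepting: set of locations."""
        total = sum(d for _, d in run)
        in_acc = sum(d for loc, d in run if loc in accepting)
        return in_acc / total if total > 0 else 0.0

    run = [("q0", 1.5), ("q1", 0.5), ("q0", 2.0), ("q2", 1.0)]
    print(run_frequency(run, accepting={"q1", "q2"}))  # 1.5 / 5.0 = 0.3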

    A two-stage design framework for optimal spatial packaging of interconnected fluid-thermal systems

    Optimal spatial packaging of interconnected subsystems and components with coupled physical (thermal, hydraulic, or electromagnetic) interactions, or SPI2, plays a vital role in the functionality, operation, energy usage, and life cycle of practically all engineered systems, from chips to ships to aircraft. However, the highly nonlinear spatial packaging problem, governed by coupled physical phenomena transferring energy through highly complex and diverse geometric interconnects, has largely resisted automation and quickly exceeds human cognitive abilities at moderate complexity levels. The current state of the art in defining an arrangement of these functionally heterogeneous artifacts still relies largely on human intuition and manual spatial placement, limiting system sophistication and extending design timelines. Spatial packaging involves packing and routing, which are separately challenging NP-hard problems. Solving the coupled packing and routing (PR) problem simultaneously therefore requires disruptive methods to better address pressing related challenges such as system volume reduction, interconnect length reduction, non-intersection guarantees, and physics considerations. This dissertation presents a novel automated two-stage sequential design framework to perform simultaneous physics-based packing and routing optimization of fluid-thermal systems. In Stage 1, unique spatially feasible topologies (i.e., how interconnects and components pass around each other) are enumerated for a given fluid-thermal system architecture. Guaranteeing a feasible initial graph is important because lumped-parameter physics analyses may fail if components and/or routing paths intersect. Stage 2 begins with a spatially feasible layout and optimizes physics-based system performance with respect to component locations, interconnect paths, and other continuous component or system variables (such as sizing or control). A bar-based design representation enables the use of a differentiable geometric projection method (GPM), where gradient-based optimization is used with finite element analysis. In addition to geometric considerations, this method supports optimization based on system behavior by including physics-based (temperature, fluid pressure, head loss, etc.) objectives and constraints. In other words, Stage 1 of the framework supports systematic navigation through discrete topology options that serve as initial designs, each of which is then optimized in Stage 2 using a continuous gradient-based topology optimization method. Thus, the discrete and continuous design decisions are made sequentially in this framework. The design framework is successfully demonstrated on different 2D case studies, such as a hybrid unmanned aerial vehicle (UAV) system, an automotive fuel cell (AFC) packaging system, and other complex multi-loop systems.
    The 3D problem is significantly more challenging than the 2D problem due to a vastly more expansive design space and potential features. A review of state-of-the-art methods, challenges, existing gaps, and opportunities is presented for the optimal design of the 3D PR problem. Stage 1 of the framework is investigated thoroughly for 3D systems in this dissertation. An efficient design framework to represent and enumerate 3D system spatial topologies for a given system architecture is demonstrated using braid and spatial graph theories. After enumeration, the unique spatial topologies are identified by calculating the Yamada polynomials of all the generated spatial graphs. Spatial topologies that have the same Yamada polynomial are categorized together into equivalence classes. Finally, CAD-based 3D system models are generated from these unique topology classes. These 3D models can be utilized in Stage 2 as initial designs for 3D multi-physics PR optimization. Current limitations and impactful future directions for this work are outlined. In summary, this novel design automation framework integrates several elements as a foundation toward a more comprehensive solution of 3D real-world packing and routing problems with both geometric and physics considerations.
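    The categorisation step in Stage 1 amounts to grouping enumerated spatial graphs by an invariant. The sketch below shows only that grouping step; yamada_polynomial here is a hypothetical placeholder (a sorted edge multiset), not the actual Yamada polynomial computation used in the dissertation.

    # Group enumerated spatial graphs into equivalence classes by an
    # invariant. `yamada_polynomial` is a hypothetical placeholder for the
    # real computation described in the dissertation.
    from collections import defaultdict

    def yamada_polynomial(graph):
        # Placeholder invariant: a sorted edge multiset, NOT the real Yamada
        # polynomial -- sufficient only to show the grouping step.
        return tuple(sorted(tuple(sorted(e)) for e in graph["edges"]))

    def group_by_invariant(graphs):
        classes = defaultdict(list)
        for g in graphs:
            classes[yamada_polynomial(g)].append(g["name"])
        return dict(classes)

    graphs = [
        {"name": "topo_A", "edges": [(1, 2), (2, 3), (3, 1)]},
        {"name": "topo_B", "edges": [(3, 1), (1, 2), (2, 3)]},   # same class as topo_A
        {"name": "topo_C", "edges": [(1, 2), (2, 3), (3, 4)]},
    ]
    print(group_by_invariant(graphs))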

    Optimization of Operation Sequencing in CAPP Using Hybrid Genetic Algorithm and Simulated Annealing Approach

    In any CAPP system, one of the most important process planning functions is the selection of operations and corresponding machines in order to generate the optimal operation sequence. In this paper, a hybrid GA-SA algorithm is used to solve this combinatorial NP (non-deterministic polynomial) optimization problem. A network representation is adopted to describe operation and sequencing flexibility in process planning, and the mathematical model for process planning is formulated with the objective of minimizing production time. Experimental results show the effectiveness of the hybrid algorithm, which, in comparison with standalone GA and SA algorithms, gives an optimal operation sequence in less computational time and with fewer iterations.
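    A minimal sketch of the GA-SA hybrid idea, under assumed placeholder data: a genetic algorithm evolves operation permutations, and a simulated-annealing acceptance test filters mutated offspring. The transition-time matrix, operators, and parameters are invented for illustration and do not model the paper's network representation or machine alternatives.

    # Hybrid GA + SA sketch for operation sequencing: GA evolves permutations,
    # SA-style acceptance filters mutated offspring. The cost model is a
    # placeholder (fictitious per-transition times), not the paper's
    # network-based production-time model.
    import math, random

    random.seed(0)
    N_OPS = 8
    # fictitious transition-time matrix standing in for machine/setup changes
    T = [[abs(i - j) + 1 for j in range(N_OPS)] for i in range(N_OPS)]

    def cost(seq):
        return sum(T[a][b] for a, b in zip(seq, seq[1:]))

    def mutate(seq):
        s = seq[:]
        i, j = random.sample(range(N_OPS), 2)
        s[i], s[j] = s[j], s[i]
        return s

    def crossover(p1, p2):
        cut = random.randint(1, N_OPS - 1)
        head = p1[:cut]
        return head + [op for op in p2 if op not in head]

    pop = [random.sample(range(N_OPS), N_OPS) for _ in range(20)]
    temp = 10.0
    for gen in range(200):
        pop.sort(key=cost)
        parents = pop[:10]
        children = []
        for _ in range(10):
            child = mutate(crossover(*random.sample(parents, 2)))
            best_cost = cost(parents[0])
            # SA-style acceptance of a (possibly worse) mutated child
            if cost(child) <= best_cost or random.random() < math.exp((best_cost - cost(child)) / temp):
                children.append(child)
            else:
                children.append(parents[0][:])
        pop = parents + children
        temp *= 0.98          # cooling schedule
    best = min(pop, key=cost)
    print(best, cost(best))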
