
    Algorithmic studies on PCB routing

    As IC technology advances, package sizes keep shrinking while the pin count of a package keeps increasing. A modern IC package can have a pin count in the thousands; as a result, a complex printed circuit board (PCB) can host more than ten thousand signal nets. Such huge pin and net counts make manual design of packages and PCBs an extremely time-consuming and error-prone task. At the same time, increasing clock frequencies impose various physical constraints on PCB routing, and these constraints render traditional IC and PCB routers inapplicable to modern PCB routing. To the best of our knowledge, there is no mature commercial or academic automated router that handles these constraints well. Automated PCB routers tuned to handle such constraints have therefore become a necessity in modern design. In this dissertation, we propose novel algorithms for three major aspects of PCB routing: escape routing, area routing and layer assignment. Escape routing for packages and PCBs has been studied extensively in the past, and network flow is pervasively used to model the problem. However, previous studies are incomplete in two senses. First, none of the previous works correctly models diagonal capacity, which is essential for 45-degree routing in most packages and PCBs. As a result, existing algorithms may either produce routing solutions that violate the diagonal capacity or fail to output a legal routing even though one exists. Second, few works discuss the escape routing problem for differential pairs. In high-performance PCBs, many critical nets use differential pairs to transmit signals, and how to escape differential pairs from a pin array is an important issue that has received too little attention in the literature. In this dissertation, we propose a new network flow model that guarantees correctness when diagonal capacity is taken into consideration. This model leads to the first optimal algorithm for escape routing. We also extend our model to handle missing pins. We then propose two algorithms for the differential pair escape routing problem: the first computes the optimal routing for a single differential pair, while the second simultaneously routes multiple differential pairs considering both routability and wire length. Based on these two algorithms, we propose a two-stage routing scheme in which the second algorithm generates the initial routing and the first algorithm performs rip-up and reroute. Length-constrained routing is another very important problem in PCB routing. Previous length-constrained routers all make assumptions about the routing topology. We propose a routing scheme that is free of any restriction on the routing topology. The novelty of our scheme is that we view the length-constrained routing problem as an area assignment problem and use a placement structure to transform the area assignment problem into a mathematical programming problem. Experimental results show that our routing scheme can handle practical designs that previous routers cannot handle, and for designs that they can handle, our router runs much faster. Length-constrained routing requires the escaped nets to have matching ordering along the boundaries of the pin arrays. In some practical designs, however, the net ordering may be mismatched. To address this issue, we propose a preprocessing step that untangles such twisted nets.
We also introduce a practical routing style, which we call single-detour routing, to simplify the untangling problem. We discover a necessary and sufficient condition for the existence of single-detour routing solutions and present a dynamic programming based algorithm that optimally solves the problem. By integrating our algorithm into the bus router of a length-constrained router, we show that many routing problems that previously could not be solved can now be solved with an insignificant increase in runtime. The nets on a PCB are usually grouped into buses. Because of the high pin density of the packages, the buses need to be assigned to multiple routing layers. We propose a layer assignment algorithm that assigns a set of buses to multiple layers without causing any conflict. Our algorithm is guaranteed to produce a layer assignment with the minimum number of layers. The key idea is to transform the layer assignment problem into a bipartite matching problem. This result improves on a previous work, which is optimal for only a single layer
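
    As a rough illustration of the classical network-flow view of escape routing that the dissertation above builds on, the sketch below models a pin grid as a flow network (node splitting for unit vertex capacity, orthogonal channel edges, escaping pins as sources, the array boundary as the sink) and asks whether every pin can escape. The grid size, capacities and networkx usage are illustrative assumptions on our part; the dissertation's actual contribution, correct handling of diagonal capacity for 45-degree routing, is deliberately not modelled here.

    # Illustrative sketch (not the dissertation's model): escape routing as maximum flow.
    import networkx as nx

    def escape_routing_flow(rows, cols, escape_pins):
        """escape_pins: list of (r, c) pins that must escape to the array boundary."""
        G = nx.DiGraph()
        def node_in(r, c):  return ("in", r, c)
        def node_out(r, c): return ("out", r, c)

        for r in range(rows):
            for c in range(cols):
                # Node splitting enforces a vertex capacity of one net per grid node.
                G.add_edge(node_in(r, c), node_out(r, c), capacity=1)
                # Orthogonal channels between adjacent grid nodes (unit capacity).
                for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        G.add_edge(node_out(r, c), node_in(nr, nc), capacity=1)

        G.add_node("S"); G.add_node("T")
        for (r, c) in escape_pins:
            G.add_edge("S", node_in(r, c), capacity=1)
        for r in range(rows):
            for c in range(cols):
                if r in (0, rows - 1) or c in (0, cols - 1):   # boundary nodes reach the sink
                    G.add_edge(node_out(r, c), "T", capacity=1)

        flow_value, _ = nx.maximum_flow(G, "S", "T")
        return flow_value == len(escape_pins)   # True if all pins can escape

    print(escape_routing_flow(6, 6, [(2, 2), (2, 3), (3, 2), (3, 3)]))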

    A new three phase method (SDP method) for the multi-objective vehicle routing problem with simultaneous delivery and pickup (VRPSDP)

    Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, this thesis considers an extension of the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires concurrent service of both demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and the recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, provide software support for the suggested design, and validate the method through a set of experiments. A new real-life based multi-objective VRPSDP is studied here, which requires the minimisation of three often conflicting objectives: operated vehicle fleet size, total routing distance, and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain; the latter is introduced here because it is essential for real-life routing problems. The VRPSDP is a hard combinatorial optimisation problem, so an approximation method, the Simultaneous Delivery and Pickup method (SDPmethod), is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, one of which is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: a genetic operator switching mechanism driven by diversity thresholds, an accuracy analysis tool, and a new fitness evaluation mechanism. This three-phase method is proposed to address a shortcoming in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known benchmark test problems of Salhi and Nagy (1999), where the SDPmethod and RouteAlg solutions are compared with the prominent works in the VRPSDP domain. The SDPmethod is shown to be an effective method for solving the multi-objective VRPSDP, and RouteAlg for the TSPSDP
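
    The defining constraint of the VRPSDP described above is that a vehicle serves each customer's delivery and pickup in a single visit, so its load changes in both directions along the route. The minimal sketch below shows the core route feasibility check and distance evaluation that any VRPSDP solver (including a genetic algorithm such as the one inside the SDPmethod) must apply; the instance data are hypothetical and are not taken from the Salhi and Nagy (1999) benchmarks.

    # Core VRPSDP route evaluation: load feasibility under simultaneous delivery and pickup.
    import math

    def route_is_feasible(route, delivery, pickup, capacity):
        """route: ordered customer ids; delivery/pickup: dicts of demands per customer."""
        load = sum(delivery[c] for c in route)      # all deliveries are on board at departure
        if load > capacity:
            return False
        for c in route:
            load = load - delivery[c] + pickup[c]   # simultaneous service at the customer
            if load > capacity:
                return False
        return True

    def route_distance(route, coords, depot=(0.0, 0.0)):
        points = [depot] + [coords[c] for c in route] + [depot]
        return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

    # Toy instance (hypothetical data).
    delivery = {1: 4, 2: 3, 3: 5}
    pickup   = {1: 2, 2: 6, 3: 1}
    coords   = {1: (1.0, 2.0), 2: (4.0, 1.0), 3: (3.0, 5.0)}
    route = [1, 3, 2]
    print(route_is_feasible(route, delivery, pickup, capacity=12),
          round(route_distance(route, coords), 2))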

    Test analysis & fault simulation of microfluidic systems

    This work presents a design, simulation and test methodology for microfluidic systems, with particular focus on simulation for test. A Microfluidic Fault Simulator (MFS) has been created, based around COMSOL, which allows a fault-free system model to undergo fault injection and provide test measurements. A post-MFS test analysis procedure is also described. A range of fault-free system simulations have been cross-validated against experimental work to gauge the accuracy of the fundamental simulation approach prior to further investigation and development of the simulation and test procedure. A generic mechanism, termed a fault block, has been developed to provide fault injection and a method of describing a low-abstraction behavioural fault model within the system. This technique has allowed the creation of a fault library containing a range of different microfluidic fault conditions. Each of the fault models has been cross-validated against experimental conditions or published results to determine its accuracy. Two test methods, namely impedance spectroscopy and Levich electro-chemical sensors, have been investigated as general methods of microfluidic test, each of which has been shown to be sensitive to a multitude of faults. Each method has been successfully implemented within the simulation environment and cross-validated by first-hand experimentation or published work. A test analysis procedure based around the Neyman-Pearson criterion has been developed to provide a probabilistic metric for each test applied to a given fault condition, giving a quantitative assessment of each test. These metrics are used to analyse the sensitivity of each test method, which is useful when determining which tests to employ in the final system. Furthermore, these probabilistic metrics may be combined to provide a fault coverage metric for the complete system. The complete MFS method has been applied to two system case studies: a hydrodynamic “Y” channel and a flow cytometry system for prognosing head and neck cancer. Decision trees are trained on the test measurement data and fault conditions as a means of classifying the system's fault condition state. The classification rules created by the decision trees may be displayed graphically or as a set of rules that can be loaded into test instrumentation. During the course of this research, a high-voltage power supply instrument was developed to aid electro-osmotic experimentation, along with an impedance spectrometer to provide embedded test
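
    A Neyman-Pearson style test metric of the kind described above can be sketched in a few lines: fix an acceptable false-alarm rate on the fault-free measurement distribution, derive the decision threshold, and report the probability of detecting a given fault condition. The Gaussian assumption and the numbers below are illustrative only, not values or models from the thesis.

    # Sketch of a Neyman-Pearson test metric under an assumed Gaussian measurement model.
    from scipy.stats import norm

    def detection_probability(mu_ff, sd_ff, mu_fault, sd_fault, false_alarm=0.01):
        # Threshold chosen so that only `false_alarm` of fault-free readings exceed it.
        threshold = norm.ppf(1.0 - false_alarm, loc=mu_ff, scale=sd_ff)
        # Probability that a measurement from the faulty system exceeds that threshold.
        return norm.sf(threshold, loc=mu_fault, scale=sd_fault)

    # Example: impedance magnitude shifts upward under a hypothetical blockage fault.
    p_detect = detection_probability(mu_ff=100.0, sd_ff=5.0, mu_fault=130.0, sd_fault=8.0)
    print(f"P(detect) at 1% false-alarm rate: {p_detect:.3f}")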

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques

    Layoutautomatisierung im analogen IC-Entwurf mit formalisiertem und nicht-formalisiertem Expertenwissen (Layout automation in analog IC design with formalized and non-formalized expert knowledge)

    After more than three decades of electronic design automation, most layouts for analog integrated circuits are still handcrafted in a laborious manual fashion today. In contrast to the highly automated synthesis tools in the digital domain (which cope with the quantitative difficulty of packing more and more components onto a single chip, a desire well known as More Moore), analog layout automation struggles with the many diverse and heavily correlated functional requirements that turn the analog design problem into a More than Moore challenge. Facing this qualitative complexity, seasoned layout engineers rely on their comprehensive expert knowledge to consider all design constraints that uncompromisingly need to be satisfied. This usually involves both formally specified and non-formally communicated pieces of expert knowledge, which entails an explicit and an implicit consideration of design constraints, respectively. Existing automation approaches can basically be divided into optimization algorithms (where constraint consideration occurs explicitly) and procedural generators (where constraints can only be taken into account implicitly). As investigated in this thesis, these two automation strategies follow two fundamentally different paradigms, denoted as top-down automation and bottom-up automation. The major trait of top-down automation is that it requires a thorough formalization of the problem to enable self-intelligent solution finding, whereas a bottom-up automatism, controlled by parameters, merely reproduces solutions that have been preconceived by a layout expert in advance. Since the strengths of one paradigm may compensate for the weaknesses of the other, it is assumed that a combination of both paradigms, called bottom-up meets top-down, has much more potential to tackle the analog design problem in its entirety than either optimization-based or generator-based approaches alone. Against this background, the thesis at hand presents Self-organized Wiring and Arrangement of Responsive Modules (SWARM), an interdisciplinary methodology addressing the design problem with a decentralized multi-agent system. Its basic principle, similar to the roundup of a sheep herd, is to let responsive mobile layout modules (implemented as context-aware procedural generators) interact with each other inside a user-defined layout zone. Each module is allowed to autonomously move, rotate and deform itself, while a supervising control unit successively tightens the layout zone to steer the interaction towards increasingly compact (and constraint-compliant) layout arrangements. Considering various principles of self-organization and incorporating ideas from existing decentralized systems, SWARM is able to evoke the phenomenon of emergence: although each module only has a limited viewpoint and selfishly pursues its personal objectives, remarkable overall solutions can emerge on the global scale. Several examples exhibit this emergent behavior in SWARM, and it is particularly interesting that even optimal solutions can arise from the module interaction. Further examples demonstrate SWARM's suitability for floorplanning purposes and its application to practical place-and-route problems. The latter illustrates how the interacting modules take care of their respective design requirements implicitly (i.e., bottom-up) while simultaneously respecting high-level constraints (such as the layout outline imposed top-down by the supervising control unit).
Experimental results show that SWARM can outperform optimization algorithms and procedural generators in terms of both layout quality and design productivity. From an academic point of view, SWARM's grand achievement is to open up fertile ground for future work on novel bottom-up meets top-down automatisms, which may one day be the key to closing the automation gap in analog layout design.
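
    The bottom-up meets top-down principle described above can be caricatured in a few lines: rectangular "modules" make selfish local moves inside a layout zone (bottom-up), while a supervising loop tightens the zone whenever the arrangement still fits (top-down), forcing increasingly compact, overlap-free placements. This is an illustration of the principle only; SWARM's actual agents are context-aware procedural generators, not the axis-aligned rectangles assumed here.

    # Toy compaction via selfish module moves inside a shrinking layout zone.
    import random

    def overlaps(a, b):
        ax, ay, aw, ah = a; bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def inside(m, zone):
        x, y, w, h = m
        return x >= 0 and y >= 0 and x + w <= zone and y + h <= zone

    def compact(modules, zone=100.0, shrink=0.995, steps=4000, seed=1):
        rng = random.Random(seed)
        for _ in range(steps):
            i = rng.randrange(len(modules))
            x, y, w, h = modules[i]
            # Selfish local move (bottom-up): random nudge with a mild drift toward the
            # zone origin so that compaction can actually progress.
            cand = (x + rng.uniform(-2.0, 1.0), y + rng.uniform(-2.0, 1.0), w, h)
            others = [m for j, m in enumerate(modules) if j != i]
            if inside(cand, zone) and not any(overlaps(cand, o) for o in others):
                modules[i] = cand
            # Supervising control (top-down): tighten the zone while all modules still fit.
            if all(inside(m, zone * shrink) for m in modules):
                zone *= shrink
        return modules, zone

    mods = [(0.0, 0.0, 20.0, 10.0), (30.0, 30.0, 15.0, 15.0),
            (60.0, 5.0, 10.0, 25.0), (10.0, 60.0, 25.0, 12.0)]
    placed, final_zone = compact(list(mods))
    print(f"layout zone tightened from 100.0 to {final_zone:.1f}")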

    Holistic, data-driven, service and supply chain optimisation: linked optimisation.

    The intensity of competition and technological advancements in the business environment have led companies to collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business objectives. However, managing the decision-making process (such as scheduling, ordering, delivering and allocating) at the various business components while maintaining a holistic objective is a huge business challenge, as these operations are complex and dynamic. This is because the overall chain of business processes is widely distributed across all the supply chain participants; therefore, no individual collaborator has a complete overview of the processes. Increasingly, such decisions are automated and are strongly supported by optimisation algorithms in areas such as manufacturing optimisation, B2B ordering, financial trading, and transportation scheduling and allocation. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems like supply chains. It is well known that decisions made at one point in a supply chain can have significant consequences that ripple through linked production and transportation systems. Recently, global shocks to supply chains (COVID-19, climate change, blockage of the Suez Canal) have demonstrated the importance of these interdependencies and the need to create supply chains that are more resilient and have a significantly reduced impact on the environment. Such interacting decision-making systems need to be considered through an optimisation process; however, the interactions between such systems are not modelled. We therefore believe that modelling such interactions is an opportunity to provide computational extensions to current optimisation paradigms. This research study aims to develop a general framework for formulating and solving holistic, data-driven optimisation problems in service and supply chains. This research achieved this aim and contributes to scholarship by, firstly, considering the complexities of supply chain problems from a linked problem perspective. This leads to the development of a formalism for characterising linked optimisation problems as a model for supply chains. Secondly, the research adopts a method for creating a linked optimisation problem benchmark by linking existing classical benchmark sets. This involves using a mix of classical optimisation problems, typically relating to supply chain decision problems, to describe different modes of linkages in linked optimisation problems. Thirdly, several techniques for linking fragmented supply chain data have been proposed in the literature to identify data relationships; this thesis explores some of these techniques and combines them in specific ways to improve the data discovery process. Lastly, many state-of-the-art algorithms have been explored in the literature and used to tackle supply chain problems. This research therefore investigates the resilient state-of-the-art optimisation algorithms presented in the literature, and then designs suitable algorithmic approaches, inspired by the existing algorithms and the nature of problem linkages, to address different problem linkages in supply chains.
Considering the research findings and future perspectives, the study demonstrates the suitability of algorithms to different linked structures involving two sub-problems, which suggests further investigation of issues such as the suitability of algorithms on more complex structures, benchmark methodologies, holistic goals and evaluation, process mining, game theory and dependency analysis
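
    One mode of linkage between two classical sub-problems of the kind mentioned above can be illustrated very simply: an ordering decision (a 0/1 knapsack choosing which customer orders to accept) whose output becomes the input of a routing decision (a nearest-neighbour tour over the accepted customers). The point of the sketch is only that the two stages cannot be evaluated in isolation; it is not the formalism developed in the thesis, and the data are made up.

    # Toy linked-problem illustration: accepted orders feed the routing sub-problem.
    import math
    from itertools import combinations

    def accept_orders(orders, capacity):
        """orders: {customer: (profit, volume)}; exhaustive 0/1 selection (toy scale only)."""
        best, best_profit = (), 0
        for r in range(1, len(orders) + 1):
            for combo in combinations(orders, r):
                vol = sum(orders[c][1] for c in combo)
                profit = sum(orders[c][0] for c in combo)
                if vol <= capacity and profit > best_profit:
                    best, best_profit = combo, profit
        return list(best), best_profit

    def route_cost(customers, coords, depot=(0.0, 0.0)):
        """Nearest-neighbour tour over the accepted customers only."""
        pos, cost, remaining = depot, 0.0, set(customers)
        while remaining:
            nxt = min(remaining, key=lambda c: math.dist(pos, coords[c]))
            cost += math.dist(pos, coords[nxt]); pos = coords[nxt]; remaining.remove(nxt)
        return cost + math.dist(pos, depot)

    orders = {"A": (30, 4), "B": (20, 3), "C": (25, 5), "D": (10, 2)}
    coords = {"A": (2, 9), "B": (8, 1), "C": (9, 8), "D": (1, 1)}
    accepted, profit = accept_orders(orders, capacity=9)
    print(accepted, profit, round(route_cost(accepted, coords), 2))
    # A holistic (linked) view would trade profit against the routing cost it induces,
    # e.g. maximise profit minus route cost, instead of optimising the two stages separately.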

    Remote aerial data acquisition and capture project (RADAC)

    The RADAC Project encompasses the design and prototype implementation of a system for low-cost aerial acquisition of sensor data. It includes a Ground Transponder Unit (GTU) and an Aerial Interrogation System (AIS) mounted under an aircraft. The GTU captures and transmits water-meter readings; the AIS initiates measurements and processes and displays the results. The proposed system is based on RF devices compliant with IEEE 802.15.4 (IEEE 2006), in association with a small, low-cost single-chip camera and microcontrollers. The project arose from a consultancy to a large Queensland government authority that operates approximately 8000 water meters in regional and remote parts of the state, where it was realised that considerable savings could be made in the management of water resources and of the human resources needed to read these meters at three-month intervals. The project covers the calculation of RF subsystem performance in terms of gain, beamwidth, return loss, bandwidth and matching of the antenna into the RF transceiver device; the design of the executive data acquisition system; the design and implementation of the interrogation microcontroller; the PCB design for the RF transceiver; the design and calculation of power usage and the power supply; and the design and calculation of a high-gain helical antenna. Figure 1 shows the block diagram of the Ground Transponder Unit and a symbolic AIS. The potential saving in maintenance costs to industry from taking measurements remotely is significant enough to warrant further investigation with industry, and there is potential for adoption in other resource sectors
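
    A back-of-envelope sketch of the kind of antenna calculation mentioned above is shown below, using the textbook Kraus approximation for an axial-mode helical antenna. The choice of model, the assumed 2.4 GHz IEEE 802.15.4 band and all dimensions are our own illustrative assumptions; the project report does not state which model or geometry it uses.

    # Kraus approximation for axial-mode helix gain (illustrative assumption, not project data).
    import math

    def helix_gain_dbi(freq_hz, turns, circumference_m, spacing_m):
        wavelength = 3.0e8 / freq_hz
        c_lam = circumference_m / wavelength                 # circumference in wavelengths
        s_lam = spacing_m / wavelength                       # turn spacing in wavelengths
        gain_linear = 15.0 * turns * (c_lam ** 2) * s_lam    # Kraus approximation
        return 10.0 * math.log10(gain_linear)

    # Hypothetical 2.4 GHz helix sized near the axial-mode sweet spot (C of about one wavelength).
    wl = 3.0e8 / 2.4e9
    print(round(helix_gain_dbi(2.4e9, turns=10, circumference_m=wl, spacing_m=0.25 * wl), 1), "dBi")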

    An Optimisation-based Framework for Complex Business Process: Healthcare Application

    The Irish healthcare system is currently facing major pressures due to rising demand, caused by population growth, ageing and high expectations of service quality. This pressure creates a need for support from research institutions in dealing with decision areas such as resource allocation and performance measurement. While approaches such as modelling, simulation, multi-criteria decision analysis, performance management and optimisation can, when applied skilfully, improve healthcare performance, they represent just one part of the solution. Accordingly, to achieve significant and sustainable performance, this research aims to develop a practical, yet effective, optimisation-based framework for managing complex processes in the healthcare domain. Through an extensive review of the literature on the aforementioned solution techniques, the limitations of using each technique on its own are identified in order to define a practical integrated approach toward developing the proposed framework. During the framework validation phase, real-time strategies are optimised to resolve Emergency Department performance issues in a major hospital. Results show the potential for a significant reduction in patients' average length of stay (equivalent to 48% of average patient throughput time) while reducing the over-reliance on overstretched nursing resources, with staff utilisation increasing by between 7% and 10%. Given the high uncertainty in healthcare service demand, the integrated framework allows decision makers to find optimal staff schedules that improve emergency department performance. The proposed optimal staff schedule reduces the average waiting time of patients by 57% and also reduces the proportion of patients who leave without treatment from 17% to 8%. The developed framework has been implemented by the hospital partner with a high level of success
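
    The staff-scheduling decision at the heart of the work above can be sketched as a small integer program: choose how many nurses start each shift so that hourly demand is covered with the fewest staff. The sketch below uses PuLP with a generic set-covering formulation; the shift pattern and demand profile are hypothetical and are not the hospital partner's model or data.

    # Minimal shift-coverage model (illustrative assumptions, not the thesis framework).
    import pulp

    hours = range(24)
    demand = [3, 2, 2, 2, 2, 3, 4, 6, 7, 8, 8, 9, 9, 9, 8, 8, 9, 10, 10, 9, 7, 6, 5, 4]
    shift_starts = [0, 8, 12, 16]          # assumed 8-hour shifts starting at these hours
    covers = {s: [(s + h) % 24 for h in range(8)] for s in shift_starts}

    model = pulp.LpProblem("ed_staffing", pulp.LpMinimize)
    x = {s: pulp.LpVariable(f"start_{s}", lowBound=0, cat="Integer") for s in shift_starts}
    model += pulp.lpSum(x.values())        # minimise total nurses rostered
    for h in hours:                        # every hour must meet its demand
        model += pulp.lpSum(x[s] for s in shift_starts if h in covers[s]) >= demand[h]

    model.solve(pulp.PULP_CBC_CMD(msg=False))
    print({s: int(x[s].value()) for s in shift_starts},
          "total:", int(pulp.value(model.objective)))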

    Optimal sensor placement for sewer capacity risk management

    Complex linear assets, such as those found in transportation and utilities, are vital to economies and, in some cases, to public health. Wastewater collection systems in the United States are vital to both. Yet effective approaches to remediating failures in these systems remain an unresolved shortfall for system operators. This shortfall is evident in the estimated 850 billion gallons of untreated sewage that escapes combined sewer pipes each year (US EPA 2004a) and the estimated 40,000 sanitary sewer overflows and 400,000 backups of untreated sewage into basements (US EPA 2001). Failures in wastewater collection systems can be prevented if they can be detected in time to apply intervention strategies such as pipe maintenance, repair or rehabilitation. This is the essence of a risk management process. The International Council on Systems Engineering recommends that risks be prioritized as a function of severity and occurrence and that criteria be established for acceptable and unacceptable risks (INCOSE 2007). A significant impediment to applying generally accepted risk models to wastewater collection systems is the difficulty of quantifying risk likelihoods. These difficulties stem from the size and complexity of the systems, the lack of data and statistics characterizing the distribution of risk, the high cost of evaluating even a small number of components, and the lack of methods to quantify risk. This research investigates new methods to assess the likelihood of failure through a novel approach to the placement of sensors in wastewater collection systems. The hypothesis is that iterative movement of water level sensors, directed by a specialized metaheuristic search technique, can improve the efficiency of discovering locations of unacceptable risk. An agent-based simulation is constructed to validate the performance of this technique and to test its sensitivity to varying environments. The results demonstrate that a multi-phase search strategy, with a varying number of sensors deployed in each phase, can efficiently discover locations of unacceptable risk that can then be managed via a perpetual monitoring, analysis and remediation process. A number of promising, well-defined future research opportunities also emerged from this research
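
    The iterative-sensor-movement idea above can be caricatured with a toy multi-phase search: a handful of water-level sensors take readings, then relocate, partly at random (exploration) and partly toward the riskiest segment observed so far (exploitation), so that high-risk locations are found without instrumenting every pipe. The hidden risk field, the noiseless readings and the greedy move rule are all hypothetical stand-ins for the dissertation's agent-based model and specialized metaheuristic.

    # Toy multi-phase sensor relocation over a ring of pipe segments.
    import random

    def discover_high_risk(n_segments=300, n_sensors=6, phases=20, threshold=0.9,
                           explore_prob=0.5, seed=7):
        rng = random.Random(seed)
        risk = [rng.random() for _ in range(n_segments)]       # hidden risk likelihoods
        sensors = rng.sample(range(n_segments), n_sensors)
        observed = {}                                           # segment index -> reading
        for _ in range(phases):
            for s in sensors:
                observed[s] = risk[s]                           # noiseless reading (toy)
            unvisited = [i for i in range(n_segments) if i not in observed]
            if not unvisited:
                break
            hot = max(observed, key=observed.get)               # riskiest segment seen so far
            sensors = []
            for _ in range(n_sensors):
                if not unvisited:
                    break
                if rng.random() < explore_prob:                  # exploration: jump somewhere new
                    pick = rng.choice(unvisited)
                else:                                            # exploitation: probe near the hot spot
                    pick = min(unvisited,
                               key=lambda i: min(abs(i - hot), n_segments - abs(i - hot)))
                sensors.append(pick)
                unvisited.remove(pick)
        found = [i for i, r in observed.items() if r >= threshold]
        return found, len(observed)

    found, inspected = discover_high_risk()
    print(f"{len(found)} high-risk segments found after inspecting {inspected} of 300 locations")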

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (graphics processing units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing