11 research outputs found

    Enabling industrial scale simulation / emulation models

    Full text link
    OLE for Process Control (OPC) is an industry standard that facilitates communication between PCs and Programmable Logic Controllers (PLCs). This communication allows control systems to be tested against an emulation model. However, limitations within OPC prevent the faster, higher-volume communication that larger models require. In this paper an interface is developed to allow high-speed and high-volume communication between a PC and a PLC, enabling the emulation of larger and more complex control systems and their models. By switching control of elements within the model between the model engine and the control system, it is possible to use the model to validate the system design, test the real-world control systems, and visualise real-world operation.
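The control-switching idea described in the abstract can be sketched roughly as follows. This is a hedged illustration only: the class and method names (`Element`, `EmulationModel`, `switch_control`) are invented for this sketch and are not the interface developed in the paper.

```python
# Sketch: each element in the emulation model is driven either by the model
# engine or by the external control system, and ownership can be flipped at
# run time. All names here are illustrative, not the paper's API.

class Element:
    def __init__(self, name):
        self.name = name
        self.owner = "model"   # "model" = model engine, "plc" = control system
        self.state = 0

class EmulationModel:
    def __init__(self, names):
        self.elements = {n: Element(n) for n in names}

    def switch_control(self, name, owner):
        # Hand an element over to the model engine or the real control system.
        self.elements[name].owner = owner

    def step(self, model_cmds, plc_cmds):
        # Apply each side's commands only to the elements it currently owns.
        for name, elem in self.elements.items():
            cmds = model_cmds if elem.owner == "model" else plc_cmds
            if name in cmds:
                elem.state = cmds[name]

model = EmulationModel(["conveyor", "gate"])
model.step({"conveyor": 1, "gate": 1}, {})   # model engine drives both elements
model.switch_control("gate", "plc")          # hand the gate to the control system
model.step({"gate": 0}, {"gate": 2})         # only the PLC command reaches the gate
print(model.elements["conveyor"].state, model.elements["gate"].state)  # 1 2
```

The same model can thus serve design validation (everything owned by the model engine), control-system testing (selected elements handed to the PLC), and visualisation of live operation (everything owned by the control system).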

    Automated Syringe Filler

    Get PDF
    The automated syringe filling system is a bench-top device designed to remove human error when filling a syringe. The microcontroller-powered system will hold multiple medicine types and will fill a single syringe with a user-specified amount of one medicine to a precise degree, logging this information against a remote database of patient information. Before the system can be accessed, the user's credentials will be checked against the database of authorized users. Once verified, a touch screen will be used to enter user input that will be logged to a remote database. The microcontroller will then control the actuators and servo to fill the syringe with the appropriate amount of medicine. The system will automate the process of filling a syringe and remove the element of human error from the syringe-filling equation.
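The workflow above (credential check, dose entry, actuation, logging) can be sketched as a minimal control flow. All names, the steps-per-millilitre constant, and the stubbed database are hypothetical; real hardware and database calls are omitted.

```python
# Minimal sketch of the fill workflow: authenticate, convert the requested
# dose to actuator steps, and log the fill. Calibration value is assumed.

STEPS_PER_ML = 200  # hypothetical stepper-motor calibration

def fill_syringe(user, dose_ml, medicine, authorized_users, log):
    if user not in authorized_users:           # credential check
        raise PermissionError("unauthorized user")
    steps = round(dose_ml * STEPS_PER_ML)      # convert dose to actuator steps
    log.append({"user": user, "medicine": medicine, "dose_ml": dose_ml})
    return steps                               # steps the motor would be driven

log = []
steps = fill_syringe("nurse01", 2.5, "saline", {"nurse01"}, log)
print(steps)  # 500
```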

    Modular Optimizer for Mixed Integer Programming MOMIP Version 1.1

    Get PDF
    This Working Paper documents the Modular Optimizer for Mixed Integer Programming (MOMIP). MOMIP is an optimization solver for medium-size mixed integer programming problems, based on a modified branch-and-bound algorithm. It is designed as part of a wider linear programming modular library being developed within the IIASA CSA project on "Methodology and Techniques of Decision Analysis". The library is a collection of independent modules, implemented as C++ classes, providing all the necessary functions of data input, data transfer, problem solution, and results output. The Input/Output module provides a data structure to store a problem and its solution in a standardized form, as well as standard input and output functions. All the solver modules take the problem data from the Input/Output module and return their solutions to this module. Thus, for straightforward use, one can configure a simple optimization system using only the Input/Output module and an appropriate solver module. More complex analysis may require the use of more than one solver module. Moreover, for complex analysis of real-life problems, it may be more convenient to incorporate the library modules into an application program, allowing the user to feed in problem data generated in the program directly and to withdraw results directly for further analysis. The paper provides a complete description of the MOMIP module. The methodological background allows the user to understand the implemented algorithm and to use its control parameters efficiently for various analyses. The module description provides all the information necessary to make MOMIP operational, and is additionally illustrated with a tutorial example and a sample program. Modeling recommendations are also provided, explaining how to build mixed integer models so as to speed up the solution process. These may be of interest not only to MOMIP users, but also to users of any mixed integer programming software.
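The branch-and-bound idea underlying MOMIP can be illustrated on a toy integer program. The sketch below solves a 0/1 knapsack (a small pure-integer case) with an LP-relaxation bound for pruning; MOMIP itself is a C++ library for general mixed integer programs, so this standalone Python example is illustrative and not its API.

```python
# Toy branch-and-bound: maximize total value subject to a weight capacity.
# The bound at each node is the LP relaxation (greedy fractional fill),
# which for knapsack is computable in closed form.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort items by value density, as the fractional relaxation requires.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(k, value, room):
        # LP-relaxation bound: fill remaining capacity greedily, allowing
        # one fractional item.
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    def branch(k, value, room):
        nonlocal best
        if k == len(order):
            best = max(best, value)
            return
        if bound(k, value, room) <= best:
            return                      # prune: relaxation cannot beat incumbent
        i = order[k]
        if weights[i] <= room:          # branch: take item i ...
            branch(k + 1, value + values[i], room - weights[i])
        branch(k + 1, value, room)      # ... or leave it out
    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The pruning test is where control parameters of a real solver (tolerances, node limits, search order) would intervene, which is why the paper's methodological background matters for using them efficiently.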

    Optimizing FPGA-based CNN accelerator for energy efficiency with an extended Roofline model

    Get PDF
    In recent years, the convolutional neural network (CNN) has found wide acceptance in solving practical computer vision and image recognition problems. Also recently, due to its flexibility, faster development time, and energy efficiency, the field-programmable gate array (FPGA) has become an attractive solution to exploit the inherent parallelism in the feedforward process of the CNN. However, to meet the demands for high accuracy of today's practical recognition applications that typically have massive datasets, the sizes of CNNs have to be larger and deeper. Enlargement of the CNN aggravates the problem of off-chip memory bottleneck in the FPGA platform since there is not enough space to save large datasets on-chip. In this work, we propose a memory system architecture that best matches the off-chip memory traffic with the optimum throughput of the computation engine, while it operates at the maximum allowable frequency. With the help of an extended version of the Roofline model proposed in this work, we can estimate memory bandwidth utilization of the system at different operating frequencies since the proposed model considers operating frequency in addition to bandwidth utilization and throughput. In order to find the optimal solution that has the best energy efficiency, we make a trade-off between energy efficiency and computational throughput. This solution saves 18% of energy utilization with the trade-off having less than 2% reduction in throughput performance. We also propose to use a race-to-halt strategy to further improve the energy efficiency of the designed CNN accelerator. Experimental results show that our CNN accelerator can achieve a peak performance of 52.11 GFLOPS and energy efficiency of 10.02 GFLOPS/W on a ZYNQ ZC706 FPGA board running at 250 MHz, which outperforms most previous approaches.
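The frequency-extended roofline idea can be sketched in a few lines: attainable throughput is the minimum of a compute ceiling, which scales with operating frequency, and a memory ceiling, which is bandwidth times operational intensity. The numbers below are illustrative placeholders, not the paper's measured parameters.

```python
# Hedged sketch of a roofline model extended with operating frequency.
# attainable = min(compute roof at this clock, bandwidth * intensity)

def attainable_gflops(intensity_flops_per_byte, freq_ghz,
                      flops_per_cycle, bandwidth_gbs):
    compute_roof = flops_per_cycle * freq_ghz            # GFLOP/s at this clock
    memory_roof = bandwidth_gbs * intensity_flops_per_byte
    return min(compute_roof, memory_roof)

# At a low clock the design is compute-bound; raising the frequency shifts
# the bottleneck to off-chip memory traffic, which is the trade-off the
# paper exploits to save energy with little throughput loss.
for f in (0.1, 0.25, 0.5):
    print(f, attainable_gflops(intensity_flops_per_byte=4.0, freq_ghz=f,
                               flops_per_cycle=256, bandwidth_gbs=12.8))
```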

    Static analysis for optimizing updates of temporal XML documents

    Get PDF
    The last decade has witnessed a rapid expansion of XML as a format for representing and exchanging data on the web. To follow this evolution, many languages have been proposed to query, update, or transform XML documents, and a range of systems for storing and processing XML documents has been developed. Among these systems, main-memory engines are lightweight systems that are the favored choice for applications that do not require the complex functionalities of traditional DBMSs, such as transaction management and secondary-storage indexes. These engines must load the documents to be processed entirely into main memory; consequently, they suffer from space limitations and cannot process very large documents. This thesis investigates issues related to the evolution of XML documents and to the management of their temporal dimension. It consists of two parts sharing the common goal of developing efficient techniques for processing large XML documents with main-memory engines. The first part focuses on updating static XML documents. We propose an optimization technique based on XML projection and on the use of schemas; projection was originally proposed in the context of queries to overcome the limitations of main-memory engines, and its use for updates raises new problems, notably the propagation of update effects, for which we devise a new projection scenario.
The second part of the thesis investigates building and maintaining time-stamped (temporal) XML documents under the same space constraint, with the additional requirement that the generated documents be efficient in terms of storage. Our contribution consists of two methods. The first applies in the general case, where no information about the changes is available; it is designed to run in a streaming fashion and can therefore process documents with almost no size limit. The second applies when the changes are specified by updates; it relies on the projection paradigm, which allows it both to process large documents and to generate time-stamped documents that are satisfactory from the point of view of storage. We also provide a means of comparing time-stamped documents with respect to space occupancy.
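The projection idea can be illustrated with a small sketch: before a large document is loaded into a main-memory engine, it is pruned so that only the nodes relevant to a given update path are kept. This is an illustrative toy (a simple tag-path filter over `xml.etree.ElementTree`), not the thesis's actual schema-based system.

```python
# Toy XML projection: keep only the subtrees on a given tag path, discarding
# everything else before the engine processes the (now much smaller) tree.
import xml.etree.ElementTree as ET

def project(elem, path):
    if not path:
        return                      # reached the target depth; keep as-is
    head, rest = path[0], path[1:]
    for child in list(elem):
        if child.tag == head:
            project(child, rest)    # descend along the projection path
        else:
            elem.remove(child)      # irrelevant to the update: prune it

doc = ET.fromstring(
    "<lib><book><title>A</title><blob>big</blob></book><log>x</log></lib>")
project(doc, ["book", "title"])     # an update touching only /lib/book/title
print(ET.tostring(doc, encoding="unicode"))
# <lib><book><title>A</title></book></lib>
```

A real update-projection must in addition keep enough context to propagate the update's effects back into the full document, which is precisely the new problem the first part of the thesis addresses.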

    Democratic Banner November 23, 1852

    Get PDF
    The Democratic Banner was a newspaper published weekly in Mount Vernon, Ohio. It later became the Mt. Vernon Democratic Banner in December 1853.

    Heterologous expression and secretion of nanobodies targeting Campylobacter jejuni for intestinal health applications

    Get PDF
    As strategies for engineering the enteric microflora continue to advance, the vision of a future with a secondary, artificial immune system increasingly comes into focus. There are numerous challenges that will need to be overcome before this is a reality, including the construction of a chassis capable of producing functional antimicrobial compounds at adequate concentrations and the construction of libraries of antimicrobial compounds capable of targeting a range of pathogens, including those that develop resistance. Towards these ends, I have elected to engineer Bacteroides thetaiotaomicron (B. theta), one of the most prevalent and stable organisms in the human distal gut, to heterologously express and secrete nanobodies that bind the flagella of Campylobacter jejuni. Nanobody genes were inserted behind native B. theta promoters and integrated into the genome, allowing for induction following the introduction of a specific inducing compound. Signal peptides were fused to the nanobodies, directing them out of Escherichia coli (E. coli) cells. As part of this project, novel signal peptides have also been characterized, allowing proteins to be targeted to any subcellular compartment within a gram-negative bacterium.

    Evolutionary Algorithms and Simulation for Intelligent Autonomous Vehicles in Container Terminals

    Get PDF
    The study of applying soft computing techniques, such as evolutionary computation and simulation, to the deployment of intelligent autonomous vehicles (IAVs) in container terminals is the focus of this thesis. IAVs are a new type of intelligent vehicle designed for the transportation of containers in container terminals. This thesis is the first to investigate how IAVs can be effectively accommodated in container terminals and how much the performance of container terminals can be improved when IAVs are used. In answering these research questions, the thesis makes the following contributions. First, it studies the fleet sizing problem, an important design problem in container terminals. The contributions include proposing a novel evolutionary algorithm (with results superior to those of the state-of-the-art CPLEX solver), combining the proposed evolutionary algorithm with Monte Carlo simulation to take uncertainties into account, validating the results of the uncertain case with a high-fidelity simulation, proposing different robustness measures, comparing different robust solutions, and proposing a dynamic sampling technique to improve the performance of the proposed evolutionary algorithm. Second, the thesis studies the impact of IAVs on container terminals' performance and total cost, which are very important criteria for port equipment. The contributions include developing simulation models using realistic data (this is the first time the impact of IAVs on container terminals has been investigated using simulation models) and applying a cost model to the simulation results to estimate and compare the total cost of the case study with IAVs against that with existing trucks. Third, the thesis proposes a new framework for the simulation of container terminals. The contributions include developing a flexible simulation framework, providing a user library for creating 3D simulation models with drag-and-drop features, and allowing users to easily incorporate their optimisation algorithms into their simulations.
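The combination of an evolutionary search with Monte Carlo sampling of uncertainty can be sketched in miniature. The cost model, its coefficients, and the simple (1+1) evolutionary loop below are all invented for illustration; the thesis's actual algorithm, robustness measures, and dynamic sampling technique are more elaborate.

```python
# Hypothetical sketch: an evolutionary loop searches over fleet sizes while
# Monte Carlo sampling of container demand accounts for uncertainty.
import random

def expected_cost(fleet, rng, samples=200):
    # Monte Carlo estimate: vehicle cost plus a penalty for unmet demand.
    total = 0.0
    for _ in range(samples):
        demand = rng.gauss(40, 8)                 # uncertain container demand
        shortfall = max(0.0, demand - fleet)
        total += 10.0 * fleet + 50.0 * shortfall  # assumed cost coefficients
    return total / samples

def one_plus_one_ea(rng, generations=300):
    fleet = 1
    best = expected_cost(fleet, rng)
    for _ in range(generations):
        child = max(1, fleet + rng.choice([-2, -1, 1, 2]))  # mutate fleet size
        cost = expected_cost(child, rng)
        if cost <= best:                          # keep the better fleet size
            fleet, best = child, cost
    return fleet

print(one_plus_one_ea(random.Random(0)))
```

Because each fitness evaluation is a noisy Monte Carlo estimate, the number of samples per evaluation becomes a tunable cost/accuracy knob, which motivates a dynamic sampling technique like the one proposed in the thesis.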

    Notes of a military reconnoissance, from Fort Leavenworth, in Missouri, to San Diego, in California, including part of the Arkansas, Del Norte, and Gila Rivers.

    Get PDF
    Report on an Expedition to New Mexico. 10 Feb. SED 23, 30-1, v4, 132p. [506] or HED 41, 30-1, v4, 614p. [517] Encounters with Indian tribes of the Southwest; maps and sketches. (House version includes an expedition from Missouri to California.)