352 research outputs found

    Coarse Bifurcation Studies of Bubble Flow Microscopic Simulations

    The parametric behavior of regular periodic arrays of rising bubbles is investigated with the aid of 2-dimensional BGK Lattice-Boltzmann (LB) simulators. The Recursive Projection Method is implemented and coupled to the LB simulators, accelerating their convergence towards what we term coarse steady states. Efficient stability/bifurcation analysis is performed by computing the leading eigenvalues/eigenvectors of the coarse time stepper. Our approach constitutes the basis for system-level analysis of processes modeled through microscopic simulations. Comment: 4 pages, 3 figures
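
    A minimal sketch of the coarse time-stepper workflow described above, under assumed toy dynamics (the placeholder simulator, horizon and tolerances are illustrative stand-ins, not the authors' lattice-Boltzmann code or their Recursive Projection Method implementation):

```python
import numpy as np

def coarse_timestepper(u, T=1.0, dt=0.01):
    """Toy 'microscopic' simulator advanced over a coarse horizon T.
    Placeholder dynamics (assumption): a damped nonlinear relaxation,
    standing in for lifting to / restricting from an LB simulation."""
    for _ in range(int(T / dt)):
        u = u + dt * (-u + 0.5 * np.tanh(u))
    return u

def coarse_steady_state(u0, tol=1e-10, max_iter=500):
    """Fixed point of the coarse map Phi(u) via damped Picard iteration
    (the step that the Recursive Projection Method accelerates)."""
    u = u0.copy()
    for _ in range(max_iter):
        u_new = coarse_timestepper(u)
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = 0.5 * u + 0.5 * u_new  # simple damping in place of RPM
    return u

def leading_eigenvalues(u_star, k=3, eps=1e-6):
    """Leading eigenvalues of dPhi/du at the coarse steady state,
    estimated from a finite-difference Jacobian of the coarse map."""
    n = u_star.size
    phi0 = coarse_timestepper(u_star)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (coarse_timestepper(u_star + e) - phi0) / eps
    eigs = np.linalg.eigvals(J)
    return eigs[np.argsort(-np.abs(eigs))][:k]

u_star = coarse_steady_state(np.linspace(-1.0, 1.0, 20))
print(leading_eigenvalues(u_star))  # |eig| < 1 => coarse steady state is stable
```

    Eigenvalues of the linearised coarse map lying inside the unit circle indicate a stable coarse steady state; an eigenvalue crossing the unit circle as a parameter is varied signals a coarse bifurcation.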

    Data-driven model reduction-based nonlinear MPC for large-scale distributed parameter systems

    This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record. Model predictive control (MPC) has been effectively applied in process industries since the 1990s. Models in the form of closed equation sets are normally needed for MPC, but it is often difficult to obtain such formulations for large nonlinear systems. To extend nonlinear MPC (NMPC) application to nonlinear distributed parameter systems (DPS) with unknown dynamics, a data-driven model reduction-based approach is followed. The proper orthogonal decomposition (POD) method is first applied off-line to compute a set of basis functions. Then a series of artificial neural networks (ANNs) are trained to effectively compute the POD time coefficients. NMPC, using sequential quadratic programming, is then applied. The novelty of our methodology lies in the application of POD's highly efficient linear decomposition for the consequent conversion of any distributed multi-dimensional state-space model to a reduced 1-dimensional model, dependent only on time, which can be handled effectively as a black box through ANNs. Hence we construct a paradigm which allows the application of NMPC to complex nonlinear high-dimensional systems, even input/output systems handled by black-box solvers, with significant computational efficiency. This paradigm combines elements of gain scheduling, NMPC, model reduction and ANNs for effective control of nonlinear DPS. The stabilization/destabilization of a tubular reactor with recycle is used as an illustrative example to demonstrate the efficiency of our methodology. Case studies with inequality constraints are also presented. The authors would like to acknowledge the financial support of the EC FP6 project CONNECT [COOP-2006-31638] and the EC FP7 project CAFE [KBBE-212754].
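
    A minimal sketch of the offline POD step and an ANN surrogate for the POD time coefficients, using assumed toy snapshot data (the travelling-pulse data, layer sizes and the use of scikit-learn are illustrative choices, not the paper's tubular-reactor model or its NMPC layer):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy snapshot matrix: each column is the spatial state at one time instant
# (assumption: 200 spatial points, 150 snapshots of a travelling pulse).
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 150)
snapshots = np.array([np.exp(-200.0 * (x - 0.2 - 0.6 * ti) ** 2) for ti in t]).T

# Offline POD: the SVD of the snapshot matrix yields spatial basis functions.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999) + 1  # modes kept
basis = U[:, :r]                  # spatial POD modes
coeffs = basis.T @ snapshots      # POD time coefficients, shape (r, n_snapshots)

# Surrogate model: an ANN mapping time (and, in general, control inputs)
# to the reduced coefficients; this is the black box used inside NMPC.
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(t.reshape(-1, 1), coeffs.T)

# Reconstruct the full distributed state from the reduced prediction.
pred_state = basis @ ann.predict(np.array([[0.5]])).ravel()
print(r, pred_state.shape)   # number of modes kept, reconstructed profile size
```

    Inside an NMPC loop, the optimiser would query the ANN with candidate control inputs appended to its inputs and reconstruct only the outputs it needs, which is what makes the reduced model cheap enough for online optimisation.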

    Efficient Comparison of Massive Graphs Through The Use Of 'Graph Fingerprints'

    The problem of how to compare empirical graphs is an area of great interest within the field of network science. The ability to compare graphs accurately but efficiently has a significant impact in areas such as temporal graph evolution, anomaly detection and protein comparison. The comparison problem is compounded when working with graphs containing millions of anonymous, i.e. unlabelled, vertices and edges. Comparing two or more such graphs is highly computationally expensive, so reducing a graph to a much smaller feature set, called a fingerprint, that accurately captures the essence of the graph would be highly desirable. Such an approach would have potential applications beyond graph comparison, especially in the area of machine learning. This paper introduces a feature-extraction-based approach for the efficient comparison of large unlabelled graph datasets that are topologically similar but vary in order. The approach produces a ‘Graph Fingerprint’ which represents both vertex-level and global topological features of a graph. The approach is shown to be efficient when comparing graphs which are highly topologically similar but order varying, and it scales linearly with the size and complexity of the graphs being fingerprinted
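
    A minimal sketch of the fingerprint-and-compare idea, with an assumed feature set (the specific features, scaling and distance below are illustrative choices, not necessarily the paper's 'Graph Fingerprint' definition):

```python
import numpy as np
import networkx as nx

def graph_fingerprint(G):
    """Compact feature vector mixing vertex-level summaries and global
    topology (assumed feature set for illustration)."""
    degrees = np.array([d for _, d in G.degree()], dtype=float)
    return np.array([
        G.number_of_nodes(),
        nx.density(G),
        degrees.mean(),
        degrees.std(),
        np.percentile(degrees, 90),
        nx.average_clustering(G),
        nx.degree_assortativity_coefficient(G),
    ])

def fingerprint_distance(G1, G2):
    """Compare graphs via their fingerprints instead of the graphs themselves;
    features are rescaled so that order (node count) does not dominate."""
    f1, f2 = graph_fingerprint(G1), graph_fingerprint(G2)
    scale = np.maximum(np.abs(f1), np.abs(f2)) + 1e-12
    return np.linalg.norm((f1 - f2) / scale)

# Two topologically similar random graphs of different order.
a = nx.erdos_renyi_graph(2000, 0.01, seed=1)
b = nx.erdos_renyi_graph(4000, 0.005, seed=2)
print(fingerprint_distance(a, b))
```

    Because each fingerprint is cheap to compute and of fixed length, comparing two graphs reduces to comparing two short vectors, regardless of graph order.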

    Towards an Info-Symbiotic Decision Support System for Disaster Risk Management

    This paper outlines a framework for an info-symbiotic modelling system that uses cyber-physical sensors to assist in decision-making. Using a dynamic data-driven simulation approach, this system can help with the identification of target areas and resource allocation in emergency situations. Taking different natural disasters as exemplars, we show how cyber-physical sensors can enhance ground-level intelligence and aid in the creation of dynamic models that capture the state of human casualties. Through a virtual command & control centre communicating with sensors in the field, up-to-date information on the ground realities can be incorporated in a dynamic feedback loop, and other information (e.g. weather models) can be combined to create a complex and rich model. The framework adaptively manages the heterogeneous collection of data resources and uses agent-based models to create what-if scenarios in order to determine the best course of action
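
    A toy what-if sketch of the agent-based idea, under loudly assumed dynamics (the zones, casualty counts, treatment rates and the two candidate allocation plans are all invented for illustration; this is not the paper's model):

```python
import numpy as np

def simulate(allocation, severity, steps=12, seed=0):
    """Tiny what-if simulation: each zone holds casualties that are treated
    at a rate proportional to the teams allocated, while new casualties
    arrive each hour (assumed Poisson arrivals standing in for sensor feeds)."""
    rng = np.random.default_rng(seed)
    casualties = severity.astype(float).copy()
    for _ in range(steps):
        treated = np.minimum(casualties, allocation * 5.0)   # 5 casualties/team/hour
        casualties = np.maximum(casualties - treated, 0.0)
        casualties += rng.poisson(0.5, size=casualties.size)  # new arrivals
    return casualties.sum()   # untreated casualties remaining after `steps` hours

# Ground-level 'sensor' estimate of casualties per zone, and available teams.
severity = np.array([40, 10, 80, 25])
teams = 4

# Two candidate courses of action evaluated as what-if scenarios.
plans = {
    "uniform": np.full(4, teams / 4),
    "proportional": teams * severity / severity.sum(),
}
for name, plan in plans.items():
    print(name, simulate(plan, severity))
```

    In the full framework, the casualty estimates would be refreshed from field sensors each cycle, closing the dynamic data-driven feedback loop before the next round of what-if evaluations.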

    Data Quality Assessment and Anomaly Detection Via Map / Reduce and Linked Data: A Case Study in the Medical Domain

    Recent technological advances in modern healthcare have led to the ability to collect a vast wealth of patient monitoring data. This data can be utilised for patient diagnosis, but it also holds potential for use within medical research. However, these datasets often contain errors which limit their value to medical research, with one study finding error rates ranging from 2.3% to 26.9% in a selection of medical databases. Previous methods for automatically assessing data quality normally rely on threshold rules, which are often unable to correctly identify errors, as further complex domain knowledge is required. To combat this, a semantic-web-based framework has previously been developed to assess the quality of medical data. However, early work based solely on traditional semantic web technologies revealed that they are either unable to scale, or scale inefficiently, to the vast volumes of medical data. In this paper we present a new method for storing and querying medical RDF datasets using Hadoop Map / Reduce. This approach exploits the inherent parallelism found within RDF datasets and queries, allowing us to scale with both dataset and system size. Unlike previous solutions, this framework uses highly optimised (SPARQL) joining strategies, intelligent data caching and a super-query, enabling the completion of eight distinct SPARQL lookups, comprising over eighty distinct joins, in only two Map / Reduce iterations. Results are presented comparing both Jena and a previous Hadoop implementation, demonstrating the superior performance of the new methodology. The new method is shown to be five times faster than Jena and twice as fast as the previous approach
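
    A minimal map/reduce-style sketch of one SPARQL-like join over RDF triples, written in plain Python with Hadoop-streaming semantics assumed (the toy triples, predicates and the heart-rate plausibility range are illustrative, not the paper's datasets or its optimised joining strategies):

```python
from collections import defaultdict
from itertools import product

# Toy RDF triples (subject, predicate, object); assumption for illustration.
triples = [
    ("patient1", "hasReading", "r1"),
    ("patient1", "hasReading", "r2"),
    ("r1", "heartRate", "310"),     # implausible value -> quality-flag candidate
    ("r2", "heartRate", "72"),
]

def map_phase(triple):
    """Emit (join_key, tagged_value) pairs, as a streaming mapper would.
    The join key is the reading id linking the two triple patterns of the query:
      ?p hasReading ?r .  ?r heartRate ?v ."""
    s, p, o = triple
    if p == "hasReading":
        yield (o, ("patient", s))
    elif p == "heartRate":
        yield (s, ("value", o))

def reduce_phase(key, values):
    """Join the two sides that share the same reading id."""
    patients = [v for tag, v in values if tag == "patient"]
    readings = [v for tag, v in values if tag == "value"]
    for patient, value in product(patients, readings):
        yield (patient, key, int(value))

# Shuffle/sort step: group mapper output by key.
groups = defaultdict(list)
for triple in triples:
    for key, value in map_phase(triple):
        groups[key].append(value)

for key, values in groups.items():
    for patient, reading, hr in reduce_phase(key, values):
        flag = "ANOMALY" if not 20 <= hr <= 250 else "ok"
        print(patient, reading, hr, flag)
```

    In the real system, the shuffle/sort grouping is performed by Hadoop itself; the contribution described above lies in packing many such joins into as few Map / Reduce iterations as possible.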

    dReDBox: Materializing a full-stack rack-scale system prototype of a next-generation disaggregated datacenter

    Current datacenters are based on server machines, whose mainboard and hardware components form the baseline, monolithic building block that the rest of the system software, middleware and application stack are built upon. This leads to the following limitations: (a) resource proportionality of a multi-tray system is bounded by the basic building block (mainboard), (b) resource allocation to processes or virtual machines (VMs) is bounded by the resources available within the boundary of the mainboard, leading to spare resource fragmentation and inefficiencies, and (c) upgrades must be applied to each and every server even when only a specific component needs to be upgraded. The dReDBox project (Disaggregated Recursive Datacentre-in-a-Box) addresses the above limitations and proposes next-generation, low-power, across-form-factor datacenters, departing from the paradigm of the mainboard-as-a-unit and enabling the creation of the function-block-as-a-unit. Hardware-level disaggregation and software-defined wiring of resources are supported by a full-fledged Type-1 hypervisor that can execute commodity virtual machines, which communicate over a low-latency and high-throughput software-defined optical network. To evaluate its novel approach, dReDBox will demonstrate application execution in the domains of network functions virtualization, infrastructure analytics, and real-time video surveillance. This work has been supported in part by the EU H2020 ICT project dReDBox, contract #687632.

    Deep learning for diabetic retinopathy detection and classification based on fundus images: A review.

    Diabetic retinopathy is a retinal disease caused by diabetes mellitus and a leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets that are widely used by the research community and the preprocessing techniques employed (and how these accelerate and improve model performance), to the development of deep learning models for the diagnosis and grading of the disease and the localization of its lesions. We also discuss models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions
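
    A minimal sketch of the grading step only, using an assumed small CNN (the architecture, input size and the five-level grading head are illustrative; the review surveys far deeper models and the full pipeline around them):

```python
import torch
from torch import nn

# Minimal CNN sketch for 5-level DR grading (0 = no DR ... 4 = proliferative),
# an illustrative stand-in for the much deeper models surveyed in the review.
class FundusGrader(nn.Module):
    def __init__(self, n_grades=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_grades)

    def forward(self, x):   # x: (batch, 3, H, W) preprocessed fundus images
        return self.classifier(self.features(x).flatten(1))

model = FundusGrader()
dummy = torch.randn(2, 3, 224, 224)        # stands in for preprocessed fundus crops
logits = model(dummy)
print(logits.shape, logits.argmax(dim=1))  # (2, 5) and the predicted grades
```

    The preprocessing and lesion-localisation stages discussed above would sit around such a classifier; they are omitted here for brevity.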

    Sampling with poling-based flux balance analysis: optimal versus sub-optimal flux space analysis of Actinobacillus succinogenes

    Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction, together with a single distribution of flux values for all the reactions present which achieves this maximum value. However, it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previously generated solutions. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively 'small' characteristic set, our new method is shown to obtain higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. In summary, a new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve maximal succinic acid production, CO2 must be taken into the system; solutions involving release of CO2 all give sub-optimal succinic acid production
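
    A minimal sketch of the poling idea on an invented two-metabolite toy network (the network, penalty weight and SLSQP solver are assumptions for illustration, not the paper's genome-scale Actinobacillus succinogenes model or its exact penalty form):

```python
import numpy as np
from scipy.optimize import minimize

# Toy network: A is taken up (R1), converted to B by two parallel routes
# (R2, R3), and B is exported as product (R4). Columns = reactions.
S = np.array([[1.0, -1.0, -1.0,  0.0],    # metabolite A
              [0.0,  1.0,  1.0, -1.0]])   # metabolite B
bounds = [(0.0, 10.0)] * 4
c = np.array([0.0, 0.0, 0.0, 1.0])        # maximise the product export flux v4

def objective(v, previous, weight=1.0, eps=1e-3):
    """Negative FBA objective plus a poling penalty that repels the new
    solution from every previously found flux distribution."""
    poling = sum(weight / (np.sum((v - p) ** 2) + eps) for p in previous)
    return -c @ v + poling

solutions = []
v0 = np.full(4, 1.0)
for _ in range(5):
    res = minimize(objective, v0, args=(solutions,), method="SLSQP",
                   bounds=bounds,
                   constraints={"type": "eq", "fun": lambda v: S @ v})
    solutions.append(res.x)
    # restart near the last solution, slightly perturbed
    v0 = res.x + 0.1 * np.random.default_rng(len(solutions)).normal(size=4)

for v in solutions:   # near-optimal v4 with different R2/R3 splits
    print(np.round(v, 2))
```

    Because the toy network has two parallel routes from A to B, the poling term steers successive solves towards different splits between them while the product flux stays at or near its optimum, which is the kind of alternate-optima coverage the characteristic set is meant to capture.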

    A software-defined architecture and prototype for disaggregated memory rack scale systems

    Disaggregation and rack-scale systems have the potential to drastically improve the TCO and utilization of cloud datacenters while maintaining performance. In this paper, we present a novel rack-scale system architecture featuring software-defined remote memory disaggregation. Our hardware design and operating system extensions enable unmodified applications to dynamically attach to memory segments residing on physically remote memory pools, and to use such remote segments in a byte-addressable manner, as if they were local to the application. Our system also features a control plane that automates the software-defined dynamic matching of compute to memory resources, as driven by datacenter workload needs. We prototyped our system on the commercially available Zynq UltraScale+ MPSoC platform. To our knowledge, this is the first time a software-defined disaggregated system has been prototyped on commercial hardware and evaluated through industry-standard software benchmarks. Our initial results, using benchmarks that are artificially highly adversarial in terms of memory bandwidth, show that disaggregated memory access exhibits a round-trip latency of only 134 clock cycles, and a throughput penalty of as low as 55% relative to locally-attached memory. We also discuss estimates of how our findings may translate to applications with pragmatically milder memory aggressiveness levels, as well as innovation avenues across the stack opened up by our work
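
    A back-of-envelope sketch of how the worst-case figure above might dilute for less memory-aggressive applications, under a loudly assumed linear mixing model (the model and the traffic shares are illustrative; the paper's own estimation may differ):

```python
# Simple linear model: the effective throughput penalty scales with the share
# of memory traffic that actually goes to the remote, disaggregated pool.
# The worst-case number reuses the abstract's figure; the mixing model itself
# is an assumption for illustration.
WORST_CASE_PENALTY = 0.55   # penalty observed when all traffic is remote

def effective_penalty(remote_fraction, worst_case=WORST_CASE_PENALTY):
    """Throughput penalty if only `remote_fraction` of accesses hit remote memory."""
    return remote_fraction * worst_case

for frac in (1.0, 0.5, 0.25, 0.1):
    p = effective_penalty(frac)
    print(f"remote share {frac:4.0%}: penalty {p:5.1%}, "
          f"relative throughput {1 - p:5.1%}")
```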