35 research outputs found

    Kinetic Monte Carlo simulations for heterogeneous catalysis: Fundamentals, current status and challenges

    Kinetic Monte Carlo (KMC) simulations in combination with first-principles-based calculations are rapidly becoming the gold-standard computational framework for bridging the wide range of length and time scales over which heterogeneous catalysis unfolds. First-principles KMC (1p-KMC) simulations provide accurate insights into reactions over surfaces, a vital step towards the rational design of novel catalysts. In this perspective article, we briefly outline the basic principles, computational challenges, successful applications, as well as future directions and opportunities of this promising and ever more popular kinetic modeling approach.
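
    The core of a KMC simulation is a rejection-free event loop: enumerate the elementary processes currently possible on the lattice, pick one with probability proportional to its rate, and advance the clock by an exponentially distributed waiting time. The Python sketch below illustrates that loop only; the process names and rate values are placeholders and are not taken from the article, where rates would instead come from first-principles calculations.

```python
import math
import random

def kmc_step(processes, rng=random.random):
    """One rejection-free KMC step (variable step size method).

    processes: list of (name, rate) pairs for all events currently
    possible on the surface; rates are illustrative placeholders.
    Returns the chosen process and the time increment dt.
    """
    total_rate = sum(rate for _, rate in processes)
    # Pick a process with probability proportional to its rate.
    target = rng() * total_rate
    cumulative = 0.0
    chosen = processes[-1][0]  # fallback guards against float rounding
    for name, rate in processes:
        cumulative += rate
        if cumulative >= target:
            chosen = name
            break
    # Advance the clock by an exponentially distributed waiting time.
    dt = -math.log(1.0 - rng()) / total_rate
    return chosen, dt

# Illustrative only: adsorption/diffusion/reaction rates are made up.
events = [("CO_adsorption", 1.0e3), ("CO_diffusion", 5.0e4), ("CO_oxidation", 2.0e1)]
event, dt = kmc_step(events)
print(f"executed {event} after {dt:.2e} s")
```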

    Random Finite Set Based Data Assimilation for Dynamic Data Driven Simulation of Maritime Pirate Activity

    Maritime piracy poses a genuine threat to maritime transport. Simulation is widely used to predict the behavior of real systems and has been applied successfully in many fields, but its application in the maritime domain remains scarce. The rapid development of network and measurement technologies brings higher accuracy and better availability of online measurements. This makes the simulation paradigm known as dynamic data driven simulation increasingly popular: it assimilates online measurements into running simulation models and thereby yields much more accurate predictions of the complex systems under study. In this paper, we study how to utilize online measurements in agent-based simulation of maritime pirate activity. A new random finite set based data assimilation algorithm is proposed to overcome the limitations of conventional vector-based data assimilation algorithms. A random finite set based general data model, measurement model, and simulation model are introduced to support the proposed algorithm. The details of the algorithm are presented in the context of agent-based simulation of maritime pirate activity. Two groups of experiments demonstrate the effectiveness and superiority of the proposed algorithm.
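
    For readers unfamiliar with data assimilation, the sketch below shows one step of a conventional vector-based scheme (a bootstrap particle filter): predict with the simulation model, reweight by the likelihood of the online measurement, and resample. This is the kind of baseline the paper argues is limited; its random finite set formulation generalizes the state so that the number of agents can itself be uncertain. The function names and toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def bootstrap_assimilation_step(particles, weights, simulate, likelihood, measurement):
    """One data-assimilation step of a conventional vector-based
    bootstrap particle filter (illustrative baseline only)."""
    # 1. Predict: advance every particle with the simulation model.
    particles = np.array([simulate(p) for p in particles])
    # 2. Update: reweight particles by how well they explain the measurement.
    weights = weights * np.array([likelihood(measurement, p) for p in particles])
    weights = weights / weights.sum()
    # 3. Resample: draw a new, equally weighted particle set.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: a scalar state drifts randomly; a noisy measurement arrives online.
rng = np.random.default_rng(0)
parts = rng.normal(0.0, 1.0, size=200)
wts = np.full(200, 1.0 / 200)
parts, wts = bootstrap_assimilation_step(
    parts, wts,
    simulate=lambda x: x + rng.normal(0.0, 0.1),
    likelihood=lambda z, x: np.exp(-0.5 * (z - x) ** 2),
    measurement=0.3,
)
print(parts.mean())
```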

    Mobility-awareness in complex event processing systems

    The proliferation and vast deployment of mobile devices and sensors over the last few years have enabled a huge number of Mobile Situation Awareness (MSA) applications. These applications need to react in near real-time to situations in the environment of mobile objects such as vehicles, pedestrians, or cargo. To this end, Complex Event Processing (CEP) is becoming increasingly important, as it allows situations to be detected scalably “on-the-fly” by continuously processing distributed sensor data streams. Furthermore, recent trends in communication networks promise high real-time conformance for CEP systems by processing sensor data streams on distributed computing resources at the edge of the network, where low network latencies can be achieved. Yet supporting MSA applications with a CEP middleware that utilizes distributed computing resources proves to be challenging due to the dynamics of mobile devices and sensors. In particular, situations need to be detected efficiently, scalably, and consistently with respect to the ever-changing sensors in the environment of a mobile object. Moreover, the computing resources that provide low latencies change with the access points of mobile devices and sensors. The goal of this thesis is to provide concepts and algorithms to i) continuously detect situations that recently occurred close to a mobile object, ii) support bandwidth- and computation-efficient detection of such situations on distributed computing resources, and iii) support consistent, low-latency, and high-quality detection of such situations. To this end, we introduce the distributed Mobile CEP (MCEP) system, which automatically adapts the processing of sensor data streams according to a mobile object’s location. MCEP provides an expressive, location-aware query model for situations that recently occurred at a location close to a mobile object. MCEP significantly reduces latency, bandwidth, and processing overhead by providing on-demand and opportunistic adaptation algorithms to dynamically assign event streams to queries of the MCEP system. Moreover, MCEP incorporates algorithms to adapt the deployment of MCEP queries in a network of computing resources. This way, MCEP supports latency-sensitive, large-scale deployments of MSA applications and ensures low network utilization while mobile objects change their access points to the system. MCEP also provides methods to increase scalability in terms of deployed MCEP queries by reusing event streams and computations for detecting common situations for several mobile objects.
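
    As a rough illustration of what a location-aware CEP query involves, the toy Python class below keeps only events that occurred both recently and close to a mobile object's current position. It is a simplified sketch of the general idea with hypothetical class and parameter names; it is not the actual MCEP query interface described in the thesis.

```python
from collections import deque
from dataclasses import dataclass
from math import hypot
import time

@dataclass
class SensorEvent:
    x: float
    y: float
    value: float
    timestamp: float

class LocationAwareWindow:
    """Toy location-aware CEP operator: retain only events that are
    both recent (temporal scope) and near the mobile object (spatial
    scope).  Illustrative sketch, not the MCEP system's interface."""

    def __init__(self, radius, max_age):
        self.radius = radius    # spatial scope around the mobile object
        self.max_age = max_age  # temporal scope in seconds
        self.events = deque()

    def insert(self, event):
        self.events.append(event)

    def matches(self, obj_x, obj_y, now=None):
        now = time.time() if now is None else now
        # Evict events that fell out of the temporal window.
        while self.events and now - self.events[0].timestamp > self.max_age:
            self.events.popleft()
        # Keep only events inside the spatial scope of the mobile object.
        return [e for e in self.events
                if hypot(e.x - obj_x, e.y - obj_y) <= self.radius]

# Usage: one event 500 m window, 60 s history, object currently at the origin.
window = LocationAwareWindow(radius=500.0, max_age=60.0)
window.insert(SensorEvent(x=10.0, y=20.0, value=3.2, timestamp=time.time()))
print(window.matches(obj_x=0.0, obj_y=0.0))
```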

    Task-based Runtime Optimizations Towards High Performance Computing Applications

    The last decades have witnessed a rapid improvement of computational capabilities in high-performance computing (HPC) platforms thanks to hardware technology scaling. HPC architectures benefit from mainstream advances in hardware with many-core systems, deep hierarchical memory subsystems, non-uniform memory access, and an ever-increasing gap between computational power and memory bandwidth. This has necessitated continuous adaptations across the software stack to maintain high hardware utilization. In this HPC landscape of potentially million-way parallelism, task-based programming models associated with dynamic runtime systems are becoming more popular, which fosters developers’ productivity at extreme scale by abstracting the underlying hardware complexity. In this context, this dissertation highlights how a software bundle powered by a task-based programming model can address the heterogeneous workloads engendered by HPC applications, i.e., data redistribution, geostatistical modeling, and 3D unstructured mesh deformation. Data redistribution reshuffles data to optimize some objective for an algorithm; the objective can be multi-dimensional, such as improving computational load balance or decreasing communication volume or cost, with the ultimate goal of increasing efficiency and therefore reducing the time-to-solution for the algorithm. Geostatistical modeling, one of the prime motivating applications for exascale computing, is a technique for predicting desired quantities from geographically distributed data, based on statistical models and optimization of parameters. Meshing the deformable contour of moving 3D bodies is an expensive operation that can cause huge computational challenges in fluid-structure interaction (FSI) applications. Therefore, in this dissertation, Redistribute-PaRSEC, ExaGeoStat-PaRSEC, and HiCMA-PaRSEC are proposed to efficiently tackle these HPC applications respectively at extreme scale, and they are evaluated on multiple HPC clusters, including AMD-based, Intel-based, and Arm-based CPU systems and an IBM-based multi-GPU system. This multidisciplinary work emphasizes the need for runtime systems to go beyond their primary responsibility of task scheduling on massively parallel hardware systems in order to service the next-generation scientific applications.
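
    The task-based model the dissertation builds on expresses work as tasks connected by data dependencies and leaves scheduling decisions to a runtime system. The toy Python snippet below conveys only that idea using a thread-pool executor; it is plain Python for illustration and is not PaRSEC code or part of any of the proposed software bundles.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of the task-based idea: work is split into tasks
# over data tiles, independent tasks run concurrently, and a dependent
# task waits for its inputs.  Not PaRSEC; illustrative only.
def tile_scale(tile, alpha):
    return [alpha * x for x in tile]

def tile_add(a, b):
    return [x + y for x, y in zip(a, b)]

tiles = {"A": [1.0, 2.0], "B": [3.0, 4.0]}

with ThreadPoolExecutor() as pool:
    # Two independent tasks may execute in parallel.
    fa = pool.submit(tile_scale, tiles["A"], 2.0)
    fb = pool.submit(tile_scale, tiles["B"], 0.5)
    # The final task depends on both results.
    fc = pool.submit(tile_add, fa.result(), fb.result())
    print(fc.result())
```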

    A New Paradigm for Proactive Self-Healing in Future Self-Organizing Mobile Cellular Networks

    Mobile cellular network operators spend nearly a quarter of their revenue on network management and maintenance. Remarkably, a significant proportion of that budget is spent on resolving outages that degrade or disrupt cellular services. Historically, operators have mainly relied on human expertise to identify, diagnose and resolve such outages while also compensating for them in the short term. However, with ambitious quality of experience expectations from 5th generation and beyond mobile cellular networks spurring research towards technologies such as ultra-dense heterogeneous networks and millimeter wave spectrum utilization, discovering and compensating coverage lapses in future networks will be a major challenge. Numerous studies have explored heuristic, analytical and machine learning-based solutions to autonomously detect, diagnose and compensate cell outages in legacy mobile cellular networks, a branch of research known as self-healing. This dissertation focuses on self-healing techniques for future mobile cellular networks, with a special focus on the outage detection and avoidance components of self-healing. Network outages can be classified into two primary types: 1) full and 2) partial. Full outages result from failed soft or hard components of network entities, while partial outages are generally a consequence of parametric misconfiguration. To this end, chapter 2 of this dissertation is dedicated to a detailed survey of research on detecting, diagnosing and compensating full outages, as well as an analysis of studies on proactive outage avoidance schemes and their challenges. A key observation from the analysis of state-of-the-art outage detection techniques is their dependence on full network coverage data, their susceptibility to noise or randomness in the data, and their inability to characterize outages in both the spatial and temporal domains. To overcome these limitations, chapters 3 and 4 present two novel outage detection techniques. Chapter 3 presents an outage detection technique based on entropy field decomposition, which combines information field theory and entropy spectrum pathways theory and is robust to noise variance. Chapter 4 presents a deep learning neural network algorithm which is robust to data sparsity and compares it with entropy field decomposition and other state-of-the-art machine learning-based outage detection algorithms, including support vector machines, K-means clustering, independent component analysis and deep auto-encoders. Based on the insights obtained regarding the impact of partial outages, chapter 5 presents a complete framework for 5th generation and beyond mobile cellular networks that is designed to avoid partial outages caused by parametric misconfiguration. The power of the proposed framework is demonstrated by leveraging it to design a solution that tackles one of the most common problems associated with ultra-dense heterogeneous networks, namely imbalanced load among small and macro cells and, as a consequence, poor resource utilization. The optimization problem is formulated as a function of two hard parameters, namely antenna tilt and transmit power, and a soft parameter, cell individual offset, all of which directly affect coverage, capacity and load. The resulting solution is a combination of the otherwise conflicting coverage and capacity optimization and load balancing self-organizing network functions.
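
    To make the outage detection setting concrete, the sketch below applies a K-means baseline of the kind the dissertation compares against: cluster per-cell measurement reports and flag the cluster with the worst signal quality as a possible coverage outage. The chosen features, synthetic data, and two-cluster setup are illustrative assumptions, not settings or results from the dissertation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic measurement reports: columns are RSRP (dBm) and SINR (dB).
rng = np.random.default_rng(1)
healthy = rng.normal(loc=[-85.0, 15.0], scale=[4.0, 2.0], size=(200, 2))
degraded = rng.normal(loc=[-110.0, 2.0], scale=[4.0, 2.0], size=(20, 2))
reports = np.vstack([healthy, degraded])

# Cluster the reports and flag the cluster with the lowest mean RSRP
# as a potential outage region (baseline heuristic, illustrative only).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reports)
outage_cluster = np.argmin([reports[labels == k, 0].mean() for k in (0, 1)])
suspect_reports = reports[labels == outage_cluster]
print(f"{len(suspect_reports)} reports flagged as a possible coverage outage")
```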

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid making frequent changes in cluster architecture due to repeated election and re-election of cluster heads and gateways. Our primary objective is to make passive clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It outlines the most prominent reliability concerns from today’s point of view and briefly recapitulates the progress made by the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book addresses reliability challenges across different levels, starting from the physical level all the way up to the system level (cross-layer approaches). The book aims at demonstrating how new hardware/software co-design solutions can be proposed to effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, soft errors, etc. Provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can leverage reliability through techniques that are proactively designed with respect to techniques at other layers; explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Geothermal Energy: Delivering on the Global Potential

    After decades of being largely the preserve of countries in volcanic regions, the use of geothermal energy—for both heat and power applications—is now expanding worldwide. This reflects its excellent low-carbon credentials and its ability to offer baseload and dispatchable output, rare amongst the mainstream renewables. Yet uptake of geothermal still lags behind that of solar and wind, principally because of (i) uncertainties over resource availability in poorly-explored reservoirs and (ii) the concentration of full-lifetime costs into early-stage capital expenditure (capex). Recent advances in reservoir characterization techniques are beginning to narrow the bounds of exploration uncertainty, both by improving estimates of reservoir geometry and properties, and by providing pre-drilling estimates of temperature at depth. Advances in drilling technologies and management have the potential to significantly lower initial capex, while operating expenditure is being further reduced by more effective reservoir management — supported by robust mathematical models — and increasingly efficient energy conversion systems (flash, binary and combined-heat-and-power). Advances in characterization and modelling are also improving management of shallow low-enthalpy resources that can only be exploited using heat-pump technology. Taken together with increased public appreciation of the benefits of geothermal, the technology is finally ready to take its place as a mainstream renewable technology. This book draws together some of the latest developments in concepts and technology that are enabling the growing realisation of the global potential of geothermal energy in all its manifestations.

    Attribute-Level Versioning: A Relational Mechanism for Version Storage and Retrieval

    Data analysts today have at their disposal a seemingly endless supply of data repositories and, hence, datasets from which to draw. New datasets become available daily, making the choice of which dataset to use difficult. Furthermore, traditional data analysis has been conducted using structured data repositories such as relational database management systems (RDBMS). These systems, by their nature and design, prohibit duplication in indexed collections, forcing analysts to choose one value for each of the available attributes for an item in the collection. Often analysts discover two or more datasets with information about the same entity. When combining this data and transforming it into a form that is usable in an RDBMS, analysts are forced to deconflict the collisions and choose a single value for each duplicated attribute containing differing values. This deconfliction is the source of a considerable amount of guesswork and speculation on the part of the analyst in the absence of professional intuition. One must consider what is lost by discarding those alternative values. Are there relationships between the conflicting datasets that have meaning? Is each dataset presenting a different and valid view of the entity, or are the alternate values erroneous? If so, which values are erroneous? Is there historical significance to the variances? The analysis of modern datasets requires specialized algorithms and storage and retrieval mechanisms to identify, deconflict, and assimilate variances of attributes for each entity encountered. These variances, or versions of attribute values, contribute meaning to the evolution and analysis of the entity and its relationship to other entities. A new, distinct storage and retrieval mechanism will enable analysts to efficiently store, analyze, and retrieve attribute versions without unnecessary complexity or additional alterations of the original or derived dataset schemas. This paper presents technologies and innovations that assist data analysts in discovering meaning within their data while preserving all of the original data for every entity in the RDBMS.
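
    A minimal sketch of the attribute-level versioning idea, under assumed table and column names: rather than deconflicting to a single value, every (entity, attribute) pair retains all values it has been assigned, each tagged with a version number and its source dataset. The schema and data below are hypothetical and are not the paper's actual design.

```python
import sqlite3

# Hypothetical attribute-version schema: one row per (entity, attribute,
# version) instead of one value per attribute.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entity (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE attribute_version (
        entity_id INTEGER REFERENCES entity(id),
        attribute TEXT,
        value     TEXT,
        version   INTEGER,
        source    TEXT,
        PRIMARY KEY (entity_id, attribute, version)
    );
""")
conn.execute("INSERT INTO entity VALUES (1, 'entity_001')")
# Two datasets disagree about the same attribute; both values are kept.
conn.executemany(
    "INSERT INTO attribute_version VALUES (1, 'country', ?, ?, ?)",
    [("Panama", 1, "dataset_A"), ("Liberia", 2, "dataset_B")],
)
# Retrieve the full version history instead of a single deconflicted value.
rows = conn.execute(
    "SELECT value, version, source FROM attribute_version "
    "WHERE entity_id = 1 AND attribute = 'country' ORDER BY version"
).fetchall()
print(rows)
```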