
    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Advancing Carbon Sequestration through Smart Proxy Modeling: Leveraging Domain Expertise and Machine Learning for Efficient Reservoir Simulation

    Geological carbon sequestration (GCS) offers a promising solution for managing excess carbon and mitigating the impact of climate change. This doctoral research introduces a cutting-edge Smart Proxy Modeling-based framework, integrating artificial neural networks (ANNs) and domain expertise, to re-engineer and empower numerical reservoir simulation for efficient modeling of CO2 sequestration and to demonstrate the predictive conformance and replicative capabilities of smart proxy modeling. Creating well-performing proxy models requires extensive human intervention and trial-and-error processes. Additionally, a large training database is essential for an ANN model of complex tasks such as deep saline aquifer CO2 sequestration, since it supplies the neural network's input and output data. One major limitation in CCS programs is the lack of real field data, owing to the scarcity of field applications and to confidentiality issues. Considering these drawbacks, and given the high-dimensional nonlinearity, heterogeneity, and coupling of multiple physical processes associated with numerical reservoir simulation, novel research is needed to handle these complexities by creating plausible CO2 sequestration scenarios that can serve as a training set. This study develops several realistic, practical, field-based data augmentation techniques, both static and dynamic, spanning spatial complexity, spatio-temporal complexity, and heterogeneity of reservoir characteristics. By incorporating domain-expertise-based feature generation, the framework honors a precise representation of the reservoir while overcoming the computational challenges associated with numerical reservoir tools. The developed ANN accurately replicated fluid flow behavior, yielding significant computational savings compared to traditional numerical simulation models. The results showed that all the ML models achieved very good accuracy and high efficiency. The findings revealed that the quality of the path between the focal cell and the injection wells was the most important factor in both the CO2 saturation and the pressure estimation models. These insights contribute significantly to our understanding of CO2 plume monitoring, paving the way for investigating reservoir behavior at minimal computational cost. The study's commitment to replicating numerical reservoir simulation results underscores the model's potential to provide valuable insights into the behavior and performance of CO2 sequestration systems, as a complementary tool to numerical reservoir simulation when no measured field data are available. The transformative nature of this research has broad implications for advancing carbon storage modeling technologies. By addressing the computational limitations of traditional numerical reservoir models and harnessing the synergy between machine learning and domain expertise, this work provides a practical workflow for efficient decision-making in sequestration projects.
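    To make the proxy-modeling idea above concrete, here is a minimal sketch (not the thesis's actual framework) of training an ANN surrogate on feature/response pairs of the kind a reservoir simulator would produce. The feature set, the synthetic data, and the network size are all illustrative assumptions.

```python
# Hedged sketch of a smart-proxy workflow: fit an ANN to (features -> response)
# samples extracted from reservoir simulation runs, then use it as a fast
# surrogate. Features here are hypothetical stand-ins for the thesis's
# domain-expertise-based features (e.g., the path between a focal cell and
# the injection well); the data is synthetic, not from any simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 5000  # one row per (cell, timestep) sample
X = np.column_stack([
    rng.uniform(50, 2000, n),    # distance from focal cell to injector [m]
    rng.uniform(0.05, 0.35, n),  # porosity [-]
    rng.uniform(1, 500, n),      # permeability [mD]
    rng.uniform(0, 10, n),       # elapsed injection time [yr]
])
# Hypothetical response: CO2 saturation decays with distance, grows with time.
y = np.exp(-X[:, 0] / 500) * (1 - np.exp(-X[:, 3] / 3)) + 0.02 * rng.standard_normal(n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
proxy.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out samples:", proxy.score(scaler.transform(X_test), y_test))
```

    Once trained, such a surrogate answers per-cell saturation or pressure queries in microseconds, which is what enables the large computational savings the abstract reports.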

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document the environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, inscribed on the World Heritage List since 1997; the area was affected by a flood on 25 October 2011. Following other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Integrating multiple clusters for compute-intensive applications

    Multicluster grids provide one promising solution for satisfying the growing computational demands of compute-intensive applications. However, it is challenging to seamlessly integrate all the participating clusters in different domains into a single virtual computational platform. To fully utilize the capabilities of multicluster grids, computer scientists must join the participating autonomic systems together practically and efficiently to execute grid-enabled applications. Driven by several compute-intensive applications, this thesis develops a multicluster grid management toolkit called Pelecanus to bridge the gap between users' needs and the system's heterogeneity. Application scientists are thereby able to conduct very large-scale executions across multiple clusters with transparent QoS assurance. A novel model called DA-TC (Dynamic Assignment with Task Containers) is developed and integrated into Pelecanus. This model uses the concept of a task container, which decouples resource allocation from resource binding. It employs static load balancing for task container distribution and dynamic load balancing for task assignment; in this manner, the slowest resources become useful rather than bottlenecks. A cluster abstraction is implemented that not only provides cluster information to the DA-TC execution model but can also be used as a standalone toolkit to monitor and evaluate a cluster's functionality and performance. The performance of the proposed DA-TC model is evaluated both theoretically and experimentally. Results demonstrate the importance of reducing queuing time in decreasing an application's total turnaround time. Experiments conducted to probe various aspects of the DA-TC model showed that it can significantly reduce turnaround time and increase resource utilization for the targeted application scenarios. Four applications are implemented as case studies to determine the applicability of the DA-TC model; in each case the turnaround time is greatly reduced, demonstrating that the model is effective in assisting application scientists in conducting their research. In addition, virtual resources were integrated into the DA-TC model for application execution. Experiments show that the proposed execution model can work seamlessly with multiple hybrid grid/cloud resources to achieve reduced turnaround time.
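    The abstract's description of DA-TC suggests a simple mechanism worth sketching: containers are placed statically, while tasks are bound to them dynamically. Below is a hedged, minimal illustration of that pull-based pattern; the cluster names, speeds, and task counts are invented for the example and do not come from the thesis.

```python
# Sketch of the DA-TC idea: task *containers* are distributed statically
# across clusters (static load balancing), while tasks are bound to
# containers dynamically from a shared queue (dynamic load balancing).
# Each container pulls its next task when free, so slower resources simply
# complete fewer tasks instead of stalling the whole run.
import queue
import threading
import time

tasks = queue.Queue()
for i in range(40):
    tasks.put(i)

results = []
lock = threading.Lock()

def container(name: str, speed: float) -> None:
    """One task container bound to a cluster slot; pulls tasks until none remain."""
    while True:
        try:
            t = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01 / speed)  # stand-in for task execution
        with lock:
            results.append((name, t))

# Static distribution: two containers on a fast cluster, two on a slow one
# (hypothetical speeds).
workers = [threading.Thread(target=container, args=(f"fast-{i}", 4.0)) for i in range(2)]
workers += [threading.Thread(target=container, args=(f"slow-{i}", 1.0)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()

done_by = {}
for name, _ in results:
    done_by[name] = done_by.get(name, 0) + 1
print(done_by)  # fast containers complete proportionally more tasks
```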

    AQUAGRID: an extensible platform for collaborative problem solving in groundwater protection

    AQUAGRID is the subsurface hydrology computational service of the Sardinian GRIDA3 infrastructure, designed to deliver complex environmental applications via a user-friendly Web portal. The service aims to provide water professionals with integrated modeling tools to solve water resources management problems and to aid decision making for contaminated soil and groundwater. In this paper, the AQUAGRID application concept and enabling technologies are illustrated. At the heart of the service are computational models that simulate complex, large-scale groundwater flow and contaminant transport problems as well as geochemical speciation. AQUAGRID is built on top of compute-Grid technologies by means of the EnginFrame Grid framework; distributed data management is provided by the Storage Resource Broker data-Grid middleware. The resulting environment allows end-users to perform groundwater simulations and to visualize and interact with their results using graphs, 3D images, and annotated maps. The problem-solving capability of the platform is demonstrated with the results of two deployed case studies.
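    As a rough illustration of the kind of groundwater-flow computation such a service wraps (not AQUAGRID's actual solver), the sketch below solves steady-state 2D flow in a homogeneous confined aquifer, which reduces to Laplace's equation for hydraulic head. Grid size, boundary heads, and conductivity are illustrative assumptions.

```python
# Hedged sketch: steady-state 2D groundwater flow via Jacobi iteration on
# Laplace's equation for hydraulic head. Illustrative 50x50 grid with unit
# spacing; prescribed heads left/right, no-flow boundaries top/bottom.
import numpy as np

nx, ny = 50, 50
h = np.zeros((ny, nx))
h[:, 0] = 10.0   # prescribed head on the left boundary [m]
h[:, -1] = 2.0   # prescribed head on the right boundary [m]

for _ in range(20000):
    h_new = h.copy()
    h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                + h[1:-1, :-2] + h[1:-1, 2:])
    h_new[0, 1:-1] = h_new[1, 1:-1]    # no-flow (Neumann) top boundary
    h_new[-1, 1:-1] = h_new[-2, 1:-1]  # no-flow bottom boundary
    if np.max(np.abs(h_new - h)) < 1e-6:
        h = h_new
        break
    h = h_new

K = 1e-4  # assumed hydraulic conductivity [m/s]
r, c = ny // 2, nx // 2
dhdx = (h[r, c + 1] - h[r, c - 1]) / 2.0  # central difference, unit spacing
print("head at domain centre [m]:", round(h[r, c], 3))
print("Darcy flux q = -K dh/dx at centre [m/s]:", -K * dhdx)
```

    A production service like the one described would run far larger, heterogeneous versions of such models on Grid resources and hand the results back to the portal for visualization.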

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack dynamic resource provisioning, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
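    Since the abstract defines MTC applications as graphs of discrete tasks whose edges are explicit input/output dependencies, a small dependency-driven dispatcher makes that structure concrete. The sketch below uses Kahn-style topological release; the four-task graph is hypothetical and far smaller than a real MTC workload.

```python
# Hedged sketch of MTC-style dispatch: release a task only once all of its
# input dependencies have completed. A real dispatcher would hand each
# released task to a worker; here we just record a valid execution order.
from collections import deque

# task -> list of tasks it depends on (hypothetical example graph)
deps = {
    "split": [],
    "simulate_a": ["split"],
    "simulate_b": ["split"],
    "merge": ["simulate_a", "simulate_b"],
}

# Build reverse edges and in-degree counts for Kahn-style release.
dependents = {t: [] for t in deps}
indegree = {t: len(d) for t, d in deps.items()}
for t, parents in deps.items():
    for parent in parents:
        dependents[parent].append(t)

ready = deque(t for t, n in indegree.items() if n == 0)
order = []
while ready:
    t = ready.popleft()
    order.append(t)  # in a real system: dispatch t to a worker here
    for child in dependents[t]:
        indegree[child] -= 1
        if indegree[child] == 0:
            ready.append(child)

print(order)  # e.g. ['split', 'simulate_a', 'simulate_b', 'merge']
```

    With very short tasks, the cost of this release-and-dispatch loop itself becomes significant, which is exactly the dispatch-overhead concern the report raises.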

    A novel learning automata game with local feedback for parallel optimization of hydropower production

    Master's thesis, Information and Communication Technology (IKT590), University of Agder, 2017.
    Hydropower optimization for multi-reservoir systems is classified as a combinatorial optimization problem with a large state space that is particularly difficult to solve. There exists no gold standard for solving such problems, and many proposed algorithms are domain specific. The literature describes several different techniques; linear programming approaches are extensively discussed but tend to succumb to the curse of dimensionality as the state vector dimensions increase. This thesis introduces LA LCS, a novel learning automata algorithm that utilizes a parallel form of local feedback. This enables each individual automaton to receive direct feedback, resulting in faster convergence. In addition, the algorithm is implemented using a parallel architecture on a CUDA-enabled GPU, alongside exhaustive and random search. LA LCS has been verified through several scenarios. Experiments show that the algorithm is able to quickly adapt and find optimal production strategies for problems of variable complexity. The algorithm is empirically verified and shown to hold great promise for solving optimization problems, including hydropower production strategies.
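    The abstract does not specify LA LCS itself, so the sketch below shows only the standard building block it presumably extends: a single learning automaton with a linear reward-inaction (L_R-I) update, choosing among discrete release levels for one reservoir. The action set, learning rate, and reward function are invented for illustration.

```python
# Hedged sketch of one learning automaton with an L_R-I update: on reward,
# probability mass moves toward the chosen action; on penalty, probabilities
# are left unchanged. The "local feedback" here is a made-up stand-in for a
# per-automaton reward signal of the kind the thesis describes.
import random

actions = [0.0, 0.25, 0.5, 0.75, 1.0]    # candidate turbine release fractions
p = [1.0 / len(actions)] * len(actions)  # action probabilities
lr = 0.05                                # reward learning rate

def local_feedback(release: float) -> bool:
    """Hypothetical local reward: production peaks near a 0.75 release."""
    return random.random() < 1.0 - abs(release - 0.75)

for _ in range(5000):
    i = random.choices(range(len(actions)), weights=p)[0]
    if local_feedback(actions[i]):
        # Reward: p_i += lr * (1 - p_i); all other p_j scaled by (1 - lr).
        p = [pj + lr * (1.0 - pj) if j == i else pj * (1.0 - lr)
             for j, pj in enumerate(p)]
    # Penalty: inaction, probabilities unchanged (the "I" in L_R-I).

best = max(range(len(actions)), key=lambda j: p[j])
print("converged action:", actions[best], "probability:", round(p[best], 3))
```

    A multi-reservoir game would run one such automaton per decision variable in parallel (the thesis does this on a CUDA GPU), with each automaton updated from its own local feedback rather than a single global reward.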