Research and Education in Computational Science and Engineering
Over the past two decades the field of computational science and engineering
(CSE) has penetrated both basic and applied research in academia, industry, and
laboratories to advance discovery, optimize systems, support decision-makers,
and educate the scientific and engineering workforce. Informed by centuries of
theory and experiment, CSE performs computational experiments to answer
questions that neither theory nor experiment alone is equipped to answer. CSE
provides scientists and engineers of all persuasions with algorithmic
inventions and software systems that transcend disciplines and scales. Carried
on a wave of digital technology, CSE brings the power of parallelism to bear on
troves of data. Mathematics-based advanced computing has become a prevalent
means of discovery and innovation in essentially all areas of science,
engineering, technology, and society; and the CSE community is at the core of
this transformation. However, a combination of disruptive
developments---including the architectural complexity of extreme-scale
computing, the data revolution that engulfs the planet, and the specialization
required to follow the applications to new frontiers---is redefining the scope
and reach of the CSE endeavor. This report describes the rapid expansion of CSE
and the challenges to sustaining its bold advances. The report also presents
strategies and directions for CSE research and education for the next decade.
Comment: Major revision, to appear in SIAM Review
Continuous maintenance and the future – Foundations and technological challenges
High-value, long-life products require continuous maintenance throughout their life cycle to achieve the required performance at optimum through-life cost. This paper presents the foundations and technologies required to offer such a maintenance service. Component- and system-level degradation science, assessment, and modelling, along with life-cycle ‘big data’ analytics, are the two most important knowledge and skill bases required for continuous maintenance. Advanced computing and visualisation technologies will improve the efficiency of maintenance and reduce the through-life cost of the product. The future of continuous maintenance within the Industry 4.0 context also involves the role of IoT, standards, and cyber security.
Accelerating Manufacturing Decisions using Bayesian Optimization: An Optimization and Prediction Perspective
Manufacturing is a promising technique for producing complex and custom-made parts with a high degree of precision. It can also provide us with desired materials and products with specified properties. To achieve that, it is crucial to find the optimum process parameters that have a significant impact on the properties and quality of the final product. Unfortunately, optimizing these parameters can be challenging due to the complex and nonlinear nature of the underlying process, which becomes more complicated when there are multiple, sometimes conflicting, objectives. Furthermore, experiments are usually costly and time-consuming, and require expensive materials, labor, and machine hours. So each experiment is valuable, and it's critical to determine the optimal experiment location to gain the most comprehensive understanding of the process. Sequential learning is a promising approach to actively learn from ongoing experiments, iteratively update the underlying optimization routine, and adapt the data collection process on the go. This thesis presents a multi-objective Bayesian optimization framework to find the optimum processing conditions for a manufacturing setup. It uses an acquisition function to collect data points sequentially and iteratively updates its understanding of the underlying design space using a Gaussian Process-based surrogate model.
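The loop described above can be sketched in a minimal single-objective form. This is an illustrative assumption-laden sketch, not the thesis's implementation: the 1-D toy objective, the expected-improvement acquisition, the grid of candidates, and all function names are invented here, and the thesis's actual framework is multi-objective.

```python
import math
import random

def rbf(a, b, ls=0.3):
    # Squared-exponential (RBF) kernel on scalars; ls is an assumed length scale.
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting, adequate for small GP systems.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, xq, noise=1e-6):
    # Zero-mean GP posterior mean and variance at query xq given data (X, y).
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    k = [rbf(a, xq) for a in X]
    alpha = solve(K, y)                       # K^{-1} y
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)                           # K^{-1} k
    var = max(1e-12, rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)))
    return mean, var

def expected_improvement(mean, var, best):
    # EI acquisition for minimization under a Gaussian posterior.
    s = math.sqrt(var)
    z = (best - mean) / s
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return (best - mean) * cdf + s * pdf

def bayes_opt(f, n_init=3, n_iter=10, seed=0):
    # Sequential loop: fit surrogate, maximize acquisition, run "experiment".
    rng = random.Random(seed)
    X = [rng.random() for _ in range(n_init)]
    y = [f(x) for x in X]
    grid = [i / 200 for i in range(201)]      # candidate process settings
    for _ in range(n_iter):
        best = min(y)
        xq = max(grid, key=lambda x: expected_improvement(*gp_posterior(X, y, x), best))
        X.append(xq)
        y.append(f(xq))
    return X[y.index(min(y))], min(y)

# Toy "process": a quality loss whose optimum setting is x = 0.7.
x_best, y_best = bayes_opt(lambda x: (x - 0.7) ** 2)
```

Each iteration spends its one "experiment" where the acquisition function says the expected gain is highest, which is the sequential-learning behavior the abstract describes.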
In manufacturing processes, the focus is often on obtaining a rough understanding of the design space using minimal experimentation, rather than on finding the optimal parameters. This falls under the category of approximating the underlying function rather than design optimization. This approach can provide material scientists or manufacturing engineers with a comprehensive view of the entire design space, increasing the likelihood of making discoveries or robust decisions. However, a precise and reliable prediction model is necessary for a good approximation. To meet this requirement, this thesis proposes an epsilon-greedy sequential prediction framework that is distinct from the optimization framework. The data acquisition strategy has been refined to balance exploration and exploitation, and a threshold has been established to determine when to switch between the two. The performance of the proposed optimization and prediction frameworks is evaluated on real-life datasets against traditional design of experiments. The proposed frameworks have generated effective optimization and prediction results using fewer experiments.
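The epsilon-greedy selection rule can be sketched as follows. This is a hedged illustration: the candidate names, the surrogate predictions, and the fixed threshold `eps` are hypothetical, and the thesis's actual switching threshold and refinements are not reproduced here.

```python
import random

def epsilon_greedy_pick(candidates, mean, std, eps, rng):
    """Pick the next experiment: with probability eps, explore the candidate
    with the highest predictive uncertainty; otherwise exploit the candidate
    with the lowest predicted loss (minimization)."""
    if rng.random() < eps:
        return max(candidates, key=lambda c: std[c])   # exploration
    return min(candidates, key=lambda c: mean[c])      # exploitation

# Hypothetical surrogate predictions over five candidate settings
# (predicted loss and predictive standard deviation).
mean = {"a": 0.90, "b": 0.40, "c": 0.70, "d": 0.55, "e": 0.60}
std  = {"a": 0.05, "b": 0.10, "c": 0.30, "d": 0.02, "e": 0.15}

rng = random.Random(1)
picks = [epsilon_greedy_pick(list(mean), mean, std, eps=0.3, rng=rng)
         for _ in range(5)]
```

With these numbers, exploitation always selects "b" (lowest predicted loss) and exploration always selects "c" (highest uncertainty), so the sequence of picks alternates between the two depending on the random draw against `eps`.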
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.
ytopt: Autotuning Scientific Applications for Energy Efficiency at Large Scales
As we enter the exascale computing era, efficiently utilizing power and
optimizing the performance of scientific applications under power and energy
constraints has become critical and challenging. We propose a low-overhead
autotuning framework to autotune performance and energy for various hybrid
MPI/OpenMP scientific applications at large scales and to explore the tradeoffs
between application runtime and power/energy for energy efficient application
execution, then use this framework to autotune four ECP proxy applications --
XSBench, AMG, SWFFT, and SW4lite. Our approach uses Bayesian optimization with
a Random Forest surrogate model to effectively search parameter spaces with up
to 6 million different configurations on two large-scale production systems,
Theta at Argonne National Laboratory and Summit at Oak Ridge National
Laboratory. The experimental results show that our autotuning framework at
large scales has low overhead and achieves good scalability. Using the proposed
autotuning framework to identify the best configurations, we achieve up to
91.59% performance improvement, up to 21.2% energy savings, and up to 37.84%
EDP improvement on up to 4,096 nodes.
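The EDP (energy-delay product) metric reported above is conventionally the product of energy consumed and runtime, so an autotuned configuration improves EDP by lowering either factor. The sketch below uses that standard definition; the numbers are hypothetical and are not taken from the paper's experiments.

```python
def edp(energy_joules, runtime_s):
    # Energy-delay product: the standard metric trading off energy vs. runtime.
    return energy_joules * runtime_s

def improvement(base, tuned):
    # Relative improvement of a tuned value over a baseline, as a percentage.
    return 100.0 * (base - tuned) / base

# Hypothetical baseline vs. autotuned configuration (not the paper's data).
base_e, base_t = 1000.0, 10.0    # baseline: 1000 J over 10 s
tuned_e, tuned_t = 850.0, 8.0    # tuned:    850 J over 8 s

print(improvement(edp(base_e, base_t), edp(tuned_e, tuned_t)))  # 32.0
```

Because EDP multiplies the two factors, a 15% energy saving combined with a 20% runtime reduction compounds into a 32% EDP improvement, which is why reported EDP gains can exceed either individual gain.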