24 research outputs found

    Supervised and Penalized Baseline Correction

    Spectroscopic measurements can show distorted spectral shapes arising from a mixture of absorbing and scattering contributions. These distortions (or baselines) often manifest as non-constant offsets or low-frequency oscillations and can adversely affect analytical and quantitative results. Baseline correction is an umbrella term for pre-processing methods that estimate the baseline spectra (the unwanted distortions) and then remove them by differencing. However, current state-of-the-art baseline correction methods do not utilize analyte concentrations even when they are available and contribute significantly to the observed spectral variability. We examine a class of state-of-the-art methods (penalized baseline correction) and modify them to accommodate a priori analyte concentrations so that prediction can be enhanced. Performance is assessed on two near-infrared data sets, comparing classical penalized baseline correction methods (without analyte information) with the modified penalized baseline correction methods (leveraging analyte information).
    Comment: 27 pages; 9 figures; 2 tables
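
    The penalized methods referred to above are commonly exemplified by asymmetric least squares (AsLS) baseline estimation with a Whittaker-style second-difference penalty. The sketch below shows that classical, analyte-free variant only; it is not the supervised modification proposed in the paper, and the parameter values (lam, p, n_iter) are illustrative defaults.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Classical penalized baseline estimate via asymmetric least squares.

    Minimises sum_i w_i (y_i - z_i)^2 + lam * sum_i (Delta^2 z_i)^2, where the
    weights w_i are iteratively set so that points above the current baseline
    (likely analyte peaks) count less than points below it.
    """
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # 2nd-difference operator
    penalty = lam * (D.T @ D)
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + penalty).tocsc(), w * y)      # penalized weighted least squares
        w = np.where(y > z, p, 1.0 - p)                # asymmetric reweighting
    return z

# Baseline correction by differencing: corrected_spectrum = spectrum - asls_baseline(spectrum)
```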

    Computational homogenisation of phase-field fracture

    In this manuscript, the computational homogenisation of phase-field fracture is addressed. To this end, a variationally consistent two-scale phase-field fracture framework is developed, which formulates the coupled momentum balance and phase-field evolution equations at the macro-scale as well as at the Representative Volume Element (RVE) scale. The phase-field variable represents fractures at the RVE scale; at the macro-scale, however, it is treated as an auxiliary variable. The latter interpretation follows from homogenising the phase-field through a volume or a surface average. For either homogenisation choice, the set of macro-scale and sub-scale equations, together with the boundary conditions satisfying macro-homogeneity, is established. As a special case, the concept of selective homogenisation is introduced, where the phase-field is chosen to live only in the RVE domain, thereby eliminating the macro-scale phase-field evolution equation. Numerical experiments demonstrate that the selective-homogenisation-based two-scale phase-field fracture model yields a local macro-scale material behaviour, while its non-selective counterpart yields a non-local macro-scale material behaviour.
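
    In schematic form (my notation, not necessarily the manuscript's), the two homogenisation choices for the macro-scale auxiliary phase-field mentioned above are a volume average over the RVE domain or a surface average over its boundary:

```latex
% Volume-average homogenisation of the RVE phase-field \varphi
\bar{\varphi} = \frac{1}{|\Omega_\Box|} \int_{\Omega_\Box} \varphi \,\mathrm{d}\Omega
% Surface-average homogenisation over the RVE boundary
\bar{\varphi} = \frac{1}{|\Gamma_\Box|} \int_{\Gamma_\Box} \varphi \,\mathrm{d}\Gamma
```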

    Computational homogenisation and solution strategies for phase-field fracture

    The computational modelling of fracture not only provides deep insight into the underlying mechanisms that trigger a fracture but also offers information on the post-fracture behaviour (e.g., residual strength) of engineering materials and structures. In this context, the phase-field model for fracture is a popular approach, due to its ability to operate on fixed meshes without the need for explicit tracking of the fracture path, and its straightforward handling of complex fracture topology. Nevertheless, the model does have its set of computational challenges: non-convexity of the energy functional, a variational inequality due to fracture irreversibility, and the need for extremely fine meshes to resolve the fracture zone. In the first part of this thesis, two novel methods are proposed to tackle fracture irreversibility: (i) a micromorphic approach that results in local irreversible evolution of the phase-field, and (ii) a slack variable approach that replaces the fracture irreversibility inequality constraint with an equivalent equality constraint. Benchmark problems are solved using a monolithic Newton-Raphson solution technique to demonstrate the efficiency of both methods. The second aspect addressed in this thesis concerns multi-scale problems. In such problems, features such as micro-cracks are several orders of magnitude smaller than the structure itself, and resolving them may result in a prohibitively expensive computation. In order to address this issue, a computational homogenisation framework for phase-field fracture is developed. The framework allows the computation of macro (engineering)-scale quantities using different homogenising (averaging) approaches over a microstructure. It is demonstrated that, depending on the choice of homogenisation approach, local or non-local macro-scale material behaviour is obtained.
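
    As an illustration of the slack-variable idea (a generic sketch; the exact formulation used in the thesis may differ), the incremental irreversibility constraint on the phase-field can be recast as an equality constraint via a slack variable s, which makes it amenable to a monolithic Newton-Raphson solve:

```latex
% Fracture irreversibility over a load increment n -> n+1 as an inequality
\varphi_{n+1} - \varphi_{n} \ge 0
% Equivalent equality constraint obtained with a slack variable s
\varphi_{n+1} - \varphi_{n} - s^{2} = 0
```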

    Adaptive control in rollforward recovery for extreme scale multigrid

    With the increasing number of compute components, failures in future exascale computer systems are expected to become more frequent. This motivates the study of novel resilience techniques. Here, we extend a recently proposed algorithm-based recovery method for multigrid iterations by introducing an adaptive control. After a fault, the healthy part of the system continues the iterative solution process, while the solution in the faulty domain is reconstructed by an asynchronous online recovery. The computations in the faulty and healthy subdomains must be coordinated carefully; in particular, both under- and over-solving must be avoided, since both waste computational resources and therefore increase the overall time-to-solution. To control the local recovery and guarantee an optimal re-coupling, we introduce a stopping criterion based on a mathematical error estimator. It involves hierarchical weighted sums of residuals on uniformly refined meshes and is well-suited to parallel high-performance computing. The re-coupling process is steered by local contributions of the error estimator. We propose and compare two criteria which differ in their weights. Failure scenarios when solving up to 6.9 · 10^11 unknowns on more than 245,766 parallel processes are reported on a state-of-the-art peta-scale supercomputer, demonstrating the robustness of the method.
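
    A minimal sketch of the estimator-controlled re-coupling idea is given below. The hooks local_solver, local_estimator and healthy_estimate are hypothetical placeholders for the multigrid cycle on the faulty subdomain, the hierarchical weighted residual estimator, and the estimator contribution of the healthy part; the concrete weights and criteria are those proposed in the paper.

```python
def recover_faulty_subdomain(local_solver, local_estimator, healthy_estimate,
                             weight=1.0, max_sweeps=50):
    """Asynchronous local recovery with an estimator-based stopping criterion.

    The faulty subdomain is re-solved until its (weighted) error-estimator
    contribution no longer dominates that of the healthy subdomains, so that
    the recovery neither under-solves (re-couples too early) nor over-solves
    (wastes work before re-coupling).
    """
    sweeps = 0
    while sweeps < max_sweeps:
        local_solver()                    # one multigrid cycle on the faulty subdomain
        sweeps += 1
        eta_faulty = local_estimator()    # hierarchical weighted sum of residuals
        if eta_faulty <= weight * healthy_estimate:
            break                         # accuracies match: re-couple with the healthy part
    return sweeps
```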

    Local Rollback for Resilient Mpi Applications With Application-Level Checkpointing and Message Logging

    The resilience approach generally used in high-performance computing (HPC) relies on coordinated checkpoint/restart, a global rollback of all the processes that are running the application. However, in many instances the failure has a more localized scope, and its impact is usually restricted to a subset of the resources being used. A global rollback would therefore result in unnecessary overhead and energy consumption, since all processes, including those unaffected by the failure, discard their state and roll back to the last checkpoint to repeat computations that were already done. The User Level Failure Mitigation (ULFM) interface – the latest proposal for the inclusion of resilience features in the Message Passing Interface (MPI) standard – enables the deployment of more flexible recovery strategies, including localized recovery. This work proposes a local rollback approach that can be generally applied to Single Program, Multiple Data (SPMD) applications by combining ULFM, the ComPiler for Portable Checkpointing (CPPC) tool, and the Open MPI VProtocol system-level message logging component. Only failed processes are recovered from the last checkpoint, while consistency before further progress in the execution is achieved through a two-level message logging process. To further optimize this approach, point-to-point communications are logged by the Open MPI VProtocol component, while collective communications are optimally logged at the application level, thereby decoupling the logging protocol from the particular collective implementation. This spatially coordinated protocol applied by CPPC reduces the log size, the log memory requirements and, overall, the resilience impact on the applications.
    This research was supported by the Ministry of Economy and Competitiveness of Spain and FEDER funds of the EU (Projects TIN2016-75845-P and the predoctoral grants of Nuria Losada, ref. BES-2014-068066 and ref. EEBB-I-17-12005); by the EU under the COST Program Action IC1305, Network for Sustainable Ultrascale Computing (NESUS), and a HiPEAC Collaboration Grant; and by the Galician Government (Xunta de Galicia) under the Consolidation Program of Competitive Research (ref. ED431C 2017/04). We gratefully thank the Galicia Supercomputing Center for providing access to the FinisTerrae-II supercomputer. This material is also based upon work supported by the US National Science Foundation, Office of Advanced Cyberinfrastructure, under Grants No. 1664142 and 1339763.
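
    The control flow of local rollback, in contrast to a global coordinated rollback, can be illustrated with the toy model below. It deliberately abstracts away the actual ULFM, CPPC and Open MPI VProtocol machinery: the ranks, checkpoints and "replay" here are plain Python stand-ins, not the paper's implementation.

```python
import copy

def run_with_local_rollback(n_ranks=4, n_steps=20, fail_step=7, fail_rank=2):
    """Toy model of local rollback with periodic application-level checkpoints.

    When `fail_rank` fails at `fail_step`, only that rank restores its last
    checkpoint and replays the iterations it lost; the surviving ranks keep
    their state instead of all processes rolling back together.
    """
    state = [0] * n_ranks
    checkpoint_step, checkpoint = 0, copy.deepcopy(state)
    for step in range(n_steps):
        if step % 5 == 0:                                # periodic checkpoint of all ranks
            checkpoint_step, checkpoint = step, copy.deepcopy(state)
        for r in range(n_ranks):
            state[r] += r + step                         # placeholder deterministic computation
        if step == fail_step:                            # a failure strikes one rank
            state[fail_rank] = checkpoint[fail_rank]     # local rollback of that rank only
            for lost in range(checkpoint_step, step + 1):
                state[fail_rank] += fail_rank + lost     # replay lost work (logged messages in the real protocol)
    return state

# Every rank, including the recovered one, ends up with the failure-free result.
print(run_with_local_rollback())
```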

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’ held March 1–6, 2020, at Schloss Dagstuhl, that was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources and costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
    Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jezequel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortiz, Francesco Rizzi, Ulrich Rude, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thonnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
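
    The back-of-the-envelope figures quoted in the abstract can be reproduced as follows, assuming an energy price of roughly 0.10 Euro per kWh and a sustained exascale rate of 10^18 flop/s:

```latex
E = 20\,\mathrm{MW} \times 48\,\mathrm{h} = 960{,}000\,\mathrm{kWh} \approx 10^{6}\,\mathrm{kWh}
\mathrm{cost} \approx 10^{6}\,\mathrm{kWh} \times 0.10\,\mathrm{Euro/kWh} = 100\,\mathrm{k\,Euro}
N_{\mathrm{flop}} \approx 10^{18}\,\mathrm{flop/s} \times 48 \times 3600\,\mathrm{s} \approx 1.7 \times 10^{23}
```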

    Approachable Error Bounded Lossy Compression

    Compression is commonly used in HPC applications to move and store data. Traditional lossless compression, however, does not provide adequate compression of the floating-point data often found in scientific codes. Recently, researchers and scientists have turned to lossy compression techniques that approximate the original data rather than reproduce it in order to achieve the desired levels of compression. Typical lossy compressors do not bound the errors introduced into the data, which has led to the development of error-bounded lossy compressors (EBLC). These tools provide the desired levels of compression together with mathematical guarantees on the errors introduced. However, the current state of EBLC leaves much to be desired. Existing EBLC tools all have different interfaces, requiring codes to be changed to adopt new techniques; they have many more configuration options than their predecessors, making them more difficult to use; and they typically bound quantities such as pointwise errors rather than the higher-level metrics scientists typically use, such as spectra, p-values, or test statistics. My dissertation aims to provide a uniform interface to compression and to develop tools that allow application scientists to understand and apply EBLC. This dissertation proposal presents three groups of work: LibPressio, a standard interface for compression and analysis; FRaZ/LibPressio-Opt, frameworks for the automated configuration of compressors using LibPressio; and tools for analyzing errors in particular domains.
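
    The pointwise guarantee that distinguishes EBLC from ordinary lossy compression can be illustrated with a toy round-trip based on uniform quantisation; this is a conceptual sketch of the error bound only, not LibPressio's actual API or any production compressor.

```python
import numpy as np

def abs_error_bounded_roundtrip(data, error_bound):
    """Toy error-bounded lossy 'compression' via uniform quantisation.

    With a bin width of 2 * error_bound, rounding to the nearest bin centre
    guarantees that every reconstructed value stays within +/- error_bound of
    the original, which is exactly the kind of pointwise bound EBLC provide.
    Real compressors add prediction and entropy coding on top of this idea.
    """
    arr = np.asarray(data, dtype=np.float64)
    step = 2.0 * error_bound
    codes = np.round(arr / step).astype(np.int64)        # the (compressible) integer codes
    reconstruction = codes * step
    assert float(np.max(np.abs(reconstruction - arr))) <= error_bound * (1.0 + 1e-9)
    return codes, reconstruction

# Example: compress random data with an absolute error bound of 1e-3.
codes, approx = abs_error_bounded_roundtrip(np.random.rand(1000), 1e-3)
```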

    Fostering Thinking Skills Through Creative Drama with Primary School Children with Learning Difficulties (LD) in Saudi Arabia

    This study aimed to understand how the thinking skills of children with learning difficulties (LD) can be fostered by using ‘creative drama’ in the context of two primary schools for girls in Saudi Arabia. The educational vision of Saudi Arabia's Vision 2030 emphasises the importance of developing skills, such as thinking skills, in addition to knowledge, to prepare children for a modern, 21st-century world. Within the Saudi educational system, relatively little attention has been paid to learners with LD, especially with thinking skills as a focus. The study utilised a design-based research (DBR) approach involving multiple iterations of creative drama sessions incorporating different thinking skills, designed and co-led by the researcher and the teachers. The participants were 14 children with LD (ages 7 to 12) and two teachers with backgrounds in special educational needs. The study was designed in two phases. Phase One was carried out in School A to test and then empirically revise the initial design principles. This phase yielded a refined version of the design principles, which then guided Phase Two in School B. The main finding of this intervention was the identification of the elements of the dynamic and collaborative culture established through the use of creative drama for fostering thinking skills. The findings contribute to the empirical and theoretical field of fostering thinking skills by offering tested design principles for utilising ‘creative drama’ as a medium for teaching. The data were collected through multiple methods: teacher conversations, participant observations, focus groups, and a research journal. The findings suggest that using creative drama as a medium of learning might foster thinking skills by creating a dynamic and inclusive environment. Moreover, promoting the thinking skills of children with LD requires a balance between the facilitator’s role and the learners’ agency, as well as a collaborative learning culture that supports the children emotionally and provides a safe atmosphere. This DBR concluded that the implementation of creative drama fostered the thinking skills of children with LD and allowed them to practise a variety of thinking skills in a safe, supportive environment and a collaborative culture. Considering the context of the Saudi educational system, this study suggests that there is a need to further investigate a thinking skills approach that supports learners with LD, and highlights the importance of investigating multi-modality and embodied cognition in special education, especially at the primary school level.