
    Robustness - a challenge also for the 21st century: A review of robustness phenomena in technical, biological and social systems as well as robust approaches in engineering, computer science, operations research and decision aiding

    Notions of robustness come in many facets. They originate in different disciplines and reflect different worldviews, and consequently they often contradict one another, which makes the term hard to apply in a general context. Robustness approaches are often limited to the specific problems for which they were developed: notions and definitions may turn out to be wrong when transferred to another domain of validity, i.e. context. A definition that is correct in one context need not hold in another. Therefore, in order to speak of robustness we must specify the domain of validity, i.e. the system, property and uncertainty of interest. As proved by Ho et al. in an optimization context with finite and discrete domains, without prior knowledge about the problem there exists no solution whatsoever that is more robust than any other. As with the No Free Lunch Theorems of Optimization (NFLTs), we have to exploit the problem structure in order to make a solution more robust. This optimization problem is directly linked to a robustness/fragility tradeoff that has been observed in many contexts, e.g. the 'robust, yet fragile' property of HOT (Highly Optimized Tolerance) systems. Another issue is that robustness is tightly bound to other phenomena, such as complexity, for which no clear definition or theoretical framework exists either. Consequently, this review tries to find common aspects across many different approaches and phenomena rather than to build a general theorem of robustness, which may not exist anyway, because complex phenomena often need to be described from a pluralistic view to address as many of their aspects as possible. First, many different robustness problems from many different disciplines are reviewed. Second, common aspects are discussed, in particular the relationship of functional and structural properties.
    This paper argues that robustness phenomena are also a challenge for the 21st century. Robustness is a useful quality of a model or system in terms of the 'maintenance of some desired system characteristics despite fluctuations in the behaviour of its component parts or its environment' (see [Carlson and Doyle, 2002], p. 2). We define robustness phenomena as solutions with balanced tradeoffs, and robust design principles and robustness measures as means to balance tradeoffs.

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). So far, however, the research community has treated software engineering, performance engineering, and cloud computing mostly as separate research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a late stage of their PhD, as well as postdocs and junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.

    Bayesian Compression for Deep Learning

    Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity-inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed-point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency. Comment: Published as a conference paper at NIPS 2017.
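    The two ideas in this abstract can be illustrated with a minimal NumPy sketch. The posterior values and the signal-to-noise pruning rule below are toy assumptions, not the paper's actual hierarchical priors or variational procedure; the sketch only shows node-level (rather than weight-level) pruning and a crude uncertainty-driven bit-width choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior over one layer's weights: a mean and std per weight,
# grouped by output node (hypothetical values, not the paper's method).
post_mean = rng.normal(0.0, 1.0, size=(4, 8))   # 4 nodes x 8 inputs
post_std = np.full((4, 8), 0.5)
post_mean[2] *= 0.01                            # node 2 is effectively off

# 1) Prune whole nodes (rows), not individual weights: drop a node when
#    the signal-to-noise ratio of its weight group is below a threshold.
snr = np.abs(post_mean) / post_std
keep = snr.mean(axis=1) > 0.5                   # one decision per node
pruned_mean = post_mean[keep]

# 2) Read a fixed-point bit width off the posterior spread: wide
#    posteriors tolerate coarser quantization (crude illustration only).
dynamic_range = np.abs(pruned_mean).max() / post_std[keep].min()
bits = int(np.ceil(np.log2(dynamic_range)))

print("nodes kept:", int(keep.sum()))
print("bits (toy estimate):", bits)
```

With these toy numbers the near-zero node is removed as a group, which is what makes node-level pruning translate into actual speedups, unlike scattered individual-weight sparsity.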

    Stage Configuration for Capital Goods: Supporting Order Capturing in Mass Customization


    Assessing China’s Carbon Intensity Pledge for 2020: Stringency and Credibility Issues and their Implications

    Just prior to the Copenhagen climate summit, China pledged to cut its carbon intensity by 40-45% by 2020 relative to its 2005 level in order to help reach an international climate change agreement at Copenhagen or beyond. This raises the issue of whether such a pledge is ambitious or just represents business as usual. To put China’s climate pledge into perspective, this paper examines whether the proposed carbon intensity goal for 2020 is as challenging as the energy-saving goals set in the current 11th five-year economic blueprint, to what extent it drives China’s emissions below their projected baseline levels, and whether China will fulfill its part of a coordinated global commitment to stabilize the concentration of greenhouse gases in the atmosphere at the desired level. Given that China’s pledge is in the form of carbon intensity, the paper shows, by examining the revisions of China’s GDP figures and energy consumption in recent years, that GDP figures are even more crucial to the resulting energy or carbon intensity than are energy consumption and emissions data. Moreover, the paper emphasizes that China’s proposed carbon intensity target not only needs to be ambitious but, more importantly, needs to be credible. Finally, it concludes with a suggestion that international climate change negotiations focus on 2030 as the target date for capping the greenhouse gas emissions of the world’s two largest emitters in a legally binding global agreement.
    Keywords: Carbon Intensity, Post-Copenhagen Climate Change Negotiations, Climate Commitments, China
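    The abstract's point that GDP figures dominate an intensity target can be made concrete with a back-of-the-envelope calculation. All numbers below are hypothetical round figures, not the paper's data; they only illustrate that a carbon intensity target caps emissions relative to GDP growth, not absolutely.

```python
# Carbon intensity = emissions / GDP, so an intensity pledge bounds
# emissions only in proportion to GDP. Illustrative assumptions:
e_2005 = 5.1          # Gt CO2 in 2005 (assumed round figure)
gdp_growth = 0.08     # assumed average annual GDP growth, 2005-2020
years = 15
cut = 0.45            # upper end of the 40-45% intensity pledge

gdp_ratio = (1 + gdp_growth) ** years        # GDP_2020 / GDP_2005
e_2020_cap = e_2005 * (1 - cut) * gdp_ratio  # emissions allowed by pledge

print(f"GDP multiple over 2005: {gdp_ratio:.2f}")
print(f"Emissions cap in 2020: {e_2020_cap:.1f} Gt CO2")
print(f"Allowed emissions growth: {e_2020_cap / e_2005 - 1:.0%}")
```

Under these assumptions a 45% intensity cut still permits emissions to grow by roughly three quarters, which is why the baseline GDP projection, and any later revision of it, matters so much for judging the pledge's stringency.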

    U-DeepONet: U-Net Enhanced Deep Operator Network for Geologic Carbon Sequestration

    FNO (Fourier Neural Operator) and DeepONet are by far the most popular neural operator learning algorithms. FNO seems to enjoy an edge in popularity due to its ease of use, especially with high-dimensional data. However, a lesser-acknowledged feature of DeepONet is its modularity: the user is free to choose the kind of neural network used in the trunk and/or branch of the DeepONet. This is beneficial because it has been shown many times that different types of problems require different kinds of network architectures for effective learning. In this work, we take advantage of this feature by carefully designing a more efficient neural operator based on the DeepONet architecture. We introduce the U-Net enhanced DeepONet (U-DeepONet) for learning the solution operator of highly complex CO2-water two-phase flow in heterogeneous porous media. The U-DeepONet is more accurate in predicting gas saturation and pressure buildup than the state-of-the-art U-Net based Fourier Neural Operator (U-FNO) and the Fourier-enhanced Multiple-Input Operator (Fourier-MIONet) trained on the same dataset. In addition, the proposed U-DeepONet trains significantly faster than both the U-FNO (more than 18 times faster) and the Fourier-MIONet (more than 5 times faster), while consuming less computational resources. We also show that the U-DeepONet is more data efficient and better at generalization than both the U-FNO and the Fourier-MIONet.
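    The modularity the abstract highlights is visible in the basic DeepONet computation: a branch net encodes the sampled input function, a trunk net encodes a query coordinate, and the output is their inner product, so either subnetwork can be swapped for any architecture (e.g. a U-Net branch, as in U-DeepONet). Below is a minimal untrained NumPy sketch; all layer sizes are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(dims):
    """Random untrained MLP as a list of (W, b) layers."""
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(dims, dims[1:])]

def forward(params, x):
    # tanh on all layers except the last (linear output)
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

p = 16                         # latent dimension shared by both subnetworks
branch = mlp([32, 64, p])      # encodes the input function u at 32 sensors
trunk = mlp([2, 64, p])        # encodes a query coordinate y = (x, t)

u = rng.normal(size=(1, 32))   # one sampled input function
y = rng.normal(size=(50, 2))   # 50 query points

# DeepONet output: G(u)(y) = sum_k branch_k(u) * trunk_k(y)
out = forward(trunk, y) @ forward(branch, u).T   # shape (50, 1)
print(out.shape)
```

Replacing `mlp` in the branch with any other function-to-vector encoder leaves the rest of the computation unchanged, which is the flexibility the paper exploits.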

    Formalized Identification of Key Factors in Safety-Relevant Failure Scenarios

    This research article presents a methodical, data-based approach to systematically identify key factors in safety-related failure scenarios, with a focus on complex product-environment systems in the era of Industry 4.0. The study addresses the uncertainty arising from the growing complexity of modern products. The method uses scenario analysis and focuses on failure analysis within technical product development. The approach derives influencing factors from information in failure databases. The failures described there are documented individually in failure sequence diagrams and then related to each other in a relationship matrix. This creates a network of possible failure scenarios, built from individual failure cases, that can be used in product development. To illustrate the application of the methodology, a case study of 41 RAPEX safety alerts for hair dryers is presented. The failure sequence diagrams and influencing-factor relationship matrices yield 46 influencing factors that lead to safety-related failures. The predominant harms are burns and electric shocks, which are highlighted by the active and passive sum diagrams. The research demonstrates a robust method for identifying key factors in safety-related failure scenarios using information from failure databases. The methodology provides valuable insights for product development and emphasizes the frequency of influencing factors and their interconnectedness.
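    The active and passive sums mentioned in the abstract can be illustrated on a toy relationship matrix. The factor names and matrix entries below are invented for illustration, not taken from the RAPEX case study: a factor's active sum is its row sum (how many factors it drives), its passive sum its column sum (how many factors drive it).

```python
import numpy as np

# Toy relationship matrix for four hypothetical influencing factors:
# R[i, j] = 1 means factor i contributes to factor j in some
# documented failure sequence (illustrative data only).
factors = ["insulation defect", "overheating", "burn injury", "moisture ingress"]
R = np.array([
    [0, 1, 0, 0],   # insulation defect -> overheating
    [0, 0, 1, 0],   # overheating      -> burn injury
    [0, 0, 0, 0],   # burn injury is an end state
    [1, 1, 0, 0],   # moisture ingress -> insulation defect, overheating
])

active = R.sum(axis=1)    # how many factors each factor drives
passive = R.sum(axis=0)   # how many factors drive it

for name, a, p in zip(factors, active, passive):
    print(f"{name:18s} active={a} passive={p}")
```

High active sums flag root-cause candidates (here the toy "moisture ingress"), while high passive sums flag end effects such as the harms the abstract singles out.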