
    Lattice QCD based on OpenCL

    We present an OpenCL-based Lattice QCD application using a heatbath algorithm for the pure gauge case and Wilson fermions in the twisted mass formulation. The implementation is platform independent and can be used on AMD or NVIDIA GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870, our double precision dslash implementation performs at 60 GFLOPS over a wide range of lattice sizes. The hybrid Monte Carlo implementation presented here reaches a speedup of four over the reference code running on a server CPU.
    Comment: 19 pages, 11 figures
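
    To make the quoted figure concrete, the following minimal Python sketch (not from the paper) shows how a sustained GFLOPS number for a dslash kernel is typically derived from the lattice volume and a measured kernel run time. The 1320 flops per lattice site is the commonly quoted count for the Wilson dslash, and the kernel timing below is purely illustrative; the paper's own counting convention may differ.

```python
# Hedged sketch: sustained GFLOPS of a dslash kernel from lattice size and timing.
# flops_per_site = 1320 is the conventional figure for the Wilson dslash operator.

def dslash_gflops(lattice_dims, kernel_seconds, flops_per_site=1320):
    """Return sustained GFLOPS for one dslash application."""
    volume = 1
    for extent in lattice_dims:          # e.g. (24, 24, 24, 48) = L^3 x T
        volume *= extent
    total_flops = volume * flops_per_site
    return total_flops / kernel_seconds / 1e9

# Illustrative example: a 24^3 x 48 lattice where one kernel call takes 14.6 ms
print(f"{dslash_gflops((24, 24, 24, 48), 14.6e-3):.1f} GFLOPS")
```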

    On The Sample Complexity of Sparse Dictionary Learning

    In the synthesis model, signals are represented as sparse combinations of atoms from a dictionary. Dictionary learning describes the process of acquiring the underlying dictionary from a given set of training samples. While ideally this would be achieved by optimizing the expected quality of the factors over the underlying distribution of the training data, in practice the necessary information about that distribution is not available. Therefore, in real-world applications it is achieved by minimizing an empirical average over the available samples. The main goal of this paper is to provide a sample complexity estimate that controls how far this empirical average deviates from the expected cost function. This in turn bounds the accuracy of the representation obtained with the learned dictionary. The presented approach exemplifies the general results proposed by the authors in "Sample Complexity of Dictionary Learning and other Matrix Factorizations" (Gribonval et al.) and gives more concrete bounds on the sample complexity of dictionary learning. We cover a variety of sparsity measures employed in the learning procedure.
    Comment: 4 pages, submitted to Statistical Signal Processing Workshop 201
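
    As an illustration of the empirical average referred to above, the numpy sketch below evaluates F_n(D), the empirical cost of one fixed dictionary over n training samples. The l1 sparsity penalty and the ISTA inner solver are assumptions made for this example only; the paper covers a broader family of sparsity measures.

```python
import numpy as np

def empirical_cost(D, Y, lam=0.1, n_iter=200):
    """F_n(D): average over the columns y_i of Y of
    min_x 0.5*||y_i - D x||^2 + lam*||x||_1, with the inner problem
    solved approximately by ISTA (iterative soft-thresholding)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2         # 1 / Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = X - step * (D.T @ (D @ X - Y))                          # gradient step
        X = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)    # soft threshold
    per_sample = 0.5 * np.sum((Y - D @ X) ** 2, axis=0) + lam * np.sum(np.abs(X), axis=0)
    return per_sample.mean()

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)            # normalize the atoms
Y = rng.standard_normal((32, 500))        # 500 synthetic training samples
print(f"F_500(D) = {empirical_cost(D, Y):.3f}")
```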

    Sample Complexity of Dictionary Learning and other Matrix Factorizations

    Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis (PCA), non-negative matrix factorization (NMF), $K$-means clustering, etc., rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection. While the idealized task would be to optimize the expected quality of the factors over the underlying distribution of training vectors, it is achieved in practice by minimizing an empirical average over the considered collection. The focus of this paper is to provide sample complexity estimates that uniformly control how much the empirical average deviates from the expected cost function. Standard arguments imply that the performance of the empirical predictor also exhibits such guarantees. The level of genericity of the approach encompasses several possible constraints on the factors (tensor product structure, shift-invariance, sparsity, ...), thus providing a unified perspective on the sample complexity of several widely used matrix factorization schemes. The derived generalization bounds scale as $\sqrt{\log(n)/n}$ with respect to the number of samples $n$ for the considered matrix factorization techniques.
    Comment: to appear
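
    A quick way to see the $\sqrt{\log(n)/n}$ behaviour is to fix a single factorization and watch the deviation between the empirical and the expected cost shrink as $n$ grows. The PCA-style projector and Gaussian data below are assumptions made for illustration; the paper's bounds hold uniformly over the whole constraint set, not just for one fixed factor.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
true_cov = np.diag(np.linspace(1.0, 0.05, d))

# Fixed rank-5 projector (a fixed "factorization"); the per-sample cost is the
# squared residual of projecting a sample onto its column span.
U, _ = np.linalg.qr(rng.standard_normal((d, 5)))
P = U @ U.T

def residual(X):
    return np.sum((X - X @ P.T) ** 2, axis=1)

expected = np.trace((np.eye(d) - P) @ true_cov)   # exact expected cost for Gaussian data
for n in (100, 1000, 10000, 100000):
    X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)
    deviation = abs(residual(X).mean() - expected)
    print(f"n={n:>7}  |F_n - F| = {deviation:.4f}   sqrt(log(n)/n) = {np.sqrt(np.log(n)/n):.4f}")
```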

    Approach to qualify decision support maturity of new versus established impact assessment methods—demonstrated for the categories acidification and eutrophication

    Purpose Initiatives like the EU Product Environmental Footprint have been pushing the discussion about the choice of life cycle impact assessment methods. Practitioners often prefer to use established methods for reasons of performance tracking, result stability, and consistency. Method developers rather support newly developed methods. As case studies must provide consistent results in order to ensure reliable decision-making support, a systematic approach to qualify the decision support maturity of newly developed impact assessment methods is needed. Methods A three-step approach addressing key aspects of decision maturity was developed, which takes the established life cycle impact assessment methods as a benchmark. In the first step, the underlying models of the methods and their respective differences are analyzed to capture the scope and detail of the characterization models. Second, the elementary flows considered and available in the methods are identified and compared to reveal where coverage is consistent and where gaps exist between the alternatives. In the third step, neglected elementary flows are evaluated with regard to their potential impact on the particular impact category. Furthermore, the characterization factors of elementary flows covered by both methods are analyzed for significant differences in their shares. The developed approach was tested on LCIA methods for eutrophication and acidification in Europe. Results and discussion A systematic and practical qualification of decision support maturity can be achieved by a three-step approach benchmarking the model scope and the quantitative and qualitative coverage of elementary flows of new methods against established ones. For the application example, the established CML-IA method was compared with the ReCiPe method and the method of accumulated exceedance. These models differ in the subdivision of environmental compartments, the consideration of fate, and the regionalization of characterization factors. The number of covered elementary flows varies substantially: CML-IA covers about 28 more flows within the category acidification and about 35 more flows within the category eutrophication compared to ReCiPe and accumulated exceedance. The neglected elementary flows are significant for both categories and represent a gap of up to 80 %. Furthermore, it was shown that the shares of some elementary flows covered by both methods differ significantly. Conclusions The introduced approach allows the benchmarking of newly developed against established methods based on application-oriented criteria. It was demonstrated that significant differences between the methods exist. To guarantee reliable decision-making support, newly developed methods should not replace established ones until a minimum level of decision support maturity is reached.
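
    A schematic Python sketch of steps two and three of the approach is given below. The flow names, characterization factors (CFs), and inventory amounts are invented for illustration and are not the values used by CML-IA, ReCiPe, or accumulated exceedance.

```python
# Hypothetical example: compare elementary-flow coverage of an established method
# with a new one, and quantify the impact share lost by the neglected flows for a
# given inventory. All numbers below are illustrative placeholders.

established = {"SO2": 1.0, "NOx": 0.7, "NH3": 1.88, "HCl": 0.88, "HF": 1.6}
new_method  = {"SO2": 1.0, "NOx": 0.5, "NH3": 1.6}
inventory   = {"SO2": 2.0, "NOx": 5.0, "NH3": 0.5, "HCl": 1.5, "HF": 0.3}  # kg emitted

# Step 2: identify flows covered by the established method but missing in the new one.
neglected = set(established) - set(new_method)
total = sum(established[f] * inventory.get(f, 0.0) for f in established)
missing = sum(established[f] * inventory.get(f, 0.0) for f in neglected)
print(f"neglected flows: {sorted(neglected)}")
print(f"impact share not covered by the new method: {100 * missing / total:.1f} %")

# Step 3: flag jointly covered flows whose characterization factors differ markedly.
for flow in set(established) & set(new_method):
    ratio = new_method[flow] / established[flow]
    if abs(ratio - 1.0) > 0.2:
        print(f"CF for {flow} differs: ratio {ratio:.2f}")
```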

    Characterization of the Cradle to Cradle Certified™ Products Program in the context of eco-labels and environmental declarations

    (1) Background: The Cradle to Cradle Certified™ Products Program (C2C Certified for short) is a scheme for the certification of products that meet the criteria and principles of the Cradle to Cradle® design approach. The objective of this paper is to characterize C2C Certified as an instrument for external communication in the context of environmental labeling and declarations. (2) Method: An eco-label characterization scheme consisting of 22 attributes was used to analyze C2C Certified. In addition, it was compared with the established standardized labeling typologies, namely Type I and Type III. This was further illustrated with an example from the building and construction sector. (3) Results: C2C Certified can be classified neither as a Type I nor as a Type III label. The main weaknesses of C2C Certified from a labeling perspective are the generic rather than product-specific focus of the awarding criteria, the lack of a life cycle perspective, and the not fully transparent stakeholder involvement procedure. Nevertheless, for certain attributes (e.g., the awarding format), C2C Certified provides practical solutions and goes beyond a Type I eco-label. Substantial similarities between Type III declarations and C2C Certified cannot be identified. (4) Conclusions: The main advantages and shortcomings of C2C Certified from a labeling perspective are pointed out. The approach shows similarities to a Type I eco-label, and efforts toward conformance with the International Organization for Standardization (ISO) labelling standards would improve its comparability, recognition, and robustness.
    DFG, 325093850, Open Access Publizieren 2017 - 2018 / Technische Universität Berlin

    Assessing the Ability of the Cradle to Cradle Certified™ Products Program to Reliably Determine the Environmental Performance of Products

    Concepts and tools supporting the design of environmentally friendly products (including materials, goods, or services) have proliferated in recent years. The Cradle to Cradle Certified™ Products Program (C2CP) is one of these approaches. In this work, the ability of C2CP to reliably determine the environmental performance of products was analyzed through the application of a criteria-based assessment scheme. The same criteria-based scheme was additionally applied to three other, already established tools (life cycle assessment, product environmental footprint, and material flow analysis) to allow a comparison with C2CP. The results show that C2CP is not scientifically reliable enough to assure that certified products actually have a good environmental performance. The most relevant shortcoming of C2CP is its limited assessment scope, as neither the entire life cycle of the product nor all relevant environmental impacts are covered. Based on already established tools and their practical implementation, recommendations for increasing the reliability of C2CP are provided.

    Product environmental footprint in policy and market decisions: Applicability and impact assessment

    In April 2013, the European Commission published the Product and Organisation Environmental Footprint (PEF/OEF) methodology, a life cycle-based multicriteria measure of the environmental performance of products, services, and organizations. With its approach of "comparability over flexibility," the PEF/OEF methodology aims at harmonizing existing methods while decreasing the flexibility that the International Organization for Standardization (ISO) standards provide regarding methodological choices. Currently, a three-year pilot phase is running, aiming at testing the methodology and developing product category and organization sector rules (PEFCR/OEFSR). Although a harmonized method is in theory a good idea, the PEF/OEF methodology presents challenges, including a risk of confusion and limitations in practical applicability. The paper discusses the main differences between the PEF and ISO methodologies and highlights challenges regarding PEF applicability, with a focus on impact assessment. Some methodological aspects of the PEF and PEFCR Guides are found to contradict ISO 14044 (2006) and ISO 14025 (2006). Others, such as the prohibition of inventory cutoffs, are impractical. The evaluation of the impact assessment methods proposed in the PEF/OEF Guide showed that the predefined methods for water consumption, land use, and abiotic resources are not adequate because of modeling artefacts, missing inventory data, or incomplete characterization factors. The methods for global warming and ozone depletion, however, perform very well. The results of this study are relevant for the PEF (and OEF) pilot phase, which aims at testing the methodology, potentially adapting it, and addressing the challenges identified here. Integr Environ Assess Manag 2015;11:417–424

    Gesturing Meaning: Non-action Words Activate the Motor System

    Across cultures, speakers produce iconic gestures, which add, through the movement of the speakers' hands, a pictorial dimension to the speakers' message. These gestures capture not only the motor content but also the visuospatial content of the message. Here, we provide the first evidence for a direct link between the representation of perceptual information and the motor system that can account for these observations. Across four experiments, participants' hand movements captured both shapes that were directly perceived and shapes that were only implicitly activated by unrelated semantic judgments of object words. These results were obtained even though the objects were not associated with any motor behaviors that would match the gestures the participants had to produce. Moreover, implied shape affected not only gesture selection processes but also their actual execution, as measured by the shape of hand motion through space, revealing intimate links between implied shape representation and motor output. The results are discussed in terms of ideomotor theories of action and perception, and provide one avenue for explaining the ubiquitous phenomenon of iconic gestures.
    • 
