
    HepSim: a repository with predictions for high-energy physics experiments

    A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations, as well as comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package that automates downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is also discussed. Comment: 12 pages, 2 figures
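
    A minimal sketch of pulling a sample over HTTP and streaming it to disk, assuming a hypothetical repository URL and file name; the real HepSim tooling ships its own download and viewing commands, which are not used here.

```python
# Hypothetical example: stream a Monte Carlo event-sample file from a
# HepSim-style HTTP repository to disk in chunks, without holding the whole
# file in memory.  URL and file name are placeholders, not real HepSim paths.
import shutil
import urllib.request

SAMPLE_URL = "https://example.org/hepsim/samples/pythia8_ttbar.promc"  # placeholder

def download_sample(url: str, dest: str, chunk_size: int = 1 << 20) -> None:
    """Stream a remote sample file to `dest` in `chunk_size` pieces."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        shutil.copyfileobj(response, out, length=chunk_size)

if __name__ == "__main__":
    download_sample(SAMPLE_URL, "pythia8_ttbar.promc")
```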

    Extensions of an Empirical Automated Tuning Framework

    Empirical auto-tuning has been successfully applied to scientific computing applications and web-based cluster servers over the last few years. However, few studies have focused on applying this method to optimizing the performance of database systems. In this thesis, we present a strategy that uses Active Harmony, an empirical automated tuning framework, to optimize the throughput of a PostgreSQL server by tuning settings such as memory and buffer sizes. We used the Nelder-Mead simplex method as the search engine, and we show how our strategy performs compared to hand-tuned and default configurations. Another part of this thesis focuses on using data from prior runs of auto-tuning. Prior data has proved useful in many cases, such as modeling the search space or finding a good starting point for hill-climbing. We present several methods that were developed to manage the prior data in Active Harmony. Our intention was to provide tuners with a complete set of information for their tuning tasks.
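
    The core of such a tuning loop can be sketched as follows, assuming a stand-in benchmark hook (`measure_throughput`) in place of a real PostgreSQL run and using SciPy's Nelder-Mead implementation rather than Active Harmony itself.

```python
# Sketch of a Nelder-Mead search over two PostgreSQL knobs (values in MB),
# maximizing measured throughput.  measure_throughput is a placeholder: a real
# setup would apply the settings, reload the server and run a benchmark such
# as pgbench, returning transactions per second.
from scipy.optimize import minimize

def measure_throughput(shared_buffers_mb: float, work_mem_mb: float) -> float:
    # Fake response surface with a peak near (4096 MB, 64 MB) so the example
    # runs on its own; replace with an actual benchmark measurement.
    return 1000.0 - 1e-4 * (shared_buffers_mb - 4096) ** 2 - 0.05 * (work_mem_mb - 64) ** 2

def objective(x):
    shared_buffers_mb, work_mem_mb = x
    return -measure_throughput(shared_buffers_mb, work_mem_mb)  # minimize the negative

result = minimize(objective, x0=[1024.0, 16.0], method="Nelder-Mead",
                  options={"xatol": 1.0, "fatol": 0.5, "maxiter": 200})
print("best settings (MB):", result.x, "estimated throughput:", -result.fun)
```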

    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability for building integrated control systems and their support for general mechanisms of real-time consciousness. To analyse these architectures, the ASys Framework is employed: a conceptual framework based on an extension of the General Systems Theory (GST) for cognitive autonomous systems. General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST and c) core design principles for integrated cognitive conscious control systems.

    In-Place Activated BatchNorm for Memory-Optimized Training of DNNs

    In this work we present In-Place Activated Batch Normalization (InPlace-ABN) - a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and recovering the required information during the backward pass through inversion of the stored forward results, with only a minor increase (0.8-2%) in computation time. We also demonstrate how frequently used checkpointing approaches can be made computationally as efficient as InPlace-ABN. In our experiments on image classification, we demonstrate results on ImageNet-1k on par with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report results for COCO-Stuff, Cityscapes and Mapillary Vistas, obtaining new state-of-the-art results on the latter without additional training data but in a single-scale and single-model scenario. Code can be found at https://github.com/mapillary/inplace_abn
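
    The memory-saving trick can be illustrated with a toy PyTorch sketch: an activation that stores only its output and reconstructs its input by inversion during the backward pass. This shows the invertibility idea only; the actual InPlace-ABN layer (linked above) additionally fuses batch normalization and truly operates in place.

```python
import torch

class InvertibleLeakyReLU(torch.autograd.Function):
    """Leaky ReLU that saves its *output* instead of its input and recovers
    the input in backward by inverting the activation (toy version of the
    store-and-invert scheme used by InPlace-ABN)."""

    @staticmethod
    def forward(ctx, x, slope=0.01):
        y = torch.where(x >= 0, x, x * slope)
        ctx.slope = slope
        ctx.save_for_backward(y)          # keep the output, drop the input
        return y

    @staticmethod
    def backward(ctx, grad_y):
        (y,) = ctx.saved_tensors
        slope = ctx.slope
        x = torch.where(y >= 0, y, y / slope)          # invert the activation
        grad_x = torch.where(x >= 0, grad_y, grad_y * slope)
        return grad_x, None

# Usage: gradients match the ordinary leaky ReLU, but only y is stored.
x = torch.randn(4, 8, requires_grad=True)
y = InvertibleLeakyReLU.apply(x)
y.sum().backward()
```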

    Adaptive Index Buffer

    With rapidly increasing datasets and more dynamic workloads, adaptive partial indexing becomes an important way to keep indexing efficient. During times of changing workloads, query performance suffers from inefficient table scans while the index tuning mechanism adapts the partial index. In this paper we present the Adaptive Index Buffer. The Adaptive Index Buffer reduces the cost of table scans by quickly indexing tuples in memory until the partial index has adapted to the workload again. We explain the basic operating mode of an Index Buffer and discuss how it adapts to changing workload situations. Further, we present three experiments that show the Index Buffer at work.
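
    A minimal sketch of the idea, assuming a simple capacity-capped dictionary and a full-scan fallback (names and policy are illustrative, not the paper's design): while scanning, remember the tuples passed so that repeated lookups on still-unindexed keys avoid rescanning the table.

```python
# Illustrative in-memory index buffer: serve repeated key lookups from a
# bounded buffer that is filled as a side effect of table scans.
class AdaptiveIndexBuffer:
    def __init__(self, capacity: int = 10_000):
        self.capacity = capacity
        self.buffer = {}                  # key -> list of matching row ids

    def lookup(self, key, table):
        """Return row ids whose first column equals `key`."""
        if key in self.buffer:
            return self.buffer[key]       # answered without scanning
        matches = []
        for row_id, row in enumerate(table):
            k = row[0]
            if len(self.buffer) < self.capacity:
                self.buffer.setdefault(k, []).append(row_id)
            if k == key:
                matches.append(row_id)
        # A real system must also track which keys are completely buffered
        # (a scan may stop filling the buffer once capacity is reached).
        return matches

table = [("alice", 1), ("bob", 2), ("alice", 3)]
buf = AdaptiveIndexBuffer()
print(buf.lookup("alice", table))   # first lookup scans and fills the buffer
print(buf.lookup("bob", table))     # served from the buffer, no scan
```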