HepSim: a repository with predictions for high-energy physics experiments
A file repository for calculations of cross sections and kinematic
distributions using Monte Carlo generators for high-energy collisions is
discussed. The repository is used to facilitate effective preservation and
archiving of data from theoretical calculations, as well as for comparisons
with experimental data. The HepSim data library is publicly accessible and
includes a number of Monte Carlo event samples with Standard Model predictions
for current and future experiments. The HepSim project includes a software
package to automate the process of downloading and viewing online Monte Carlo
event samples. Data streaming over a network for end-user analysis is also
discussed.
Comment: 12 pages, 2 figures
Extensions of an Empirical Automated Tuning Framework
Empirical auto-tuning has been successfully applied to scientific computing applications and web-based cluster servers over the last few years. However, few studies have focused on applying this method to optimizing the performance of database systems. In this thesis, we present a strategy that uses Active Harmony, an empirical automated tuning framework, to optimize the throughput of a PostgreSQL server by tuning settings such as memory and buffer sizes. We used the Nelder-Mead simplex method as the search engine, and we show how our strategy performs compared to hand-tuned and default configurations.
Another part of this thesis focuses on using data from prior runs of auto-tuning. Prior data has proved useful in many cases, such as modeling the search space or finding a good starting point for hill-climbing. We present several methods developed to manage prior data in Active Harmony. Our intention was to provide tuners with a complete set of information for their tuning tasks.
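As an illustrative sketch of the tuning loop described above (not Active Harmony's actual code): empirical auto-tuning treats throughput as a black-box function of the server's knobs and applies a derivative-free search. Here a toy quadratic stands in for a real PostgreSQL benchmark run, and a simple greedy neighbourhood search with step shrinking stands in for the full Nelder-Mead simplex; the knob names are hypothetical.

```python
# Toy sketch of empirical auto-tuning: a derivative-free search maximizing a
# measured throughput over two hypothetical knobs (shared_buffers, work_mem).
# The benchmark below is a stand-in for actually running PostgreSQL; a greedy
# neighbourhood search stands in for Active Harmony's Nelder-Mead simplex.

def benchmark(shared_buffers_mb, work_mem_mb):
    # Pretend throughput peaks at (512, 64); a real tuner would run pgbench
    # against a restarted server and measure transactions per second.
    return -((shared_buffers_mb - 512) ** 2) - 4 * (work_mem_mb - 64) ** 2

def tune(start, step=32, iters=100):
    best = start
    for _ in range(iters):
        candidates = [best,
                      (best[0] + step, best[1]), (best[0] - step, best[1]),
                      (best[0], best[1] + step), (best[0], best[1] - step)]
        nxt = max(candidates, key=lambda c: benchmark(*c))
        if nxt == best:
            step //= 2            # shrink the step, like a contracting simplex
            if step == 0:
                break
        best = nxt
    return best

print(tune((128, 16)))  # converges to the (512, 64) optimum
```

A real deployment would replace `benchmark` with a run of the actual workload after applying the candidate configuration, which is exactly why each evaluation is expensive and why prior-run data helps pick a good starting point.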
Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R
This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done by examining their applicability to building integrated control systems and by studying their support for general mechanisms of real-time consciousness.
To analyse these architectures, the ASys Framework is employed. This is a conceptual framework based on an extension of General Systems Theory (GST) for cognitive autonomous systems.
General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST, and c) core design principles for integrated cognitive conscious control systems.
In-Place Activated BatchNorm for Memory-Optimized Training of DNNs
In this work we present In-Place Activated Batch Normalization (InPlace-ABN)
- a novel approach to drastically reduce the training memory footprint of
modern deep neural networks in a computationally efficient way. Our solution
substitutes the conventionally used succession of BatchNorm + Activation layers
with a single plugin layer, hence avoiding invasive framework surgery while
providing straightforward applicability for existing deep learning frameworks.
We obtain memory savings of up to 50% by dropping intermediate results and by
recovering required information during the backward pass through the inversion
of stored forward results, with only a minor increase (0.8-2%) in computation
time. Also, we demonstrate how frequently used checkpointing approaches can be
made computationally as efficient as InPlace-ABN. In our experiments on image
classification, we demonstrate on-par results on ImageNet-1k with
state-of-the-art approaches. On the memory-demanding task of semantic
segmentation, we report results for COCO-Stuff, Cityscapes and Mapillary
Vistas, obtaining new state-of-the-art results on the latter without additional
training data but in a single-scale and -model scenario. Code can be found at
https://github.com/mapillary/inplace_abn
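The core trick the abstract describes can be pictured in a few lines: if the activation is invertible (e.g. leaky ReLU rather than plain ReLU), the layer need only store its output, and the backward pass recovers the pre-activation value by inverting the activation instead of keeping a second buffer. The following is a minimal, scalar sketch of that idea, not the authors' implementation (see the linked repository for the real code):

```python
# Minimal sketch of the InPlace-ABN memory-saving idea: store only the output
# of BatchNorm + activation, and invert the activation in the backward pass to
# recover the intermediate value instead of keeping both buffers.

SLOPE = 0.01  # leaky-ReLU negative slope; plain ReLU would not be invertible

def leaky_relu(x):
    return x if x >= 0 else SLOPE * x

def invert_leaky_relu(y):
    # Recover the pre-activation value from the stored output.
    return y if y >= 0 else y / SLOPE

x = -2.5
y = leaky_relu(x)          # only y is kept; x can be freed, halving storage
recovered = invert_leaky_relu(y)
assert abs(recovered - x) < 1e-9
```

In the full method the same inversion is composed with the (also invertible) affine BatchNorm transform, which is what yields the reported up-to-50% memory savings for the stored activations.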
A Clustering System for Dynamic Data Streams Based on Metaheuristic Optimisation
This article presents the Optimised Stream clustering algorithm (OpStream), a novel approach to clustering dynamic data streams. The proposed system displays desirable features, such as a low number of parameters and good scalability to both high-dimensional data and large numbers of clusters, and it is based on a hybrid structure using deterministic clustering methods and stochastic optimisation approaches to optimally centre the clusters. Like other state-of-the-art methods in the literature, it uses “microclusters” and other established techniques, such as density-based clustering. Unlike other methods, it makes use of metaheuristic optimisation to maximise performance during the initialisation phase, which precedes the classic online phase. Experimental results show that OpStream outperforms the state-of-the-art methods in several cases, and it is always competitive against other comparison algorithms regardless of the chosen optimisation method. Three variants of OpStream, each coming with a different optimisation algorithm, are presented in this study. A thorough sensitivity analysis is performed using the best variant to point out OpStream’s robustness to noise and resiliency to parameter changes.
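The "microclusters" the abstract mentions are an established stream-clustering technique: each cluster is summarised by a small additive feature vector (count, linear sum, squared sum) that can be updated in O(1) per arriving point. A generic sketch of that structure, not OpStream's actual implementation:

```python
import math

class MicroCluster:
    """Standard cluster-feature summary (N, linear sum, squared sum) used in
    stream clustering; a generic sketch, not OpStream's own code."""

    def __init__(self, dim):
        self.n = 0
        self.ls = [0.0] * dim   # per-dimension linear sum of points
        self.ss = [0.0] * dim   # per-dimension sum of squares

    def insert(self, point):
        # O(1) incremental update as each stream point arrives.
        self.n += 1
        for i, v in enumerate(point):
            self.ls[i] += v
            self.ss[i] += v * v

    def centroid(self):
        return [s / self.n for s in self.ls]

    def radius(self):
        # RMS deviation from the centroid, summed over dimensions.
        var = sum(self.ss[i] / self.n - (self.ls[i] / self.n) ** 2
                  for i in range(len(self.ls)))
        return math.sqrt(max(var, 0.0))
```

Because the summaries are additive, two microclusters can be merged by summing their fields, which is what makes the online phase cheap; the metaheuristic optimisation in OpStream then only has to place the initial cluster centres well.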
Adaptive Index Buffer
With rapidly growing datasets and more dynamic workloads, adaptive partial indexing becomes an important way to keep indexing efficient. While the workload is changing, query performance suffers from inefficient table scans until the index tuning mechanism has adapted the partial index. In this paper we present the Adaptive Index Buffer, which reduces the cost of table scans by quickly indexing tuples in memory until the partial index has adapted to the workload again. We explain the basic operating mode of an Index Buffer and discuss how it adapts to changing workload situations. Further, we present three experiments that show the Index Buffer at work.
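The mechanism described above can be pictured as a small in-memory structure consulted between the partial index and a full table scan: when a lookup misses both, the fallback scan indexes every tuple it touches, so repeated lookups during the adaptation window become buffer hits. This is a hypothetical sketch under assumed names and a toy table layout, not the paper's implementation:

```python
# Hypothetical sketch of an adaptive index buffer: while the partial index
# still lacks entries for a key, tuples seen during the fallback scan are
# indexed in memory so later lookups avoid further table scans.

class IndexBuffer:
    def __init__(self):
        self.buffer = {}          # key -> list of row ids, built on the fly

    def lookup(self, key, table, partial_index):
        if key in partial_index:          # served by the adapted partial index
            return partial_index[key]
        if key in self.buffer:            # served by the in-memory buffer
            return self.buffer[key]
        # Fall back to a single table scan, but index every tuple seen so
        # the next lookup for any scanned key is a buffer hit.
        for rid, (k, _value) in enumerate(table):
            self.buffer.setdefault(k, []).append(rid)
        return self.buffer.get(key, [])
```

Once the tuning mechanism has extended the partial index to cover the hot key range, the buffered entries become redundant and can be discarded, which matches the paper's framing of the buffer as a bridge over the adaptation phase.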