Anthropogenic Space Weather
Anthropogenic effects on the space environment started in the late 19th
century and reached their peak in the 1960s when high-altitude nuclear
explosions were carried out by the USA and the Soviet Union. These explosions
created artificial radiation belts near Earth that resulted in major damage to
several satellites. Another unexpected impact of the high-altitude nuclear
tests was the electromagnetic pulse (EMP) that can have devastating effects
over a large geographic area (as large as the continental United States). Other
anthropogenic impacts on the space environment include chemical release
experiments, high-frequency wave heating of the ionosphere and the interaction of
VLF waves with the radiation belts. This paper reviews the fundamental physical
processes behind these phenomena and discusses the observations of their impacts.
Comment: 71 pages, 35 figures
Selecting classification algorithms with active testing
Given the large number of data mining algorithms, their combinations (e.g. ensembles) and possible parameter settings, finding the most adequate method to analyze a new dataset becomes an ever more challenging task. This is because in many cases testing all potentially useful alternatives quickly becomes prohibitively expensive. In this paper we propose a novel technique, called active testing, that intelligently selects the most useful cross-validation tests. It proceeds in a tournament-style fashion: in each round it selects and tests the algorithm that is most likely to outperform the best algorithm of the previous round on the new dataset. This ‘most promising’ competitor is chosen based on a history of prior duels between both algorithms on similar datasets. Each new cross-validation test contributes information to a better estimate of dataset similarity, and thus better predicts which algorithms are most promising on the new dataset. We also follow a different path, estimating dataset similarity from data characteristics. We have evaluated this approach using a set of 292 algorithm-parameter combinations on 76 UCI datasets for classification. The results show that active testing quickly yields an algorithm whose performance is very close to the optimum, after relatively few tests. It also provides a better solution than previously proposed methods. The variants of our method that rely on cross-validation tests to estimate dataset similarity provide better solutions than those that rely on data characteristics.
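The tournament-style selection loop described in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the similarity estimate (fraction of shared duels with the same winner), the duel-history encoding, and all function names are assumptions made for the sketch.

```python
def dataset_similarity(observed, past):
    """Weight a past dataset by how often its duels agree with the
    duel outcomes observed so far on the new dataset (illustrative
    stand-in for the paper's similarity estimate)."""
    shared = set(observed) & set(past)
    if not shared:
        return 0.5  # no shared evidence yet: neutral weight
    return sum(observed[k] == past[k] for k in shared) / len(shared)

def active_testing(candidates, duel_history, run_cv, n_rounds):
    """Tournament-style algorithm selection.

    candidates:   list of algorithm identifiers
    duel_history: {dataset_id: {(a, b): 1 if a beat b on that dataset, else 0}}
    run_cv:       callable(algorithm) -> cross-validated score on the new dataset
    n_rounds:     number of cross-validation tests to spend
    """
    best = candidates[0]              # seed the tournament (e.g. the global winner)
    scores = {best: run_cv(best)}
    observed = {}                     # duels seen on the new dataset: (a, b) -> outcome

    for _ in range(n_rounds):
        untested = [c for c in candidates if c not in scores]
        if not untested:
            break
        # Re-estimate dataset similarity from the duels observed so far.
        weights = {d: dataset_similarity(observed, past)
                   for d, past in duel_history.items()}

        # Similarity-weighted probability that a challenger beats the current best.
        def p_beats_best(c):
            num = den = 0.0
            for d, past in duel_history.items():
                if (c, best) in past:
                    num += weights[d] * past[(c, best)]
                    den += weights[d]
            return num / den if den else 0.5

        # Test the most promising challenger and update the incumbent.
        challenger = max(untested, key=p_beats_best)
        scores[challenger] = run_cv(challenger)
        won = int(scores[challenger] > scores[best])
        observed[(challenger, best)] = won
        if won:
            best = challenger
    return best, scores
```

Each call to run_cv is one of the expensive cross-validation tests the method tries to economize on; the similarity weights are what let earlier test results sharpen the choice of the next duel.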
Algorithm Selection as a Bandit Problem with Unbounded Losses