
    Scratches from the Past: Inflationary Archaeology through Features in the Power Spectrum of Primordial Fluctuations

    Inflation may provide unique insight into physics at the highest available energy scales, which cannot be replicated in any realistic terrestrial experiment. Features in the primordial power spectrum are generically predicted in a wide class of models of inflation and its alternatives, and are observationally one of the most overlooked channels for finding evidence for non-minimal inflationary models. Constraints from observations of the cosmic microwave background cover the widest range of feature frequencies, but the most sensitive constraints will come from future large-scale structure surveys that can measure the largest number of linear and quasi-linear modes. Comment: 5 pages + references, 1 figure; science white paper submitted to the Astro2020 decadal survey.

    Multi-label Ferns for Efficient Recognition of Musical Instruments in Recordings

    In this paper we introduce multi-label ferns and apply this technique to the automatic classification of musical instruments in audio recordings. We compare the performance of the proposed method with a set of binary random ferns, using jazz recordings as input data. Our main result is much faster classification together with a higher F-score. We also achieve a substantial reduction in model size.
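The fern classifier described in the abstract above can be sketched compactly. The following is a minimal illustrative implementation of multi-label random ferns, not the paper's code: the random feature tests, the number of ferns, the Laplace smoothing and the 0.5 decision threshold are all assumptions made for the sketch.

```python
import numpy as np

class MultiLabelFerns:
    """Minimal multi-label random-ferns sketch (illustrative only).

    Each fern draws `depth` random (feature, threshold) tests; a sample's
    binary test outcomes index one of 2**depth leaves.  Leaves store
    smoothed per-label frequencies, which are averaged over all ferns at
    prediction time -- hence classification costs only table look-ups.
    """

    def __init__(self, n_ferns=10, depth=5, seed=0):
        self.n_ferns, self.depth = n_ferns, depth
        self.rng = np.random.default_rng(seed)

    def _leaf_index(self, X, feats, thresholds):
        bits = (X[:, feats] > thresholds).astype(int)   # (n_samples, depth)
        return bits @ (1 << np.arange(self.depth))      # leaf id per sample

    def fit(self, X, Y):
        n_labels = Y.shape[1]
        self.ferns = []
        for _ in range(self.n_ferns):
            feats = self.rng.integers(0, X.shape[1], self.depth)
            thresholds = self.rng.uniform(X.min(), X.max(), self.depth)
            counts = np.zeros((2 ** self.depth, n_labels))
            totals = np.zeros(2 ** self.depth)
            for leaf, y in zip(self._leaf_index(X, feats, thresholds), Y):
                counts[leaf] += y                       # label hits per leaf
                totals[leaf] += 1                       # samples per leaf
            probs = (counts + 1) / (totals[:, None] + 2)  # Laplace smoothing
            self.ferns.append((feats, thresholds, probs))
        return self

    def predict(self, X):
        score = np.zeros((X.shape[0], self.ferns[0][2].shape[1]))
        for feats, thresholds, probs in self.ferns:
            score += probs[self._leaf_index(X, feats, thresholds)]
        return (score / self.n_ferns >= 0.5).astype(int)  # per-label decision
```

Because each fern is just a handful of comparisons followed by a table look-up, prediction cost grows only with the number of ferns, which is the source of the speed advantage over per-label binary classifiers.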

    Informed selection and use of training examples for knowledge refinement

    Knowledge refinement tools seek to correct faulty rule-based systems by identifying and repairing faults indicated by training examples. This thesis proposes mechanisms that improve the effectiveness and efficiency of refinement tools through better selection and use of training examples. The refinement task is sufficiently complex that the space of possible refinements demands heuristic search. Refinement tools typically use hill-climbing search to identify suitable repairs, but run the risk of getting caught in local optima. A novel contribution of this thesis is solving the local-optima problem by converting the hill-climbing search into a best-first search that can backtrack to previous refinement states. The thesis explores how different backtracking heuristics and training-example ordering heuristics affect refinement effectiveness and efficiency. Refinement tools rely on a representative set of training examples to identify faults and to influence repair choices. In real environments it is often difficult to obtain a large set of training examples, since each problem-solving task must be labelled with the expert's solution. Another novel aspect introduced in this thesis is informed selection of examples for knowledge refinement, where suitable examples are selected from a set of unlabelled examples so that only this subset needs to be labelled. Conversely, if a large set of labelled examples is available, it still makes sense to have mechanisms that can select a representative subset beneficial for the refinement task, thereby avoiding unnecessary example-processing costs. Finally, an experimental evaluation of example utilisation and selection strategies on two artificial domains and one real application is presented.
    Informed backtracking is able to deal effectively with local optima by moving the search to more promising areas, while informed ordering of training examples reduces search effort by ensuring that more pressing faults are dealt with early in the search. Additionally, the example selection methods achieve similar refinement accuracy with significantly fewer examples.
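The conversion from hill-climbing to backtracking best-first search described above can be illustrated generically. The sketch below is not the thesis's refinement tool; the state representation, `neighbours` generator, scoring function and step budget are all placeholders, with `score` standing in for something like the fraction of training examples a candidate rule base handles correctly.

```python
import heapq

def best_first_refine(initial, neighbours, score, max_steps=100):
    """Illustrative best-first search over refinement states.

    Unlike hill-climbing, every generated state stays on the frontier,
    so when the current branch stops improving, the search backtracks to
    the best unexplored state instead of halting at a local optimum.
    """
    # Python's heapq is a min-heap, so scores are negated for max-first order.
    frontier = [(-score(initial), 0, initial)]
    seen = {initial}
    best, tick = initial, 0
    for _ in range(max_steps):
        if not frontier:
            break
        neg_s, _, state = heapq.heappop(frontier)   # most promising state
        if -neg_s > score(best):
            best = state
        for nxt in neighbours(state):
            if nxt not in seen:                     # avoid revisiting states
                seen.add(nxt)
                tick += 1                           # insertion-order tie-break
                heapq.heappush(frontier, (-score(nxt), tick, nxt))
    return best
```

On a score landscape with a local optimum, hill-climbing from a poor start would stop at the first peak; this search keeps the abandoned states on the frontier and eventually climbs the higher peak.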

    Unveiling the Dynamics of the Universe

    We explore the dynamics and evolution of the Universe at early and late times, focusing on both dark energy and extended gravity models and their astrophysical and cosmological consequences. Modified theories of gravity not only provide an alternative explanation for the recent expansion history of the universe, but also offer a paradigm fundamentally distinct from the simplest dark energy models of cosmic acceleration. In this review, we perform a detailed theoretical and phenomenological analysis of different modified gravity models and investigate their consistency. We also consider the cosmological implications of well-motivated physical models of the early universe, with a particular emphasis on inflation and topological defects. Astrophysical and cosmological tests over a wide range of scales, from the solar system to the observable horizon, severely restrict the allowed models of the Universe. Here, we review several observational probes -- including gravitational lensing, galaxy clusters, cosmic microwave background temperature and polarization, supernova and baryon acoustic oscillation measurements -- and their relevance in constraining our cosmological description of the Universe. Comment: 94 pages, 14 figures. Review paper accepted for publication in the Special Issue of Symmetry "Symmetry: Feature Papers 2016". V2: matches published version, now 79 pages (new format).

    Mineralogy and distribution of critical elements in the Sn–W–Pb–Ag–Zn Huanuni deposit, Bolivia

    The polymetallic Huanuni deposit, a world-class tin deposit, is part of the Bolivian tin belt. As a likely case of a "mesothermal" deposit transitional between the epithermal and porphyry Sn types (or a shallow porphyry Sn), it contributes significantly to the systematic study of the distribution of critical elements within the "family" of Bolivian tin deposits. In addition to Sn, Zn and Ag, further economic interest in the area resides in its potential for critical elements such as In, Ga and Ge. This paper provides the first systematic characterisation of the complex mineralogy and mineral chemistry of the Huanuni deposit, using a multi-methodological approach, with the twofold aim of identifying the mineral carriers of critical elements and proposing plausible metallogenic processes for the formation of the deposit. With In concentrations consistently over 2000 ppm, the highest potential for relevant concentrations of this metal resides in the widespread tin minerals (cassiterite and stannite) and in sphalerite. Hypogene alteration assemblages are poorly developed owing to the metasedimentary nature of the host rocks, but the occurrence of potassium feldspar, schorl, pyrophyllite and dickite as vein material points to potassic to phyllic or advanced argillic alteration assemblages and to relatively high-temperature (and low-pH) mineralising fluids. District-scale mineralogical zonation suggests a thermal zonation with decreasing temperatures from the central to the peripheral areas.
    A district-scale zonation has also been determined for δ34S(VCDT) values, which range from -7.2‰ to 0.2‰ (mostly -7‰ to -5‰) in the central area and from -4.2‰ to 1.0‰ (mainly between -2‰ and 1‰) in the peripheral areas. Such values are consistent with magmatic and metasedimentary sources of sulfur, and their spatial zoning may be related to differential reactivity between mineralising fluids and host rocks, decreasing outward from the central to the peripheral areas.

    Probabilistic and Semantic Descriptions of Image Manifolds and Their Applications

    This paper begins with a description of methods for estimating probability density functions for images, reflecting the observation that such data are usually constrained to lie in restricted regions of the high-dimensional image space: not every pattern of pixels is an image. It is common to say that images lie on a lower-dimensional manifold in the high-dimensional space. However, although images may lie on such lower-dimensional manifolds, it is not the case that all points on the manifold have an equal probability of being images. Images are unevenly distributed on the manifold, and our task is to devise ways to model this distribution as a probability distribution. In pursuing this goal, we consider generative models that are popular in the AI and computer vision communities. For our purposes, generative/probabilistic models should have two properties: 1) sample generation: it should be possible to sample from the distribution according to the modelled density function; and 2) probability computation: given a previously unseen sample from the dataset of interest, one should be able to compute the probability of the sample, at least up to a normalising constant. To this end, we investigate the use of methods such as normalising flows and diffusion models. We then show that such probabilistic descriptions can be used to construct defences against adversarial attacks. In addition to describing the manifold in terms of density, we also consider how semantic interpretations can be used to describe points on the manifold. To this end, we consider an emergent-language framework which makes use of variational encoders to produce a disentangled representation of points that reside on a given manifold. Trajectories between points on a manifold can then be described in terms of evolving semantic descriptions. Comment: 23 pages, 17 figures, 1 table.
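The two required properties, sample generation and probability computation, are easiest to see in the simplest possible normalising flow: a one-dimensional invertible affine map with a standard-normal base density. This is an illustrative toy, not a model from the paper; the parameters `mu` and `sigma` are arbitrary, and real flows stack many learned invertible layers.

```python
import numpy as np

# Forward map z = (x - mu) / sigma sends data to the base density N(0, 1).
# The change-of-variables formula then gives an exact log-density:
#   log p(x) = log N(z; 0, 1) + log |dz/dx|,   with |dz/dx| = 1 / sigma.
mu, sigma = 2.0, 0.5        # illustrative parameters, not fitted to any data

def log_prob(x):
    """Property 2: exact probability computation via change of variables."""
    z = (x - mu) / sigma
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard-normal log-pdf
    log_det = -np.log(sigma)                          # log |dz/dx|
    return log_base + log_det

def sample(n, rng=None):
    """Property 1: sampling by pushing base samples through the inverse map."""
    rng = rng or np.random.default_rng(0)
    return mu + sigma * rng.standard_normal(n)
```

For this affine flow, `log_prob` reproduces the density of N(mu, sigma^2) exactly; deeper flows keep the same two-term structure but accumulate one log-determinant per layer.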

    Novelty Detection And Cluster Analysis In Time Series Data Using Variational Autoencoder Feature Maps

    The identification of atypical events and anomalies in complex data systems is an essential yet challenging task. The dynamic nature of these systems produces huge volumes of data that are often heterogeneous, and failure to account for this impedes the detection of anomalies. Time series data encompass these issues, and their high-dimensional nature intensifies the challenges. This research presents a framework for the identification of anomalies in temporal data. A comparative analysis of centroid-, density- and neural-network-based clustering techniques was performed and their scalability assessed. This facilitated the development of a new algorithm, the Variational Autoencoder Feature Map (VAEFM), an ensemble method based on Kohonen's Self-Organizing Maps (SOM) and variational autoencoders. The VAEFM is an unsupervised learning algorithm that models the distribution of temporal data without making a priori assumptions. It incorporates principles of novelty detection to enhance the representational capacity of SOM neurons, which improves their ability to generalize to novel data. The VAEFM technique was demonstrated on a dataset of accumulated aircraft sensor recordings to detect atypical events that transpired in the approach phase of flight. This is a proactive means of accident prevention and is therefore advantageous to the aviation industry. Furthermore, accumulated aircraft data present big-data challenges, which require scalable analytical solutions. The results indicated that the VAEFM successfully identified temporal dependencies in the flight data and produced several clusters and outliers. It analyzed over 2500 flights in under 5 minutes and identified 12 clusters, two of which contained stabilized approaches. The remainder comprised aborted approaches, excessively high/fast descent patterns and other contributory factors for unstabilized approaches.
    Outliers were detected that revealed oscillations in aircraft trajectories, some of which would have a lower detection rate using traditional flight-safety analysis techniques. The results further indicated that the VAEFM facilitates large-scale analysis; its scaling efficiency was demonstrated on a high-performance computing system using an increased number of processors, where it achieved an average speedup of 70%.
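The SOM half of the VAEFM, using best-matching-unit quantization error as a novelty score, can be sketched in isolation. This toy omits the variational-autoencoder feature extraction entirely and uses made-up hyperparameters (grid size, learning rate, neighborhood schedule), so it illustrates the underlying idea rather than the VAEFM itself.

```python
import numpy as np

def train_som(X, grid=(4, 4), epochs=20, lr=0.5, seed=0):
    """Train a tiny Self-Organizing Map on data X of shape (n, d).

    Each of grid[0]*grid[1] units holds a weight vector; on every sample
    the best-matching unit (BMU) and its grid neighbors are pulled toward
    the sample, with a neighborhood radius that shrinks over epochs.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for e in range(epochs):
        sigma = max(1.0, grid[0] / 2 * (1 - e / epochs))   # shrinking radius
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            # Gaussian neighborhood on the 2-D unit grid around the BMU.
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                       / (2 * sigma ** 2))
            W += lr * (1 - e / epochs) * h[:, None] * (x - W)
    return W

def novelty_score(W, x):
    """Quantization error: distance from x to its best-matching unit.

    Samples far from every learned prototype score high, i.e. novel.
    """
    return np.sqrt(((W - x) ** 2).sum(axis=1).min())
```

After training on normal data, thresholding `novelty_score` flags samples the map cannot represent well, which is the novelty-detection principle the abstract describes.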