
    Towards a Continuous Record of the Sky

    It is currently feasible to start a continuous digital record of the entire sky sensitive to any visual magnitude brighter than 15 each night. Such a record could be created with a modest array of small telescopes, which collectively generate no more than a few Gigabytes of data daily. Alternatively, a few small telescopes could continually re-point to scan and record the entire sky down to any visual magnitude brighter than 15 with a recurrence epoch of at most a few weeks, again always generating less than one Gigabyte of data each night. These estimates derive from CCD capability and budgets typical of university research projects. As a prototype, we have developed and are utilizing an inexpensive single-telescope system that obtains optical data from about 1500 square degrees. We discuss the general case of creating and storing data from both an epochal survey, where a small number of telescopes continually scan the sky, and a continuous survey, composed of a constellation of telescopes, each dedicated to continually inspecting a designated section of the sky. We compute specific limitations of canonical surveys in visible light, and estimate that all-sky continuous visual light surveys could be sensitive to magnitude 20 in a single night by about 2010. Possible scientific returns of continuous and epochal sky surveys include continued monitoring of most known variable stars, establishing case histories for variables of future interest, uncovering new forms of stellar variability, discovering the brightest cases of microlensing, discovering new novae and supernovae, discovering new counterparts to gamma-ray bursts, monitoring known Solar System objects, discovering new Solar System objects, and discovering objects that might strike the Earth. Comment: 38 pages, 9 postscript figures, 2 gif images. Revised and new section added. Accepted to PASP. Source code submitted to ASCL.net
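The nightly data volumes quoted above follow from simple pixel-count arithmetic. The short sketch below reproduces that style of estimate; the pixel scale, bytes per pixel, and single-pass nightly coverage are illustrative assumptions, not figures taken from the paper.

```python
# Back-of-the-envelope estimate of nightly data volume for an all-sky survey.
# All parameters are illustrative assumptions, not values from the paper.

SKY_SQ_DEG = 41_253          # total solid angle of the sky in square degrees
PIXEL_SCALE_ARCSEC = 15.0    # assumed coarse pixel scale (arcsec per pixel)
BYTES_PER_PIXEL = 2          # assumed 16-bit CCD readout
PASSES_PER_NIGHT = 1         # one pass over the sky per night

pixels_per_sq_deg = (3600.0 / PIXEL_SCALE_ARCSEC) ** 2
total_pixels = SKY_SQ_DEG * pixels_per_sq_deg
nightly_bytes = total_pixels * BYTES_PER_PIXEL * PASSES_PER_NIGHT

print(f"pixels per night: {total_pixels:.2e}")
print(f"raw data per night: {nightly_bytes / 1e9:.1f} GB")
```

With these assumed numbers the raw volume comes out at roughly 5 GB per night, consistent with the "few Gigabytes" scale the abstract cites.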

    A Unified Approach for Representing Structurally-Complex Models in SBML Level 3

    The aim of this document is to explore a unified approach to handling several of the proposed extensions to the SBML Level 3 Core specification. The approach is illustrated with reference to Simile, a modelling environment which appears to have most of the capabilities of the various SBML Level 3 package proposals that deal with model structure. Simile (http://www.simulistics.com) is a visual modelling environment for continuous systems modelling which includes the ability to handle complex disaggregation of model structure by allowing the modeller to specify classes of objects and the relationships between them.

The note is organised around the 6 packages listed on the SBML Level 3 Proposals web page (http://sbml.org/Community/Wiki/SBML_Level_3_Proposals) which deal with model structure, namely comp, arrays, spatial, geom, dyn and multi. For each one, I consider how the requirements which motivated the package can be handled using Simile's unified approach. Although Simile has a declarative model-representation language (in both Prolog and XML syntax), I use Simile diagrams and equation syntax throughout, since this is more compact and readable than large chunks of XML.

The conclusion is that Simile can indeed meet most of the requirements of these various packages using a generic set of constructs: basically, the multiple-instance submodel, the concept of a relationship (association) between submodels, and array variables. This suggests the possibility of having a single SBML Level 3 extension package similar to the Simile data model, rather than a series of separate packages. Such an approach has a number of potential advantages and disadvantages compared with the current set of discrete packages; these are discussed in this paper.
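As a rough illustration of how the three generic constructs named above might be expressed outside Simile's own syntax, the following sketch renders them as plain data structures; the class names and example population are hypothetical and are not taken from Simile or from any SBML package proposal.

```python
from dataclasses import dataclass, field

# Hypothetical rendering of the three constructs: multiple-instance submodels,
# relationships (associations) between submodels, and array-valued variables.

@dataclass
class SubmodelInstance:
    kind: str                                        # class of object, e.g. "Cell"
    variables: dict = field(default_factory=dict)    # scalar or array variables

@dataclass
class Relationship:
    name: str                                        # association, e.g. "neighbour_of"
    source: SubmodelInstance
    target: SubmodelInstance

# Two instances of the same submodel class, each carrying an array variable,
# linked by a single relationship.
a = SubmodelInstance("Cell", {"concentration": [0.1, 0.2, 0.3]})
b = SubmodelInstance("Cell", {"concentration": [0.4, 0.5, 0.6]})
link = Relationship("neighbour_of", a, b)

print(link.name, link.source.kind, "->", link.target.kind)
```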

    A Survey on Array Storage, Query Languages, and Systems

    Since scientific investigation is one of the most important providers of massive amounts of ordered data, there is a renewed interest in array data processing in the context of Big Data. To the best of our knowledge, a unified resource that summarizes and analyzes array processing research over its long existence is currently missing. In this survey, we provide a guide for past, present, and future research in array processing. The survey is organized along three main topics. Array storage discusses all the aspects related to array partitioning into chunks. The identification of a reduced set of array operators to form the foundation for an array query language is analyzed across multiple such proposals. Lastly, we survey real systems for array processing. The result is a thorough survey on array data storage and processing that should be consulted by anyone interested in this research topic, independent of experience level. The survey is not complete, though; we greatly appreciate pointers towards any work we might have forgotten to mention. Comment: 44 pages
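Array partitioning into chunks, the first topic the survey covers, can be pictured with a short sketch. The regular tiling below is only one of the chunking strategies such systems use, and the array and chunk sizes are arbitrary illustrative choices.

```python
import numpy as np

def chunk_2d(array: np.ndarray, chunk_rows: int, chunk_cols: int):
    """Split a 2-D array into regular chunks (ragged at the edges),
    yielding each chunk's grid coordinates together with its data."""
    n_rows, n_cols = array.shape
    for i in range(0, n_rows, chunk_rows):
        for j in range(0, n_cols, chunk_cols):
            yield (i // chunk_rows, j // chunk_cols), array[i:i + chunk_rows, j:j + chunk_cols]

# Example: tile a 6x6 array into 4x4 chunks; edge chunks are smaller.
data = np.arange(36).reshape(6, 6)
for coords, chunk in chunk_2d(data, 4, 4):
    print(coords, chunk.shape)
```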

    Sussing merger trees: a proposed merger tree data format

    We propose a common terminology for use in describing both temporal merger trees and spatial structure trees for dark-matter halos. We specify a unified data format in HDF5 and provide example I/O routines in C, FORTRAN and PYTHON.
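Because the proposed format is HDF5, a reader can be built with standard HDF5 bindings. The sketch below uses Python's h5py; the file name, group path, and dataset names are hypothetical placeholders, not the names defined by the proposal.

```python
import h5py
import numpy as np

# Minimal sketch of reading a merger-tree file stored in HDF5.
# The file name and dataset layout below are hypothetical placeholders;
# the actual layout is defined by the proposed data format.
with h5py.File("merger_trees.hdf5", "r") as f:
    halo_ids = np.asarray(f["MergerTree/HaloID"])
    desc_ids = np.asarray(f["MergerTree/DescendantID"])

# Map each halo to its descendant, skipping halos with no descendant (-1).
descendant_of = {h: d for h, d in zip(halo_ids, desc_ids) if d != -1}
print(f"read {len(halo_ids)} halos, {len(descendant_of)} progenitor links")
```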

    Tensor-on-tensor regression

    We propose a framework for the linear prediction of a multi-way array (i.e., a tensor) from another multi-way array of arbitrary dimension, using the contracted tensor product. This framework generalizes several existing approaches, including methods to predict a scalar outcome from a tensor, a matrix from a matrix, or a tensor from a scalar. We describe an approach that exploits the multiway structure of both the predictors and the outcomes by restricting the coefficients to have reduced CP-rank. We propose a general and efficient algorithm for penalized least-squares estimation, which allows for a ridge (L_2) penalty on the coefficients. The objective is shown to give the mode of a Bayesian posterior, which motivates a Gibbs sampling algorithm for inference. We illustrate the approach with an application to facial image data. An R package is available at https://github.com/lockEF/MultiwayRegression. Comment: 33 pages, 3 figures
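The contracted tensor product at the heart of the framework is easy to state concretely. The sketch below is a numpy illustration (not the authors' R package) of predicting a tensor response from a tensor predictor through a coefficient array with a rank-R CP structure; all dimensions and the rank are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n samples, predictor modes p1 x p2,
# response modes q1 x q2, CP rank R for the coefficient array.
n, p1, p2, q1, q2, R = 50, 4, 3, 5, 2, 2

X = rng.normal(size=(n, p1, p2))

# Rank-R CP factors of the coefficient array B (shape p1 x p2 x q1 x q2).
U1 = rng.normal(size=(p1, R))
U2 = rng.normal(size=(p2, R))
V1 = rng.normal(size=(q1, R))
V2 = rng.normal(size=(q2, R))
B = np.einsum("ar,br,cr,dr->abcd", U1, U2, V1, V2)

# Contracted tensor product: contract the predictor modes of X with the
# matching modes of B to obtain an n x q1 x q2 prediction.
Y_hat = np.einsum("iab,abcd->icd", X, B)
print(Y_hat.shape)   # (50, 5, 2)
```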

    Building Efficient Query Engines in a High-Level Language

    Abstraction without regret refers to the vision of using high-level programming languages for systems development without experiencing a negative impact on performance. A database system designed according to this vision offers both increased productivity and high performance, instead of sacrificing the former for the latter as is the case with existing, monolithic implementations that are hard to maintain and extend. In this article, we realize this vision in the domain of analytical query processing. We present LegoBase, a query engine written in the high-level language Scala. The key technique to regain efficiency is to apply generative programming: LegoBase performs source-to-source compilation and optimizes the entire query engine by converting the high-level Scala code to specialized, low-level C code. We show how generative programming allows one to easily implement a wide spectrum of optimizations, such as introducing data partitioning or switching from a row to a column data layout, which are difficult to achieve with existing low-level query compilers that handle only queries. We demonstrate that sufficiently powerful abstractions are essential for dealing with the complexity of the optimization effort, shielding developers from compiler internals and decoupling individual optimizations from each other. We evaluate our approach with the TPC-H benchmark and show that: (a) With all optimizations enabled, LegoBase significantly outperforms a commercial database and an existing query compiler. (b) Programmers need to provide just a few hundred lines of high-level code for implementing the optimizations, instead of the complicated low-level code that is required by existing query compilation approaches. (c) The compilation overhead is low compared to the overall execution time, thus making our approach usable in practice for compiling query engines.
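One of the optimizations named above, switching from a row layout to a column layout, can be pictured with a small sketch. This is a generic Python illustration rather than LegoBase's Scala implementation, and the table contents are made up.

```python
# Generic illustration of a row-to-column layout switch (not LegoBase code).
# Row layout: one record per tuple; column layout: one array per attribute.
rows = [
    {"id": 1, "price": 9.99, "qty": 3},
    {"id": 2, "price": 4.50, "qty": 7},
    {"id": 3, "price": 1.25, "qty": 2},
]

# Convert to a columnar layout: a scan that touches one attribute then reads
# a single contiguous array, which is the point of the optimization.
columns = {key: [row[key] for row in rows] for key in rows[0]}

total_revenue = sum(p * q for p, q in zip(columns["price"], columns["qty"]))
print(columns["price"])   # [9.99, 4.5, 1.25]
print(total_revenue)
```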

    A Case for Redundant Arrays of Hybrid Disks (RAHD)

    The hybrid hard disk drive, which incorporates flash memory into a magnetic disk, was originally conceived by Samsung. The combined ultra-high-density benefits of magnetic storage and the low-power, fast read access of NAND technology inspire us to construct Redundant Arrays of Hybrid Disks (RAHD) to offer a possible alternative to today’s Redundant Arrays of Independent Disks (RAIDs) and/or Massive Arrays of Idle Disks (MAIDs). We first design an internal management system (including Energy-Efficient Control) for hybrid disks. Three traces collected from real systems as well as a synthetic trace are then used to evaluate the RAHD arrays. The trace-driven experimental results show: in the high-speed mode, a RAHD outperforms the purely-magnetic-disk-based RAIDs by a factor of 2.4–4; in the energy-efficient mode, a RAHD4/5 can save up to 89% of energy at little performance degradation.
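The internal-management idea, serving hot blocks from flash and falling back to the spinning platter only on a miss, can be sketched generically. The policy below is a simplified illustration under assumed behaviour, not the paper's Energy-Efficient Control algorithm.

```python
# Simplified sketch of a hybrid-disk read path: flash first, magnetic platter
# on a miss. This is an illustrative policy, not the paper's controller.
class HybridDisk:
    def __init__(self):
        self.flash = {}                 # block -> data cached in NAND flash
        self.platter_spinning = False

    def read(self, block: int) -> str:
        if block in self.flash:         # fast, low-power path
            return self.flash[block]
        if not self.platter_spinning:   # pay the spin-up cost once
            self.platter_spinning = True
        data = f"data-{block}"          # stand-in for a platter read
        self.flash[block] = data        # promote the block to flash
        return data

disk = HybridDisk()
print(disk.read(42))   # platter read, then cached
print(disk.read(42))   # served from flash
```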

    Lenia and Expanded Universe

    We report experimental extensions of Lenia, a continuous cellular automata family capable of producing lifelike self-organizing autonomous patterns. The rule of Lenia was generalized into higher dimensions, multiple kernels, and multiple channels. The final architecture approaches what can be seen as a recurrent convolutional neural network. Using semi-automatic search, e.g. a genetic algorithm, we discovered new phenomena like polyhedral symmetries, individuality, self-replication, emission, growth by ingestion, and saw the emergence of "virtual eukaryotes" that possess internal division of labor and type differentiation. We discuss the results in the contexts of biology, artificial life, and artificial intelligence. Comment: 8 pages, 5 figures, 1 table; submitted to ALIFE 2020 conference
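The generalized rule builds on the basic Lenia update, in which a kernel convolution feeds a growth mapping that increments the world state. The sketch below is a minimal single-channel, single-kernel illustration; the kernel shape and growth-function parameters are assumed, illustrative values rather than settings from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

# Minimal single-channel, single-kernel Lenia-style update step.
def gaussian_growth(u, mu=0.15, sigma=0.015):
    """Growth mapping: positive near mu, negative elsewhere (illustrative)."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def ring_kernel(radius=13):
    """Smooth ring-shaped kernel, normalized to sum to one (illustrative)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.sqrt(x ** 2 + y ** 2) / radius
    mask = (r > 0) & (r < 1)
    k = np.zeros_like(r)
    k[mask] = np.exp(4.0 - 1.0 / (r[mask] * (1.0 - r[mask])))
    return k / k.sum()

def step(world, kernel, dt=0.1):
    """One update: convolve, apply growth, increment, clip to [0, 1]."""
    u = convolve2d(world, kernel, mode="same", boundary="wrap")
    return np.clip(world + dt * gaussian_growth(u), 0.0, 1.0)

kernel = ring_kernel()
world = np.random.default_rng(0).random((128, 128))
for _ in range(10):
    world = step(world, kernel)
print(world.mean())
```

Higher dimensions, multiple kernels, and multiple channels extend this same convolve-grow-clip loop, which is why the resulting architecture resembles a recurrent convolutional network.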

    A Memetic Analysis of a Phrase by Beethoven: Calvinian Perspectives on Similarity and Lexicon-Abstraction

    This article discusses some general issues arising from the study of similarity in music, both human-conducted and computer-aided, and then progresses to a consideration of similarity relationships between patterns in a phrase by Beethoven, from the first movement of the Piano Sonata in A flat major op. 110 (1821), and various potential memetic precursors. This analysis is followed by a consideration of how the kinds of similarity identified in the Beethoven phrase might be understood in psychological/conceptual and then neurobiological terms, the latter by means of William Calvin’s Hexagonal Cloning Theory. This theory offers a mechanism for the operation of David Cope’s concept of the lexicon, conceived here as a museme allele-class. I conclude by attempting to correlate and map the various spaces within which memetic replication occurs.