28 research outputs found

    Self-organizing lists on the Xnet

    The first parallel designs for implementing self-organizing lists on the Xnet interconnection network are presented. Self-organizing lists permute the order of their entries, according to some update heuristic, after an entry is accessed. The heuristic attempts to place frequently requested entries closer to the front of the list. This paper outlines Xnet systems for self-organizing lists under the move-to-front and transpose update heuristics. Our novel designs can be used to achieve high-speed lossless text compression.
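    The move-to-front and transpose heuristics are simple to state sequentially. The sketch below (plain Python, written for this summary rather than taken from the paper, which targets parallel Xnet hardware) only illustrates how each heuristic reorders the list after an access.

    class SelfOrganizingList:
        """Toy sequential self-organizing list; the paper's Xnet designs parallelize this idea."""

        def __init__(self, entries, heuristic="move_to_front"):
            self.entries = list(entries)
            self.heuristic = heuristic

        def access(self, key):
            """Locate key, apply the update heuristic, and return its old position."""
            i = self.entries.index(key)  # linear search from the front
            if i > 0 and self.heuristic == "move_to_front":
                self.entries.insert(0, self.entries.pop(i))  # bring the entry to the front
            elif i > 0 and self.heuristic == "transpose":
                # swap the entry with its immediate predecessor
                self.entries[i - 1], self.entries[i] = self.entries[i], self.entries[i - 1]
            return i

    lst = SelfOrganizingList("abcde", heuristic="move_to_front")
    for ch in "eecee":
        lst.access(ch)
    print(lst.entries)  # the frequently requested 'e' has migrated to the front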

    The NASA Exoplanet Archive: Data and Tools for Exoplanet Research

    We describe the contents and functionality of the NASA Exoplanet Archive, a database and tool set funded by NASA to support astronomers in the exoplanet community. The current content of the database includes interactive tables containing properties of all published exoplanets, Kepler planet candidates, threshold-crossing events, data validation reports and target stellar parameters, light curves from the Kepler and CoRoT missions and from several ground-based surveys, and spectra and radial velocity measurements from the literature. Tools provided to work with these data include a transit ephemeris predictor, both for single planets and for observing locations, light curve viewing and normalization utilities, and a periodogram and phased light curve service. The archive can be accessed at http://exoplanetarchive.ipac.caltech.edu. Comment: Accepted for publication in the Publications of the Astronomical Society of the Pacific; 4 figures.
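    For programmatic access, a minimal sketch is shown below. It assumes the archive's Table Access Protocol (TAP) endpoint and its "ps" (Planetary Systems) table; these are features of the present-day service, not details given in the abstract, so check the archive documentation before relying on them.

    # Hedged sketch: pull a small confirmed-planet table over the archive's TAP service.
    # The endpoint, table, and column names below are assumptions about the current
    # interface, not taken from the paper.
    import csv
    import io
    import requests

    TAP_URL = "https://exoplanetarchive.ipac.caltech.edu/TAP/sync"
    query = "select pl_name, hostname, pl_orbper from ps where default_flag = 1"

    resp = requests.get(TAP_URL, params={"query": query, "format": "csv"}, timeout=60)
    resp.raise_for_status()

    rows = list(csv.DictReader(io.StringIO(resp.text)))
    print(f"{len(rows)} confirmed planets retrieved; first row: {rows[0]}")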

    The NASA Exoplanet Science Institute Archives: KOA and NStED

    The NASA Exoplanet Science Institute (NExScI) maintains a series of archival services in support of NASA’s planet finding and characterization goals. Two of the larger archival services at NExScI are the Keck Observatory Archive (KOA) and the NASA Star and Exoplanet Database (NStED). KOA, a collaboration between the W. M. Keck Observatory and NExScI, serves raw data from the High Resolution Echelle Spectrograph (HIRES) and extracted spectral browse products. As of June 2009, KOA hosts over 28 million files (4.7 TB) from over 2,000 nights. In Spring 2010, it will begin to serve data from the Near-Infrared Echelle Spectrograph (NIRSPEC). NStED is a general purpose archive with the aim of providing support for NASA’s planet finding and characterization goals, and stellar astrophysics. There are two principal components of NStED: a database of (currently) all known exoplanets, and images; and an archive dedicated to high-precision photometric surveys for transiting exoplanets. NStED is the US portal to the CNES mission CoRoT, the first space mission dedicated to the discovery and characterization of exoplanets. These archives share a common software and hardware architecture with the NASA/IPAC Infrared Science Archive (IRSA). The software architecture consists of standalone utilities that perform generic query and retrieval functions. They are called through program interfaces and plugged together to form applications through a simple executive library.
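    The closing sentences describe an architecture of small, generic query and retrieval utilities chained together by a thin executive. The sketch below is purely illustrative of that composition pattern; none of the function or field names come from IRSA, KOA, or NStED.

    # Hypothetical illustration of "standalone utilities plugged together through a
    # simple executive library"; all names here are invented for the example.
    def query_catalog(records, constraints):
        """Generic query step: keep records matching every constraint."""
        return [r for r in records if all(r.get(k) == v for k, v in constraints.items())]

    def retrieve_files(records, file_index):
        """Generic retrieval step: map matching records to archived file paths."""
        return [file_index[r["id"]] for r in records if r["id"] in file_index]

    def executive(steps, data):
        """Minimal executive: run each utility in order, feeding results forward."""
        for utility, kwargs in steps:
            data = utility(data, **kwargs)
        return data

    catalog = [{"id": "k1", "instrument": "HIRES"}, {"id": "k2", "instrument": "NIRSPEC"}]
    files = {"k1": "/archive/hires/k1.fits", "k2": "/archive/nirspec/k2.fits"}
    print(executive([(query_catalog, {"constraints": {"instrument": "HIRES"}}),
                     (retrieve_files, {"file_index": files})], catalog))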

    The Grizzly, February 8, 1985

    Ursinus Grading System a Problem? • Former DA Lectures on Alcohol • Library Abuse Called Academic Dishonesty • Suspected Conspiracy Makes Zack's Rest Uneasy • The Wismer Food Groups • CP & P Urges Students to Investigate Intern Options • Campus Life Considers Problems With Proposed Co-ed Dorms • Intramural Program Expands • Faculty Member Exhibits Art Work in Myrin • Heads Bring Magic to The Movies • Model U.N. • Scholarship Announced • Women Cagers Defeat Swarthmore • Grapplers Drop Two, Win One • Pharmacy Stops B-ball Streak • Badminton Beats Harcum, Loses to Rosemont • Fond Memories of The Bull • Lorelei Tonight • Lantern Offers Prize for Best Poem • Blockson to Speak

    Universal Properties of Mythological Networks

    As in statistical physics, the concept of universality plays an important, albeit qualitative, role in the field of comparative mythology. Here we apply statistical mechanical tools to analyse the networks underlying three iconic mythological narratives with a view to identifying common and distinguishing quantitative features. Of the three narratives, an Anglo-Saxon and a Greek text are mostly believed by antiquarians to be partly historically based, while the third, an Irish epic, is often considered to be fictional. Here we show that network analysis is able to discriminate real from imaginary social networks and place mythological narratives on the spectrum between them. Moreover, the perceived artificiality of the Irish narrative can be traced back to anomalous features associated with six characters. Considering these as amalgams of several entities or proxies renders the plausibility of the Irish text comparable to that of the others from a network-theoretic point of view. Comment: 6 pages, 3 figures, 2 tables. Updated to incorporate corrections from the EPL acceptance process.
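    The discrimination described here rests on structural statistics of character networks. The sketch below uses networkx and a made-up co-appearance edge list (not data from the paper) to show the kind of measures typically compared, such as degree assortativity, clustering, and mean path length.

    # Hedged sketch: compute a few structural measures on a toy character network.
    # The edge list is invented; only the measures illustrate the approach.
    import networkx as nx

    edges = [("Hero", "Companion"), ("Hero", "King"), ("Companion", "King"),
             ("Hero", "Rival"), ("Rival", "King"), ("Hero", "Seer")]
    G = nx.Graph(edges)

    print("nodes/edges:", G.number_of_nodes(), G.number_of_edges())
    print("degree assortativity:", nx.degree_assortativity_coefficient(G))
    print("mean clustering:", nx.average_clustering(G))
    print("mean shortest path length:", nx.average_shortest_path_length(G))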

    Dictionary Compression on the PRAM

    Parallel algorithms for lossless data compression via dictionary compression using optimal, longest-fragment-first (LFF), and greedy parsing strategies are described. Dictionary compression removes redundancy by replacing substrings of the input with references to strings stored in a dictionary. Given a static dictionary stored as a suffix tree, we present a CREW PRAM algorithm for optimal compression which runs in O(M + log M log n) time with O(nM^2) processors, where M is the maximum length of any dictionary entry. Under the same model, we give an algorithm for LFF compression which runs in O(log^2 n) time with O(n / log n) processors, where it is assumed that the maximum dictionary entry is of length O(log n). We also describe an O(M + log n) time and O(n) processor algorithm for greedy parsing given a static or sliding-window dictionary. For sliding-window compression, a different approach finds the greedy parsing in O(log n) time using O(nM log M / log n) processors. Our algorithms are practical in the sense that their analysis elicits small constants.
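    The greedy parsing strategy is easy to state sequentially: at each position, emit the longest dictionary phrase that matches, otherwise emit a literal. The serial sketch below illustrates just that rule; the paper's contribution is carrying it out in parallel on a PRAM.

    # Serial sketch of greedy parsing against a static dictionary; written for this
    # summary, it only shows the parsing rule that the paper parallelizes.
    def greedy_parse(text, dictionary):
        """Return (is_phrase, token) pairs covering text from left to right."""
        phrases = set(dictionary)
        max_len = max(map(len, phrases))  # M, the longest dictionary entry
        out, i = [], 0
        while i < len(text):
            match = ""
            for length in range(min(max_len, len(text) - i), 0, -1):
                if text[i:i + length] in phrases:  # longest match at position i
                    match = text[i:i + length]
                    break
            if match:
                out.append((True, match))
                i += len(match)
            else:
                out.append((False, text[i]))  # no phrase matches: emit a literal
                i += 1
        return out

    print(greedy_parse("abababc", ["ab", "abab", "ba"]))
    # [(True, 'abab'), (True, 'ab'), (False, 'c')]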