109 research outputs found

    An overlayable stack mechanism


    Fundamental Parameters of Eclipsing Binaries in the Kepler Field of View

    Accurate knowledge of stellar parameters such as mass, radius, effective temperature, and composition informs our understanding of stellar evolution and constrains theoretical models. Binaries and, in particular, eclipsing binaries make it possible to measure these parameters directly without reliance on models or scaling relations. In this dissertation we derive fundamental parameters of stars in close binary systems with and without (detected) tertiary companions to test and inform theories of stellar and binary evolution. A subsample of 41 detached and semi-detached short-period eclipsing binaries observed by NASA’s Kepler mission and analyzed for eclipse timing variations forms the basis of our sample. Radial velocities and spectroscopic orbits for these systems are derived from moderate resolution optical spectra and used to determine individual masses for 34 double-lined spectroscopic binaries, five of which have detected tertiaries. The resulting mass ratio M2/M1 distribution is bimodal, dominated by binaries with like-mass pairs and semi-detached classical Algol systems that have undergone mass transfer. A more detailed analysis of KIC 5738698, a detached binary consisting of two F-type main sequence stars with an orbital period of 4.8 days, uses the derived radial velocities to reconstruct the primary and secondary component spectra via Doppler tomography and derive atmospheric parameters for both stars. These parameters are then combined with Kepler photometry to obtain accurate masses and radii through light curve and radial velocity fitting with the binary modeling software ELC. A similar analysis is performed for KOI-81, a rapidly-rotating B-type star orbited by a low-mass white dwarf, using UV spectroscopy to identify the hot companion and determine masses and temperatures of both components. Well-defined stellar parameters for KOI-81 and the other close binary systems examined in this dissertation enable detailed analyses of the physical attributes of systems in different evolutionary stages, providing important constraints for the formation and evolution of close binary systems.
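The individual masses described above follow from the standard spectroscopic-orbit relation for a double-lined binary. As a minimal sketch (not the dissertation's ELC light-curve and radial-velocity fitting, and with the inclination and round-number semi-amplitudes invented for illustration), the component masses can be computed from the period, the two radial-velocity semi-amplitudes, and the inclination:

```python
import math

MSUN = 1.989e30   # solar mass in kg
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2

def sb2_masses(P_days, K1_kms, K2_kms, incl_deg, ecc=0.0):
    """Component masses (solar units) of a double-lined spectroscopic
    binary from period P, RV semi-amplitudes K1/K2, and inclination i:
        M1 sin^3 i = (1 - e^2)^{3/2} (K1 + K2)^2 K2 P / (2 pi G)
    and symmetrically for M2 with K1."""
    P = P_days * 86400.0                   # days -> seconds
    K1, K2 = K1_kms * 1e3, K2_kms * 1e3    # km/s -> m/s
    sin3i = math.sin(math.radians(incl_deg)) ** 3
    common = (1 - ecc**2) ** 1.5 * (K1 + K2) ** 2 * P / (2 * math.pi * G)
    m1 = common * K2 / sin3i / MSUN
    m2 = common * K1 / sin3i / MSUN
    return m1, m2

# Illustrative numbers only: a 4.8-day circular, edge-on orbit with
# equal 100 km/s semi-amplitudes yields two ~2 solar-mass stars.
m1, m2 = sb2_masses(4.8, 100.0, 100.0, 90.0)
```

Note that only the products M sin^3 i are accessible from spectroscopy alone; the eclipses are what pin down the inclination and make the masses absolute.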

    An algebraic analysis of storage fragmentation

    PhD thesis. Storage fragmentation, the splitting of available computer memory space into separate gaps by allocations and deallocations of various sized blocks with consequent loss of utilisation due to reduced ability to satisfy requests, has proved difficult to analyse. Most previous studies rely on simulation, and nearly all of the few published analyses that do not rely on simulation simplify the combinatorial complexity that arises by some averaging assumption. After a survey of these results, an exact analytical approach to the study of storage allocation and fragmentation is presented. A model of an allocation scheme of a kind common in many computing systems is described. Requests from a saturated first-come-first-served queue for varying amounts of contiguous storage are satisfied as soon as sufficient space becomes available in a storage memory of fixed total size. A placement algorithm decides which free locations to allocate if a choice is possible. After a variable time, allocated requests are completed and their occupied storage is freed again. In general, the available space becomes fragmented because allocated requests are not relocated or moved around in storage. The model's behaviour, and in particular the storage utilisation, are studied under conditions in which the model is a finite homogeneous Markov chain. The algebraic structure of its sparse transition matrix is discovered to have a striking recursive pattern, allowing the steady state equation to be simplified considerably and unexpectedly to a simple and direct statement of the effect of the choice of placement algorithm on the steady state. Possible developments and uses of this simplified analysis are indicated, and some investigated. The exact probabilistic behaviour of models of relatively small memory sizes is computed, and different placement algorithms are compared with each other and with the analytic results which are derived for the corresponding model in which relocation is allowed.
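The model described above (a saturated first-come-first-served queue, a fixed-size memory, a placement algorithm, no relocation) can be sketched as a small simulation. This is an illustrative sketch only, not the thesis's exact Markov-chain analysis: the request-size and lifetime distributions, the parameters, and the one-placement-attempt-per-step simplification are all assumptions made here:

```python
import random

def first_fit(gaps, size):
    """Place a request at the start of the first gap large enough for it."""
    for start, length in gaps:
        if length >= size:
            return start
    return None

def simulate(mem_size=32, steps=2000, max_req=4, max_life=8, seed=1):
    """Mean storage utilisation under first-fit placement with a
    saturated FCFS queue and no relocation of allocated blocks."""
    random.seed(seed)
    allocated = []    # list of (start, size, free_at_time)
    pending = None    # head of the saturated FCFS queue
    used_total = 0
    for t in range(steps):
        # completed requests free their storage; blocks are never moved,
        # so the free space fragments into gaps between survivors
        allocated = [a for a in allocated if a[2] > t]
        gaps, cursor = [], 0
        for start, size, _ in sorted(allocated):
            if start > cursor:
                gaps.append((cursor, start - cursor))
            cursor = start + size
        if cursor < mem_size:
            gaps.append((cursor, mem_size - cursor))
        # saturated queue: there is always a next request waiting
        if pending is None:
            pending = random.randint(1, max_req)
        # FCFS: only the head may be placed; the placement algorithm
        # (here first-fit) chooses among the gaps that can hold it
        start = first_fit(gaps, pending)
        if start is not None:
            allocated.append((start, pending, t + random.randint(1, max_life)))
            pending = None
        used_total += sum(size for _, size, _ in allocated)
    return used_total / (steps * mem_size)
```

Swapping `first_fit` for best-fit or worst-fit reproduces, in miniature, the kind of placement-algorithm comparison the thesis carries out exactly.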

    Applied logic : its use and implementation as a programming tool

    The first Part of the thesis explains from first principles the concept of "logic programming" and its practical application in the programming language Prolog. Prolog is a simple but powerful language which encourages rapid, error-free programming and clear, readable, concise programs. The basic computational mechanism is a pattern matching process ("unification") operating on general record structures ("terms" of logic). The ideas are illustrated by describing in detail one sizable Prolog program which implements a simple compiler. The advantages and practicability of using Prolog for "real" compiler implementation are discussed. The second Part of the thesis describes techniques for implementing Prolog efficiently. In particular it is shown how to compile the patterns involved in the matching process into instructions of a low-level language. This idea has actually been implemented in a compiler (written in Prolog) from Prolog to DECsystem-10 assembly language. However the principles involved are explained more abstractly in terms of a "Prolog Machine". The code generated is comparable in speed with that produced by existing DEC10 Lisp compilers. Comparison is possible since pure Lisp can be viewed as a (rather restricted) subset of Prolog. It is argued that structured data objects, such as lists and trees, can be manipulated by pattern matching using a "structure sharing" representation as efficiently as by conventional selector and constructor functions operating on linked records in "heap" storage. Moreover the pattern matching formulation actually helps the implementor to produce a better implementation.
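The unification mechanism described above can be sketched in a few lines. This is a simplified illustration, not the thesis's compiled structure-sharing implementation: terms are encoded as tuples, variables as capitalised strings, and the occurs check is omitted (as most Prolog systems also do by default):

```python
def is_var(t):
    """Variables are strings beginning with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow the substitution chain until an unbound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Return a substitution unifying x and y, or None on failure.
    Compound terms are tuples: ('functor', arg1, arg2, ...)."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None   # clash: different functors or arities

# point(X, 2) unifies with point(1, Y), binding X=1 and Y=2
bindings = unify(('point', 'X', 2), ('point', 1, 'Y'))
```

The compilation idea in the second Part amounts to specialising this general loop: when one side of the match is known at compile time, the interpretive dispatch can be replaced by straight-line low-level instructions.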

    Data Mining by Grid Computing in the Search for Extrasolar Planets

    A system is presented here to provide improved precision in ensemble differential photometry. This is achieved by using the power of grid computing to analyse astronomical catalogues. This produces new catalogues of optimised pointings for each star, which maximise the number and quality of reference stars available. Astronomical phenomena such as exoplanet transits and small-scale structure within quasars may be observed by means of millimagnitude photometric variability on the timescale of minutes to hours. Because of atmospheric distortion, ground-based observations of these phenomena require the use of differential photometry, whereby the target is compared with one or more reference stars. CCD cameras enable the use of many reference stars in an ensemble. The more closely the reference stars in this ensemble resemble the target, the greater the precision of the photometry that can be achieved. The Locus Algorithm has been developed to identify the optimum pointing for a target and provide that pointing with a score relating to the degree of similarity between the target and the reference stars. It does so by identifying potential points of aim for a particular telescope such that a given target and a varying set of references are included in a field of view centred on those pointings. A score is calculated for each such pointing. For each target, the pointing with the highest score is designated the optimum pointing. The application of this system to the Sloan Digital Sky Survey (SDSS) catalogue demanded the use of a High Performance Computing (HPC) solution through Grid Ireland. Pointings have thus been generated for 61,662,376 stars and 23,697 quasars.
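The scoring step described above can be sketched as follows. This is a hedged illustration of the general idea, not the published Locus Algorithm: the linear colour rating, the magnitude and colour tolerances, and the square field-of-view cut are all assumptions invented here for clarity:

```python
def rating(target, ref, mag_tol=2.0, col_tol=0.1):
    """Score one reference star by its resemblance to the target:
    1.0 for identical colour, falling linearly to 0 at the colour
    tolerance; stars too faint or too bright to compare score 0."""
    if abs(ref['mag'] - target['mag']) > mag_tol:
        return 0.0
    dcol = abs(ref['colour'] - target['colour'])
    return max(0.0, 1.0 - dcol / col_tol)

def score_pointing(target, stars, centre, fov=0.25):
    """Sum the ratings of every catalogue star falling in a square
    field of view (degrees) centred on `centre`, excluding the target.
    The pointing with the highest score is the optimum pointing."""
    ra0, dec0 = centre
    in_fov = [s for s in stars
              if abs(s['ra'] - ra0) <= fov / 2
              and abs(s['dec'] - dec0) <= fov / 2
              and s is not target]
    return sum(rating(target, s) for s in in_fov)
```

Evaluating this score over millions of SDSS targets, each against many candidate pointings, is an embarrassingly parallel workload, which is why the grid-computing approach fits the problem so well.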

    Management of Long-Running High-Performance Persistent Object Stores

    The popularity of object-oriented programming languages, such as Java and C++, for large application development has stirred an interest in improved technologies for high-performance, reliable, and scalable object storage. Such storage systems are typically referred to as Persistent Object Stores. This thesis describes the design and implementation of Sphere, a new persistent object store developed at the University of Glasgow, Scotland. The requirements for Sphere included high performance, support for transactional multi-threaded loads, scalability, extensibility, portability, reliability, referential integrity via the use of disk garbage collection, provision for flexible schema evolution, and minimised interaction with the mutator. The Sphere architecture is split into two parts: the core and the application-specific customisations. The core was designed to be modular, in order to encourage research and experimentation, and to be as light-weight as possible, in an attempt to achieve high performance through simplicity. The customisation part includes the code that deals with and is optimised for the specific load of the application that Sphere has to support: object formats, free-space management, etc. Even though specialising this part of the store is not trivial, it has the benefit that the interaction between the mutator and Sphere is direct and more efficient, as translation layers are not necessary. 
Major design decisions for Sphere included (i) splitting the store into partitions, to facilitate incremental disk garbage collection and schema evolution, (ii) using a flexible two-level free-space management, (iii) introducing a three-dimensional method-dispatch matrix to invoke store operations, which contributes to Sphere's ease-of-extensibility, (iv) adopting a logical addressing scheme, to allow straightforward object and partition relocation, (v) requiring that Sphere can identify reference fields inside objects, so that it does not have to interact with the mutator in order to do so, and (vi) adopting the well-known ARIES recovery algorithm to ensure fault-tolerance. The thesis contains a detailed overview of Sphere and the context in which it was developed. Then, it concentrates on two areas that were explored using Sphere as the implementation platform. First, bulk object-loading issues are discussed and the Ghosted Allocation promotion algorithm is described. This algorithm was designed to allocate large numbers of objects to a store efficiently and with minimal log traffic and was evaluated using large-scale experiments. Second, the disk garbage collection framework of Sphere is overviewed and the implemented compacting, relocating garbage collector is described, along with the model of synchronisation with the mutator.
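Design decision (iv), logical addressing to allow straightforward object and partition relocation, can be illustrated with a toy indirection table. This is an invented sketch of the general technique, not Sphere's actual address format or partition layout:

```python
class Store:
    """Toy logical-addressing scheme: every object is named by a stable
    logical id, and an indirection table maps ids to their current
    (partition, offset), so a compacting, relocating garbage collector
    can move objects without rewriting any references held elsewhere."""

    def __init__(self, n_partitions=4):
        self.partitions = [[] for _ in range(n_partitions)]
        self.table = {}      # logical id -> (partition, offset)
        self.next_id = 0

    def allocate(self, obj, part=0):
        self.partitions[part].append(obj)
        oid = self.next_id
        self.next_id += 1
        self.table[oid] = (part, len(self.partitions[part]) - 1)
        return oid

    def read(self, oid):
        part, off = self.table[oid]
        return self.partitions[part][off]

    def relocate(self, oid, new_part):
        """Move an object to another partition; from the mutator's point
        of view nothing changes, because only the table entry is updated."""
        part, off = self.table[oid]
        obj = self.partitions[part][off]
        self.partitions[part][off] = None   # tombstone; reclaimed by GC
        self.partitions[new_part].append(obj)
        self.table[oid] = (new_part, len(self.partitions[new_part]) - 1)
```

The cost of the extra indirection on every `read` is the price paid for making incremental, per-partition compaction possible without stopping or consulting the mutator.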

    Aeronautical engineering: A special bibliography with indexes, supplement 80

    This bibliography lists 277 reports, articles, and other documents introduced into the NASA scientific and technical information system in January 1977.

    Advanced data management system analysis techniques study

    The state of the art of system analysis is reviewed, emphasizing data management. Analytic, hardware, and software techniques are described.

    Intelligent cell memory system for real time engineering applications


    Types and polymorphism in persistent programming systems
