
    Signal processing methodologies for an acoustic fetal heart rate monitor

    Research and development is presented of real-time signal processing methodologies for the detection of fetal heart tones within a noise-contaminated signal from a passive acoustic sensor. A linear predictor algorithm is utilized for detection of the heart tone event, and additional processing derives the heart rate. The linear predictor is adaptively 'trained' in a least mean square error sense on generic fetal heart tones recorded from patients. A real-time monitor system is described which outputs to a strip chart recorder for plotting the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test. Comparisons are made with ultrasonic nonstress tests on a series of patients. The comparative data provide favorable indications of the feasibility of the acoustic monitor for clinical use.
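
    The abstract gives no implementation details for the predictor, so the following is only a minimal sketch of the general technique it names: a linear predictor adapted in the least-mean-square sense on recorded heart tones and then run over incoming sensor data. The function names, filter order, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lms_train(reference, order=16, mu=0.01):
    """Adapt linear-predictor coefficients on reference heart-tone
    recordings using the least-mean-square (LMS) rule.  Filter order
    and step size are illustrative, not values from the paper."""
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(order)
    for n in range(order, len(reference)):
        x = reference[n - order:n][::-1]    # most recent samples first
        e = reference[n] - w @ x            # one-step prediction error
        w += 2.0 * mu * e * x               # LMS coefficient update
    return w

def prediction_error(signal, w):
    """Run the frozen predictor over new sensor data; segments that the
    heart-tone-trained predictor models well (low error) are candidate
    heart-tone events."""
    signal = np.asarray(signal, dtype=float)
    order = len(w)
    err = np.zeros(len(signal))
    for n in range(order, len(signal)):
        err[n] = signal[n] - w @ signal[n - order:n][::-1]
    return err
```

    Heart rate would then follow from the intervals between successive detected events, e.g. 60 divided by the median inter-event spacing in seconds.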

    A Bio-Optical Model for Syringodium filiforme Canopies

    Seagrasses are significant ecological and biogeochemical agents in shallow water ecosystems throughout the world. In many regions, seagrass meadows occupy a sufficient fraction of the coastal zone to generate optical signatures that can be observed from space. Bio-optical models of light absorption and scattering by submerged plant canopies for species such as Thalassia testudinum and Zostera marina have successfully modeled the plane irradiance distribution and photosynthesis within the submerged canopies. Syringodium filiforme differs from T. testudinum and Z. marina in leaf morphology and canopy architecture. The objective of this study was to develop a radiative transfer model that accurately predicts the light absorbed and reflected by the canopy of this morphologically unique and abundant tropical seagrass. The approach involved modifying Zimmerman's (2003) flat-leaf bio-optical model by incorporating the unique vertical biomass distribution of S. filiforme. Leaf length frequency data, along with the assumption of a spherical canopy, allowed the parameterization of the unique architecture of the seagrass canopy. Model predictions of downwelling irradiance and attenuation coefficients within Syringodium filiforme canopies were consistent with field measurements, providing a robust tool for predicting the photosynthetic performance of these seagrass canopies. Model predictions of top-of-the-canopy upwelling irradiances and reflectances were also consistent with field measurements. This predictive understanding will help to develop global algorithms for remote sensing of the abundance and productivity of this species that will lead to better coastal management practices.
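
    The study's radiative transfer model is not specified in the abstract; purely to illustrate how a depth-dependent biomass profile enters a canopy light model, the sketch below applies Beer-Lambert-style attenuation of downwelling irradiance layer by layer. The coefficients and function are placeholders, not the modified Zimmerman (2003) model itself.

```python
import numpy as np

def downwelling_irradiance(Ed0, layer_depths, biomass_per_layer,
                           Kd_water=0.15, a_leaf=0.02):
    """Attenuate downwelling irradiance Ed0 (at the canopy top) through
    successive layers, each adding a biomass-weighted term to the water
    attenuation.  Kd_water (1/m) and a_leaf (attenuation per unit of
    biomass) are placeholder values, not parameters from the study."""
    Ed = []
    current, prev_z = float(Ed0), 0.0
    for z, B in zip(layer_depths, biomass_per_layer):
        dz = z - prev_z
        current *= np.exp(-(Kd_water + a_leaf * B) * dz)
        Ed.append(current)
        prev_z = z
    return np.array(Ed)
```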

    On duality relations for session types

    Session types are a type formalism used to describe communication protocols over private session channels. Each participant in a binary session owns one endpoint of a session channel. A key notion is that of duality: the endpoints of a session channel should have dual session types in order to guarantee communication safety. Duality relations have been independently defined in different ways and in different works, without considering their effect on the type system. In this paper we systematically study the existing duality relations and some new ones, and compare them in order to understand their expressiveness. The outcome is that these relations split into two groups: one related to the naïve inductive duality, and the other related to a notion of mutual compliance, which we borrow from the literature on contracts for web services.
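
    For readers unfamiliar with the baseline notion, the naïve inductive duality simply swaps inputs and outputs along a type. A minimal sketch restricted to send/receive/end types follows; the encoding is an assumption, and branching and the recursive types on which the studied relations actually diverge are omitted.

```python
from dataclasses import dataclass

# Minimal session-type syntax: end, output (!T.S) and input (?T.S).
# Requires Python 3.10+ for the union and match syntax.

@dataclass
class End:
    pass

@dataclass
class Send:              # !T.S -- send a value of type T, continue as S
    payload: str
    cont: "SessionType"

@dataclass
class Recv:              # ?T.S -- receive a value of type T, continue as S
    payload: str
    cont: "SessionType"

SessionType = End | Send | Recv

def dual(s: SessionType) -> SessionType:
    """Naive inductive duality: swap sends and receives, pointwise."""
    match s:
        case End():
            return End()
        case Send(payload=t, cont=c):
            return Recv(t, dual(c))
        case Recv(payload=t, cont=c):
            return Send(t, dual(c))

# Example: a protocol that sends an int and then receives a bool is
# complemented by one that receives the int and then sends the bool.
assert dual(Send("int", Recv("bool", End()))) == Recv("int", Send("bool", End()))
```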

    A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration

    The modeling and design of a fault-tolerant multiprocessor system is addressed. In particular, the behavior of the system during recovery and restoration after a fault has occurred is investigated. Given that a multicomputer system is designed using the Algorithm to Architecture Mapping Model (ATAMM), and that a fault (death of a computing resource) occurs during its normal steady-state operation, a model is presented as a viable research tool for predicting the performance bounds of the system during its recovery and restoration phases. Furthermore, the bounds of the performance behavior of the system during this transient mode can be assessed. These bounds include the time to recover from the fault (t(sub rec)), the time to restore the system (t(sub res)), and whether there is a permanent delay in the system's Time Between Input and Output (TBIO) after the system has reached a steady state. An implementation of an ATAMM-based computer was developed with the Generic VHSIC Spaceborne Computer (GVSC) as the target system. A simulation of the GVSC was also written, based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is in turn used to validate the new model's usefulness and accuracy in tracking the propagation of the delay through the system and predicting the behavior in the transient state of recovery and restoration. The model is validated as an accurate method to predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.

    A Performance Prediction Model for a Fault-Tolerant Computer During Recovery and Restoration

    The modeling and design of a fault-tolerant multiprocessor system is addressed. Of interest is the behavior of the system during recovery and restoration after a fault has occurred. The multiprocessor systems are based on the Algorithm to Architecture Mapping Model (ATAMM), and the fault considered is the death of a processor. The developed model is useful in the determination of performance bounds of the system during recovery and restoration. The performance bounds include the time to recover from the fault, the time to restore the system, and the determination of any permanent delay in the input-to-output latency after the system has regained steady state. An implementation of an ATAMM-based computer was developed for a four-processor Generic VHSIC Spaceborne Computer (GVSC) as the target system. A simulation of the GVSC was also written, based on the code used in the ATAMM Multicomputer Operating System (AMOS). The simulation is used to verify the new model for tracking the propagation of the delay through the system and predicting the behavior of the transient state of recovery and restoration. The model is shown to accurately predict the transient behavior of an ATAMM-based multicomputer during recovery and restoration.

    Longitudinal patterns in an Arkansas River Valley stream: an Application of the River Continuum Concept

    The River Continuum Concept (RCC) provides the framework for studying how lotic ecosystems vary from headwater streams to large rivers. The RCC was developed in streams in eastern deciduous forests of North America, but watershed characteristics and land uses differ across ecoregions, presenting unique opportunities to study how predictions of the RCC may differ across regions. Additionally, RCC predictions may vary due to the influence of fishes, but few studies have used fish taxa as a metric for evaluating predictions of the RCC. Our goal was to determine if RCC predictions for stream orders 1 through 5 were supported by primary producer, macroinvertebrate, and fish communities in Cadron Creek of the Arkansas River Valley. We sampled chlorophyll a, macroinvertebrates, and fishes at five stream reaches across a gradient of watershed size. Contrary to RCC predictions, chlorophyll a concentration did not increase with catchment size. As the RCC predicts, fish and macroinvertebrate diversity increased with catchment size. Shredding and collecting macroinvertebrate taxa supported RCC predictions, respectively decreasing and increasing in composition as catchment area increased. Herbivorous and predaceous fish did not follow RCC predictions; however, surface-water column feeding fish were abundant at all sites, as predicted. We hypothesize that some predictions of the RCC were not supported in headwater reaches of this system due to regional differences in watershed characteristics and altered resource availability due to land use surrounding the sampling sites.

    Exploring the Spectral Space of Low Redshift QSOs

    The Karhunen-Loeve (KL) transform can compactly represent the information contained in large, complex datasets, cleanly eliminating noise from the data and identifying elements of the dataset with extreme or inconsistent characteristics. We develop techniques to apply the KL transform to the 4000-5700 Å region of 9,800 QSO spectra with z < 0.619 from the SDSS archive. Up to 200 eigenspectra are needed to fully reconstruct the spectra in this sample to the limit of their signal/noise. We propose a simple formula for selecting the optimum number of eigenspectra to use to reconstruct any given spectrum, based on the signal/noise of the spectrum, but validated by formal cross-validation tests. We show that such reconstructions can boost the effective signal/noise of the observations by a factor of 6 as well as fill in gaps in the data. The improved signal/noise of the resulting set will allow for better measurement and analysis of these spectra. The distribution of the QSO spectra within the eigenspace identifies regions of enhanced density of interesting subclasses, such as Narrow Line Seyfert 1s (NLS1s). The weightings, as well as the inability of the eigenspectra to fit some of the objects, also identify "outliers," which may be objects that are not valid members of the sample or objects with rare or unique properties. We identify 48 spectra from the sample that show no broad emission lines, 21 objects with unusual [O III] emission line properties, and 9 objects with peculiar H-beta emission line profiles. We also use this technique to identify a binary supermassive black hole candidate. We provide the eigenspectra and the reconstructed spectra of the QSO sample. Comment: 34 pages, 14 figures, revised version resubmitted to the Astronomical Journal
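
    The core operation is a standard principal-component (Karhunen-Loeve) decomposition followed by a truncated reconstruction. A minimal sketch is given below, with the matrix layout and names assumed and the gap-filling and per-object signal/noise weighting described above omitted.

```python
import numpy as np

def kl_reconstruct(spectra, n_modes):
    """Karhunen-Loeve (PCA) sketch.  `spectra` is an (n_objects x n_pixels)
    array of fluxes on a common wavelength grid; the rows of Vt are the
    eigenspectra.  The per-object choice of n_modes and the treatment of
    gaps and noise described in the abstract are omitted here."""
    mean_spec = spectra.mean(axis=0)
    resid = spectra - mean_spec
    U, S, Vt = np.linalg.svd(resid, full_matrices=False)
    eigenspectra = Vt[:n_modes]                  # leading KL modes
    coeffs = resid @ eigenspectra.T              # per-object weightings
    recon = mean_spec + coeffs @ eigenspectra    # truncated reconstruction
    return eigenspectra, coeffs, recon
```

    In the full analysis the number of retained modes per object would follow the proposed signal/noise-based rule, and objects whose residuals remain large after reconstruction correspond to the "outliers" discussed above.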

    Kepler Input Catalog: Photometric Calibration and Stellar Classification

    We describe the photometric calibration and stellar classification methods used to produce the Kepler Input Catalog (KIC). The KIC is a catalog containing photometric and physical data for sources in the Kepler Mission field of view; it is used by the mission to select optimal targets. We derived atmospheric extinction corrections from hourly observations of secondary standard fields within the Kepler field of view. Repeatability of absolute photometry for stars brighter than magnitude 15 is typically 2%. We estimated the stellar parameters Teff, log(g), log(Z), and E(B-V) using Bayesian posterior probability maximization to match observed colors to Castelli stellar atmosphere models. We applied Bayesian priors describing the distribution of solar-neighborhood stars in the color-magnitude diagram (CMD), in log(Z), and in height above the galactic plane. Comparisons with samples of stars classified by other means indicate that in most regions of the CMD, our classifications are reliable to within about +/- 200 K in Teff and +/- 0.4 dex in log(g). It is difficult to assess the reliability of our log(Z) estimates, but there is reason to suspect that it is poor, particularly at extreme Teff. Of great importance for the Kepler Mission, for Teff <= 5400 K the distinction between main-sequence stars and giants has proved to be reliable with better than 98% confidence. The KIC is available through the MAST data archive. Comment: 77 pages, 12 figures, 1 table. Accepted by the Astronomical Journal 24 July 2011
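
    A minimal sketch of the kind of Bayesian grid classification described above, maximizing posterior = likelihood x prior over a grid of synthetic colors. The grid contents, prior construction, and Gaussian error model are assumptions for illustration, not the KIC pipeline itself.

```python
import numpy as np

def classify_star(obs_colors, obs_errors, model_colors, log_prior):
    """Bayesian grid-search sketch.  `model_colors` is an
    (n_models x n_colors) grid of synthetic colors (e.g. from stellar
    atmosphere models) and `log_prior` an n_models vector encoding
    priors such as the CMD, metallicity, and scale-height terms
    described in the abstract.  Returns the index of the
    maximum-posterior model under a Gaussian color-error model."""
    chi2 = np.sum(((model_colors - obs_colors) / obs_errors) ** 2, axis=1)
    log_post = -0.5 * chi2 + log_prior   # log posterior, up to a constant
    return int(np.argmax(log_post))
```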

    Constraining the LRG Halo Occupation Distribution using Counts-in-Cylinders

    The low number density of the Sloan Digital Sky Survey (SDSS) Luminous Red Galaxies (LRGs) suggests that LRGs occupying the same dark matter halo can be separated from pairs occupying distinct dark matter halos with high fidelity. We present a new technique, Counts-in-Cylinders (CiC), to constrain the parameters of the satellite contribution to the LRG Halo-Occupation Distribution (HOD). For a fiber collision-corrected SDSS spectroscopic LRG subsample at 0.16 < z < 0.36, we find the CiC multiplicity function is fit by a halo model where the average number of satellites in a halo of mass M is <N_sat(M)> = ((M - Mcut)/M1)^alpha with Mcut = 5.0 +1.5/-1.3 (+2.9/-2.6) X 10^13 Msun, M1 = 4.95 +0.37/-0.26 (+0.79/-0.53) X 10^14 Msun, and alpha = 1.035 +0.10/-0.17 (+0.24/-0.31) at the 68% and 95% confidence levels using a WMAP3 cosmology and z=0.2 halo catalog. Our method tightly constrains the fraction of LRGs that are satellite galaxies, 6.36 +0.38/-0.39 %, and the combination Mcut/10^{14} Msun + alpha = 1.53 +0.08/-0.09 at the 95% confidence level. We also find that mocks based on a halo catalog produced by a spherical overdensity (SO) finder reproduce both the measured CiC multiplicity function and the projected correlation function, while mocks based on a Friends-of-Friends (FoF) halo catalog have a deficit of close pairs at ~1 Mpc/h separations. Because the CiC method relies on higher order statistics of close pairs, it is sensitive to the choice of halo finder. In a companion paper we will apply this technique to optimize Finger-of-God (FOG) compression to eliminate the 1-halo contribution to the LRG power spectrum. Comment: 40 pages, 9 figures, submitted to the Astrophysical Journal
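
    The satellite occupation quoted above is simple to evaluate directly; the sketch below implements that mean-occupation formula, with the abstract's best-fit parameters used only as example inputs.

```python
import numpy as np

def mean_n_satellites(M, M_cut=5.0e13, M_1=4.95e14, alpha=1.035):
    """Mean satellite occupation quoted in the abstract:
    <N_sat(M)> = ((M - Mcut) / M1)**alpha for M > Mcut, and 0 otherwise.
    Masses are in Msun; the defaults are the abstract's best-fit values,
    used here only as example inputs."""
    M = np.atleast_1d(np.asarray(M, dtype=float))
    n_sat = np.zeros_like(M)
    above = M > M_cut
    n_sat[above] = ((M[above] - M_cut) / M_1) ** alpha
    return n_sat

# Example: expected satellite counts for a few halo masses.
print(mean_n_satellites(np.logspace(13.5, 15.5, 5)))
```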