777 research outputs found

    The bactericidal activity of moxifloxacin in patients with pulmonary tuberculosis

    Patients with newly diagnosed acid-fast bacilli smear-positive pulmonary tuberculosis were randomized to receive 400 mg moxifloxacin, 300 mg isoniazid, or 600 mg rifampin daily for 5 days. Sixteen-hour overnight sputum collections were made for the 2 days before and for the 5 days of monotherapy. Bactericidal activity was estimated by the time taken to kill 50% of viable bacilli (vt(50)) and by the fall in sputum viable count during the first 2 days, designated the early bactericidal activity (EBA). The mean vt(50) of moxifloxacin was 0.88 days (95% confidence interval [CI], 0.43-1.33 days) and the mean EBA was 0.53 (95% CI, 0.28-0.79). For the isoniazid group, the mean vt(50) was 0.46 days (95% CI, 0.31-0.61 days) and the mean EBA was 0.77 (95% CI, 0.54-1.00). For rifampin, the mean vt(50) was 0.71 days (95% CI, 0.48-0.95 days) and the mean EBA was 0.28 (95% CI, 0.15-0.41). By the EBA method, isoniazid was significantly more active than rifampin (p < 0.01) but not than moxifloxacin. By the vt(50) method, isoniazid was more active than both rifampin and moxifloxacin (p = 0.03). Moxifloxacin has activity similar to that of rifampin in human subjects with pulmonary tuberculosis, suggesting that it should undergo further assessment as part of a short-course regimen for the treatment of drug-susceptible tuberculosis.
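
    The EBA above is essentially a rate of decline of viable counts over the first two days. A minimal sketch of that arithmetic in Python, assuming counts are expressed as log10 CFU/mL of sputum (the usual convention, though units are not stated in the abstract), with made-up numbers rather than study data:

        def early_bactericidal_activity(log10_cfu_day0, log10_cfu_day2):
            """EBA: fall in log10 viable count per day over days 0-2."""
            return (log10_cfu_day0 - log10_cfu_day2) / 2.0

        # Hypothetical counts for one patient (log10 CFU/mL), not study data.
        print(early_bactericidal_activity(7.1, 6.0))  # -> 0.55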

    Evolving Spatially Aggregated Features from Satellite Imagery for Regional Modeling

    Satellite imagery and remote sensing provide explanatory variables at relatively high resolution for modeling geospatial phenomena, yet regional summaries are often desirable for analysis and actionable insight. In this paper, we propose a novel method of inducing spatial aggregations as a component of the machine learning process, yielding regional model features whose construction is driven by model prediction performance rather than by prior assumptions. Our results demonstrate that Genetic Programming is particularly well suited to this type of feature construction because it can automatically synthesize appropriate aggregations and incorporate them into predictive models better than the other regression methods we tested. In our experiments we consider a specific problem instance and real-world dataset relevant to predicting snow properties in High Mountain Asia.
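
    The key idea is that the aggregation itself is searched for, scored by how well the resulting regional feature predicts the target. A real Genetic Programming run would evolve compositions of operators; in the sketch below, a fixed operator set, synthetic data, and a correlation-based fitness are illustrative stand-ins:

        import numpy as np

        rng = np.random.default_rng(0)
        n_regions, px = 50, 200
        pixels = rng.normal(size=(n_regions, px))          # synthetic per-region pixel values
        target = pixels.max(axis=1) + 0.1 * rng.normal(size=n_regions)

        # candidate spatial aggregations (a GP run would evolve such expressions)
        aggregations = {
            "mean":   lambda x: x.mean(axis=1),
            "max":    lambda x: x.max(axis=1),
            "std":    lambda x: x.std(axis=1),
            "frac>1": lambda x: (x > 1.0).mean(axis=1),
        }

        def fitness(feature, y):
            """Squared correlation with the regional target as a cheap fitness."""
            return np.corrcoef(feature, y)[0, 1] ** 2

        scores = {name: fitness(f(pixels), target) for name, f in aggregations.items()}
        print(max(scores, key=scores.get))                 # "max" should win here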

    The DICE calibration project: design, characterization, and first results

    We describe the design, operation, and first results of a photometric calibration project, called DICE (Direct Illumination Calibration Experiment), aimed at achieving precise instrumental calibration of optical telescopes. The heart of DICE is an illumination device composed of 24 narrow-spectrum, high-intensity light-emitting diodes (LEDs) chosen to cover the ultraviolet-to-near-infrared spectral range. It implements a point-like source placed at a finite distance from the telescope entrance pupil, yielding a flat-field illumination that covers the entire field of view of the imager. The purpose of this system is to perform lightweight routine monitoring of the imager passbands with a precision better than 5 per mil on the relative passband normalisations and about 3 Å on the filter cutoff positions. The light source is calibrated on a spectrophotometric bench. As our fundamental metrology standard, we use a photodiode calibrated at NIST. The radiant intensity of each beam is mapped, and spectra are measured for each LED. All measurements are conducted at temperatures ranging from 0°C to 25°C in order to study the temperature dependence of the system. The photometric and spectroscopic measurements are combined into a model that predicts the spectral intensity of the source as a function of temperature. We find that the calibration beams are stable at the 10^{-4} level, after taking the slight temperature dependence of the LED emission properties into account. We show that the spectral intensity of the source can be characterised with a precision of 3 Å in wavelength. In flux, we reach an accuracy of about 0.2-0.5%, depending on how we understand the off-diagonal terms of the error budget affecting the calibration of the NIST photodiode. With a routine 60-min calibration program, the apparatus is able to constrain the passbands at the targeted precision levels. Comment: 25 pages, 27 figures, accepted for publication in A&A.
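
    A minimal sketch of the kind of temperature model the abstract describes: measure each LED's spectrum at a few bench temperatures, then fit a low-order trend per wavelength bin so the spectral intensity can be predicted at any operating temperature. The wavelength grid, the toy Gaussian spectrum, and the linear drift below are illustrative assumptions, not the DICE model itself:

        import numpy as np

        wavelengths = np.linspace(350.0, 1000.0, 66)     # nm grid (assumed)
        temps = np.array([0.0, 10.0, 25.0])              # bench temperatures, degC
        # synthetic bench spectra: a toy LED line with a small linear drift in T
        base = np.exp(-0.5 * ((wavelengths - 600.0) / 20.0) ** 2)
        spectra = np.stack([base * (1.0 + 0.001 * t) for t in temps])

        # per-wavelength straight-line fit: intensity(T) = slope*T + intercept
        slope, intercept = np.polyfit(temps, spectra, deg=1)

        def predict_spectrum(T):
            """Predicted spectral intensity at operating temperature T (degC)."""
            return slope * T + intercept

        print(predict_spectrum(17.5).max())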

    Some Experiments on the influence of Problem Hardness in Morphological Development based Learning of Neural Controllers

    Natural beings undergo a morphological development process of their bodies while they are learning and adapting to the environments they face from infancy to adulthood. In fact, this is the period in which the most important learning processes, those that will support learning as adults, take place. However, in artificial systems this interaction between morphological development and learning, and its possible advantages, has seldom been considered. In this line, this paper seeks to provide some insights into how morphological development can be harnessed in order to facilitate learning in embodied systems facing tasks or domains that are hard to learn. In particular, we concentrate on whether morphological development can really provide any advantage when learning complex tasks and whether its relevance to learning increases as tasks become harder. To this end, we present the results of some initial experiments on the application of morphological development to learning to walk in three cases: a quadruped, a hexapod, and an octopod. These results seem to confirm that as task learning difficulty increases, applying morphological development to learning becomes more advantageous. Comment: 10 pages, 4 figures.
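
    The core mechanism is a schedule that grows some morphological parameter while the controller trains. A toy sketch of such a schedule; the growth fraction, the half-of-training duration, and the choice of leg range as the grown parameter are illustrative assumptions, not the paper's exact setup:

        def morphology_scale(episode, n_episodes, start=0.3):
            """Grow a morphology parameter (e.g. usable leg range) from
            `start` to 1.0 over the first half of training."""
            return start + (1.0 - start) * min(episode / (0.5 * n_episodes), 1.0)

        for ep in (0, 25, 50, 100):                # sample points of the schedule
            print(ep, round(morphology_scale(ep, 100), 2))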

    Dynamical modeling of collective behavior from pigeon flight data: flock cohesion and dispersion

    Several models of flocking have been promoted based on simulations with qualitatively naturalistic behavior. In this paper we provide the first direct application of computational modeling methods to infer flocking behavior from experimental field data. We show that this approach is able to infer general rules for interaction, or lack of interaction, among members of a flock or, more generally, any community. Using experimental field measurements of homing pigeons in flight, we demonstrate the existence of a basic distance-dependent attraction/repulsion relationship and show that this rule is sufficient to explain collective behavior observed in nature. Positional data of individuals over time are used as input to a computational algorithm capable of building complex nonlinear functions that can represent the system behavior. Topological nearest-neighbor interactions are considered to characterize the components within this model. The efficacy of this method is demonstrated with simulated noisy data generated from the classical (two-dimensional) Vicsek model. When applied to experimental data from homing pigeon flights, we show that the more complex three-dimensional models are capable of predicting and simulating trajectories, as well as exhibiting realistic collective dynamics. Simulations of the reconstructed models are used to extract properties of the collective behavior of pigeons and to examine how it is affected by changing the initial conditions of the system. Our results demonstrate that this approach may be applied to construct models capable of simulating trajectories and collective dynamics using experimental field measurements of herd movement. From these models, the behavior of the individual agents (animals) may be inferred.
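
    The classical Vicsek model used for validation is compact enough to state in a few lines: each agent moves at constant speed and aligns its heading with the mean heading of neighbors within a radius, plus noise. A minimal two-dimensional sketch with illustrative parameter values (the toy uses metric rather than topological neighbors, as in the original Vicsek formulation):

        import numpy as np

        rng = np.random.default_rng(1)
        N, L, v, r, eta, steps = 100, 10.0, 0.3, 1.0, 0.4, 200

        pos = rng.uniform(0, L, size=(N, 2))
        theta = rng.uniform(-np.pi, np.pi, size=N)

        for _ in range(steps):
            # pairwise displacements with periodic boundaries
            dx = pos[:, None, :] - pos[None, :, :]
            dx -= L * np.round(dx / L)
            neighbors = (dx ** 2).sum(-1) < r ** 2
            # align with the mean heading of neighbors, plus uniform noise
            s = (neighbors * np.sin(theta)[None, :]).sum(axis=1)
            c = (neighbors * np.cos(theta)[None, :]).sum(axis=1)
            theta = np.arctan2(s, c) + rng.uniform(-eta / 2, eta / 2, size=N)
            pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L

        # order parameter: 1 means a fully aligned flock
        print(abs(np.exp(1j * theta).mean()))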

    Optics-less smart sensors and a possible mechanism of cutaneous vision in nature

    Optics-less cutaneous (skin) vision is not rare among living organisms, though its mechanisms and capabilities have not been thoroughly investigated. This paper demonstrates, using methods from statistical parameter estimation theory and numerical simulations, that an array of bare sensors with a natural cosine-law angular sensitivity, arranged on a flat or curved surface, can perform imaging tasks without any optics at all. The working principle of this type of optics-less sensor, and the model developed here for determining sensor performance, may be used to shed light on possible mechanisms and capabilities of cutaneous vision in nature.
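
    A minimal sketch of that working principle: each bare sensor sees every source attenuated by the cosine of the incidence angle (plus distance falloff), which makes the forward model well conditioned enough for the source pattern to be recovered. The 1-D geometry and the plain least-squares recovery below are illustrative assumptions, not the paper's estimator:

        import numpy as np

        rng = np.random.default_rng(2)
        n_sensors, n_sources = 64, 16

        # bare sensors on the line y = 0 facing +y; point sources on y = 1
        sensor_x = np.linspace(-1, 1, n_sensors)
        source_x = np.linspace(-1, 1, n_sources)
        dx = source_x[None, :] - sensor_x[:, None]
        dist2 = dx ** 2 + 1.0                    # squared sensor-source distance
        # cosine-law response: cos(theta) = 1/sqrt(dist2), times 1/r^2 falloff
        A = (1.0 / np.sqrt(dist2)) / dist2       # forward model, (sensors, sources)

        scene = rng.uniform(0, 1, n_sources)     # unknown source intensities
        readings = A @ scene + 1e-3 * rng.normal(size=n_sensors)

        recovered, *_ = np.linalg.lstsq(A, readings, rcond=None)
        print(np.abs(recovered - scene).max())   # small -> image recovered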

    Calibration method to improve transfer from simulation to quadruped robots

    Using passive compliance in robotic locomotion has been seen as a cheap and straightforward way of improving energy consumption and robustness. However, controlling such systems remains quite challenging with traditional robotic techniques. Progress in machine learning opens a horizon of new possibilities in this direction, but the training methods are generally too long and laborious to be conducted on a real robot platform. On the other hand, learning a control policy in simulation raises many complications in the transfer. In this paper, we present the design of a cheap quadruped robot and detail a calibration method that optimizes a simulation model in order to facilitate the transfer of parametric motor primitives. We present results validating the transfer of Central Pattern Generators (CPGs) learned in simulation to the robot, which already provide encouraging evidence of the validity of this method.
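
    As a reference point for what a parametric motor primitive can look like, here is a minimal CPG sketch: phase-offset sinusoids driving four legs in a trot-like pattern. The frequency, amplitude, and phase assignments are illustrative assumptions; the paper's CPG parameterization may differ:

        import numpy as np

        def cpg_setpoints(t, freq=1.5, amp=0.4):
            """Hip setpoints for 4 legs; diagonal pairs in phase (trot-like)."""
            offsets = np.array([0.0, np.pi, np.pi, 0.0])   # FL, FR, HL, HR
            return amp * np.sin(2 * np.pi * freq * t + offsets)

        t = np.linspace(0.0, 2.0, 400)                     # 2 s of motion
        angles = np.stack([cpg_setpoints(ti) for ti in t]) # (400, 4) joint targets
        print(angles.shape)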

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
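
    The map-building step described here is, in the published version of this work, the MAP-Elites illumination algorithm (the abstract does not name it, so take that attribution with a caveat). A minimal sketch in that style, on a toy problem with made-up behavior descriptors and fitness: keep, for each cell of a discretized behavior space, the highest-performing solution found so far.

        import numpy as np

        rng = np.random.default_rng(3)
        GRID = 20                              # cells per behavior dimension
        archive = {}                           # cell -> (solution, performance)

        def evaluate(x):
            """Toy stand-in: behavior descriptor in [0,1]^2 and a fitness."""
            behavior = (np.tanh(x[:2]) + 1.0) / 2.0
            return behavior, -np.sum(x ** 2)

        for _ in range(20000):
            if archive and rng.random() < 0.9:
                elite = list(archive.values())[rng.integers(len(archive))]
                x = elite[0] + 0.2 * rng.normal(size=4)   # mutate a stored elite
            else:
                x = rng.normal(size=4)                    # random bootstrap
            b, p = evaluate(x)
            cell = tuple((b * GRID).astype(int).clip(0, GRID - 1))
            if cell not in archive or p > archive[cell][1]:
                archive[cell] = (x, p)                    # keep best per cell

        print(len(archive), "behavior cells illuminated")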

    Initial Hubble Diagram Results from the Nearby Supernova Factory

    The use of Type Ia supernovae as distance indicators led to the discovery of the accelerating expansion of the universe a decade ago. Now that large second-generation surveys have significantly increased the size and quality of the high-redshift sample, the cosmological constraints are limited by the currently available sample of ~50 cosmologically useful nearby supernovae. The Nearby Supernova Factory addresses this problem by discovering nearby supernovae and observing their spectrophotometric time development. Our data sample includes over 2400 spectra from spectral time series of 185 supernovae. This talk presents results from a portion of this sample, including a Hubble diagram (relative distance vs. redshift) and a description of some analyses using this rich dataset. Comment: Short version of proceedings for ICHEP08, Philadelphia, PA, July 2008; see v1 for the full-length version.
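
    For reference, the quantity on the vertical axis of a Hubble diagram is the distance modulus, mu = 5 log10(d_L / 10 pc). A sketch computing it for a flat LambdaCDM cosmology; the parameter values (H0 = 70, Omega_m = 0.3) are illustrative assumptions, not those fitted in the talk:

        import numpy as np

        C_KM_S, H0, OMEGA_M = 299792.458, 70.0, 0.3   # km/s, km/s/Mpc (assumed)

        def luminosity_distance_mpc(z, n=1000):
            """Flat LambdaCDM luminosity distance by trapezoidal integration."""
            zs = np.linspace(0.0, z, n)
            ez = np.sqrt(OMEGA_M * (1 + zs) ** 3 + (1 - OMEGA_M))
            f = 1.0 / ez
            comoving = (C_KM_S / H0) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))
            return (1 + z) * comoving

        def distance_modulus(z):
            return 5.0 * np.log10(luminosity_distance_mpc(z) * 1e6 / 10.0)

        for z in (0.01, 0.05, 0.1):
            print(z, round(distance_modulus(z), 2))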

    Standardizing Type Ia Supernova Absolute Magnitudes Using Gaussian Process Data Regression

    We present a novel class of models for Type Ia supernova time-evolving spectral energy distributions (SEDs) and absolute magnitudes: each is modeled as a stochastic function described by a Gaussian process. The values of the SED and absolute magnitudes are defined through well-defined regression prescriptions, so that data directly inform the models. As a proof of concept, we implement a model for synthetic photometry built from the spectrophotometric time series of the Nearby Supernova Factory. Absolute magnitudes at peak B brightness are calibrated to 0.13 mag in the g-band and to as low as 0.09 mag in the z = 0.25 blueshifted i-band, where the dispersion includes contributions from measurement uncertainties and peculiar velocities. The methodology can be applied to spectrophotometric time series of supernovae that span a range of redshifts to standardize supernovae simultaneously with fitting cosmological parameters. Comment: 47 pages, 15 figures, accepted for publication by the Astrophysical Journal.
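
    A minimal Gaussian-process regression sketch in the spirit of these models, fitting a toy light curve with scikit-learn. The RBF-plus-noise kernel and the synthetic data are illustrative assumptions, far simpler than the paper's SED and magnitude models:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(4)
        phase = np.sort(rng.uniform(-10, 40, 30))[:, None]   # days from peak
        mag = 0.002 * phase.ravel() ** 2 + 0.05 * rng.normal(size=30)  # toy data

        kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05 ** 2)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(phase, mag)

        grid = np.linspace(-10, 40, 101)[:, None]
        mean, std = gp.predict(grid, return_std=True)        # curve + uncertainty
        print(round(mean[50], 3), round(std[50], 3))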
