
    Evidence for Environmental Changes in the Submillimeter Dust Opacity

    The submillimeter opacity of dust in the diffuse interstellar medium (ISM) in the Galactic plane has been quantified using a pixel-by-pixel correlation of images of continuum emission with a proxy for column density. We used multi-wavelength continuum data: three Balloon-borne Large Aperture Submillimeter Telescope bands at 250, 350, and 500 μm and one IRAS band at 100 μm. The proxy is the near-infrared color excess, E(J – K_s), obtained from the Two Micron All Sky Survey. Based on observations of stars, we show how well this color excess is correlated with the total hydrogen column density for regions of moderate extinction. The ratio of emission to column density, the emissivity, is then known from the correlations, as a function of frequency. The spectral distribution of this emissivity can be fit by a modified blackbody, whence the characteristic dust temperature T and the desired opacity σ_e(1200) at 1200 GHz or 250 μm can be obtained. We have analyzed 14 regions near the Galactic plane toward the Vela molecular cloud, mostly selected to avoid regions of high column density (N_H > 10^(22) cm^(–2)) and small enough to ensure a uniform dust temperature. We find σ_e(1200) is typically (2-4) × 10^(–25) cm^2 H^(–1) and thus about 2-4 times larger than the average value in the local high Galactic latitude diffuse atomic ISM. This is strong evidence for grain evolution. There is a range in total power per H nucleon absorbed (and re-radiated) by the dust, reflecting changes in the strength of the interstellar radiation field and/or the dust absorption opacity. These changes in emission opacity and power affect the equilibrium T, which is typically 15 K, colder than at high latitudes. Our analysis extends, to higher opacity and lower temperature, the trend of increasing σ_e(1200) with decreasing T that was found at high latitudes. The recognition of changes in the emission opacity raises a cautionary flag, because all column densities deduced from dust emission maps, and the masses of compact structures within them, depend inversely on the value adopted.
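
    The SED-fitting step in this abstract is easy to illustrate. The sketch below fits a modified blackbody, ε(ν) = σ_e(ν₀) (ν/ν₀)^β B_ν(T), to band emissivities; the β = 1.8 value, the synthetic data, and the noise level are assumptions for the example, not values from the paper.

```python
# Sketch: recover T and sigma_e(1200 GHz) by fitting a modified blackbody
# to band emissivities. Synthetic data; beta = 1.8 is an assumed value.
import numpy as np
from scipy.optimize import curve_fit

H, K_B, C = 6.626e-27, 1.381e-16, 2.998e10  # CGS: erg s, erg/K, cm/s
NU0 = 1.2e12                                # 1200 GHz (250 um) reference

def planck(nu, T):
    """Planck function B_nu(T), CGS units."""
    return (2 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

def mbb(nu, sigma25, T, beta=1.8):
    """Emissivity per H nucleon: sigma_e(nu0) * (nu/nu0)^beta * B_nu(T),
    with sigma_e parameterized in units of 1e-25 cm^2/H."""
    return sigma25 * 1e-25 * (nu / NU0)**beta * planck(nu, T)

# The four bands: 100, 250, 350, 500 um, converted to Hz.
nu = C / (np.array([100.0, 250.0, 350.0, 500.0]) * 1e-4)

# Synthetic "measured" emissivities for sigma_e = 3e-25 cm^2/H, T = 15 K,
# with 3% scatter standing in for the correlation-slope uncertainties.
rng = np.random.default_rng(0)
eps = mbb(nu, 3.0, 15.0) * (1 + 0.03 * rng.standard_normal(nu.size))

(sigma25, T), _ = curve_fit(mbb, nu, eps, p0=(1.0, 20.0))
print(f"sigma_e(1200 GHz) = {sigma25 * 1e-25:.2e} cm^2/H, T = {T:.1f} K")
```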

    A low noise, high thermal stability, 0.1 K test facility for the Planck HFI bolometers

    We are developing a facility which will be used to characterize the bolometric detectors for Planck, an ESA mission to investigate the Cosmic Microwave Background. The bolometers operate at 0.1 K, employing neutron-transmutation-doped (NTD) Ge thermistors with resistances of several megohms to achieve NEPs ~ 1 × 10^(–17) W Hz^(–1/2). Characterization of the intrinsic noise of the bolometers at frequencies as low as 0.010 Hz dictates a test-apparatus thermal stability of 40 nK Hz^(–1/2) down to that frequency. This temperature stability is achieved via a multi-stage isolation and control geometry, with high-resolution thermometry implemented with NTD Ge thermistors, JFET source followers, and dedicated lock-in amplifiers. The test facility accommodates 24 channels of differential signal readout for measurement of bolometer V(I) characteristics and intrinsic noise. The test facility also provides for modulated radiation in the submillimeter band incident on the bolometers, for measurement of the optical speed of response; this illumination can be reduced below detectable limits without interrupting cryogenic operation. A commercial Oxford Instruments dilution refrigerator provides the cryogenic environment for the test facility.
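
    A minimal sketch of the lock-in thermometry idea mentioned in this abstract: an AC-excited thermistor signal is demodulated against quadrature references and averaged. The excitation frequency, sample rate, noise level, and averaging bandwidth below are all invented for illustration, not the facility's parameters.

```python
# Sketch of digital lock-in demodulation: recover the slowly varying
# amplitude of an AC-excited thermistor signal buried in noise.
import numpy as np

FS = 1000.0   # sample rate, Hz (assumed)
F_REF = 25.0  # excitation frequency, Hz (assumed)
t = np.arange(0, 60, 1 / FS)

# Thermistor voltage: the carrier amplitude drifts slowly (this drift is
# the "temperature" signal we want), plus broadband noise.
amplitude = 1.0 + 1e-3 * np.sin(2 * np.pi * 0.05 * t)
v = amplitude * np.sin(2 * np.pi * F_REF * t)
v += 0.1 * np.random.default_rng(1).standard_normal(t.size)

# Multiply by quadrature references, then low-pass by block averaging
# over 1 s (an integer number of carrier periods).
i_mix = v * np.sin(2 * np.pi * F_REF * t)
q_mix = v * np.cos(2 * np.pi * F_REF * t)
block = int(FS)
n = t.size // block
r = 2 * np.hypot(i_mix[:n * block].reshape(n, -1).mean(axis=1),
                 q_mix[:n * block].reshape(n, -1).mean(axis=1))
print("recovered amplitudes (first 5 blocks):", np.round(r[:5], 4))
```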

    Parametric motion control of robotic arms: A biologically based approach using neural networks

    A neural-network-based system is presented that is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals defined by a set of parameters. The parameter set is used for scaling and shaping these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators, which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators, where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network, which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
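
    The parameter-scaling idea can be sketched directly. Below, a fixed biphasic prototype torque profile is scaled in magnitude and duration by per-joint parameters of the kind a trained network would supply; the profile shape and the three-parameters-per-joint split are assumptions for illustration, not the authors' exact parameterization.

```python
# Sketch of parametric torque generation: one prototypical biphasic
# torque-time profile per joint, scaled by network-supplied parameters.
import numpy as np

def prototype(s):
    """Unit biphasic profile on s in [0, 1]: accelerate, then brake."""
    return np.sin(2 * np.pi * s)

def scaled_torque(t, magnitude, duration, bias):
    """Scale the prototype in amplitude and time; zero after the move."""
    s = np.clip(t / duration, 0.0, 1.0)
    return np.where(t <= duration, magnitude * prototype(s) + bias, 0.0)

# Six parameters for a two-link arm (three per joint, assumed split),
# e.g. as predicted by a trained network from (initial joint angles,
# target end-effector position).
params = {"shoulder": (2.5, 0.8, 0.1), "elbow": (1.2, 0.8, 0.0)}

t = np.linspace(0.0, 1.0, 101)
for joint, (mag, dur, bias) in params.items():
    tau = scaled_torque(t, mag, dur, bias)
    print(f"{joint}: peak torque {tau.max():.2f} N m "
          f"at t = {t[tau.argmax()]:.2f} s")
```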

    A Recursive Algorithm for Computing Inferences in Imprecise Markov Chains

    We present an algorithm that can efficiently compute a broad class of inferences for discrete-time imprecise Markov chains, a generalised type of Markov chain that allows one to take into account partially specified probabilities and other types of model uncertainty. The class of inferences that we consider contains, as special cases, tight lower and upper bounds on expected hitting times, on hitting probabilities, and on expectations of functions that are a sum or product of simpler ones. Our algorithm exploits the specific structure that is inherent in all these inferences: they admit a general recursive decomposition. This allows us to achieve a computational complexity that scales linearly in the number of time points on which the inference depends, instead of the exponential scaling that is typical of a naive approach.
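
    One special case of such a recursion, the lower expectation of a function of the final state under interval-valued transition probabilities, admits a compact sketch: backward recursion applies a lower transition operator once per time step, so the cost is linear in the horizon. The greedy inner solver, the two-state chain, and the interval bounds below are assumptions for illustration, not the paper's general algorithm.

```python
# Sketch: lower expectation of f(X_T) for an imprecise Markov chain whose
# rows are probability intervals [lo, hi].
import numpy as np

def lower_expectation(lo, hi, f):
    """min of sum_i p_i f_i over lo <= p <= hi with sum p = 1 (greedy)."""
    p = lo.copy()
    slack = 1.0 - p.sum()
    for i in np.argsort(f):        # pile remaining mass on the smallest f_i
        give = min(hi[i] - p[i], slack)
        p[i] += give
        slack -= give
    return float(p @ f)

def lower_transition(LO, HI, g):
    """Apply the lower transition operator to g, row by row."""
    return np.array([lower_expectation(LO[x], HI[x], g) for x in range(len(g))])

# Two states; each row of (LO, HI) bounds that row's transition probabilities.
LO = np.array([[0.4, 0.3], [0.1, 0.6]])
HI = np.array([[0.7, 0.6], [0.4, 0.9]])
f = np.array([0.0, 1.0])           # f(X_T): indicator of state 1

g = f
for _ in range(10):                # T = 10 steps: cost linear in T
    g = lower_transition(LO, HI, g)
print("lower expectation of f(X_10), per initial state:", np.round(g, 4))
```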

    The Herschel Space Observatory view of dust in M81

    We use Herschel Space Observatory data to place observational constraints on the peak and Rayleigh-Jeans slope of dust emission observed at 70−500 μm in the nearby spiral galaxy M81. We find that the ratios of wave bands between 160 and 500 μm are primarily dependent on radius, but that the ratio of 70 to 160 μm emission shows no clear dependence on surface brightness or radius. These results, along with analyses of the spectral energy distributions, imply that the 160−500 μm emission traces 15−30 K dust heated by evolved stars in the bulge and disc, whereas the 70 μm emission includes dust heated by the active galactic nucleus and by young stars in star-forming regions.
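
    Why band ratios trace dust temperature can be shown with a forward computation: for a modified blackbody with fixed emissivity index, the 250/500 μm ratio roughly doubles over 15−30 K. The β = 2 value below is an assumption for the example, not the paper's fitted index.

```python
# Sketch: the 250/500 um modified-blackbody color as a function of T.
import numpy as np

H, K_B, C = 6.626e-27, 1.381e-16, 2.998e10  # CGS constants

def mbb(wavelength_um, T, beta=2.0):
    """Modified blackbody nu^beta * B_nu(T), up to a constant factor."""
    nu = C / (wavelength_um * 1e-4)
    return nu**beta * (2 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

for T in (15, 20, 25, 30):
    ratio = mbb(250, T) / mbb(500, T)
    print(f"T = {T:2d} K: S(250 um) / S(500 um) = {ratio:.1f}")
```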

    The Phase Diagram and Spectrum of Gauge-Fixed Abelian Lattice Gauge Theory

    We consider a lattice discretization of a covariantly gauge-fixed abelian gauge theory. The gauge fixing is part of the action defining the theory, and we study the phase diagram in detail. As there is no BRST symmetry on the lattice, counterterms are needed, and we construct those explicitly. We show that the proper adjustment of these counterterms drives the theory to a new type of phase transition, at which we recover a continuum theory of (free) photons. We present both numerical and (one-loop) perturbative results, and show that they are in good agreement near this phase transition. Since perturbation theory plays an important role, it is important to choose a discretization of the gauge-fixing action such that lattice perturbation theory is valid. Indeed, we find numerical evidence that lattice actions not satisfying this requirement do not lead to the desired continuum limit. While we do not consider fermions here, we argue that our results, in combination with previous work, provide very strong evidence that this new phase transition can be used to define abelian lattice chiral gauge theories. (Comment: 42 pages, 30 figures)
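
    For orientation, in continuum notation the covariantly gauge-fixed abelian action being discretized has the schematic form below; the photon mass term is the kind of counterterm that must be tuned in the absence of BRST symmetry. Normalizations here are illustrative, not the paper's lattice action.

```latex
% Schematic Euclidean form: Maxwell term, covariant gauge-fixing term,
% and a mass counterterm \kappa to be tuned to its critical value.
S[A] = \int \mathrm{d}^4x \left[
  \tfrac{1}{4}\, F_{\mu\nu} F_{\mu\nu}
  + \tfrac{1}{2\xi}\, (\partial_\mu A_\mu)^2
  - \tfrac{\kappa}{2}\, A_\mu A_\mu
\right]
```

    Tuning the counterterm coefficient κ toward its critical value is what drives the lattice theory to the new phase transition at which free photons are recovered.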

    Far-field noise and internal modes from a ducted propeller at simulated aircraft takeoff conditions

    The ducted propeller offers structural and acoustic benefits typical of conventional turbofan engines while retaining many of the aeroacoustic benefits of the unducted propeller. A model Advanced Ducted Propeller (ADP) was tested in the NASA Lewis Low-Speed Anechoic Wind Tunnel at a simulated takeoff velocity of Mach 0.2. The ADP model was designed and manufactured by the Pratt and Whitney Division of United Technologies. The 16-blade rotor ADP was tested with 22- and 40-vane stators to achieve cut-on and cut-off conditions with respect to propagation of the fundamental rotor-stator interaction tone. Additional test parameters included three inlet lengths, three nozzle sizes, two spinner configurations, and two rotor rub strip configurations. The model was tested over a range of rotor blade setting angles and propeller axis angles-of-attack. Acoustic data were taken with a sideline translating microphone probe and with a unique inlet microphone probe which identified inlet rotating acoustic modes. The beneficial acoustic effects of cut-off were clearly demonstrated. A 5 dB fundamental tone reduction was associated with the long inlet and the 40-vane stator, which may relate to inlet duct geometry. The fundamental tone level was essentially unaffected by propeller axis angle-of-attack at rotor speeds of at least 96 percent of design speed.
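
    The vane-count choice reflects the classic Tyler-Sofrin rule for rotor-stator interaction modes, which a short sketch can make concrete. The simplified cut-off test below (a mode propagates only if its circumferential phase speed exceeds roughly the rotor tip speed, i.e. |nB/m| > 1) is a textbook approximation, not the test program's analysis.

```python
# Sketch of the Tyler-Sofrin rule behind the 22- vs 40-vane choice:
# interaction modes have circumferential order m = n*B - k*V, and the
# mode spins at n*B/m times the shaft speed.
B = 16  # rotor blade count

for V in (22, 40):
    print(f"V = {V} vanes:")
    n = 1  # fundamental interaction tone (1 x blade-passing frequency)
    for k in (1, 2):
        m = n * B - k * V
        if m == 0:
            status = "plane wave (propagates)"
        else:
            status = "cut on" if abs(n * B / m) > 1.0 else "cut off"
        print(f"  k = {k}: m = {m:+d} -> {status}")
```

    For B = 16, the 22-vane stator leaves the fundamental cut on (m = −6), while the 40-vane stator pushes it to m = −24, which is cut off, matching the measured tone reduction.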

    Monoclonal gammopathy after intense induction immunosuppression in renal transplant patients

    Objectives. Incidence and risk factors of post-transplant monoclonal gammopathy were studied in renal transplant patients who received their grafts between 1982 and 1992 (n=390 grafts). Immunoelectrophoresis was performed at annual intervals after transplantation. Results. Forty-six cases of clonal gammopathy were detected: 35 monoclonal and 11 bi- or triclonal, with a predominance of IgG and κ light-chain subtypes (IgG, 39; IgA, 3; IgM, 4; κ, 35; λ, 19). Gammopathy was transient in 17 patients (37%). The 5-year cumulative incidence of gammopathy was 10.7%, much higher than expected for a group of similar age from the general population. Thirty of the 46 gammopathies appeared within the first 2 years after transplantation. Gammopathy never progressed to multiple myeloma during follow-up (median 1 year; range 0-10 years); one patient subsequently developed Kaposi sarcoma. The 2-year incidence of gammopathy was much higher in patients transplanted in 1989-1991 (23/142) than in 1982-1988 (7/248) (P<0.0001). This coincided with the use, since 1989, of quadruple induction immunosuppression (cyclosporin A + azathioprine + prednisone, plus either ATG-Fresenius (ATG-F) or OKT3). The risk of acquiring gammopathy within 2 years of transplantation was 14.7% (95% CI 9.2, 20.3%) in patients receiving quadruple induction therapy, but only 3.0% (CI 1.2, 6.1%) without such therapy (P<0.0001). The risk for patients receiving quadruple immunosuppression with OKT3 was 24.5%, significantly greater than with ATG-F (11.8%; P<0.05). Discriminant analysis revealed that the type of immunosuppression, but not age or year of transplantation, was an independent risk factor for gammopathy. Conclusions. Monoclonal gammopathy frequently occurs after renal transplantation. Risks are higher for patients receiving quadruple induction immunosuppression, particularly if it includes OKT3. Follow-up of these patients is warranted for the early detection of malignant transformation.
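
    The era comparison reported above (23/142 vs. 7/248 gammopathies within two years) can be illustrated with a standard 2 × 2 contingency-table test. This is a sketch of the kind of comparison reported, not necessarily the authors' own statistical procedure.

```python
# Sketch: the 1989-1991 vs. 1982-1988 comparison as a 2x2 table.
from scipy.stats import fisher_exact

table = [[23, 142 - 23],   # 1989-1991: gammopathy, no gammopathy
         [7, 248 - 7]]     # 1982-1988: gammopathy, no gammopathy
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
```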

    Assessing the reproducibility of discriminant function analyses.

    Data are the foundation of empirical research, yet all too often the datasets underlying published papers are unavailable, incorrect, or poorly curated. This is a serious issue, because future researchers are then unable to validate published results or reuse data to explore new ideas and hypotheses. Even if data files are securely stored and accessible, they must also be accompanied by accurate labels and identifiers. To assess how often problems with metadata or data curation affect the reproducibility of published results, we attempted to reproduce Discriminant Function Analyses (DFAs) from the field of organismal biology. DFA is a commonly used statistical analysis that has changed little since its inception almost eight decades ago, and therefore provides an opportunity to test reproducibility among datasets of varying ages. Of the 100 papers we initially surveyed, 14 were excluded because they did not present the common types of quantitative results from their DFA or gave insufficient details of their DFA. Of the remaining 86 datasets, there were 15 cases in which we were unable to confidently relate the dataset we received to the one used in the published analysis. The reasons included incomprehensible or absent variable labels, the DFA being performed on an unspecified subset of the data, and the dataset we received being incomplete. We focused on reproducing three common summary statistics from DFAs: the percent variance explained, the percentage correctly assigned, and the largest discriminant function coefficient. The reproducibility of the first two was fairly high (20 of 26, and 44 of 60 datasets, respectively), whereas our success rate with the discriminant function coefficients was lower (15 of 26 datasets). When considering all three summary statistics, we were able to completely reproduce 46 (65%) of 71 datasets. While our results show that a majority of studies are reproducible, they highlight the fact that many studies still fall short of the carefully curated research that the scientific community and the public expect.
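
    The three summary statistics targeted by this study can be extracted from any standard linear discriminant analysis, as the sketch below shows using scikit-learn on the iris data; the published DFAs were, of course, run in many different packages.

```python
# Sketch: the three DFA summary statistics the study checked, computed
# with scikit-learn's linear discriminant analysis on the iris data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
dfa = LinearDiscriminantAnalysis().fit(X, y)

# 1. Percent variance explained by each discriminant function.
print("variance explained (%):",
      np.round(dfa.explained_variance_ratio_ * 100, 1))

# 2. Percentage of cases correctly assigned (resubstitution).
print("correctly assigned (%):", round(100 * dfa.score(X, y), 1))

# 3. Largest (absolute) coefficient of the first discriminant function.
coefs = dfa.scalings_[:, 0]
print("largest coefficient:", round(float(abs(coefs).max()), 3))
```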