Progress report on the ultra heavy cosmic ray experiment (AO178)
The Ultra Heavy Cosmic Ray Experiment (UHCRE) is based on a modular array of 192 side-viewing solid state nuclear track detector stacks. These stacks were mounted in sets of four in 48 pressure vessels occupying sixteen peripheral Long Duration Exposure Facility (LDEF) trays. The extended duration of the LDEF mission has resulted in a greatly enhanced scientific yield from the UHCRE. The geometry factor for high energy cosmic ray nuclei, allowing for Earth shadowing, was 30 sq m-sr, giving a total exposure factor of 170 sq m-sr-y at an orbital inclination of 28.4 degrees. Scanning results indicate that about 3000 cosmic ray nuclei in the charge region with Z greater than 65 were collected. This sample is more than ten times the current world data set in the field (taken to be the data set from the HEAO-3 mission plus that from the Ariel-6 mission) and is sufficient to provide the world's first statistically significant sample of actinide (Z greater than 88) cosmic rays. Results to date are presented, including details of ultra-heavy cosmic ray nuclei, analysis of pre-flight and post-flight calibration events, and details of track response in the context of detector temperature history. The integrated effect of all temperature- and age-related latent track variations causes a maximum charge shift of +/- 0.8 e for uranium and +/- 0.6 e for the platinum-lead group. The precision of charge assignment as a function of energy is derived, and evidence for remarkably good charge resolution achieved in the UHCRE is considered. Astrophysical implications of the UHCRE charge spectrum are discussed.
The LDEF ultra heavy cosmic ray experiment
The LDEF Ultra Heavy Cosmic Ray Experiment (UHCRE) used 16 side-viewing LDEF trays, giving a total geometry factor for high energy cosmic rays of 30 sq m sr. The total exposure factor was 170 sq m sr y. The experiment is based on a modular array of 192 solid state nuclear track detector stacks, mounted in sets of four in 48 pressure vessels. The extended duration of the LDEF mission has resulted in a greatly enhanced potential scientific yield from the UHCRE. Initial scanning results indicate that at least 1800 cosmic ray nuclei with Z greater than 65 were collected, including the world's first statistically significant sample of actinides. Post-flight work to date and the current status of the experiment are reviewed.
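As a quick consistency check of the exposure numbers quoted in the two UHCRE abstracts, a geometry factor of 30 sq m sr and a total exposure factor of 170 sq m sr y imply an effective exposure duration of about 5.7 years, consistent with the extended LDEF mission. A minimal sketch of that arithmetic (the variable names are illustrative, not from the abstracts):

```python
# Exposure factor = geometry factor * exposure duration, so the
# implied duration is the ratio of the two quoted quantities.
geometry_factor = 30.0    # m^2 sr, high-energy nuclei, Earth shadowing included
exposure_factor = 170.0   # m^2 sr y, total over the mission

implied_years = exposure_factor / geometry_factor
print(f"implied exposure duration = {implied_years:.2f} years")
```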
An account of a flare related shock event recorded by the energetic particle detector EPONA of the Giotto spacecraft during September 1985 (STIP interval 18)
The Energetic Particle Detector EPONA flown on the Giotto Mission to Halley's Comet was designed to measure electrons, protons, and heavier ions (E greater than 20 keV) in the Comet Halley environment and during the Cruise Phase of the mission (EPONA switch-on: 22 August 1985; Halley encounter: 13 March 1986). In September 1985 (STIP Interval XVIII) a well-defined shock event was recorded at EPONA in association with a sequence of solar flares, and a preliminary account of this event is presented.
Early results from the ultra heavy cosmic ray experiment
Data extraction and analysis of the LDEF Ultra Heavy Cosmic Ray Experiment are continuing. Almost twice the pre-LDEF world sample has been investigated, and some details of the charge spectrum in the region from Z approximately 70 up to and including the actinides are presented. The early results indicate r-process enhancement over solar system source abundances.
Combining macula clinical signs and patient characteristics for age-related macular degeneration diagnosis: a machine learning approach
Background: To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) "black-box" approaches, for automated diagnosis of Age-related Macular Degeneration (AMD).
Methods: Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients' attributes included demographics and, for each eye, presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macula thickness, macular scar, subretinal fibrosis). Interpretable techniques known as white-box methods, including logistic regression and decision trees, as well as less interpretable techniques known as black-box methods, such as support vector machines (SVM), random forests and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was confirmed diagnosis of AMD by physicians. Sensitivity, specificity and area under the receiver operating characteristic curve (AUC) were used to assess performance.
Results: The study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression and AdaBoost showed a mean performance of 0.92, followed by SVM and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians' decision pathways to diagnose AMD.
Conclusions: Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into EHR systems to provide physicians with real-time (interpretable) support.
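The model-comparison protocol described in the abstract (fit several white-box and black-box classifiers on clinical-sign features, then compare held-out AUC) can be sketched as follows. The synthetic dataset, feature count, and hyperparameters below are assumptions for illustration; the real study used EHR-derived clinical signs and demographics:

```python
# Sketch of the train/validate/compare-by-AUC protocol using scikit-learn.
# make_classification stands in for the binary clinical signs plus age.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=912, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),   # white-box
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),             # black-box
    "SVM": SVC(probability=True, random_state=0),
}

aucs = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    aucs[name] = roc_auc_score(y_test, scores)
    print(f"{name}: AUC = {aucs[name]:.2f}")
```

Feature importances from the tree-based models (or logistic-regression coefficients) would then identify the most discriminating variables, as the study did for soft drusen and age.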
Modeling the Spread of Methicillin-Resistant Staphylococcus aureus in Nursing Homes for Elderly
Methicillin-resistant Staphylococcus aureus (MRSA) is endemic in many hospital settings, including nursing homes. It is an important nosocomial pathogen that causes mortality and an economic burden to patients, hospitals, and the community. The epidemiology of the bacteria in nursing homes is both hospital- and community-like. Transmission occurs via the hands of health care workers (HCWs) and direct contacts among residents during social activities. In this work, mathematical modeling in both deterministic and stochastic frameworks is used to study dissemination of MRSA among residents and HCWs, persistence and prevalence of MRSA in a population, and possible means of controlling the spread of this pathogen in nursing homes. The model predicts that: without strict screening and decolonization of colonized individuals at admission, MRSA may persist; decolonization of colonized residents, improving hand hygiene in both residents and HCWs, reducing the duration of contamination of HCWs, and decreasing the resident-to-staff ratio are possible control strategies; the mean time that a resident remains susceptible after admission may be prolonged by screening and decolonization treatment of colonized individuals; in the stochastic framework, the total number of colonized residents varies and may increase when the admission of colonized residents, the duration of colonization, the average number of contacts among residents, or the average number of contacts that each resident requires from HCWs increases; and an introduction of a colonized individual into an MRSA-free nursing home has a much higher probability of leading to a major outbreak than an introduction of a contaminated HCW.
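The deterministic side of the resident/HCW transmission structure described above can be sketched as a small compartmental model: residents are susceptible (S) or colonized (C), HCWs are uncontaminated (U) or contaminated (W), and transmission flows resident-to-resident, resident-to-HCW, and HCW-to-resident. All parameter values below are illustrative assumptions, not the paper's fitted values:

```python
# Minimal deterministic sketch (forward-Euler integration) of MRSA spread
# in a nursing home with resident and HCW compartments. Parameters are
# made up for illustration; the structure mirrors the abstract's routes.
def simulate(days=365, dt=0.01):
    S, C = 99.0, 1.0        # residents: one colonized individual introduced
    U, W = 20.0, 0.0        # health care workers, all initially uncontaminated
    beta_rr = 0.002         # resident-to-resident transmission (social contacts)
    beta_wr = 0.01          # contaminated HCW -> susceptible resident
    beta_rw = 0.01          # colonized resident -> HCW hand contamination
    gamma = 1 / 300.0       # resident decolonization rate (per day)
    mu = 24.0               # HCW decontamination (hand hygiene) rate (per day)
    for _ in range(int(days / dt)):
        new_col = (beta_rr * S * C + beta_wr * S * W) * dt
        new_con = (beta_rw * U * C) * dt
        S += gamma * C * dt - new_col
        C += new_col - gamma * C * dt
        U += mu * W * dt - new_con
        W += new_con - mu * W * dt
    return S, C, U, W

S, C, U, W = simulate()
print(f"colonized residents after one year: {C:.1f}")
```

Raising the hand-hygiene rate `mu` or the decolonization rate `gamma` in this sketch plays the role of the control strategies the abstract lists; the stochastic framework would replace these deterministic flows with random events.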
Integration of face and voice during emotion perception : is there anything gained for the perceptual system beyond stimulus modality redundancy?
Proteomic Analysis of Fusarium solani Isolated from the Asian Longhorned Beetle, Anoplophora glabripennis
Wood is a highly intractable food source, yet many insects successfully colonize and thrive in this challenging niche. Overcoming the lignin barrier of wood is a key challenge in nutrient acquisition, but full depolymerization of intact lignin polymers has only been conclusively demonstrated in fungi and is not known to occur by enzymes produced by insects or bacteria. Previous research validated that lignocellulose and hemicellulose degradation occur within the gut of the wood boring insect, Anoplophora glabripennis (Asian longhorned beetle), and that a fungal species, Fusarium solani (ATCC MYA 4552), is consistently associated with the larval stage. While the nature of this relationship is unresolved, we sought to assess this fungal isolate's ability to degrade lignocellulose and cell wall polysaccharides and to extract nutrients from woody tissue. This gut-derived fungal isolate was inoculated onto a wood-based substrate and shotgun proteomics using Multidimensional Protein Identification Technology (MudPIT) was employed to identify 400 expressed proteins. Through this approach, we detected proteins responsible for plant cell wall polysaccharide degradation, including proteins belonging to 28 glycosyl hydrolase families and several cutinases, esterases, lipases, pectate lyases, and polysaccharide deacetylases. Proteinases with broad substrate specificities and ureases were observed, indicating that this isolate has the capability to digest plant cell wall proteins and recycle nitrogenous waste under periods of nutrient limitation. Additionally, several laccases, peroxidases, and enzymes involved in extracellular hydrogen peroxide production previously implicated in lignin depolymerization were detected. 
In vitro biochemical assays were conducted to corroborate the MudPIT results and confirmed that cellulases, glycosyl hydrolases, xylanases, laccases, and Mn-independent peroxidases were active in culture; however, lignin- and Mn-dependent peroxidase activities were not detected. While little is known about the role of filamentous fungi and their associations with insects, these findings suggest that this isolate has the endogenous potential to degrade lignocellulose and extract nutrients from woody tissue.
Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel
A major use of the 1000 Genomes Project (1000GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000GP haplotype reference set for use by the human genetic community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants. © 2014 Macmillan Publishers Limited. All rights reserved.
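The genotype-discordance metric used to validate the panel is simply the fraction of sites where an estimated genotype disagrees with the validation genotype. A toy illustration, with genotypes coded as alternate-allele counts (0/1/2) and made-up data:

```python
# Genotype discordance: fraction of validation sites where the estimated
# genotype (0, 1, or 2 copies of the alternate allele) does not match.
validation = [0, 1, 2, 1, 0, 2, 1, 0, 2, 1]   # gold-standard genotypes
estimated  = [0, 1, 2, 0, 0, 2, 1, 0, 2, 2]   # genotypes from the new panel

mismatches = sum(v != e for v, e in zip(validation, estimated))
discordance = mismatches / len(validation)
print(f"genotype discordance: {discordance:.1%}")  # 2 of 10 sites disagree
```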