Sediment Management for Southern California Mountains, Coastal Plains, and Shoreline
During FY77, with financial support from Los Angeles County, the U.S. Geological Survey, Orange County, the U.S. Army Corps of Engineers, and discretionary funding provided by a grant from the Ford Foundation, substantial progress was made at EQL and SPL toward the objectives of the initial Planning and Assessment Phase of the CIT/SIO Sediment Management Project. The current timetable for completion of this phase is June 1978. This report briefly describes the project status, including general administration, special activities, and technical work.
An experimental and numerical study of particle nucleation and growth during low-pressure thermal decomposition of silane
Abstract This paper discusses an experimental and numerical study of the nucleation and growth of particles during low-pressure (∼1.0 Torr) thermal decomposition of silane (SiH4). A Particle Beam Mass Spectrometer was used to measure particle size distributions in a parallel-plate showerhead-type semiconductor reactor. An aerosol dynamics moment-type formulation coupled with a chemically reacting fluid flow model was used to predict particle concentration, size, and transport in the reactor. Particle nucleation kinetics via a sequence of chemical clustering reactions among silicon hydride molecular clusters, growth by heterogeneous chemical reactions on particle surfaces and coagulation, and transport by convection, diffusion, and thermophoresis were included in the model. The effects of pressure, temperature, flow residence time, carrier gas, and silane concentration were examined under conditions typically used for low-pressure (∼1 Torr) thermal chemical vapor deposition of polysilicon. The numerical simulations predict that several pathways involving linear and polycyclic silicon hydride molecules result in formation of particle "nuclei," which subsequently grow by heterogeneous reactions on the particle surfaces. The model is in good agreement with observations for the pressure and temperature at which particle formation begins, particle sizes and growth rates, and relative particle concentrations at various process conditions. A simplified, computationally inexpensive, quasi-coupled modeling approach is suggested as an engineering tool for process equipment design and contamination control during low-pressure thermal silicon deposition.
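The moment-type aerosol formulation described above tracks integral properties of the size distribution rather than individual particles. A minimal two-moment sketch of that idea follows; the explicit Euler stepping, the constant per-particle growth term, and all rate values are illustrative assumptions, not the paper's coupled chemistry model:

```python
def step_moments(m0, m1, j_nuc, v_nuc, g_vol, k_coag, dt):
    """Advance a two-moment aerosol model by one explicit Euler step.
    m0: particle number concentration; m1: particle volume concentration.
    Nucleation adds j_nuc particles of volume v_nuc per unit time,
    surface growth adds volume g_vol per particle per unit time, and
    coagulation removes number while conserving volume.
    Illustrative sketch only, not the paper's model."""
    dm0 = j_nuc - 0.5 * k_coag * m0 * m0      # nucleation source, coagulation sink
    dm1 = j_nuc * v_nuc + g_vol * m0          # nucleation + heterogeneous growth
    return m0 + dt * dm0, m1 + dt * dm1

# Toy integration: a nucleation burst followed by growth and coagulation
# (all rate constants are made-up illustrative values).
m0, m1 = 0.0, 0.0
for _ in range(1000):
    m0, m1 = step_moments(m0, m1, j_nuc=1e6, v_nuc=1e-27,
                          g_vol=1e-24, k_coag=1e-9, dt=1e-3)
```

Higher-fidelity variants carry more moments (e.g. surface area) and close the growth term against the assumed size-distribution shape, which is essentially what a moment-type formulation buys over sectional models at much lower cost.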
SHRINE: Enabling Nationally Scalable Multi-Site Disease Studies
Results of medical research studies are often contradictory or cannot be reproduced. One reason is that there may not be enough patient subjects available for observation for a long enough time period. Another reason is that patient populations may vary considerably with respect to geographic and demographic boundaries, thus limiting how broadly the results apply. Even when similar patient populations are pooled together from multiple locations, differences in medical treatment and record systems can limit which outcome measures can be commonly analyzed. In total, these differences in medical research settings can lead to differing conclusions or can even prevent some studies from starting. We thus sought to create a patient research system that could aggregate as many patient observations as possible from a large number of hospitals in a uniform way. We call this system the ‘Shared Health Research Information Network’, with the following properties: (1) reuse electronic health data from everyday clinical care for research purposes, (2) respect patient privacy and hospital autonomy, (3) aggregate patient populations across many hospitals to achieve statistically significant sample sizes that can be validated independently of a single research setting, (4) harmonize the observation facts recorded at each institution such that queries can be made across many hospitals in parallel, (5) scale to regional and national collaborations. The purpose of this report is to provide open source software for multi-site clinical studies and to report on early uses of this application. At this time SHRINE implementations have been used for multi-site studies of autism co-morbidity, juvenile idiopathic arthritis, peripartum cardiomyopathy, colorectal cancer, diabetes, and others. The wide range of study objectives and growing adoption suggest that SHRINE may be applicable beyond the research uses and participating hospitals named in this report.
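The aggregate-without-centralizing design described above can be sketched as a federated count query: each site evaluates the query against its own records and only the resulting count crosses the network. The callable-per-site interface below is a hypothetical stand-in for the i2b2/SHRINE web-service endpoints a real deployment would use:

```python
from concurrent.futures import ThreadPoolExecutor

def federated_count(sites, query):
    """Fan `query` out to every site in parallel and aggregate only
    the returned counts, so patient-level records never leave the
    hospital that holds them. `sites` maps a site name to a callable
    that evaluates the query locally (hypothetical interface)."""
    with ThreadPoolExecutor(max_workers=max(1, len(sites))) as pool:
        futures = {name: pool.submit(run, query) for name, run in sites.items()}
        per_site = {name: f.result() for name, f in futures.items()}
    return per_site, sum(per_site.values())

# Toy sites: each "hospital" counts matching records in its own list.
records = {
    "hospital_a": [{"dx": "autism"}, {"dx": "diabetes"}],
    "hospital_b": [{"dx": "autism"}],
}
sites = {name: (lambda q, rs=rs: sum(r["dx"] == q for r in rs))
         for name, rs in records.items()}
per_site, total = federated_count(sites, "autism")
```

Returning per-site counts alongside the total mirrors property (3): the pooled sample size can be checked against each contributing setting independently.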
Vapor−Wall Deposition in Chambers: Theoretical Considerations
In order to constrain the effects of vapor–wall deposition on measured secondary organic aerosol (SOA) yields in laboratory chambers, researchers recently varied the seed aerosol surface area in toluene oxidation and observed a clear increase in the SOA yield with increasing seed surface area (Zhang, X.; et al. Proc. Natl. Acad. Sci. U.S.A. 2014, 111, 5802). Using a coupled vapor–particle dynamics model, we examine the extent to which this increase is the result of vapor–wall deposition versus kinetic limitations arising from imperfect accommodation of organic species into the particle phase. We show that a seed surface area dependence of the SOA yield is present only when condensation of vapors onto particles is kinetically limited. The existence of kinetic limitation can be predicted by comparing the characteristic time scales of gas-phase reaction, vapor–wall deposition, and gas–particle equilibration. The gas–particle equilibration time scale depends on the gas–particle accommodation coefficient α_p. Regardless of the extent of kinetic limitation, vapor–wall deposition depresses the SOA yield below that in its absence, since vapor molecules that might otherwise condense on particles deposit on the walls. To accurately extrapolate chamber-derived yields to atmospheric conditions, both vapor–wall deposition and kinetic limitations must be taken into account.
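The timescale comparison described above can be made concrete with a minimal sketch. It uses a standard Fuchs–Sutugin transition-regime condensation sink; the vapor diffusivity, mean thermal speed, wall-loss rate, and seed loading below are illustrative assumptions, not the study's parameters:

```python
import math

def equilibration_timescale(n_p, d_p, alpha_p, d_gas=7e-6, c_bar=250.0):
    """Gas-particle equilibration timescale (s): the inverse of the
    condensation sink 2*pi*d_p*D*beta*N, where beta is the
    Fuchs-Sutugin transition-regime correction, which carries the
    dependence on the accommodation coefficient alpha_p."""
    mfp = 3 * d_gas / c_bar                    # vapor mean free path (m)
    kn = 2 * mfp / d_p                         # Knudsen number
    beta = (1 + kn) / (1 + (4 / (3 * alpha_p) + 0.377) * kn
                       + (4 / (3 * alpha_p)) * kn ** 2)
    return 1.0 / (2 * math.pi * d_p * d_gas * beta * n_p)

# Illustrative chamber: 10^4 particles/cm^3 of 200 nm seed, and a
# first-order vapor wall-loss rate of ~1e-4 s^-1.
tau_wall = 1.0 / 1e-4
tau_fast = equilibration_timescale(n_p=1e10, d_p=2e-7, alpha_p=1.0)
tau_slow = equilibration_timescale(n_p=1e10, d_p=2e-7, alpha_p=1e-3)
kinetically_limited = tau_slow > tau_wall
```

With perfect accommodation the particles equilibrate far faster than the walls remove vapor; dropping α_p by three orders of magnitude pushes the equilibration timescale past the wall-loss timescale, which is the regime in which the SOA yield acquires a seed-surface-area dependence.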
Expression Atlas update--a database of gene and transcript expression from microarray- and sequencing-based functional genomics experiments.
Expression Atlas (http://www.ebi.ac.uk/gxa) is a value-added database providing information about gene, protein and splice variant expression in different cell types, organism parts, developmental stages, diseases and other biological and experimental conditions. The database consists of selected high-quality microarray and RNA-sequencing experiments from ArrayExpress that have been manually curated, annotated with Experimental Factor Ontology terms and processed using standardized microarray and RNA-sequencing analysis methods. The new version of Expression Atlas introduces the concept of 'baseline' expression, i.e. gene and splice variant abundance levels in healthy or untreated conditions, such as tissues or cell types. Differential gene expression data benefit from an in-depth curation of experimental intent, resulting in biologically meaningful 'contrasts', i.e. instances of differential pairwise comparisons between two sets of biological replicates. Other novel aspects of Expression Atlas are its strict quality control of raw experimental data, up-to-date RNA-sequencing analysis methods, expression data at the level of gene sets as well as genes, and a more powerful search interface designed to maximize the biological value provided to the user.
Whole genome identification of Mycobacterium tuberculosis vaccine candidates by comprehensive data mining and bioinformatic analyses
Abstract. Background: Mycobacterium tuberculosis, the causative agent of tuberculosis (TB), infects ~8 million people annually, culminating in ~2 million deaths. Moreover, about one third of the population is latently infected, 10% of whom develop disease during their lifetime. Currently approved prophylactic TB vaccines (BCG and derivatives thereof) are of variable efficacy in adult protection against pulmonary TB (0%–80%) and are directed essentially against early-phase infection. Methods: A genome-scale dataset was constructed by analyzing published data on: (1) global gene expression studies under conditions which simulate intra-macrophage stress, dormancy, persistence and/or reactivation; (2) cellular and humoral immunity, and vaccine potential. This information was compiled along with revised annotation/bioinformatic characterization of selected gene products and in silico mapping of T-cell epitopes. Protocols for scoring, ranking and prioritization of the antigens were developed and applied. Results: Cross-matching of literature and in silico-derived data, in conjunction with the prioritization scheme and biological rationale, allowed selection of 189 putative vaccine candidates from the entire genome. Within the 189 set, the relative distribution of antigens in 3 functional categories differs significantly from their distribution in the whole genome, with reduction in the Conserved hypothetical category (due to improved annotation) and enrichment in the Lipid and Virulence categories. Other prominent representatives in the 189 set are the PE/PPE proteins; iron sequestration, nitroreductases and proteases, all within the Intermediary metabolism and respiration category; and ESX secretion systems, resuscitation promoting factors and lipoproteins, all within the Cell wall category.
Application of a ranking scheme based on qualitative and quantitative scores resulted in a list of 45 best-scoring antigens, of which 74% belong to the dormancy/reactivation/resuscitation classes; 30% belong to the Cell wall category; 13% are classical vaccine candidates; and 9% are categorized as Conserved hypotheticals, all potentially very potent T-cell antigens. Conclusion: The comprehensive literature- and in silico-based analyses allowed selection of a repertoire of 189 vaccine candidates out of the whole-genome 3989 ORF products. This repertoire, ranked to generate a list of 45 top-hit antigens, is a platform for selection of genes covering all stages of M. tuberculosis infection, to be incorporated in rBCG or subunit-based vaccines.
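The scoring-and-ranking step described above can be sketched as a weighted sum over per-criterion scores. The criterion names, weights, and score values below are hypothetical illustrations, not the study's actual scheme:

```python
def rank_antigens(candidates, weights):
    """Return candidate antigen names sorted best-first by a weighted
    sum of per-criterion scores. `candidates` maps an antigen name to
    a dict of criterion scores; missing criteria contribute 0."""
    def total(scores):
        return sum(w * scores.get(k, 0) for k, w in weights.items())
    return sorted(candidates, key=lambda name: total(candidates[name]),
                  reverse=True)

# Hypothetical criteria and scores for two real M. tuberculosis genes.
weights = {"expression_under_stress": 2.0, "t_cell_epitopes": 1.5,
           "immunogenicity_evidence": 1.0}
candidates = {
    "Rv2031c": {"expression_under_stress": 3, "t_cell_epitopes": 2,
                "immunogenicity_evidence": 2},
    "Rv0350":  {"expression_under_stress": 1, "t_cell_epitopes": 3},
}
ranking = rank_antigens(candidates, weights)
```

A cutoff applied to such a ranking is what turns the 189-candidate repertoire into a short list like the 45 top-hit antigens reported.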
The Human Phenotype Ontology in 2024: phenotypes around the world.
The Human Phenotype Ontology (HPO) is a widely used resource that comprehensively organizes and defines the phenotypic features of human disease, enabling computational inference and supporting genomic and phenotypic analyses through semantic similarity and machine learning algorithms. The HPO has widespread applications in clinical diagnostics and translational research, including genomic diagnostics, gene-disease discovery, and cohort analytics. In recent years, groups around the world have developed translations of the HPO from English to other languages, and the HPO browser has been internationalized, allowing users to view HPO term labels and in many cases synonyms and definitions in ten languages in addition to English. Since our last report, a total of 2239 new HPO terms and 49235 new HPO annotations were developed, many in collaboration with external groups in the fields of psychiatry, arthrogryposis, immunology and cardiology. The Medical Action Ontology (MAxO) is a new effort to model treatments and other measures taken for clinical management. Finally, the HPO consortium is contributing to efforts to integrate the HPO and the GA4GH Phenopacket Schema into electronic health records (EHRs) with the goal of more standardized and computable integration of rare disease data in EHRs.
Use of quantitative molecular diagnostic methods to identify causes of diarrhoea in children: a reanalysis of the GEMS case-control study.
BACKGROUND: Diarrhoea is the second leading cause of mortality in children worldwide, but establishing the cause can be complicated by diverse diagnostic approaches and varying test characteristics. We used quantitative molecular diagnostic methods to reassess causes of diarrhoea in the Global Enteric Multicenter Study (GEMS). METHODS: GEMS was a study of moderate to severe diarrhoea in children younger than 5 years in Africa and Asia. We used quantitative real-time PCR (qPCR) to test for 32 enteropathogens in stool samples from cases and matched asymptomatic controls from GEMS, and compared pathogen-specific attributable incidences with those found with the original GEMS microbiological methods, including culture, EIA, and reverse-transcriptase PCR. We calculated revised pathogen-specific burdens of disease and assessed causes in individual children. FINDINGS: We analysed 5304 sample pairs. For most pathogens, incidence was greater with qPCR than with the original methods, particularly for adenovirus 40/41 (around five times), Shigella spp or enteroinvasive Escherichia coli (EIEC) and Campylobacter jejuni or C coli (around two times), and heat-stable enterotoxin-producing E coli ([ST-ETEC] around 1·5 times). The six most attributable pathogens became, in descending order, Shigella spp, rotavirus, adenovirus 40/41, ST-ETEC, Cryptosporidium spp, and Campylobacter spp. Pathogen-attributable diarrhoeal burden was 89·3% (95% CI 83·2-96·0) at the population level, compared with 51·5% (48·0-55·0) in the original GEMS analysis. The top six pathogens accounted for 77·8% (74·6-80·9) of all attributable diarrhoea. With use of model-derived quantitative cutoffs to assess individual diarrhoeal cases, 2254 (42·5%) of 5304 cases had one diarrhoea-associated pathogen detected and 2063 (38·9%) had two or more, with Shigella spp and rotavirus being the pathogens most strongly associated with diarrhoea in children with mixed infections.
INTERPRETATION: A quantitative molecular diagnostic approach improved population-level and case-level characterisation of the causes of diarrhoea and indicated a high burden of disease associated with six pathogens, for which targeted treatment should be prioritised. FUNDING: Bill & Melinda Gates Foundation.
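The case-level step described above, calling a pathogen diarrhoea-associated only when its qPCR quantity passes a model-derived cutoff, can be sketched as follows. Since lower cycle-threshold (Ct) means more target, a pathogen is flagged when its Ct falls below the cutoff; the cutoff and Ct values here are hypothetical placeholders, not the study's fitted values:

```python
def attributable_pathogens(ct_values, cutoffs):
    """Return, sorted, the pathogens whose qPCR Ct falls below that
    pathogen's quantity cutoff (lower Ct = higher pathogen load).
    A Ct of None means the target was not detected; pathogens with
    no cutoff are never flagged."""
    return sorted(p for p, ct in ct_values.items()
                  if ct is not None and ct < cutoffs.get(p, float("-inf")))

# Hypothetical cutoffs and one toy case: Shigella/EIEC well below its
# cutoff, rotavirus undetected, ST-ETEC detected but above cutoff.
cutoffs = {"shigella_eiec": 27.9, "rotavirus": 32.6}
case = {"shigella_eiec": 24.1, "rotavirus": None, "st_etec": 30.0}
hits = attributable_pathogens(case, cutoffs)
mixed_infection = len(hits) >= 2
```

Applying this per-case rule across all sample pairs is what yields the reported split between single-pathogen and mixed-infection cases.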