Benchmarking network propagation methods for disease gene identification
In-silico identification of potential target genes for disease is an essential aspect of drug target discovery. Recent studies suggest that successful targets can be found by leveraging genetic, genomic and protein interaction information. Here, we systematically tested the ability of 12 varied algorithms, based on network propagation, to identify genes that have been targeted by any drug, on gene-disease data from 22 common non-cancerous diseases in OpenTargets. We considered two biological networks and six performance metrics, and compared two types of input gene-disease association scores. The impact of the design factors on performance was quantified through additive explanatory models. Standard cross-validation led to over-optimistic performance estimates due to the presence of protein complexes. In order to obtain realistic estimates, we introduced two novel protein complex-aware cross-validation schemes. When seeding biological networks with known drug targets, machine learning and diffusion-based methods found around 2-4 true targets within the top 20 suggestions. Seeding the networks with genes associated with disease by genetics decreased performance below 1 true hit on average. The use of a larger network, although noisier, improved overall performance. We conclude that diffusion-based prioritisers and machine learning applied to diffusion-based features are suited for drug discovery in practice and improve over simpler neighbour-voting methods. We also demonstrate the large impact of choosing an adequate validation strategy and the definition of seed disease genes.
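The core propagation idea can be sketched with a random-walk-with-restart diffusion, one classic instance of the family of network propagation methods the abstract refers to. The toy graph and seed scores below are illustrative, not the paper's networks or data:

```python
import numpy as np

def propagate(adj, seeds, restart=0.5, tol=1e-8):
    """Random-walk-with-restart diffusion: spread seed scores over a network.

    adj    : symmetric adjacency matrix (numpy array)
    seeds  : initial gene-disease association scores (numpy array)
    restart: probability of jumping back to the seed distribution
    """
    # Column-normalise so each column sums to 1 (transition matrix).
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    p = seeds / seeds.sum()
    p0 = p.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 5-gene network: gene 0 is a known disease gene (the seed);
# genes 1 and 2 are its neighbours, genes 3 and 4 lie further away.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 1, 0],
                [1, 1, 0, 0, 0],
                [0, 1, 0, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
seeds = np.array([1.0, 0, 0, 0, 0])
scores = propagate(adj, seeds)
ranking = np.argsort(-scores)  # candidate genes, highest score first
```

Genes close to the seed in the network receive higher diffusion scores, which is what makes the top of the ranking a candidate list for prioritisation.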
Techno-economic and environmental evaluation of producing chemicals and drop-in aviation biofuels via aqueous phase processing
Novel aqueous-phase processing (APP) techniques can thermochemically convert cellulosic biomass into chemicals and liquid fuels. Here, we evaluate these technologies through process design and simulation, and from a techno-economic and environmental point of view. This is the first peer-reviewed study that conducts such an assessment taking into account different biomass pretreatment methods, process yields, product slates, and hydrogen sources, as well as the historical price variation of a number of core commodities involved in the production. This paper undertakes detailed process simulations for seven biorefinery models designed to convert red maple wood using a set of APP technologies into chemicals (e.g. furfural, hydroxymethylfurfural and gamma-valerolactone) and liquid fuels (e.g. naphtha, jet fuel and diesel). The simulation results are used to conduct a well-to-wake (WTW) lifecycle analysis for greenhouse gas (GHG) emissions, and minimum selling price (MSP) calculations based on historical commodity price data from January 2010 to December 2015. Emphasis has been given to aviation fuels throughout this work, and the results have been reported and discussed extensively for these fuels. It is found that the WTW GHG emissions and the MSP of jet fuel vary across the different refinery configurations from 31.6–104.5 gCO2e per MJ (64% lower and 19% higher, respectively, than a reported petroleum-derived fuel baseline) and $0.26–1.67 per liter (61% lower and 146% higher, respectively, than the average conventional jet fuel price over the same time frame). It has been shown that the variation in the estimated emissions and fuel selling prices is primarily driven by the choice of hydrogen source and the relative production volumes of chemicals to fuels, respectively.
The latter is a consequence of the fact that the APP chemicals considered here have a higher economic value than the liquid transportation fuels, and that their production is less carbon intensive compared to these fuels. However, the market for these chemicals may become saturated if they are produced in large quantities, and increasing biofuel production over that of chemicals can help the biorefinery benefit under renewable fuel programs.
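As a quick consistency check of the ranges quoted above, the baseline jet fuel price can be back-calculated from both ends of the MSP range (the currency is assumed to be USD; the baseline is inferred here, not stated in the abstract):

```python
# Back-calculate the implied baseline jet fuel price from the quoted MSP
# range (0.26-1.67 per litre, i.e. 61% lower to 146% higher than the
# 2010-2015 average conventional jet fuel price).
msp_low, msp_high = 0.26, 1.67               # assumed $/litre
baseline_from_low = msp_low / (1 - 0.61)     # implied baseline, low end
baseline_from_high = msp_high / (1 + 1.46)   # implied baseline, high end
```

Both ends imply a baseline of roughly $0.67–0.68 per litre, so the two percentage figures are mutually consistent.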
Quantifying the climate impacts of albedo changes due to biofuel production: a comparison with biogeochemical effects
Lifecycle analysis is a tool widely used to evaluate the climate impact of greenhouse gas emissions attributable to the production and use of biofuels. In this paper we employ an augmented lifecycle framework that includes climate impacts from changes in surface albedo due to land use change. We consider eleven land-use change scenarios for the cultivation of biomass for middle distillate fuel production, and compare our results to previous estimates of lifecycle greenhouse gas emissions for the same set of land-use change scenarios in terms of CO2e per unit of fuel energy. We find that two of the land-use change scenarios considered demonstrate a warming effect due to changes in surface albedo, compared to conventional fuel, the largest of which is for replacement of desert land with salicornia cultivation. This corresponds to 222 gCO2e/MJ, equivalent to 3890% and 247% of the lifecycle GHG emissions of fuels derived from salicornia and crude oil, respectively. Nine of the land-use change scenarios considered demonstrate a cooling effect, the largest of which is for the replacement of tropical rainforests with soybean cultivation. This corresponds to −161 gCO2e/MJ, or −28% and −178% of the lifecycle greenhouse gas emissions of fuels derived from soybean and crude oil, respectively. These results indicate that changes in surface albedo have the potential to dominate the climate impact of biofuels, and we conclude that accounting for changes in surface albedo is necessary for a complete assessment of the aggregate climate impacts of biofuel production and use.
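The percentage figures in the abstract can be inverted to recover the implied lifecycle GHG baselines of the two fuels (a back-of-envelope check; the baseline values themselves are not quoted above):

```python
# The 222 gCO2e/MJ albedo warming is stated to equal 3890% and 247% of
# the lifecycle GHG emissions of salicornia- and crude-derived fuel.
# Inverting these percentages gives the implied baselines.
albedo_effect = 222.0                    # gCO2e/MJ
salicornia_lca = albedo_effect / 38.90   # implied salicornia-fuel baseline
crude_lca = albedo_effect / 2.47         # implied crude-fuel baseline
```

The implied crude-derived fuel baseline lands near 90 gCO2e/MJ, a magnitude consistent with the petroleum baselines commonly used in fuel lifecycle studies.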
“Exposure Track”—The Impact of Mobile-Device-Based Mobility Patterns on Quantifying Population Exposure to Air Pollution
Air pollution is now recognized as the world’s single largest environmental and human health threat. Indeed, a large number of environmental epidemiological studies have quantified the health impacts of population exposure to pollution. In previous studies, exposure estimates at the population level have not considered spatially and temporally varying populations present in study regions. Therefore, in the first study of its kind, we use measured population activity patterns representing several million people to evaluate population-weighted exposure to air pollution on a city-wide scale. Mobile and wireless devices yield information about where and when people are present, thus collective activity patterns were determined using counts of connections to the cellular network. Population-weighted exposure to PM2.5 in New York City (NYC), herein termed “Active Population Exposure”, was evaluated using population activity patterns and spatiotemporal PM2.5 concentration levels, and compared to “Home Population Exposure”, which assumed a static population distribution as per Census data. Areas of relatively higher population-weighted exposures were concentrated in different districts within NYC in both scenarios. These were more centralized for the “Active Population Exposure” scenario. Population-weighted exposures computed in each district of NYC for the “Active” scenario were found to be statistically significantly (p < 0.05) different from the “Home” scenario for most districts. In investigating the temporal variability of the “Active” population-weighted exposures determined in districts, these were found to be significantly different (p < 0.05) during the daytime and the nighttime. Evaluating population exposure to air pollution using spatiotemporal population mobility patterns warrants consideration in future environmental epidemiological studies linking air quality and human health.
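The difference between the two exposure metrics reduces to a weighted average: concentration in each district weighted by the people present there, static in one case and time-varying in the other. A minimal sketch with invented two-district numbers (not the study's data):

```python
import numpy as np

# conc[t, d]: PM2.5 concentration in district d at hour t (ug/m3)
# pop[t, d] : people present in district d at hour t; for the "active"
#             case this would come from cellular-network connection counts.
def population_weighted_exposure(conc, pop):
    # Weight each district's concentration by the people present there,
    # then average across districts; one value per time step.
    return (conc * pop).sum(axis=1) / pop.sum(axis=1)

conc = np.array([[8.0, 15.0],      # night: residential vs. downtown
                 [9.0, 16.0]])     # day
pop_home = np.array([[900.0, 100.0],
                     [900.0, 100.0]])   # static census population
pop_active = np.array([[900.0, 100.0],
                       [300.0, 700.0]]) # daytime shift toward downtown
home = population_weighted_exposure(conc, pop_home).mean()
active = population_weighted_exposure(conc, pop_active).mean()
```

In this toy case the daytime movement of population into the more polluted district raises the "active" estimate above the "home" estimate, illustrating why the two scenarios differ significantly in the study.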
Reduced-Order Model for Supersonic Transport Takeoff Noise Scaling with Cruise Mach Number
The recent interest in the development of supersonic transport raises concerns about an increase in community noise around airports. As noise certification standards for supersonic transport other than Concorde have not yet been developed by the International Civil Aviation Organization, there is a need for a physics-based scaling rule for supersonic transport takeoff noise performance. Assuming supersonic transport takeoff noise levels are dominated by the engine mixed jet velocity and the aircraft-to-microphone propagation distance, this paper presents a reduced-order model for supersonic transport takeoff noise levels as a function of four scaling groups: cruise Mach number, takeoff aerodynamic efficiency, takeoff speed, and number of installed engines. This paper finds that, as cruise Mach number increases, supersonic transport takeoff noise levels increase while their thrust cutback noise reduction potential decreases. Assuming constant aerodynamic efficiency, takeoff speed, and number of installed engines, the takeoff noise levels and noise reduction potential of a Mach 2.2 aircraft are found to be ~15.3 dB higher and ~19.2 dB less, respectively, than those of a Mach 1.4 aircraft. This scaling rule can potentially yield a simple guideline for estimating an approximate noise limit for supersonic transport, depending on cruise Mach number.
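The dependence on jet velocity and propagation distance can be sketched with Lighthill's eighth-power law for jet-mixing noise plus spherical spreading. These scaling assumptions are this sketch's, not necessarily the paper's reduced-order model:

```python
import math

def delta_noise_db(v_ratio, r_ratio=1.0):
    """Change in jet-mixing noise level (dB) for a given jet-velocity
    ratio and observer-distance ratio, assuming Lighthill V^8 scaling
    and spherical (1/r^2 intensity) spreading."""
    return 80.0 * math.log10(v_ratio) + 20.0 * math.log10(1.0 / r_ratio)

# Under these assumptions, a ~55% higher mixed jet velocity at the same
# observer distance yields a mid-teens dB increase, the same order as
# the Mach 2.2 vs. Mach 1.4 comparison quoted above.
delta = delta_noise_db(1.55)
```

The steep 80 log10 slope is why even modest increases in mixed jet velocity with cruise Mach number translate into large takeoff noise penalties.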
Impact of the Volkswagen emissions control defeat device on US public health
The US Environmental Protection Agency (EPA) has alleged that Volkswagen Group of America (VW) violated the Clean Air Act (CAA) by developing and installing emissions control system 'defeat devices' (software) in model year 2009–2015 vehicles with 2.0 litre diesel engines. VW has admitted the inclusion of defeat devices. On-road emissions testing suggests that in-use NOx emissions for these vehicles are a factor of 10 to 40 above the EPA standard. In this paper we quantify the human health impacts and associated costs of the excess emissions. We propagate uncertainties throughout the analysis. A distribution function for excess emissions is estimated based on available in-use NOx emissions measurements. We then use vehicle sales data and the STEP vehicle fleet model to estimate vehicle distance traveled per year for the fleet. The excess NOx emissions are allocated on a 50 km grid using an EPA estimate of the light duty diesel vehicle NOx emissions distribution. We apply a GEOS-Chem adjoint-based rapid air pollution exposure model to produce estimates of particulate matter and ozone exposure due to the spatially resolved excess NOx emissions. A set of concentration-response functions is applied to estimate mortality and morbidity outcomes. Integrated over the sales period (2008–2015) we estimate that the excess emissions will cause 59 (95% CI: 10 to 150) early deaths in the US. When monetizing premature mortality using EPA-recommended data, we find ~$840m in social costs compared to a counterfactual case without a recall.
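The uncertainty-propagation approach described above (a distribution over excess emissions carried through to health outcomes) can be illustrated with a toy Monte Carlo. Every number below is an illustrative placeholder, not a value from the study, and the one-coefficient "impact model" stands in for the full exposure and concentration-response chain:

```python
import random

random.seed(0)

def sample_outcome():
    # Sample an excess-emissions multiplier in the 10-40x range reported
    # by on-road testing, and an illustrative health-impact slope.
    excess_factor = random.uniform(10, 40)       # times the NOx standard
    fleet_nox_at_standard = 1.0                  # arbitrary emission units
    deaths_per_unit = random.gauss(0.10, 0.02)   # placeholder CRF slope
    excess_nox = (excess_factor - 1) * fleet_nox_at_standard
    return excess_nox * deaths_per_unit

# Build an outcome distribution and read off a median and ~95% interval,
# mirroring the "central estimate with CI" style of the abstract.
outcomes = sorted(sample_outcome() for _ in range(10000))
median = outcomes[len(outcomes) // 2]
ci_low, ci_high = outcomes[249], outcomes[9749]
```

The point of the sketch is structural: because both the emissions factor and the response slope are uncertain, the result is reported as a distribution rather than a single number.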
Non-Abelian (p,q) Strings in the Warped Deformed Conifold
We calculate the tension of (p,q)-strings in the warped deformed conifold using the non-Abelian DBI action. In the large flux limit, we find exact agreement with the recent expression obtained by Firouzjahi, Leblond and Henry-Tye up to and including subleading-order terms when the charge is also taken to be large. Furthermore, using the finite prescription for the symmetrised trace operation, we anticipate the most general expression for the tension, valid for any charge. We find that even in this instance, the scaling of the corrections to the tension is not consistent with simple Casimir scaling. Comment: 18 pages, LaTeX, 1 figure; added a discussion of the warp factor parameter, and corrected typos.
Contrasting the direct radiative effect and direct radiative forcing of aerosols
The direct radiative effect (DRE) of aerosols, which is the instantaneous radiative impact of all atmospheric particles on the Earth's energy balance, is sometimes confused with the direct radiative forcing (DRF), which is the change in DRE from pre-industrial to present-day (not including climate feedbacks). In this study we couple a global chemical transport model (GEOS-Chem) with a radiative transfer model (RRTMG) to contrast these concepts. We estimate a global mean all-sky aerosol DRF of −0.36 Wm[superscript −2] and a DRE of −1.83 Wm[superscript −2] for 2010. Therefore, natural sources of aerosol (here including fire) affect the global energy balance over four times more than do present-day anthropogenic aerosols. If global anthropogenic emissions of aerosols and their precursors continue to decline as projected in recent scenarios due to effective pollution emission controls, the DRF will shrink (−0.22 Wm[superscript −2] for 2100). Secondary metrics, like DRE, that quantify temporal changes in both natural and anthropogenic aerosol burdens are therefore needed to quantify the total effect of aerosols on climate.
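The "over four times" statement follows directly from the two quoted numbers, treating DRE minus DRF as the natural (plus fire) contribution, a simple decomposition implied by the definitions above:

```python
# Decompose the 2010 direct radiative effect into an anthropogenic part
# (the DRF) and the remainder (natural sources, here including fire).
dre = -1.83            # W/m^2, all aerosols, 2010
drf = -0.36            # W/m^2, change from pre-industrial (anthropogenic)
natural = dre - drf    # -1.47 W/m^2, natural contribution
ratio = natural / drf  # how many times larger the natural effect is
```

The ratio comes out just above 4, matching the abstract's claim that natural aerosol sources affect the energy balance over four times more than present-day anthropogenic aerosols.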
Global Budget and Radiative Forcing of Black Carbon Aerosol: Constraints from Pole-to-Pole (HIPPO) Observations across the Pacific
We use a global chemical transport model (GEOS-Chem) to interpret aircraft curtain observations of black carbon (BC) aerosol over the Pacific from 85°N to 67°S during the 2009–2011 HIAPER (High-Performance Instrumented Airborne Platform for Environmental Research) Pole-to-Pole Observations (HIPPO) campaigns. Observed concentrations are very low, implying much more efficient scavenging than is usually implemented in models. Our simulation, with a mean tropospheric lifetime of 4.2 days (versus 6.8 ± 1.8 days for the Aerosol Comparisons between Observations and Models (AeroCom) models), successfully simulates BC concentrations in source regions and continental outflow and captures the principal features of the HIPPO data, but is still higher by a factor of 2 (1.48 for column loads) over the Pacific. It underestimates BC absorbing aerosol optical depths (AAODs) from the Aerosol Robotic Network by 32% on a global basis. Only 8.7% of the global BC loading in GEOS-Chem is above 5 km, versus 21 ± 11% for the AeroCom models, with important implications for radiative forcing estimates. Our simulation yields a global BC burden of 77 Gg, a global mean BC AAOD of 0.0017, and a positive top-of-atmosphere direct radiative forcing (TOA DRF), with a range reflecting uncertainties in the BC atmospheric distribution. Our TOA DRF is lower than previous estimates from AeroCom and from more recent studies. We argue that these previous estimates are biased high because of excessive BC concentrations over the oceans and in the free troposphere.
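The quoted burden and lifetime can be combined through the steady-state relation burden = source × lifetime to infer the global BC source implied by the simulation (a back-of-envelope consistency check, not a number reported in the abstract):

```python
# Steady-state budget: global source = burden / lifetime.
burden_gg = 77.0       # global BC burden, Gg
lifetime_days = 4.2    # mean tropospheric lifetime
implied_source_tg_per_yr = burden_gg / lifetime_days * 365.0 / 1000.0
```

This lands at roughly 6.7 Tg per year, the order of magnitude typical of global BC emission inventories, so the quoted burden and lifetime are mutually consistent.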
Determining the Effective Density and Stabilizer Layer Thickness of Sterically Stabilized Nanoparticles.
A series of model sterically stabilized diblock copolymer nanoparticles has been designed to aid the development of analytical protocols in order to determine two key parameters: the effective particle density and the steric stabilizer layer thickness. The former parameter is essential for high resolution particle size analysis based on analytical (ultra)centrifugation techniques (e.g., disk centrifuge photosedimentometry, DCP), whereas the latter parameter is of fundamental importance in determining the effectiveness of steric stabilization as a colloid stability mechanism. The diblock copolymer nanoparticles were prepared via polymerization-induced self-assembly (PISA) using RAFT aqueous emulsion polymerization: this approach affords relatively narrow particle size distributions and enables the mean particle diameter and the stabilizer layer thickness to be adjusted independently via systematic variation of the mean degree of polymerization of the hydrophobic and hydrophilic blocks, respectively. The hydrophobic core-forming block was poly(2,2,2-trifluoroethyl methacrylate) [PTFEMA], which was selected for its relatively high density. The hydrophilic stabilizer block was poly(glycerol monomethacrylate) [PGMA], which is a well-known non-ionic polymer that remains water-soluble over a wide range of temperatures. Four series of PGMA_x–PTFEMA_y nanoparticles were prepared (x = 28, 43, 63, and 98; y = 100–1400) and characterized via transmission electron microscopy (TEM), dynamic light scattering (DLS), and small-angle X-ray scattering (SAXS). It was found that the degree of polymerization of both the PGMA stabilizer and core-forming PTFEMA had a strong influence on the mean particle diameter, which ranged from 20 to 250 nm. Furthermore, SAXS was used to determine radii of gyration of 1.46 to 2.69 nm for the solvated PGMA stabilizer blocks.
Thus, the mean effective density of these sterically stabilized particles was calculated and determined to lie between 1.19 g cm(-3) for the smaller particles and 1.41 g cm(-3) for the larger particles; these values are significantly lower than the solid-state density of PTFEMA (1.47 g cm(-3)). Since analytical centrifugation requires knowledge of the density difference between the particles and the aqueous phase, determining the effective particle density is clearly vital for obtaining reliable particle size distributions. Furthermore, selected DCP data were recalculated by taking into account the inherent density distribution superimposed on the particle size distribution. Consequently, the true particle size distributions were found to be somewhat narrower than those calculated using an erroneous single density value, with smaller particles being particularly sensitive to this artifact.
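The size dependence of the effective density follows from simple core-shell geometry: a stabilizer shell of roughly fixed thickness occupies a larger volume fraction of a small particle than of a large one. The shell density and thickness below are illustrative assumptions, not the paper's fitted values (only the PTFEMA solid-state density of 1.47 g cm(-3) comes from the abstract):

```python
def effective_density(radius_nm, shell_nm, rho_core=1.47, rho_shell=1.05):
    """Volume-weighted density of a sphere with a dense core and a
    lighter solvated stabilizer shell of fixed thickness."""
    core = (radius_nm - shell_nm) ** 3
    total = radius_nm ** 3
    return (rho_core * core + rho_shell * (total - core)) / total

# Hypothetical 30 nm and 240 nm diameter particles with a 3 nm shell.
small = effective_density(15.0, 3.0)
large = effective_density(120.0, 3.0)
```

As in the measurements, the smaller particle's effective density falls well below the solid-state core density while the larger particle's approaches it, which is why a single density value distorts DCP size distributions most for small particles.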