
    Benchmarking network propagation methods for disease gene identification

    In-silico identification of potential target genes for disease is an essential aspect of drug target discovery. Recent studies suggest that successful targets can be found by leveraging genetic, genomic and protein interaction information. Here, we systematically tested the ability of 12 varied algorithms, based on network propagation, to identify genes that have been targeted by any drug, on gene-disease data from 22 common non-cancerous diseases in OpenTargets. We considered two biological networks and six performance metrics, and compared two types of input gene-disease association scores. The impact of these design factors on performance was quantified through additive explanatory models. Standard cross-validation led to over-optimistic performance estimates due to the presence of protein complexes. To obtain realistic estimates, we introduced two novel protein complex-aware cross-validation schemes. When seeding biological networks with known drug targets, machine learning and diffusion-based methods found around 2-4 true targets within the top 20 suggestions. Seeding the networks with genes genetically associated with disease decreased performance to below 1 true hit on average. The use of a larger network, although noisier, improved overall performance. We conclude that diffusion-based prioritisers and machine learning applied to diffusion-based features are suited for drug discovery in practice and improve over simpler neighbour-voting methods. We also demonstrate the large impact of choosing an adequate validation strategy and of the definition of the seed disease genes.
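
    Network propagation methods of the kind benchmarked here typically diffuse seed-gene scores over a protein-protein interaction network and rank genes by the resulting stationary scores. Below is a minimal sketch of one common variant, random walk with restart; the toy network, restart probability and seed choice are illustrative assumptions, not the paper's configuration.

        import numpy as np

        def random_walk_with_restart(adjacency, seeds, restart_prob=0.3, tol=1e-8):
            """Diffuse seed scores over a network until convergence.

            adjacency: (n, n) symmetric adjacency matrix of the gene network.
            seeds: (n,) initial score vector (e.g. 1 for known disease genes).
            """
            # Column-normalise the adjacency matrix into a transition matrix.
            col_sums = adjacency.sum(axis=0)
            col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated nodes
            W = adjacency / col_sums
            p0 = seeds / seeds.sum()
            p = p0.copy()
            while True:
                p_next = (1 - restart_prob) * W @ p + restart_prob * p0
                if np.abs(p_next - p).sum() < tol:
                    return p_next
                p = p_next

        # Toy example: a 4-gene network with one seed gene.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        seeds = np.array([1.0, 0.0, 0.0, 0.0])
        scores = random_walk_with_restart(A, seeds)
        print(scores)  # genes ranked by propagated score

    Genes are then prioritised by sorting on the converged score vector; machine learning variants instead use such diffusion scores as input features.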

    Techno-economic and environmental evaluation of producing chemicals and drop-in aviation biofuels via aqueous phase processing

    Novel aqueous-phase processing (APP) techniques can thermochemically convert cellulosic biomass into chemicals and liquid fuels. Here, we evaluate these technologies through process design and simulation, and from a techno-economic and environmental point of view. This is the first peer-reviewed study to conduct such an assessment taking into account different biomass pretreatment methods, process yields, product slates, and hydrogen sources, as well as the historical price variation of a number of core commodities involved in the production. This paper undertakes detailed process simulations for seven biorefinery models designed to convert red maple wood, using a set of APP technologies, into chemicals (e.g. furfural, hydroxymethylfurfural and gamma-valerolactone) and liquid fuels (e.g. naphtha, jet fuel and diesel). The simulation results are used to conduct a well-to-wake (WTW) lifecycle analysis for greenhouse gas (GHG) emissions, and minimum selling price (MSP) calculations based on historical commodity price data from January 2010 to December 2015. Particular emphasis is given to aviation fuels throughout this work, and the results for these fuels are reported and discussed extensively. It is found that the WTW GHG emissions and the MSP of jet fuel vary across the different refinery configurations from 31.6 to 104.5 gCO2e per MJ (64% lower and 19% higher, respectively, than a reported petroleum-derived fuel baseline) and from $1.00 to $6.31 per gallon ($0.26 to $1.67 per liter, which is 61% lower and 146% higher, respectively, than the average conventional jet fuel price over the above time frame). It is shown that the variation in the estimated emissions and fuel selling prices is primarily driven by the choice of hydrogen source and the relative production volumes of chemicals to fuels, respectively. The latter is a consequence of the fact that the APP chemicals considered here have a higher economic value than the liquid transportation fuels, and that their production is less carbon intensive than that of the fuels. However, the chemicals market may become saturated if they are produced in large quantities, and increasing biofuel production over that of chemicals can help the biorefinery benefit under renewable fuel programs.
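
    The minimum selling price in such studies is typically defined as the product price at which the plant's discounted cash flows break even (net present value of zero). Below is a minimal sketch of that calculation for a single-product plant; all input numbers are placeholders, not values from this analysis.

        def minimum_selling_price(capex, opex_per_year, output_per_year,
                                  discount_rate, lifetime_years):
            """Price per unit of product at which the plant's NPV is zero.

            Solves: capex = sum_t (price * output - opex) / (1 + r)^t
            """
            # Present-value factor for a constant annual cash flow (annuity factor).
            annuity = sum(1.0 / (1.0 + discount_rate) ** t
                          for t in range(1, lifetime_years + 1))
            # price * output * annuity - opex * annuity = capex
            return (capex / annuity + opex_per_year) / output_per_year

        # Placeholder inputs for illustration only.
        msp = minimum_selling_price(capex=500e6,           # $ total installed capital
                                    opex_per_year=40e6,    # $ per year
                                    output_per_year=60e6,  # gallons per year
                                    discount_rate=0.10,
                                    lifetime_years=20)
        print(f"MSP = ${msp:.2f} per gallon")

    A multi-product biorefinery splits revenue across chemicals and fuels, which is why the chemicals-to-fuels ratio dominates the fuel MSP in the results above.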

    Quantifying the climate impacts of albedo changes due to biofuel production: a comparison with biogeochemical effects

    Lifecycle analysis is a tool widely used to evaluate the climate impact of greenhouse gas emissions attributable to the production and use of biofuels. In this paper we employ an augmented lifecycle framework that includes climate impacts from changes in surface albedo due to land use change. We consider eleven land-use change scenarios for the cultivation of biomass for middle distillate fuel production, and compare our results to previous estimates of lifecycle greenhouse gas emissions for the same set of land-use change scenarios in terms of CO2e per unit of fuel energy. We find that two of the land-use change scenarios considered demonstrate a warming effect due to changes in surface albedo, compared to conventional fuel, the largest of which is for replacement of desert land with salicornia cultivation. This corresponds to 222 gCO2e/MJ, equivalent to 3890% and 247% of the lifecycle GHG emissions of fuels derived from salicornia and crude oil, respectively. Nine of the land-use change scenarios considered demonstrate a cooling effect, the largest of which is for the replacement of tropical rainforests with soybean cultivation. This corresponds to −161 gCO2e/MJ, or −28% and −178% of the lifecycle greenhouse gas emissions of fuels derived from soybean and crude oil, respectively. These results indicate that changes in surface albedo have the potential to dominate the climate impact of biofuels, and we conclude that accounting for changes in surface albedo is necessary for a complete assessment of the aggregate climate impacts of biofuel production and use.
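
    A standard way to fold an albedo change into a lifecycle metric is to integrate its global-mean radiative forcing over a time horizon, divide by the absolute global warming potential (AGWP) of CO2 to obtain an equivalent CO2 mass, and normalise by the fuel energy produced. The sketch below shows that bookkeeping; the forcing and energy inputs are illustrative placeholders, not numbers from this study.

        def albedo_co2e_per_mj(rf_global_wm2, horizon_yr, fuel_energy_mj,
                               agwp_co2=9.17e-14):
            """gCO2e attributed to an albedo change, per MJ of fuel produced.

            rf_global_wm2: global-mean radiative forcing of the land-use
                           change (W/m^2), assumed sustained over the horizon.
            agwp_co2: absolute global warming potential of CO2 over 100 years,
                      ~9.17e-14 W m^-2 yr per kg CO2 (IPCC AR5 value).
            """
            integrated_rf = rf_global_wm2 * horizon_yr  # W yr / m^2
            co2e_kg = integrated_rf / agwp_co2          # kg CO2 with same integrated forcing
            return co2e_kg * 1000.0 / fuel_energy_mj    # g CO2e per MJ

        # Placeholder inputs: a small sustained global-mean forcing and the
        # fuel energy produced over the same 100-year horizon.
        print(albedo_co2e_per_mj(rf_global_wm2=1e-4, horizon_yr=100,
                                 fuel_energy_mj=1e12))  # ~109 gCO2e/MJ

    A darkening surface (e.g. vegetating desert) gives a positive forcing and adds to the lifecycle total; a brightening surface subtracts from it, which is the sign convention behind the positive and negative gCO2e/MJ values above.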

    “Exposure Track”—The Impact of Mobile-Device-Based Mobility Patterns on Quantifying Population Exposure to Air Pollution

    Air pollution is now recognized as the world’s single largest environmental and human health threat. Indeed, a large number of environmental epidemiological studies have quantified the health impacts of population exposure to pollution. In previous studies, exposure estimates at the population level have not considered spatially and temporally varying populations present in study regions. Therefore, in the first study of its kind, we use measured population activity patterns representing several million people to evaluate population-weighted exposure to air pollution on a city-wide scale. Mobile and wireless devices yield information about where and when people are present; thus, collective activity patterns were determined using counts of connections to the cellular network. Population-weighted exposure to PM2.5 in New York City (NYC), herein termed “Active Population Exposure”, was evaluated using population activity patterns and spatiotemporal PM2.5 concentration levels, and compared to “Home Population Exposure”, which assumed a static population distribution as per Census data. Areas of relatively higher population-weighted exposures were concentrated in different districts within NYC in both scenarios, and were more centralized in the “Active Population Exposure” scenario. Population-weighted exposures computed in each district of NYC for the “Active” scenario were found to be statistically significantly (p < 0.05) different from the “Home” scenario for most districts. Investigating the temporal variability of the “Active” population-weighted exposures in each district, daytime and nighttime exposures were found to be significantly different (p < 0.05). Evaluating population exposure to air pollution using spatiotemporal population mobility patterns warrants consideration in future environmental epidemiological studies linking air quality and human health.
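
    Population-weighted exposure is simply the concentration in each spatial unit weighted by the people present there, so swapping static census counts for time-varying device-derived counts changes only the weights. A minimal sketch, with placeholder numbers rather than NYC data:

        import numpy as np

        def population_weighted_exposure(populations, concentrations):
            """Mean exposure weighted by the population in each spatial unit.

            populations: (n,) people present in each district at a given hour.
            concentrations: (n,) PM2.5 concentration (ug/m^3) in each district.
            """
            populations = np.asarray(populations, dtype=float)
            concentrations = np.asarray(concentrations, dtype=float)
            return (populations * concentrations).sum() / populations.sum()

        # "Home" scenario uses static census counts; "Active" uses hourly
        # device-derived counts. Placeholder numbers for illustration only.
        census_pop  = [120_000, 80_000, 200_000]
        daytime_pop = [300_000, 40_000, 60_000]   # people commute into district 0
        pm25        = [14.0, 9.0, 7.5]            # ug/m^3

        print(population_weighted_exposure(census_pop, pm25))   # "Home" exposure
        print(population_weighted_exposure(daytime_pop, pm25))  # "Active" exposure

    If the polluted district also attracts commuters, the "Active" estimate rises above the "Home" one, which is the kind of discrepancy the study quantifies district by district.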

    Impact of the Volkswagen emissions control defeat device on US public health

    The US Environmental Protection Agency (EPA) has alleged that Volkswagen Group of America (VW) violated the Clean Air Act (CAA) by developing and installing emissions control system 'defeat devices' (software) in model year 2009–2015 vehicles with 2.0 litre diesel engines. VW has admitted the inclusion of defeat devices. On-road emissions testing suggests that in-use NOx emissions for these vehicles are a factor of 10 to 40 above the EPA standard. In this paper we quantify the human health impacts and associated costs of the excess emissions. We propagate uncertainties throughout the analysis. A distribution function for excess emissions is estimated based on available in-use NOx emissions measurements. We then use vehicle sales data and the STEP vehicle fleet model to estimate vehicle distance traveled per year for the fleet. The excess NOx emissions are allocated on a 50 km grid using an EPA estimate of the light duty diesel vehicle NOx emissions distribution. We apply a GEOS-Chem adjoint-based rapid air pollution exposure model to produce estimates of particulate matter and ozone exposure due to the spatially resolved excess NOx emissions. A set of concentration-response functions is applied to estimate mortality and morbidity outcomes. Integrated over the sales period (2008–2015), we estimate that the excess emissions will cause 59 (95% CI: 10 to 150) early deaths in the US. When monetizing premature mortality using EPA-recommended data, we find a social cost of ~$450m over the sales period. For the current fleet, we estimate that a return to compliance for all affected vehicles by the end of 2016 will avert ~130 early deaths and avoid ~$840m in social costs compared to a counterfactual case without recall.
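
    Uncertainty propagation of this kind is commonly done by Monte Carlo sampling: draw each uncertain input from its distribution and accumulate the distribution of the output. A minimal sketch of the chain from excess emission factor to early deaths, with made-up distribution parameters rather than the study's fitted ones:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000  # Monte Carlo draws

        # Uncertain inputs -- placeholder distributions, not the study's fits.
        excess_factor = rng.uniform(10, 40, n)          # multiple of the NOx standard
        base_nox_g_km = 0.04                            # standard-level NOx, g/km
        km_per_veh_yr = rng.normal(15_000, 2_000, n)    # annual distance per vehicle
        fleet_size    = 482_000                         # affected US 2.0 L vehicles
        deaths_per_t  = np.clip(rng.normal(1.5e-3, 5e-4, n), 0, None)  # deaths/tonne NOx

        # Excess emissions above the standard, in tonnes per year.
        excess_t = (excess_factor - 1) * base_nox_g_km * km_per_veh_yr * fleet_size / 1e6
        deaths_per_year = excess_t * deaths_per_t

        lo, med, hi = np.percentile(deaths_per_year, [2.5, 50, 97.5])
        print(f"early deaths per year: {med:.1f} (95% CI {lo:.1f}-{hi:.1f})")

    The study's actual chain is far richer (fleet turnover, spatial allocation, adjoint exposure modelling), but the structure is the same: sampled inputs in, a credible interval on health outcomes out.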

    Non-Abelian (p,q) Strings in the Warped Deformed Conifold

    We calculate the tension of (p,q)-strings in the warped deformed conifold using the non-Abelian DBI action. In the large flux limit, we find exact agreement with the recent expression obtained by Firouzjahi, Leblond and Henry-Tye up to and including order 1/M^2 terms, provided q is also taken to be large. Furthermore, using the finite-q prescription for the symmetrised trace operation, we anticipate the most general expression for the tension, valid for any (p,q). We find that even in this instance, corrections to the tension scale as 1/M^2, which is not consistent with simple Casimir scaling.

    Contrasting the direct radiative effect and direct radiative forcing of aerosols

    The direct radiative effect (DRE) of aerosols, which is the instantaneous radiative impact of all atmospheric particles on the Earth's energy balance, is sometimes confused with the direct radiative forcing (DRF), which is the change in DRE from pre-industrial to present-day (not including climate feedbacks). In this study we couple a global chemical transport model (GEOS-Chem) with a radiative transfer model (RRTMG) to contrast these concepts. We estimate a global mean all-sky aerosol DRF of −0.36 W m⁻² and a DRE of −1.83 W m⁻² for 2010. Therefore, natural sources of aerosol (here including fire) affect the global energy balance over four times more than do present-day anthropogenic aerosols. If global anthropogenic emissions of aerosols and their precursors continue to decline as projected in recent scenarios due to effective pollution emission controls, the DRF will shrink (−0.22 W m⁻² for 2100). Secondary metrics, like DRE, that quantify temporal changes in both natural and anthropogenic aerosol burdens are therefore needed to quantify the total effect of aerosols on climate.
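
    The "over four times" statement follows directly from the two reported numbers: the natural part of the effect is the total DRE minus the anthropogenic DRF. A worked check:

        DRE(natural) = DRE(total) − DRF = −1.83 − (−0.36) = −1.47 W m⁻²

        |DRE(natural)| / |DRF| = 1.47 / 0.36 ≈ 4.1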

    Determining the Effective Density and Stabilizer Layer Thickness of Sterically Stabilized Nanoparticles.

    A series of model sterically stabilized diblock copolymer nanoparticles has been designed to aid the development of analytical protocols for determining two key parameters: the effective particle density and the steric stabilizer layer thickness. The former parameter is essential for high resolution particle size analysis based on analytical (ultra)centrifugation techniques (e.g., disk centrifuge photosedimentometry, DCP), whereas the latter parameter is of fundamental importance in determining the effectiveness of steric stabilization as a colloid stability mechanism. The diblock copolymer nanoparticles were prepared via polymerization-induced self-assembly (PISA) using RAFT aqueous emulsion polymerization: this approach affords relatively narrow particle size distributions and enables the mean particle diameter and the stabilizer layer thickness to be adjusted independently via systematic variation of the mean degree of polymerization of the hydrophobic and hydrophilic blocks, respectively. The hydrophobic core-forming block was poly(2,2,2-trifluoroethyl methacrylate) [PTFEMA], which was selected for its relatively high density. The hydrophilic stabilizer block was poly(glycerol monomethacrylate) [PGMA], which is a well-known non-ionic polymer that remains water-soluble over a wide range of temperatures. Four series of PGMAx-PTFEMAy nanoparticles were prepared (x = 28, 43, 63, and 98; y = 100-1400) and characterized via transmission electron microscopy (TEM), dynamic light scattering (DLS), and small-angle X-ray scattering (SAXS). It was found that the degree of polymerization of both the PGMA stabilizer and the core-forming PTFEMA had a strong influence on the mean particle diameter, which ranged from 20 to 250 nm. Furthermore, SAXS was used to determine radii of gyration of 1.46 to 2.69 nm for the solvated PGMA stabilizer blocks. Thus, the mean effective density of these sterically stabilized particles was calculated and found to lie between 1.19 g cm⁻³ for the smaller particles and 1.41 g cm⁻³ for the larger particles; these values are significantly lower than the solid-state density of PTFEMA (1.47 g cm⁻³). Since analytical centrifugation depends on the density difference between the particles and the aqueous phase, determining the effective particle density is clearly vital for obtaining reliable particle size distributions. Furthermore, selected DCP data were recalculated by taking into account the inherent density distribution superimposed on the particle size distribution. Consequently, the true particle size distributions were found to be somewhat narrower than those calculated using an erroneous single density value, with smaller particles being particularly sensitive to this artifact.
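
    The size dependence of the effective density follows from a simple core-shell picture: the solvated stabilizer layer contributes a larger volume fraction for smaller particles, pulling the average density towards that of water. A minimal sketch, assuming a volume-weighted core-shell sphere model with placeholder shell density:

        def effective_density(core_radius_nm, shell_nm, rho_core, rho_shell):
            """Volume-weighted density of a core-shell sphere (g/cm^3).

            rho_shell is the density of the solvated stabilizer layer,
            i.e. close to that of water for a highly hydrated corona.
            """
            r_total = core_radius_nm + shell_nm
            v_core = core_radius_nm ** 3   # volume ratios only; 4/3*pi cancels
            v_total = r_total ** 3
            return (rho_core * v_core + rho_shell * (v_total - v_core)) / v_total

        # Placeholder values: PTFEMA core at its solid-state density and a
        # hydrated PGMA shell near water's density, a few nm thick.
        for core_r in (10, 50, 120):  # nm
            print(core_r, round(effective_density(core_r, 5.0, 1.47, 1.05), 2))

    With these placeholder inputs the model reproduces the reported trend: roughly 1.17 g cm⁻³ for a 10 nm core rising towards 1.42 g cm⁻³ for a 120 nm core, approaching the solid-state PTFEMA density as the shell's volume fraction shrinks.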

    Complexity of multi-dimensional spontaneous EEG decreases during propofol induced general anaesthesia

    Emerging neural theories of consciousness suggest a correlation between a specific type of neural dynamical complexity and the level of consciousness: when awake and aware, causal interactions between brain regions are both integrated (all regions are to a certain extent connected) and differentiated (there is inhomogeneity and variety in the interactions). In support of this, recent work by Casali et al. (2013) has shown that Lempel-Ziv complexity correlates strongly with conscious level when computed on the EEG response to transcranial magnetic stimulation. Here we investigated the complexity of spontaneous high-density EEG data during propofol-induced general anaesthesia. We consider three distinct measures: (i) Lempel-Ziv complexity, which is derived from how compressible the data are; (ii) amplitude coalition entropy, which measures the variability in the constitution of the set of active channels; and (iii) the novel synchrony coalition entropy (SCE), which measures the variability in the constitution of the set of synchronous channels. After simulations on Kuramoto oscillator models demonstrating that these measures capture distinct ‘flavours’ of complexity, we show that there is a robustly measurable decrease in the complexity of spontaneous EEG during general anaesthesia.
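
    Lempel-Ziv complexity counts the distinct phrases encountered while scanning a binarised sequence, so diverse, weakly compressible activity scores high and repetitive activity scores low. A minimal sketch using a simple LZ78-style phrase count (the binarisation convention here is an assumption, not necessarily the paper's exact pipeline):

        import numpy as np

        def lempel_ziv_complexity(sequence):
            """Count phrases in an LZ78-style parsing of a symbol string."""
            dictionary = set()
            phrase = ""
            count = 0
            for symbol in sequence:
                phrase += symbol
                if phrase not in dictionary:
                    dictionary.add(phrase)  # shortest new phrase ends here
                    count += 1
                    phrase = ""
            return count + (1 if phrase else 0)  # count any trailing partial phrase

        # Binarise signals around their mean, then compare complexities.
        rng = np.random.default_rng(1)
        noisy = "".join("1" if x > 0 else "0" for x in rng.standard_normal(1000))
        regular = "01" * 500

        print(lempel_ziv_complexity(noisy))    # high: many distinct phrases
        print(lempel_ziv_complexity(regular))  # low: repetitive, compressible

    Applied to multichannel EEG, the same idea runs over a binarised space-time activity matrix, and the reported result is that this count drops reliably under propofol anaesthesia.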