eScholarship - University of California

    Applications of Deep Learning: Automated Assessment of Vascular Tortuosity in Mouse Models of Oxygen-Induced Retinopathy.

    OBJECTIVE: To develop a generative adversarial network (GAN) to segment major blood vessels from retinal flat-mount images from oxygen-induced retinopathy (OIR) and demonstrate the utility of these GAN-generated vessel segmentations in quantifying vascular tortuosity. DESIGN: Development and validation of a GAN. SUBJECTS: Three datasets containing 1084, 50, and 20 flat-mount mouse retina images, with various stains and ages at sacrifice, acquired from previously published manuscripts. METHODS: Four graders manually segmented major blood vessels from flat-mount images of retinas from OIR mice. Pix2Pix, a high-resolution GAN, was trained on 984 pairs of raw flat-mount images and manual vessel segmentations and then tested on 100 and 50 image pairs from a held-out and an external test set, respectively. GAN-generated and manual vessel segmentations were then used as input to a previously published algorithm (iROP-Assist) to generate a vascular cumulative tortuosity index (CTI) for 20 image pairs containing mouse eyes treated with aflibercept versus control. MAIN OUTCOME MEASURES: Mean Dice coefficients were used to compare segmentation accuracy between the GAN-generated and manually annotated segmentation maps. For the image pairs treated with aflibercept versus control, mean CTIs were also calculated for both GAN-generated and manual vessel maps. Statistical significance was evaluated using Wilcoxon signed-rank tests (P ≤ 0.05 threshold for significance). RESULTS: The Dice coefficient for the GAN-generated versus manual vessel segmentations was 0.75 ± 0.27 and 0.77 ± 0.17 for the held-out test set and external test set, respectively.
The mean CTI generated from the GAN-generated and manual vessel segmentations was 1.12 ± 0.07 versus 1.03 ± 0.02 (P = 0.003) and 1.06 ± 0.04 versus 1.01 ± 0.01 (P < 0.001), respectively, for eyes treated with aflibercept versus control, demonstrating that vascular tortuosity was rescued by aflibercept when quantified by GAN-generated and manual vessel segmentations. CONCLUSIONS: GANs can be used to accurately generate vessel map segmentations from flat-mount images. These vessel maps may be used to evaluate novel metrics of vascular tortuosity in OIR, such as CTI, and have the potential to accelerate research in treatments for ischemic retinopathies. FINANCIAL DISCLOSURES: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
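The study's segmentation-accuracy metric, the Dice coefficient, is straightforward to compute from a pair of binary masks. A minimal NumPy sketch (the toy 4×4 masks below are illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * intersection / total

# toy masks standing in for GAN-generated vs. manual vessel segmentations
a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
print(dice_coefficient(a, b))  # 2*3 / (4+3) ≈ 0.857
```

A fleet of such per-image scores is what the reported 0.75 ± 0.27 and 0.77 ± 0.17 values summarize.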

    Customer outcomes in Pay-As-You-Save programs

    We review the energy and financial outcomes of households participating in several programs based on successive versions of the Pay As You Save® (PAYS®) system. PAYS® programs offer non-debt financing for energy efficiency (and sometimes other technologies) in residential buildings through a tariff attached to the home’s utility meter, designed to be offset by project savings. We find that the five programs we study generally serve customers living in zip codes with levels of income and education below the national average and unemployment rates above the national average, demonstrating their potential to improve equity in energy efficiency adoption. Using weather-normalized analysis of energy consumption data, we show that most customers of Midwest Energy’s program reduce annual electricity and gas consumption, averaging 15% and 26% reductions respectively. Changes in energy consumption calculated using this method represent a combination of project effects and changes in occupant behavior. These results are similar to existing analyses of PAYS® programs in North Carolina, Arkansas, and Tennessee. About half of participating Midwest households generate sufficient energy cost savings to cover their monthly tariff. Various factors, including changes in occupant behavior, program error, causes independent of the customer or program, or some combination thereof may explain lower-than-expected cost reductions in some projects. Given the inherent variability in annual household electricity consumption, we feel these programs are enabling energy efficiency improvements and their attendant co-benefits, including occupant health and comfort and reduced carbon emissions, while reasonably balancing energy savings and tariff costs. Pairing PAYS® with additional financial assistance, as well as promoting cost-effective measures such as air and duct sealing, could further broaden program participation by enabling additional projects to meet PAYS® program eligibility rules.
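Weather normalization of the kind described above typically divides consumption by a weather index such as heating degree days (HDD) before comparing pre- and post-retrofit years, so a mild winter after the project does not masquerade as savings. A minimal illustrative sketch (the function and every number here are hypothetical, not from the study):

```python
def normalized_reduction(pre_kwh, pre_hdd, post_kwh, post_hdd):
    """Percent reduction in weather-normalized consumption (kWh per HDD)."""
    pre_intensity = pre_kwh / pre_hdd    # kWh per heating degree day, before
    post_intensity = post_kwh / post_hdd  # kWh per heating degree day, after
    return 100.0 * (pre_intensity - post_intensity) / pre_intensity

# Example: raw consumption fell ~21%, but the post year was also milder
# (fewer HDD), so the weather-normalized reduction is smaller (~14%).
print(round(normalized_reduction(12000, 5000, 9500, 4600), 1))
```

Real program evaluations use more elaborate regression-based normalization, but the principle, comparing weather-adjusted intensities rather than raw totals, is the same.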

    The HPC Best Practices Webinar Series


    Can We Fix It Automatically? Development of Fault Auto-Correction Algorithms for HVAC and Lighting Systems

    A fault detection and diagnostics (FDD) tool is a type of energy management and information system designed to continuously identify the presence of faults and efficiency improvement opportunities through a one-way interface to the building automation system and application of automated analytics. Building owners and operators at the leading edge of technology adoption are using FDD tools to enable average whole-building portfolio savings of 8 percent. Although FDD tools can inform building operators of operational faults, currently a manual action is always required to correct faults and generate the associated energy savings. A subset of faults, however, such as biased sensors and manual overrides, can be addressed automatically, removing the need for operations and maintenance staff intervention. Automating this fault “correction” can significantly increase the savings generated by FDD tools and reduce the reliance on human intervention. Doing so is expected to advance the usability, as well as the technical and economic performance, of FDD technologies. In this paper, we present the development of 10 innovative fault auto-correction algorithms for HVAC and lighting systems. When the auto-correction routine is triggered, it overwrites the control setpoints or other variables (via BACnet or another protocol) to implement the intended changes. These algorithms can automatically correct faults, or improve operation, associated with incorrectly programmed schedules, manual overrides, sensor bias, control hunting, rogue zones, and insufficiently aggressive setpoints/setpoint setbacks. The paper also discusses the implementation of the auto-correction algorithms in FDD software products.
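The paper's 10 algorithms are not reproduced here, but the general detect → compute correction → write back pattern for one fault type, a biased zone-temperature sensor, can be sketched. Everything below is an assumed illustration: `write_point` is a hypothetical stub standing in for a BACnet write, and the threshold and readings are invented.

```python
def estimate_bias(sensor_readings, reference_readings):
    """Mean offset of a sensor against a trusted reference over a window."""
    n = len(sensor_readings)
    return sum(s - r for s, r in zip(sensor_readings, reference_readings)) / n

def auto_correct(sensor_readings, reference_readings, setpoint, write_point,
                 threshold=1.0):
    """If the estimated bias exceeds the threshold, shift the control
    setpoint to compensate and write it back; otherwise leave it alone."""
    bias = estimate_bias(sensor_readings, reference_readings)
    if abs(bias) > threshold:
        # The controller acts on the biased reading, so offset the setpoint.
        corrected = setpoint + bias
        write_point("zone_temp_setpoint", corrected)
        return corrected
    return setpoint

writes = {}  # stand-in for the building automation system
corrected = auto_correct(
    sensor_readings=[23.8, 24.1, 23.9],    # sensor reads ~2 °C high
    reference_readings=[21.9, 22.0, 21.9],
    setpoint=22.0,
    write_point=lambda name, value: writes.update({name: value}),
)
print(round(corrected, 1))
```

In a real deployment the write would go through a protocol library (e.g., a BACnet WriteProperty request) and would be guarded by the safety checks an FDD product requires before touching live control points.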

    In den Häusern der anderen: Spuren deutscher Vergangenheit in Westpolen [In the Houses of Others: Traces of the German Past in Western Poland]

    Published by Ch. Links Verlag, 2022.

    Verifying mixing in dilution tunnels: How to ensure cookstove emissions samples are unbiased

    A well-mixed diluted sample is essential for unbiased measurement of cookstove emissions. Most cookstove testing labs employ a dilution tunnel, also referred to as a “duct,” to mix clean dilution air with cookstove emissions before sampling. It is important that the emissions be well-mixed and unbiased at the sampling port so that instruments can take representative samples of the emission plume. Some groups have employed mixing baffles to ensure the gaseous and aerosol emissions from cookstoves are well-mixed before reaching the sampling location [2, 4]. The goal of these baffles is to dilute and mix the emissions stream with the room air entering the fume hood by creating a local zone of high turbulence. However, potential drawbacks of mixing baffles include increased flow resistance (larger blowers needed for the same exhaust flow), nuisance cleaning of baffles as soot collects, and, importantly, the potential for loss of PM2.5 particles on the baffles themselves, thus biasing results. A cookstove emission monitoring system with baffles will collect particles faster than the duct’s walls alone. This is mostly driven by the available surface area for deposition by processes of Brownian diffusion (through the boundary layer) and turbophoresis (i.e. impaction). The greater the surface area available for diffusive and advection-driven deposition to occur, the greater the particle loss will be at the sampling port. As a layer of larger particle “fuzz” builds on the mixing baffles, even greater PM2.5 loss could occur. The microstructure of the deposited aerosol will lead to increased rates of particle loss by interception and a tendency for smaller particles to deposit due to impaction on small features of the microstructure. If the flow stream could be well-mixed without the need for baffles, these drawbacks could be avoided and the cookstove emissions sampling system would be more robust.
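The surface-area argument can be made quantitative with a standard first-order wall-loss model, in which the fraction of particles surviving a duct section is P = exp(−v_d·A/Q) for effective deposition velocity v_d, deposition area A, and volumetric flow Q. The model choice and every parameter value below are assumptions for illustration, not from the text:

```python
import math

def penetration(v_d, area, flow):
    """Fraction of particles surviving first-order wall deposition:
    P = exp(-v_d * A / Q)."""
    return math.exp(-v_d * area / flow)

v_d = 1e-4        # m/s, illustrative effective deposition velocity for PM2.5
flow = 0.05       # m^3/s, illustrative exhaust flow
duct_area = 2.0   # m^2 of bare duct wall
baffle_area = 1.0 # m^2 of added baffle surface

p_duct = penetration(v_d, duct_area, flow)
p_baffled = penetration(v_d, duct_area + baffle_area, flow)
print(round(p_duct, 4), round(p_baffled, 4))
```

Because A appears in the exponent, any added baffle surface strictly lowers penetration, which is the bias mechanism the passage describes; fuzz growth effectively raises v_d and A further over time.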

    Plant-level performance and degradation of 31 GW-DC of utility-scale PV in the United States

    In this updated study, which samples 50% more capacity than the original and adds two additional years of operating history, we assess the performance of a fleet of 631 utility-scale PV plants totaling 31.0 GW-DC (23.6 GW-AC) of capacity that achieved commercial operations in the United States from 2007-2018 and that have operated for at least two full calendar years. We use detailed information on individual plant characteristics, in conjunction with modeled irradiance data, to model expected or “ideal” capacity factors in each full calendar year of each plant’s operating history. A comparison of ideal versus actual first-year capacity factors finds that this fleet has modestly underperformed initial expectations (as modeled) on average, though perhaps due as much to modeling issues as to actual underperformance. We then analyze fleet-wide performance degradation in subsequent years by employing a “fixed effects” regression model to statistically isolate the impact of age on plant performance. The resulting average fleet-wide degradation rate of -1.2%/year (±0.1%) represents a slight improvement (seemingly driven by the oldest plants in our sample) over the -1.3%/year (±0.2%) found in our original study, yet is still of greater magnitude than is commonly found. We emphasize, however, that these fleet-wide estimates reflect both recoverable and unrecoverable degradation across the entire plant, and so will naturally be of greater magnitude than module- or cell-level studies, and/or studies that focus only on unrecoverable degradation. Moreover, when focusing on a sub-sample of newer and larger plants with higher DC:AC ratios—i.e., plants that more closely resemble what is being built today—we find a more moderate sample-wide average performance decline of -0.7%/year (±0.4%), which is more in line with other estimates from the recent literature.
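The “fixed effects” idea can be illustrated on synthetic data: demeaning each plant's performance and age within the plant removes any time-invariant plant-level differences, so the remaining slope isolates the age effect. This is a sketch of the general technique, not the authors' exact specification; all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_plants, n_years = 50, 8
true_degradation = -0.012  # -1.2%/year, echoing the study's fleet-wide estimate

plant_effect = rng.normal(1.0, 0.05, n_plants)   # plant-specific baseline level
age = np.tile(np.arange(n_years), n_plants)      # years of operation, per plant
plant = np.repeat(np.arange(n_plants), n_years)  # plant index for each row
noise = rng.normal(0, 0.01, n_plants * n_years)
perf = plant_effect[plant] + true_degradation * age + noise

# Within transformation: subtract each plant's own mean from its observations.
perf_dm = perf - (np.bincount(plant, weights=perf) / n_years)[plant]
age_dm = age - age.reshape(n_plants, n_years).mean(axis=1)[plant]

# OLS slope on the demeaned data recovers the degradation rate.
slope = (age_dm @ perf_dm) / (age_dm @ age_dm)
print(round(slope * 100, 2), "%/year")
```

The point of the within transformation is that a plant with a permanently high (or low) capacity factor no longer biases the age coefficient; only changes over each plant's own history contribute.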

    [Cantonal Differences in the Implementation of Involuntary Admission in Switzerland].

    OBJECTIVE: To examine sociodemographic and clinical characteristics of persons hospitalized involuntarily in five psychiatric hospitals from regions with different structural characteristics, compared with persons hospitalized voluntarily. METHODS: Descriptive analyses of routine data on approximately 57,000 cases of 33,000 patients treated for a primary ICD-10 psychiatric diagnosis at one of the participating hospitals from 2016 to 2019. RESULTS: Admission rates, lengths of stay, rates of further coercive measures, and sociodemographic and clinical characteristics of the affected persons differ across regions. CONCLUSION: There are considerable regional differences in both the regulations and the implementation of admission procedures within the sample. Causal relationships between regional specifics and the results cannot be inferred.

    Comment on “Five Decades of Observed Daily Precipitation Reveal Longer and More Variable Drought Events Across Much of the Western United States”

    Changes in precipitation patterns with climate change could have important impacts on human and natural systems. Zhang et al. (2021) report trends in daily precipitation patterns over the last five decades in the western United States, focusing on meteorological drought. They report that dry intervals (calculated at the annual or seasonal level) have increased across much of the southwestern U.S., with statistical assessment suggesting the results are statistically robust. However, Zhang et al. (2021) preprocess their annual (or seasonal) averages to compute 5-year moving window averages before using established statistical techniques for trend analysis that assume independence about some fixed trend. Here we show that the moving window preprocessing violates that independence assumption and inflates the statistical significance of their trend estimates. This raises questions about the robustness of their results. We conclude by discussing the difficulty of adjusting for spatial structure when assessing time trends in a regional context.
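The independence violation is easy to demonstrate: applying a 5-year moving average to independent annual values induces strong serial correlation, which standard trend tests then mistake for a more precisely estimated trend. A toy simulation (illustrative random data, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50  # five decades of annual values
x = rng.normal(size=n)  # trend-free, independent annual anomalies

window = 5
# 5-year moving window average, as in the preprocessing being critiqued
smoothed = np.convolve(x, np.ones(window) / window, mode="valid")

def lag1_autocorr(y):
    """Sample lag-1 autocorrelation."""
    y = y - y.mean()
    return (y[:-1] @ y[1:]) / (y @ y)

print(round(lag1_autocorr(x), 2))         # near 0 for independent data
print(round(lag1_autocorr(smoothed), 2))  # strongly positive after smoothing
```

For a 5-year average of white noise the theoretical lag-1 autocorrelation is 4/5, so residuals from a trend fit to the smoothed series are far from independent, and naive standard errors on the trend are too small.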
