eScholarship - University of California

University of California System

    534,505 research outputs found

    Pavement Environmental Life Cycle Assessment Tool for Local Governments

    The processes in the pavement life cycle can be defined as: material extraction and production; construction; transport of materials and demolition; the use stage, where the pavement interacts with other systems; the materials, construction, and transport associated with maintenance and rehabilitation; and end-of-life. Local governments are increasingly being asked to quantify greenhouse gas emissions from their operations and identify changes to reduce emissions. There are many strategies that local governments can choose from to reduce their emissions; however, prioritizing and selecting which to implement can be difficult if emissions cannot be quantified. Pavement life cycle assessment (LCA) can be used by local governments to achieve the same goals as state government. The web-based software environmental Life Cycle Assessment for Pavements, also known as eLCAP, has been developed as a project-level LCA tool. The goal of eLCAP is to permit local governments to perform project-level pavement LCA using California-specific data, including consideration of their own designs, materials, and traffic. eLCAP allows modeling of materials, transport, construction, maintenance, rehabilitation, and end-of-life recycling for all impacts; in the use stage it considers the effects of fuel combustion in vehicles as well as the additional fuel consumed due to pavement-vehicle interaction (global warming potential only). This report documents eLCAP and a project that created an interface for eLCAP that is usable by local governments.
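    As a rough illustration of the stage-wise accounting a project-level pavement LCA tool performs, the sketch below sums global warming potential over life-cycle stages. The stage names, activity quantities, and emission factors are invented placeholders for illustration only, not eLCAP's California-specific data.

```python
# Illustrative sketch of stage-wise life cycle accounting, in the spirit of a
# project-level pavement LCA tool. Stage names, quantities, and emission
# factors are hypothetical placeholders, not eLCAP's actual data.

# kg CO2e per unit of activity (hypothetical factors)
EMISSION_FACTORS = {
    "materials": 95.0,        # per tonne of mix produced
    "transport": 0.12,        # per tonne-km hauled
    "construction": 6.5,      # per lane-km paved
    "maintenance": 40.0,      # per tonne of overlay material
    "use_pvi_fuel": 2.3,      # per 1000 vehicle-km of excess fuel from pavement-vehicle interaction
}

def project_gwp(quantities: dict) -> float:
    """Sum global warming potential (kg CO2e) over the modeled stages."""
    return sum(EMISSION_FACTORS[stage] * qty for stage, qty in quantities.items())

if __name__ == "__main__":
    example = {
        "materials": 1200, "transport": 30000, "construction": 2.0,
        "maintenance": 300, "use_pvi_fuel": 5000,
    }
    print(f"Total GWP: {project_gwp(example):,.0f} kg CO2e")
```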

    30 Years of Semiconductor Nanowire Research: A Personal Journey


    Industrial Data Reduction, Aggregation and Machine Learning-Based Soft Sensing for Etching and Slider Production Tools

    Smart Manufacturing (SM), which is short for “Smart (Predictive, Preventive, Proactive) zero incident, zero emissions Manufacturing,” describes manufacturing’s digital transformation, in which factories, supply chains, and ecosystems are integrated, interoperable, and interconnected. Smart Manufacturing is rooted in AI, Machine Learned (ML), and Data Synchronized (DS) modeling to tap into invaluable operating data. By making data actionable at larger scales, SM opens new ways to increase productivity, precision, and process performance. Smart Manufacturing applied to front-end wafer manufacturing in the semiconductor industry offers a significant opportunity to increase production throughput and ensure precision by increasing staff and operational productivity. Front-end wafer manufacturing involves multi-tool operations for complex material processing that require a high degree of precision and extensive product qualification. There is a high degree of commonality among semiconductor manufacturing tools, for example etching tools, which are well instrumented. Companies are already collecting large amounts of operational data from these tools that can be aggregated and leveraged for virtual metrology and other control, diagnostic, and management solutions. AI/ML/DS modeling involves monitoring the state of an operation in real time to continuously learn and improve on human-centered, automated, and autonomous actions. Embedded in this operational data are invaluable machine, process, product, and material behaviors, expressed as interaction complexities, linearities/non-linearities, and dimensional effects. Because of machine commonalities, data can be selected to draw out operational value across machines. Today’s data science offers considerable capability for qualifying, assessing alignment and contribution, aggregating, and engineering data for more robust modeling. We refer to this as a Data-first strategy to process, engineer, and model with AI-Ready data. In this paper, we address AI-Ready data for a virtual metrology solution focused on etching measurement PASS/FAIL classification and milling depth prediction regression tasks using operational data from production machine tools. If the quality of the product can be predicted, the productivity of the metrology process can be increased, which in turn increases the productivity of the overall operation. In a previous paper, we considered how to aggregate data from different etch tools running the same processes at different factories within Seagate Technology, proposed a method for data aggregation, and demonstrated its value. The present paper considers how to process and engineer datasets from two different etch tool processes: wafer and slider production. The data processing approaches, when used systematically with appropriate ML algorithms, demonstrate the potential for reducing metrological interventions in semiconductor manufacturing. Advanced machine learning techniques are used to tackle the modeling challenges of a low failure rate and limited operational variability. XGBoost, a gradient-boosted tree algorithm, outperforms the commonly used Feedforward Neural Networks (FNN) in terms of training speed and resource utilization for binary classification. Principal Component Analysis (PCA) effectively reduces the dimensionality of the data and overfitting, while retaining vital variances and significantly reducing noise.
    Data aggregation with separated scaling harmonizes inputs from diverse manufacturing tools and significantly improves the efficacy and versatility of combining multiple datasets to improve model performance. A live-updating transfer learning approach, which periodically updates the FNN models in real time using Stochastic Gradient Descent (SGD) with individual data points, addresses process drift and markedly improves predictive accuracy. For the slider production tools, data augmentation with linear Mixup overcomes a short recording period, enriches the training dataset, and significantly reduces error metrics.
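    As a rough illustration of the classification setup described above, the first sketch below chains PCA dimensionality reduction into an XGBoost classifier for a PASS/FAIL-style task with a rare failure class. The synthetic feature matrix, labels, and hyperparameters are assumptions for illustration, not Seagate's production data or tuned models.

```python
# Hedged sketch: PCA feeding an XGBoost binary classifier, with class
# weighting for a rare FAIL class. All data and hyperparameters are
# illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300))            # stand-in for high-dimensional tool traces
y = (rng.random(2000) < 0.05).astype(int)   # stand-in for a low failure rate (~5% FAIL)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),        # keep components explaining ~95% of variance
    ("xgb", XGBClassifier(
        n_estimators=300,
        max_depth=4,
        # up-weight the rare FAIL class
        scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1),
        eval_metric="logloss",
    )),
])
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```

    Along the same lines, the second sketch shows what "separated scaling" before aggregation and linear Mixup augmentation might look like; the tool names, array shapes, and the Beta mixing parameter are assumptions, not the paper's actual procedure.

```python
# Hedged sketch: per-tool ("separated") standardization before pooling, and
# linear Mixup augmentation of a small regression dataset. All values are
# illustrative.
import numpy as np

rng = np.random.default_rng(1)

def scale_separately(per_tool):
    """Standardize each tool's feature matrix on its own statistics, then stack."""
    scaled = []
    for name, X in per_tool.items():
        mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-8
        scaled.append((X - mu) / sigma)
    return np.vstack(scaled)

def mixup(X, y, n_new, alpha=0.4):
    """Create synthetic samples as convex combinations of random training pairs."""
    i = rng.integers(0, len(X), n_new)
    j = rng.integers(0, len(X), n_new)
    lam = rng.beta(alpha, alpha, size=(n_new, 1))
    return lam * X[i] + (1 - lam) * X[j], lam[:, 0] * y[i] + (1 - lam[:, 0]) * y[j]

# Pool two hypothetical etch tools on a common scale, then augment.
pooled = scale_separately({
    "tool_A": rng.normal(5.0, 2.0, size=(400, 20)),
    "tool_B": rng.normal(0.1, 0.5, size=(250, 20)),
})
depths = rng.normal(100.0, 3.0, size=len(pooled))    # stand-in milling depths
X_aug, y_aug = mixup(pooled, depths, n_new=1000)
print(pooled.shape, X_aug.shape)
```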

    Wildlife Connectivity and Which Median Barrier Designs Provide the Most Effective Permeability for Wildlife Crossings

    Median barriers are usually constructed to reduce head-on crashes between vehicles on undivided highways. Because of their position in the center of the traveled right-of-way, median barriers could affect wildlife movement across the right-of-way, decreasing wildlife connectivity. The authors coordinated and met with staff from several Caltrans Districts to gain an understanding of their issues related to median barriers and wildlife permeability. The authors used previously and newly collected wildlife-vehicle collision (WVC) observations to test whether median types have different effects on unsuccessful wildlife crossings of the road surface. The authors used Generalized Linear Models (GLM) to compare WVC rates among median treatment types in three Caltrans Districts (2, 4, 9) for four wildlife species. The primary findings were that median types do affect rates of WVC and that these effects varied by species and, to some degree, by geographic region (represented by Caltrans District). In particular, fewer wildlife enter roadways and are killed where constructed median types are present than where other types are used. Although this may result in a reduction in WVC, it also reduces wildlife permeability: most roadways do not have crossing structures, so crossing attempts must occur across the road surface. View the NCST Project Webpage.
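    To make the modeling approach concrete, here is a minimal sketch of a Poisson GLM comparing collision counts across median barrier types with segment length as the exposure. The column names, categories, and simulated data are hypothetical stand-ins, not the study's Caltrans dataset.

```python
# Hedged sketch of the kind of comparison described above: a Poisson GLM of
# wildlife-vehicle collision counts against median barrier type, with road
# segment length as the exposure. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "median_type": rng.choice(["none", "cable", "concrete", "thrie_beam"], n),
    "district": rng.choice(["D2", "D4", "D9"], n),
    "segment_km": rng.uniform(1, 10, n),
})
df["wvc_count"] = rng.poisson(0.4 * df["segment_km"])   # stand-in collision counts

model = smf.glm(
    "wvc_count ~ C(median_type) + C(district)",
    data=df,
    family=sm.families.Poisson(),
    exposure=df["segment_km"],   # models WVC per km of segment
).fit()
print(model.summary())
```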

    Sensing, Understanding, and Augmenting - Methods for Enhancing Human Behavior Analysis using Computational Sensing and Social Signal Processing

    No full text
    From the fine-grained filtering of noise required to selectively attend to a specific person in a social setting, to the rapid visual pattern recognition required for complex tasks like radiological diagnosis, human perceptual and cognitive systems are capable of incredible things. Built up over years of evolution in a physical world, people have developed wonderful abilities. However, these capabilities begin to show their limits as society enters an increasingly digital world, unbound by physical limitations. Human attention, perception, and cognition do not scale up as the volume of information scales up, and given the rapid explosion of digitization in modern work and life, we are simply not able to keep up. Thankfully, this explosion of digital data has coincided with advances in sensing hardware that can 'see' and 'hear' as people do. Furthermore, advances in computing technology now allow machines to detect signals and recognize patterns in these data in a way that is similar to how trained people can. This presents a new and untapped opportunity for humans to benefit from computing in ways that can help them sense things that happen in the world, develop an understanding of patterns in large volumes of data, and augment our collective capabilities to scale up in the digital age. This dissertation recounts several works (i.e., Lab-in-a-Box, ChronoSense, GutCheck) that seek to better understand how computing can augment the human ability to sense interpersonal behavior and scale up methods for understanding social interactions. I showcase the potential for these approaches through their ability to surface new insights in the domains of human factors, communication, and medicine. Through the lens of applied research and human-centered design, I use these works to argue for an impending revolution in human-computer teamwork, showcasing the promise of computing to evolve from being just a tool to becoming more of a mediator and teammate within group activities. These technologies are not without unintended consequences, and I close with a discussion of future work needed to proactively mitigate adverse outcomes.

    On O-Constructions in Jarawara

    The language Jarawara (Arauan, spoken in Brazil) exhibits a puzzling set of passive-like properties in its “O-Construction” (Dixon 2000, 2004). We argue that O-Constructions have a type of passive voice in some person combinations but not in others, and that they are unified in that they always have topic agreement on C with the internal argument. We relate this approach to recent research on Algonquian inverse systems (especially Oxford 2023a,b, 2024), which have also been argued to involve a passive-like voice-based alternation for specific person combinations. Our analysis captures facts about case, word order, divergences between C and T agreement, and the distribution of the passive-like prefix hi- (among other properties). Our findings provide support for the approach to person restrictions embodied in Oxford’s work and also demonstrate how topic agreement and the A system can interact. More generally, this work shows how a nuanced approach to passive constructions, and a willingness to separate agreement from voice, can lead to a cross-linguistically grounded analysis of what seems prima facie like an “unusual” construction.

    An Integrated Framework for Infectious Disease Control Using Mathematical Modeling and Deep Learning.

    Infectious diseases are a major global public health concern. Precise modeling and prediction methods are essential to develop effective strategies for disease control. However, data imbalance and the presence of noise and intensity inhomogeneity make disease detection more challenging. Goal: In this article, a novel infectious disease pattern prediction system is proposed by integrating the benefits of deterministic and stochastic models with those of a deep learning model. Results: The combined benefits yield improvement in the performance of solution prediction. Moreover, the objective is also to investigate the influence of time delay on infection rates and rates associated with vaccination. Conclusions: In this proposed framework, the global stability at the disease-free equilibrium is first analysed using the Routh-Hurwitz criterion and the Lyapunov method, and the endemic equilibrium is analysed using non-linear Volterra integral equations in the infectious disease model. Unlike existing models, emphasis is given to a model capable of investigating stability while considering the effects of vaccination and migration rates. Next, the influence of vaccination on the rate of infection is predicted using an efficient deep learning model that exploits long-term dependencies in sequential data, thus making the prediction more accurate.
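    As a simplified illustration of the deterministic side of such a framework, the sketch below integrates a compartmental SIRV model with a constant vaccination rate. The equations, parameter values, and the omission of time delay and migration are simplifying assumptions, not the paper's exact model.

```python
# Hedged sketch of a deterministic compartmental model with vaccination,
# of the general kind analysed above. Parameters and structure are
# illustrative simplifications, not the paper's model.
from scipy.integrate import solve_ivp

beta, gamma, nu = 0.35, 0.10, 0.02   # transmission, recovery, vaccination rates

def sirv(t, state):
    s, i, r, v = state
    ds = -beta * s * i - nu * s      # susceptibles infected or vaccinated
    di = beta * s * i - gamma * i    # infections grow, then recover
    dr = gamma * i
    dv = nu * s
    return [ds, di, dr, dv]

sol = solve_ivp(sirv, (0, 180), [0.99, 0.01, 0.0, 0.0], dense_output=True)
print("peak infected fraction:", sol.y[1].max())
# For this simplified sketch, beta/gamma is the basic reproduction number
# ignoring vaccination; values below 1 keep the disease-free state stable.
print("R0 ~", beta / gamma)
```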

    Defining and Validating Criteria to Identify Populations Who May Benefit From Home-Based Primary Care.

    BACKGROUND: Home-based primary care (HBPC) is an important care delivery model for high-need older adults. Currently, target patient populations vary across HBPC programs, hindering expansion and large-scale evaluation. OBJECTIVES: Develop and validate criteria that identify appropriate HBPC target populations. RESEARCH DESIGN: A modified Delphi process was used to achieve expert consensus on criteria for identifying HBPC target populations. All criteria were defined and validated using linked data from Medicare claims and the National Health and Aging Trends Study (NHATS) (cohort n=21,727). Construct validation involved assessing demographics and health outcomes/expenditures for selected criteria. SUBJECTS: Delphi panelists (n=29) represented diverse professional perspectives. Criteria were validated on community-dwelling Medicare beneficiaries (age ≥70) enrolled in NHATS. MEASURES: Criteria were selected via Delphi questionnaires. For construct validation, sociodemographic characteristics of Medicare beneficiaries were self-reported in NHATS, and annual health care expenditures and mortality were obtained via linked Medicare claims. RESULTS: Panelists proposed an algorithm of criteria for HBPC target populations that included indicators of serious illness, functional impairment, and social isolation. The algorithm's Delphi-selected criteria applied to 16.8% of Medicare beneficiaries. These HBPC target populations had higher annual health care costs [Med (IQR): $10,851 ($3,316, $31,556) vs. $2,830 ($913, $9,574)] and higher 12-month mortality [15% (95% CI: 14, 17) vs. 5% (95% CI: 4, 5)] compared with the total validation cohort. CONCLUSIONS: We developed and validated an algorithm to define target populations for HBPC, which suggests a need for increased HBPC availability. By enabling objective identification of unmet demands for HBPC access or resources, this algorithm can foster robust evaluation and equitable expansion of HBPC.
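    For illustration only, a targeting algorithm of this kind reduces to applying indicator criteria row-wise to cohort data. The three indicators and the rule combining them below are hypothetical stand-ins, not the validated Delphi-selected criteria.

```python
# Hedged sketch of applying an indicator-based targeting algorithm to cohort
# data. Indicators and the combining rule are illustrative assumptions.
import pandas as pd

cohort = pd.DataFrame({
    "serious_illness":       [1, 0, 1, 0, 0],
    "functional_impairment": [1, 0, 0, 1, 0],
    "social_isolation":      [0, 0, 1, 1, 0],
})

# Example rule: flag beneficiaries with serious illness plus at least one
# other indicator (hypothetical combination for illustration only).
cohort["hbpc_target"] = (
    (cohort["serious_illness"] == 1)
    & ((cohort["functional_impairment"] == 1) | (cohort["social_isolation"] == 1))
)
print(f"share flagged: {cohort['hbpc_target'].mean():.1%}")
```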

    Carrie Prudence Winter Kofoid (1866-1942) Biography

