
    CGIAR Excellence in Breeding Platform - Plan of Work and Budget 2020

    At the end of 2019, all CGIAR centers had submitted improvement plans, based on an EiB template and developed in close collaboration with EiB staff, while – in a parallel process with breeding programs, funders, and private sector representatives – a vision for breeding program modernization was developed and presented to CGIAR breeding leadership at the EiB Annual Meeting. This vision represents an evolution of EiB in the context of the Crops to End Hunger Initiative (CtEH) beyond the initial scope of providing tools, services, and expert advice, and serves as a guide for Center leadership to drive changes with EiB support. In addition, EiB has taken on the role of managing and disbursing funding, made available by funders via CtEH, to modernize breeding and enable CGIAR breeding programs to implement the vision provided by EiB.

    Strengthening School Leadership in Massachusetts

    In 2017, fifty-six percent of the principals hired statewide were new to the job, with high-poverty schools most likely to hire novice principals. During 2018 and 2019, a working group of district and charter school leaders and other education stakeholders from across the state met to explore ways to increase the effectiveness of principals leading Massachusetts schools. The Barr Foundation engaged Attuned Education Partners to facilitate this group and lead implementation of the learning agendas developed by its members. Together, they prioritized key challenges and identified the solutions that research suggests are most likely to strengthen the principalship and drive better outcomes for students—especially the students of color and English learners whom the state is currently serving least well. This report presents their findings and insights—including recommended actions tailored to state policymakers, school system leaders, principal preparation program providers, and funders. It also offers a collection of case studies demonstrating potential solutions in action.

    Label Transfer from APOGEE to LAMOST: Precise Stellar Parameters for 450,000 LAMOST Giants

    In this era of large-scale stellar spectroscopic surveys, measurements of stellar attributes ("labels," i.e. parameters and abundances) must be made precise and consistent across surveys. Here, we demonstrate that this can be achieved by a data-driven approach to spectral modeling. With The Cannon, we transfer information from the APOGEE survey to determine precise Teff, log g, [Fe/H], and [α/M] from the spectra of 450,000 LAMOST giants. The Cannon fits a predictive model for LAMOST spectra using 9952 stars observed in common between the two surveys, taking five labels from APOGEE DR12 as ground truth: Teff, log g, [Fe/H], [α/M], and K-band extinction A_k. The model is then used to infer Teff, log g, [Fe/H], and [α/M] for 454,180 giants, 20% of the LAMOST DR2 stellar sample. These are the first [α/M] values for the full set of LAMOST giants, and the largest catalog of [α/M] for giant stars to date. Furthermore, these labels are by construction on the APOGEE label scale; for spectra with S/N > 50, cross-validation of the model yields typical uncertainties of 70 K in Teff, 0.1 dex in log g, 0.1 dex in [Fe/H], and 0.04 dex in [α/M], values comparable to the broadly stated, conservative APOGEE DR12 uncertainties. Thus, by using "label transfer" to tie low-resolution (LAMOST R ~ 1800) spectra to the label scale of a much higher-resolution (APOGEE R ~ 22,500) survey, we substantially reduce the inconsistencies between labels measured by the individual survey pipelines. This demonstrates that label transfer with The Cannon can successfully bring different surveys onto the same physical scale.
    Comment: 27 pages, 14 figures. Accepted by ApJ on 16 Dec 2016, implementing suggestions from the referee reports. Associated code available at https://github.com/annayqho/TheCanno
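    To make "label transfer" concrete, below is a minimal sketch of a Cannon-style data-driven model: at training time, each pixel's flux is fit as a quadratic function of the labels using the stars the two surveys observe in common; at test time, the trained model is held fixed and the labels of a new spectrum are optimized. Variable names are illustrative assumptions, and this is a sketch of the technique, not the released code.

    import numpy as np
    from scipy.optimize import least_squares

    def design_matrix(labels):
        """Quadratic-in-labels vectorizer: [1, labels, upper-triangle products]."""
        labels = np.atleast_2d(labels)                    # (n_stars, n_labels)
        quad = np.einsum('ni,nj->nij', labels, labels)    # pairwise label products
        iu = np.triu_indices(labels.shape[1])
        return np.hstack([np.ones((labels.shape[0], 1)), labels,
                          quad[:, iu[0], iu[1]]])

    def train(ref_labels, ref_flux):
        """Per-pixel least squares for flux ~ design(labels) @ theta.
        ref_labels: (n_ref, n_labels) APOGEE labels of the overlap stars.
        ref_flux:   (n_ref, n_pix) normalized LAMOST spectra of the same stars."""
        M = design_matrix(ref_labels)
        theta, *_ = np.linalg.lstsq(M, ref_flux, rcond=None)
        return theta                                      # (n_terms, n_pix)

    def infer_labels(theta, new_flux, x0):
        """Optimize the labels of one new spectrum under the trained model."""
        resid = lambda l: design_matrix(l)[0] @ theta - new_flux
        return least_squares(resid, x0).x                 # e.g. x0 = mean labels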

    The Impact of Principal Preparation on Student Outcomes

    The purpose of this study was to understand the role of principal preparation in first-year principals' ability to positively impact student outcomes. The study sought to understand key learning experiences that contributed to first-year principals' success upon completion of their preparation program. Using the case study method for this qualitative research study, I interviewed first-year principals to gather data on their perceptions of the learning experiences that led to their success. The research question that guided my qualitative research study was: How does one district-led principal preparation program in a large urban city increase first-year principals' capacity to effectively lead a campus and produce positive outcomes? The study highlighted six best practices that all university-based preparation programs and alternative principal pipelines should implement to enhance their participants' learning experiences and their ability to successfully impact student outcomes within their first academic year of the principalship. The themes that emerged from the study as compelling learning experiences that built instructional leadership and impacted student outcomes were data analysis, observation and feedback, and professional learning communities. Themes based on unexpected challenges during the first year point to gaps in learning whose coverage would enhance all preparation programs; those themes were non-instructional systems related to campus operations, soft skills, and transitioning to the principalship. Based on the theoretical framework created from the literature, field experience and on-the-job support served as meaningful experiences for the preparation of aspiring leaders. Because principals play a crucial role in a campus's success or failure, aspiring leaders must be adequately prepared to lead a campus. Thus, this study contributes to the literature on principal preparation programs.

    Data management and Data Pipelines: An empirical investigation in the embedded systems domain

    Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, together with the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields and is thus evolving into a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques.
    Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, propose an improved data management approach, and empirically validate the proposed approach.
    Method: To achieve the goal, we conducted this research in close collaboration with Software Center companies using a combination of different empirical research methods: case studies, literature reviews, and action research.
    Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to Deep Learning models developed at embedded system companies. Second, it examines practices such as DataOps and data pipelines that help to address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, part of the data pipeline model was realized in a small implementation, and the percentage of data dumps saved through that implementation was calculated.
    Future work: As future work, we plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines. We also plan to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
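    As an illustration of a data pipeline with fault detection and mitigation at each step, here is a minimal sketch; the stage names, checks, and mitigation policies are illustrative assumptions, not the implementation developed in the thesis.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Stage:
        name: str
        run: Callable[[Any], Any]        # the data life-cycle activity itself
        check: Callable[[Any], bool]     # fault detector for this stage's output
        mitigate: Callable[[Any], Any]   # fallback applied when the check fails

    def run_pipeline(data, stages):
        for stage in stages:
            out = stage.run(data)
            if not stage.check(out):     # fault detected at this pipeline step
                out = stage.mitigate(data)
            data = out
        return data

    # Hypothetical three-step pipeline: ingest -> validate -> clean.
    stages = [
        Stage("ingest", lambda d: d, lambda o: o is not None, lambda d: []),
        Stage("validate", lambda d: [r for r in d if "id" in r],
              lambda o: len(o) > 0, lambda d: d),
        Stage("clean", lambda d: [{**r, "id": str(r["id"])} for r in d],
              lambda o: all(isinstance(r["id"], str) for r in o), lambda d: d),
    ]
    records = run_pipeline([{"id": 1}, {"no_id": 2}], stages)   # -> [{"id": "1"}]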

    Plasma Edge Kinetic-MHD Modeling in Tokamaks Using Kepler Workflow for Code Coupling, Data Management and Visualization

    A new predictive computer simulation tool targeting the development of the H-mode pedestal at the plasma edge in tokamaks and the triggering and dynamics of edge localized modes (ELMs) is presented in this report. This tool brings together, in a coordinated and effective manner, several first-principles physics simulation codes, stability analysis packages, and data processing and visualization tools. A Kepler workflow is used in order to carry out an edge plasma simulation that loosely couples the kinetic code, XGC0, with an ideal MHD linear stability analysis code, ELITE, and an extended MHD initial value code such as M3D or NIMROD. XGC0 includes the neoclassical ion-electron-neutral dynamics needed to simulate pedestal growth near the separatrix. The Kepler workflow processes the XGC0 simulation results into simple images that can be selected and displayed via the Dashboard, a monitoring tool implemented in AJAX, allowing the scientist to track computational resources, examine running and archived jobs, and view key physics data, all within a standard Web browser. The XGC0 simulation is monitored for the conditions needed to trigger an ELM crash by periodically assessing the edge plasma pressure and current density profiles using the ELITE code. If an ELM crash is triggered, the Kepler workflow launches the M3D code on a moderate-size Opteron cluster to simulate the nonlinear ELM crash and to compute the relaxation of plasma profiles after the crash. This process is monitored through periodic outputs of plasma fluid quantities that are automatically visualized with AVS/Express and may be displayed on the Dashboard. Finally, the Kepler workflow archives all data outputs and processed images using HPSS, as well as provenance information about the software and hardware used to create the simulation. The complete process of preparing, executing, and monitoring a coupled-code simulation of the edge pressure pedestal buildup and the ELM cycle using the Kepler scientific workflow system is described in this paper.
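    The loose coupling described above is essentially a monitor-and-trigger loop. The sketch below shows its shape only; every function is an illustrative stub standing in for the external XGC0, ELITE, and M3D jobs and the archiving steps that the Kepler workflow actually manages.

    def run_xgc0_step():
        """Stub: advance the XGC0 kinetic edge simulation by one output period."""
        return {"pressure": [...], "current": [...]}    # edge plasma profiles

    def elite_unstable(profiles):
        """Stub: ELITE linear stability check of the edge profiles."""
        return False                                    # True would signal an ELM

    def launch_m3d(profiles):
        """Stub: launch the nonlinear M3D (or NIMROD) ELM-crash simulation."""

    def archive_outputs():
        """Stub: archive data, images, and provenance, e.g. to HPSS."""

    def elm_cycle(max_steps=1000):
        for _ in range(max_steps):
            profiles = run_xgc0_step()                  # pedestal buildup phase
            if elite_unstable(profiles):                # ELM trigger condition met
                launch_m3d(profiles)                    # crash and profile relaxation
                archive_outputs()
                break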

    Visus: An Interactive System for Automatic Machine Learning Model Building and Curation

    While the demand for machine learning (ML) applications is booming, there is a scarcity of data scientists capable of building such models. Automatic machine learning (AutoML) approaches have been proposed to help with this problem by synthesizing end-to-end ML data processing pipelines. However, these follow a best-effort approach, and a user in the loop is necessary to curate and refine the derived pipelines. Since domain experts often have little or no expertise in machine learning, easy-to-use interactive interfaces that guide them throughout the model building process are necessary. In this paper, we present Visus, a system designed to support the model building process and the curation of ML data processing pipelines generated by AutoML systems. We describe the framework used to ground our design choices and a usage scenario enabled by Visus. Finally, we discuss the feedback received in user testing sessions with domain experts.
    Comment: Accepted for publication in the 2019 Workshop on Human-In-the-Loop Data Analytics (HILDA'19), co-located with SIGMOD 2019
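    As a minimal illustration of best-effort pipeline synthesis with a user in the loop, the sketch below enumerates candidate ML pipelines and ranks them by cross-validated score for a human to inspect; scikit-learn is used here for concreteness only, as an assumption rather than the AutoML backends Visus fronts.

    from itertools import product
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler, MinMaxScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def synthesize(X, y):
        """Enumerate candidate pipelines and rank them by mean CV accuracy."""
        scalers = [StandardScaler(), MinMaxScaler()]
        models = [LogisticRegression(max_iter=1000), RandomForestClassifier()]
        candidates = [Pipeline([("scale", s), ("model", m)])
                      for s, m in product(scalers, models)]
        scored = [(cross_val_score(p, X, y).mean(), p) for p in candidates]
        return sorted(scored, key=lambda t: t[0], reverse=True)

    # Curation step: a domain expert inspects the ranked pipelines (in Visus,
    # through an interactive interface) and keeps, refines, or rejects them.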

    An overview of the planned CCAT software system

    CCAT will be a 25 m diameter sub-millimeter telescope capable of operating in the 0.2 to 2.1 mm wavelength range. It will be located at an altitude of 5600 m on Cerro Chajnantor in northern Chile, near the ALMA site. The anticipated first-generation instruments include large-format (60,000 pixel) kinetic inductance detector (KID) cameras, a large-format heterodyne array, and a direct detection multi-object spectrometer. The paper describes the architecture of the CCAT software and the development strategy.
    Comment: 17 pages, 6 figures, to appear in Software and Cyberinfrastructure for Astronomy III, Chiozzi & Radziwill (eds), Proc. SPIE 9152, paper ID 9152-10