140 research outputs found

    Assessing the Feasibility of National Park Service Management in the Adirondack Park

    In 1967, two visionary policymakers, Laurance Rockefeller and Conrad Wirth, proposed that a core region of the Adirondack Park be established as a National Park under the control of the National Park Service. Though unsuccessful, the 1967 proposal addressed a range of contentious issues, many of which remain relevant today, and it opened an ongoing debate on the advantages and disadvantages of federal versus state regulatory action. This paper acknowledges that debate but finds inherent advantages in an expanded federal role within the Adirondack Park. Times have changed. Attitudes have changed. Tracking the resistance, resentment, attitudes, and impacts from 1967, this paper argues that revisiting the idea of having the NPS assume a full or even partial management role over a core region of the Adirondack Park is promising, while acknowledging that the prospect would face significant headwinds.

    Scaling in Depth: Unlocking Robustness Certification on ImageNet

    Despite the promise of Lipschitz-based methods for provably-robust deep learning with deterministic guarantees, current state-of-the-art results are limited to feed-forward Convolutional Networks (ConvNets) on low-dimensional data, such as CIFAR-10. This paper investigates strategies for expanding certifiably robust training to larger, deeper models. A key challenge in certifying deep networks is efficient calculation of the Lipschitz bound for residual blocks found in ResNet and ViT architectures. We show that fast ways of bounding the Lipschitz constant for conventional ResNets are loose, and show how to address this by designing a new residual block, leading to the \emph{Linear ResNet} (LiResNet) architecture. We then introduce \emph{Efficient Margin MAximization} (EMMA), a loss function that stabilizes robust training by simultaneously penalizing worst-case adversarial examples from \emph{all} classes. Together, these contributions yield new \emph{state-of-the-art} robust accuracy on CIFAR-10/100 and Tiny-ImageNet under $\ell_2$ perturbations. Moreover, for the first time, we are able to scale up fast deterministic robustness guarantees to ImageNet, demonstrating that this approach to robust learning can be applied to real-world applications. We release our code on Github: \url{https://github.com/klasleino/gloro}
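
    As a rough illustration of the bound-tightness point above: the sketch below contrasts the generic triangle-inequality Lipschitz bound for a residual block with the exact bound available when the residual branch is linear. It uses a dense matrix in place of the paper's convolutional branch and is not the released GloRo/LiResNet implementation (see the linked repository for that).

```python
# Hedged sketch: why the generic bound Lip(x + f(x)) <= 1 + Lip(f) is loose,
# and how a *linear* residual branch admits an exact bound. A dense matrix
# stands in for the paper's convolutional branch; this is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d = 64
W = rng.normal(size=(d, d)) / np.sqrt(d)   # weights of a linear residual branch

def spectral_norm(M):
    """Largest singular value = exact l2 Lipschitz constant of the linear map M."""
    return np.linalg.svd(M, compute_uv=False)[0]

# Conventional residual block: triangle inequality gives Lip(x + f(x)) <= 1 + Lip(f).
loose_bound = 1.0 + spectral_norm(W)

# Linear residual block: y = x + W x = (I + W) x is a single linear map,
# so its Lipschitz constant is exactly the spectral norm of (I + W).
tight_bound = spectral_norm(np.eye(d) + W)

print(f"triangle-inequality bound: {loose_bound:.3f}")
print(f"exact bound for (I + W)  : {tight_bound:.3f}")   # never exceeds the loose bound
```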

    Forecasting Future World Events with Neural Networks

    Forecasting future world events is a challenging but valuable task. Forecasts of climate, geopolitical conflict, pandemics and economic indicators help shape policy and decision making. In these domains, the judgment of expert humans contributes to the best forecasts. Given advances in language modeling, can these forecasts be automated? To this end, we introduce Autocast, a dataset containing thousands of forecasting questions and an accompanying news corpus. Questions are taken from forecasting tournaments, ensuring high quality, real-world importance, and diversity. The news corpus is organized by date, allowing us to precisely simulate the conditions under which humans made past forecasts (avoiding leakage from the future). Motivated by the difficulty of forecasting numbers across orders of magnitude (e.g. global cases of COVID-19 in 2022), we also curate IntervalQA, a dataset of numerical questions and metrics for calibration. We test language models on our forecasting task and find that performance is far below a human expert baseline. However, performance improves with increased model size and incorporation of relevant information from the news corpus. In sum, Autocast poses a novel challenge for large language models and improved performance could bring large practical benefits. Comment: NeurIPS 2022; our dataset is available at https://github.com/andyzoujm/autocast
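
    As a rough illustration of what calibration for numerical forecasts can mean, the sketch below checks the empirical coverage of predicted confidence intervals on synthetic data; it is a generic coverage check, not necessarily the metric curated with IntervalQA.

```python
# Hedged sketch: coverage-based calibration check for numerical forecasts given
# as intervals. Synthetic data; not claimed to be the IntervalQA metric.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical true values spanning orders of magnitude, plus predicted
# 80% intervals (lo, hi) produced by some forecaster.
truth = rng.lognormal(mean=8.0, sigma=2.0, size=n)
center = truth * rng.lognormal(mean=0.0, sigma=0.4, size=n)   # noisy point forecast
lo, hi = center / 2.5, center * 2.5                           # claimed 80% interval

nominal = 0.80
empirical = np.mean((truth >= lo) & (truth <= hi))   # fraction of intervals covering the truth

print(f"nominal coverage  : {nominal:.2f}")
print(f"empirical coverage: {empirical:.2f}")
print(f"calibration error : {abs(empirical - nominal):.2f}")
```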

    Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark

    Artificial agents have traditionally been trained to maximize reward, which may incentivize power-seeking and deception, analogous to how next-token prediction in language models (LMs) may incentivize toxicity. So do agents naturally learn to be Machiavellian? And how do we measure these behaviors in general-purpose models such as GPT-4? Towards answering these questions, we introduce MACHIAVELLI, a benchmark of 134 Choose-Your-Own-Adventure games containing over half a million rich, diverse scenarios that center on social decision-making. Scenario labeling is automated with LMs, which are more performant than human annotators. We mathematize dozens of harmful behaviors and use our annotations to evaluate agents' tendencies to be power-seeking, cause disutility, and commit ethical violations. We observe some tension between maximizing reward and behaving ethically. To improve this trade-off, we investigate LM-based methods to steer agents towards less harmful behaviors. Our results show that agents can both act competently and morally, so concrete progress can currently be made in machine ethics--designing agents that are Pareto improvements in both safety and capabilities. Comment: ICML 2023 Oral; 31 pages, 5 figures
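
    A Pareto improvement in this setting means an agent that does at least as well on both axes (reward and harm) and strictly better on at least one. The toy check below uses invented scores, not MACHIAVELLI results.

```python
# Hedged sketch: checking Pareto dominance on the two axes discussed above
# (task reward vs. a harm/ethical-violation score). Scores are invented.

def pareto_improves(candidate, baseline):
    """True if `candidate` is at least as good as `baseline` on both axes
    (higher reward, lower harm) and strictly better on at least one."""
    reward_c, harm_c = candidate
    reward_b, harm_b = baseline
    no_worse = reward_c >= reward_b and harm_c <= harm_b
    strictly_better = reward_c > reward_b or harm_c < harm_b
    return no_worse and strictly_better

baseline_agent = (0.62, 0.41)   # (normalized reward, normalized harm) -- hypothetical
steered_agent = (0.64, 0.28)    # e.g. an LM agent steered toward less harmful behavior

print(pareto_improves(steered_agent, baseline_agent))   # True: better on both axes
```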

    Discovery of selective Toxoplasma gondii dihydrofolate reductase inhibitors for the treatment of toxoplasmosis

    A safer treatment for toxoplasmosis would be achieved by improving the selectivity and potency of dihydrofolate reductase (DHFR) inhibitors, such as pyrimethamine (1), for Toxoplasma gondii DHFR (TgDHFR) relative to human DHFR (hDHFR). We previously reported on the identification of meta-biphenyl analog 2, designed by in silico modeling of key differences in the binding pocket between TgDHFR and hDHFR. Compound 2 improves TgDHFR selectivity 6.6-fold and potency 16-fold relative to 1. Here, we report on the optimization and structure-activity relationships of this arylpiperazine series leading to the discovery of 5-(4-(3-(2-methoxypyrimidin-5-yl)phenyl)piperazin-1-yl)pyrimidine-2,4-diamine 3. Compound 3 has a TgDHFR I

    Joint Identification of Genetic Variants for Physical Activity in Korean Population

    Abstract: There has been limited research on genome-wide association with physical activity (PA). This study ascertained genetic associations between PA and 344,893 single nucleotide polymorphism (SNP) markers in 8842 Korean samples. PA data were obtained from a validated questionnaire that included information on PA intensity and duration. Metabolic equivalents of task (METs) were calculated to estimate the total daily PA level for each individual. In addition to single- and multiple-SNP association tests, a pathway enrichment analysis was performed to identify the biological significance of SNP markers. Although no SNP reached the genome-wide significance level in single-SNP association tests, 59 genetic variants mapped to 76 genes were identified via a multiple-SNP approach using a bootstrap selection stability measure. Pathway analysis for these 59 variants showed that maturity onset diabetes of the young (MODY) was enriched. Joint identification of SNPs could enable the identification of multiple SNPs with good predictive power for PA and a pathway enriched for PA.
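
    For readers unfamiliar with the building block here, the sketch below runs a generic single-SNP association test with additive genotype coding against a continuous phenotype on synthetic data; it illustrates the idea only and is not the study's pipeline or its bootstrap stability measure.

```python
# Hedged sketch: single-SNP association test with additive coding (0/1/2 minor-
# allele counts) against a continuous phenotype such as a MET-based PA score.
# Synthetic data; not the study's actual analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 8842                                     # sample size matching the abstract
maf = 0.3                                    # hypothetical minor allele frequency
genotype = rng.binomial(2, maf, size=n)      # additive coding: 0, 1, or 2 copies
phenotype = 0.05 * genotype + rng.normal(0.0, 1.0, size=n)   # weak simulated effect

result = stats.linregress(genotype, phenotype)
print(f"beta = {result.slope:.3f}, p = {result.pvalue:.2e}")
# A genome-wide scan repeats this per SNP and compares p-values against a
# genome-wide significance threshold (commonly 5e-8).
```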

    Designing a Planetary Health Watch: A System for Integrated Monitoring of the Health Effects of, and Responses to, Environmental Change

    In the new geological epoch of the Anthropocene, impacts of human activity on the Earth's systems may pose major risks to human health. We propose the development of a Planetary Health Watch (PHW) system for integrated monitoring of the health effects of, and responses to, global environmental changes. The PHW system will harness new capabilities emerging from the digital revolution to motivate and enable effective responses to threats posed by the transgression of planetary boundaries. It will build on existing monitoring initiatives as a system aimed at integrated monitoring of environmental change, health effects, and intermediating factors, along with the drivers of change and policy responses to protect health.
    In July 2019, we held a two-day engagement workshop at the Wellcome Trust in London, UK. We convened 59 experts, representatives of existing monitoring initiatives, and potential users of the system to discuss and make recommendations on key aspects of the design of such a system, particularly its scope, opportunities for building on existing initiatives, target users and use cases, strategies for generating impact, and key communities for engagement.
    The scope of monitoring was defined by a framework integrating eight planetary boundaries (climate change, ocean acidification, atmospheric aerosol loading, novel entities, freshwater use, biogeochemical flows, land system change, and biosphere integrity) with human health outcomes. (Discussion of the ninth boundary, ozone layer depletion, was omitted because the ozone hole is now healing as a result of the implementation of the Montreal Protocol.) As the initial cross-cutting areas for the prototype development of PHW, we selected cities, food systems, and links between land use change and human health (emerging diseases and air pollution) to act as foci for the discussion.
    To build on existing monitoring efforts, PHW will pursue three levels of integration: (1) across health and environmental monitoring, (2) across top-down and bottom-up monitoring approaches, and (3) between advancing knowledge and action that can be taken to protect planetary health. Existing data platforms, large-scale initiatives, and networks such as the Multi-Country Multi-City Collaborative Research Network, the INDEPTH network of health and demographic surveillance sites in low- and middle-income countries, Resource Watch, the Global Burden of Disease project, C40, the Global Covenant of Mayors, the Sustainable Development Solutions Network, and many others will be essential to this process.
    PHW will aim to add to:
    - the evidence on the emerging risks to human health and the most effective solutions, by engaging researchers as a key user community;
    - awareness of the evidence on impacts and solutions, by investing in an outreach strategy that includes clear messages, narratives, and strategically selected messengers;
    - action to protect planetary health, by motivating and enabling decision-makers who influence relevant policies and their implementation across sectors to incorporate planetary health as a priority.
    The strategies for generating impact will include generating clear messages that combine data with narratives compelling to individual users, proposing solutions, and engaging with those in power to implement them. Scientific oversight and inclusive governance processes will ensure the system's credibility and legitimacy. The next steps involve engagement with key stakeholders, facilitation of new partnerships, and development of a long-term funding strategy.

    Representation Engineering: A Top-Down Approach to AI Transparency

    In this paper, we identify and characterize the emerging area of representation engineering (RepE), an approach to enhancing the transparency of AI systems that draws on insights from cognitive neuroscience. RepE places population-level representations, rather than neurons or circuits, at the center of analysis, equipping us with novel methods for monitoring and manipulating high-level cognitive phenomena in deep neural networks (DNNs). We provide baselines and an initial analysis of RepE techniques, showing that they offer simple yet effective solutions for improving our understanding and control of large language models. We showcase how these methods can provide traction on a wide range of safety-relevant problems, including honesty, harmlessness, power-seeking, and more, demonstrating the promise of top-down transparency research. We hope that this work catalyzes further exploration of RepE and fosters advancements in the transparency and safety of AI systems. Comment: Code is available at https://github.com/andyzoujm/representation-engineering
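
    One simple population-level "reading" technique in this spirit is to take the difference of mean hidden activations between two contrasting prompt sets and project new activations onto that direction. The sketch below uses synthetic activations rather than a real LM, so it illustrates the idea only; the linked repository contains the authors' actual methods.

```python
# Hedged sketch: a difference-of-means reading direction over population-level
# activations. Synthetic vectors stand in for LM hidden states; see the linked
# repository for the authors' actual techniques.
import numpy as np

rng = np.random.default_rng(0)
d = 512                                        # assumed hidden-state dimensionality

# Hypothetical hidden states collected under two contrasting conditions,
# e.g. prompts eliciting honest vs. dishonest completions.
honest = rng.normal(size=(200, d)) + 0.3
dishonest = rng.normal(size=(200, d)) - 0.3

direction = honest.mean(axis=0) - dishonest.mean(axis=0)
direction /= np.linalg.norm(direction)

# Reading: the sign of the projection suggests which condition a new activation
# resembles; control methods add or subtract a multiple of the same direction.
new_activation = rng.normal(size=d) + 0.3
print(f"projection score: {new_activation @ direction:.2f}   (positive ~ honest-like)")
```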

    Aging Hematopoietic Stem Cells Decline in Function and Exhibit Epigenetic Dysregulation

    Age-related defects in stem cells can limit proper tissue maintenance and hence contribute to a shortened lifespan. Using highly purified hematopoietic stem cells from mice aged 2 to 21 months, we demonstrate a deficit in function yet an increase in stem cell number with advancing age. Expression analysis of more than 14,000 genes identified 1,500 that were age-induced and 1,600 that were age-repressed. Genes associated with the stress response, inflammation, and protein aggregation dominated the up-regulated expression profile, while the down-regulated profile was marked by genes involved in the preservation of genomic integrity and chromatin remodeling. Many chromosomal regions showed coordinate loss of transcriptional regulation; an overall increase in transcriptional activity with age and inappropriate expression of genes normally regulated by epigenetic mechanisms were also observed. Hematopoietic stem cells from early-aging mice expressing a mutant p53 allele reveal that aging of stem cells can be uncoupled from aging at an organismal level. These studies show that hematopoietic stem cells are not protected from aging. Instead, loss of epigenetic regulation at the chromatin level may drive both functional attenuation of cells and other manifestations of aging, including the increased propensity for neoplastic transformation.