
    Combinatorial Aspects of Excedances and the Frobenius Complex

    In this dissertation we study the excedance permutation statistic. We start by extending the classical excedance statistic of the symmetric group to the affine symmetric group S̃n and determine the generating function of its distribution. The proof involves enumerating lattice points in a skew version of the root polytope of type A. Next we study the excedance set statistic on the symmetric group by defining a related algebra, which we call the excedance algebra. A combinatorial interpretation of expansions from this algebra is provided. The second half of this dissertation deals with the topology of the Frobenius complex, that is, the order complex of a poset whose definition was motivated by the classical Frobenius problem. We determine the homotopy type of the Frobenius complex in certain cases using discrete Morse theory. We end with an enumeration of Q-factorial posets. Open questions and directions for future research are located at the end of each chapter.
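
    As a quick illustration of the classical statistic being generalized here: a permutation has an excedance at position i when it maps i to something larger. A minimal Python sketch (the function name is ours) that recovers the Eulerian distribution 1, 4, 1 on S3:

    ```python
    from collections import Counter
    from itertools import permutations

    def excedance_set(perm):
        """Excedance set of a permutation in one-line notation (1-based):
        position i is an excedance when perm(i) > i."""
        return {i for i, v in enumerate(perm, start=1) if v > i}

    # Distribution of the excedance statistic over S_3; the counts match
    # the Eulerian numbers 1, 4, 1.
    print(Counter(len(excedance_set(p)) for p in permutations((1, 2, 3))))
    # Counter({1: 4, 0: 1, 2: 1})
    ```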

    Model-Twin Randomization (MoTR): A Monte Carlo Method for Estimating the Within-Individual Average Treatment Effect Using Wearable Sensors

    Temporally dense single-person "small data" have become widely available thanks to mobile apps and wearable sensors. Many caregivers and self-trackers want to use these data to help a specific person change their behavior to achieve desired health outcomes. Ideally, this involves discerning possible causes from correlations using that person's own observational time series data. In this paper, we estimate within-individual average treatment effects of physical activity on sleep duration, and vice versa. We introduce the model twin randomization (MoTR; "motor") method for analyzing an individual's intensive longitudinal data. Formally, MoTR is an application of the g-formula (i.e., standardization, back-door adjustment) under serial interference. It estimates stable recurring effects, as is done in n-of-1 trials and single case experimental designs. We compare our approach to standard methods (with possible confounding) to show how to use causal inference to make better personalized recommendations for health behavior change, and analyze 222 days of Fitbit sleep and steps data for one of the authors. (Comment: 27 pages, 2 figures, 5 tables; appendix included.)
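
    The abstract describes MoTR as the g-formula (standardization) applied to one person's time series. A minimal sketch of that idea on simulated data, assuming a binarized "active day" treatment, a lag-1 adjustment set, and a linear outcome model; all of these are our simplifications, not the paper's implementation:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Hypothetical n-of-1 series: a binary activity treatment and sleep
    # minutes as outcome, 222 days as in the paper's Fitbit analysis.
    df = pd.DataFrame({
        "active": rng.binomial(1, 0.5, 222),
        "sleep": rng.normal(420, 45, 222),
    })
    df["active_lag1"] = df["active"].shift(1)
    df["sleep_lag1"] = df["sleep"].shift(1)
    df = df.dropna()

    # Outcome model for the g-formula: sleep given today's activity and
    # yesterday's history (a stand-in for richer MoTR models).
    X = df[["active", "active_lag1", "sleep_lag1"]]
    model = LinearRegression().fit(X, df["sleep"])

    # Standardization: set treatment to 1 and to 0 on every day, keep the
    # observed history, and contrast the averaged predictions.
    X1, X0 = X.copy(), X.copy()
    X1["active"], X0["active"] = 1, 0
    ate = model.predict(X1).mean() - model.predict(X0).mean()
    print(f"within-individual ATE estimate: {ate:.1f} minutes of sleep")
    ```

    With a purely linear model this contrast collapses to the treatment coefficient; the Monte Carlo machinery of MoTR earns its keep once the outcome model and the adjusted-for history are richer than this sketch.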

    Substantive Due Process and the Involuntary Confinement of Sexually Violent Predators

    Over the past fifty years the Supreme Court has been repeatedly asked to address the constitutionality of civil commitment laws, including laws specifically targeting sexually violent predators (SVPs). The SVP laws have withstood challenge, in each instance redeemed by their putative civil purpose. Today, however, roughly 13 years after the first modern SVP law was enacted by the State of Washington, serious concern exists over whether the laws are fulfilling their civil purpose, or are merely serving as vehicles for impermissible preventive detention. This Article addresses this question, in the process exploring the viability of the major remaining constitutional basis to challenge the laws: that police power commitment authority entails a strong, enforceable right to treatment, indeed, effective treatment, designed to achieve material progress toward community re-entry. Our argument takes on the conventional judicial wisdom that discounts the right to treatment for the dangerous mentally disordered. Tying together thirty years of the Court's cases, we establish that without treatment progress, involuntary civil commitment, although perhaps legitimate ab initio, can over time become unconstitutional punishment. To ensure that this principle is honored, we devise an analytic framework that is sensitive to the deference properly owed states in the operation of their SVP regimes, yet fulfills the imperative that courts carefully scrutinize state deprivations of liberty on the cusp of the shadowy civil-criminal divide. The Article begins by discussing Seling v. Young, in which the Supreme Court turned back a right-to-treatment claim for habeas relief. Noting that Young was an ex post facto/double jeopardy case, the Article turns to the Court's substantive due process jurisprudence. We argue that there are limits to the hands-off deference that courts often express regarding treatment claims. The right to treatment is often thought of in monolithic terms. We argue that the constitutional right to treatment has several distinct analytic branches. Treatment must be a purpose of commitment if it is to retain its civil cast. However, for a variety of reasons, the Constitution does not impose a treatment-amenability standard as a pre-condition for police power commitment. Further, the details of treatment are properly left to the discretion of state professionals. Nonetheless, the duration of confinement must be related to its treatment purpose. This principle suggests the invalidity of commitments whose duration is excessive in relation to a treatment purpose. In a series of cases, the Court has insisted that commitments, even those valid ab initio, must end when their constitutional underpinnings disappear. We argue that this principle requires release if effective treatment has not been provided within a reasonable period.

    Identifying Transmission and Risk Factors for COVID-19

    Over the past few months, the rise of a novel coronavirus (COVID-19) has taken the world by storm. The virus has spread rapidly throughout many world communities, leading to its current classification as a pandemic. Many researchers have already published insights and observations on the disease to shed light on this new adversary, while additional research is still ongoing. The US government and medical professionals recently called on the data science community to reveal insights into this virus through the COVID-19 Open Research Dataset (CORD-19), a massive collection of coronavirus-related research. Contained within CORD-19 are insights into risk factors and transmission behavior for COVID-19. This work assembles sections of CORD-19 data into an easily readable format and uses this output to answer specific questions about the disease, such as environmental factors that correlate with increased transmission and other risk factors that may contribute to an individual patient contracting the virus. Additionally, the findings unveiled by this approach are further validated using machine learning algorithms on auxiliary datasets gathered from a few countries impacted by the virus.
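
    The assembly step can be pictured as flattening CORD-19's full-text JSON into keyword-matched, readable lines. A rough sketch, assuming the standard document_parses layout (paper_id, metadata.title, body_text entries with section/text fields); the path and keyword list are illustrative, not the paper's:

    ```python
    import json
    from pathlib import Path

    # Illustrative question-driven keywords (transmission and risk factors).
    KEYWORDS = ("transmission", "risk factor", "humidity", "temperature")

    def readable_sections(path):
        """Yield title | section: paragraph lines matching any keyword."""
        doc = json.loads(Path(path).read_text())
        title = doc["metadata"]["title"]
        for para in doc.get("body_text", []):
            if any(k in para["text"].lower() for k in KEYWORDS):
                yield f"{title} | {para.get('section', '')}: {para['text']}"

    # Hypothetical local copy of the CORD-19 parses.
    for path in Path("cord19/document_parses/pdf_json").glob("*.json"):
        for line in readable_sections(path):
            print(line)
    ```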

    Using Knowledge Management to Leverage Fusion Centers to Assess Regional Threats

    Data fusion centers have been around since the 9/11 terrorist attacks but have been saddled with a reputation for underperformance. Products from fusion centers often do not support the mission of the organizations supporting the centers. This paper proposes using Structured Analytic Technique exercises coupled with applying knowledge management tools and concepts to improve knowledge flow and the relevance and quality of the intelligence products produced.

    Anaerobic Fermentation of Glycerol to Ethanol

    The purpose of this design project is to examine the plant-scale economic viability of the anaerobic fermentation of crude glycerol to ethanol by a hypothetical wild strain of Escherichia coli. The manufactured ethanol, before being denatured with gasoline, has a purity requirement of 99.5% by weight. The capacity of the ethanol plant, as suggested by the problem statement, is 50 MM gallons per year. The process uses crude glycerol (a waste byproduct from the biodiesel industry) as a primary feedstock, so the manufactured ethanol can be considered a "green" or renewable fuel source. The process energy requirements must meet the current energy benchmark of 35,000 BTU/gallon of ethanol, typical for a modern corn-to-ethanol process of this scale according to the design problem statement. This goal is more than met, with an energy usage of 8,000 BTU/gallon of ethanol. The process design consists of three main sections: upstream preparation of the glycerol feed for the E. coli, anaerobic fermentation of this glycerol feed to ethanol and succinic acid (a valuable specialty chemical and a side-product of fermentation), and downstream separation to recover the ethanol and succinic acid. When performing the economic analysis, the plant was assumed to be a grassroots plant located in the Gulf Coast region of the United States. The total capital investment is $108 million, including a working capital of $23.6 million. In the base case scenario, with crude glycerol priced at $0.05/lb, ethanol priced at $2.50/gallon, gasoline priced at $3.15/gallon, and succinic acid priced at $2.00/lb, the net present value (NPV) of the project is $95 MM based on an interest rate of 15%, and the internal rate of return (IRR) is 32.24%. The process profitability improves with increasing crude oil prices and decreasing crude glycerol prices, which we believe are highly likely scenarios based on our market research.
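
    The profitability figures follow from standard discounted-cash-flow definitions. A small sketch using numpy_financial with a hypothetical cash-flow profile; the abstract does not give year-by-year flows, so the output will not reproduce the reported $95 MM / 32.24% exactly:

    ```python
    import numpy_financial as npf

    # Hypothetical profile: the $108 MM total capital investment up front
    # (year 0), then 15 years of assumed net inflows, all in $MM.
    cash_flows = [-108.0] + [33.0] * 15

    # NPV discounts each year's flow at the report's 15% interest rate;
    # IRR is the rate that drives that NPV to zero.
    print(f"NPV at 15%: ${npf.npv(0.15, cash_flows):.1f} MM")
    print(f"IRR: {npf.irr(cash_flows):.2%}")
    ```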

    ASTR 135N.03: Stars, Galaxies, and the Universe Lab


    On the universal structure of human lexical semantics

    How universal is human conceptual structure? The way concepts are organized in the human brain may reflect distinct features of cultural, historical, and environmental background in addition to properties universal to human cognition. Semantics, or meaning expressed through language, provides direct access to the underlying conceptual structure, but meaning is notoriously difficult to measure, let alone parameterize. Here we provide an empirical measure of semantic proximity between concepts using cross-linguistic dictionaries. Across languages carefully selected from a phylogenetically and geographically stratified sample of genera, translations of words reveal cases where a particular language uses a single polysemous word to express concepts represented by distinct words in another. We use the frequency of polysemies linking two concepts as a measure of their semantic proximity, and represent the pattern of such linkages by a weighted network. This network is highly uneven and fragmented: certain concepts are far more prone to polysemy than others, and there emerge naturally interpretable clusters loosely connected to each other. Statistical analysis shows such structural properties are consistent across different language groups, largely independent of geography, environment, and literacy. It is therefore possible to conclude that the conceptual structure connecting the basic vocabulary studied is primarily due to universal features of human cognition and language use. (Comment: Press embargo in place until publication.)
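
    The network construction can be pictured in a few lines: nodes are basic-vocabulary concepts, and an edge's weight counts the languages whose single word covers both endpoints. A toy sketch with invented counts (the paper's weights come from the cross-linguistic dictionary sample):

    ```python
    import networkx as nx

    # Invented polysemy counts for illustration: weight = number of sample
    # languages with one word spanning both concepts.
    polysemy_counts = {
        ("SUN", "DAY"): 14,
        ("MOON", "MONTH"): 11,
        ("SEA", "SALT"): 3,
        ("DAY", "SKY"): 2,
    }

    G = nx.Graph()
    for (u, v), w in polysemy_counts.items():
        G.add_edge(u, v, weight=w)

    # Weighted degree flags concepts especially prone to polysemy; running
    # community detection on G would expose the loosely connected clusters.
    print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1]))
    ```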

    Role of Multiple High-Capacity Irrigation Wells on a Surficial Sand and Gravel Aquifer

    Within McHenry County, IL, the fastest-growing county in Illinois, groundwater supplies 100% of water needs. Concerns over water resources have prompted the investigation of the surficial sand and gravel aquifers of the county. While the eastern portion of the county is urbanizing, the western portion remains devoted to agriculture. High-capacity irrigation wells screened within the surficial sand and gravel aquifer are used for crop production. To assess the impacts of the irrigation wells on the aquifer, a groundwater flow model was developed to examine five different scenarios reflecting drought conditions and increased pumping. Results show that the surficial sand and gravel aquifer is capable of meeting current water demands even if recharge is decreased by 20% and pumping is increased by 20%. The accompanying loss of discharge and increases in pumping nonetheless produce head differences throughout the aquifer.
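
    The scenario design reads as a simple stress matrix on the aquifer's water budget: scale recharge down 20% and pumping up 20%, alone and together. A back-of-the-envelope screen with placeholder baseline values (the actual groundwater-model inputs are not given in the abstract):

    ```python
    # Hypothetical baseline budget terms for the surficial aquifer, in MGD.
    baseline_recharge = 50.0   # recharge reaching the sand and gravel
    baseline_pumping = 12.0    # high-capacity irrigation withdrawals

    scenarios = {
        "baseline":           (1.0, 1.0),
        "drought (-20% rch)": (0.8, 1.0),
        "growth (+20% pump)": (1.0, 1.2),
        "drought + growth":   (0.8, 1.2),
    }

    for name, (rch_mult, pump_mult) in scenarios.items():
        recharge = baseline_recharge * rch_mult
        pumping = baseline_pumping * pump_mult
        # Recharge not captured by wells leaves as discharge to streams,
        # so more pumping shows up as reduced discharge and lower heads.
        print(f"{name:20s} recharge {recharge:5.1f}  pumping {pumping:5.1f}"
              f"  net to discharge {recharge - pumping:5.1f} MGD")
    ```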

    Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models

    Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning. In this work, we show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering. In fact, one can use null prompts, prompts that contain neither task-specific templates nor training examples, and achieve accuracy competitive with manually tuned prompts across a wide range of tasks. While finetuning LMs does introduce new parameters for each downstream task, we show that this memory overhead can be substantially reduced: finetuning only the bias terms can achieve comparable or better accuracy than standard finetuning while only updating 0.1% of the parameters. All in all, we recommend finetuning LMs for few-shot learning as it is more accurate, has relatively stable performance across different prompts, and can be made nearly as efficient as using frozen LMs.
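
    The bias-only recipe the abstract reports (updating roughly 0.1% of the parameters, in the style of BitFit) can be sketched with any Hugging Face encoder; the model choice and learning rate below are illustrative, not the paper's exact setup:

    ```python
    import torch
    from transformers import AutoModelForSequenceClassification

    # Illustrative backbone for a two-class few-shot task.
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2)

    for name, param in model.named_parameters():
        # Freeze everything except bias vectors and the fresh classifier
        # head, which has to be trained from scratch.
        param.requires_grad = "bias" in name or name.startswith("classifier")

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"updating {trainable:,} of {total:,} parameters "
          f"({trainable / total:.2%})")

    # Optimize only the unfrozen parameters.
    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    ```

    The printed fraction runs slightly above 0.1% here because the randomly initialized classification head is counted alongside the biases.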