
    Investing in the Housing Crisis: An exploration of the North Carolina public pension system's relationship with Landmark Partners and the Single Family Rental industry

    North Carolina has a corporate landlord problem. Large investors now own over 40,000 single family homes in North Carolina, squeezing out would-be homebuyers and burdening renters with rising rental costs and prolonged maintenance issues. Some of these corporate rental companies are owned or backed by private equity firms that receive funding from public pension systems, including the North Carolina Retirement System (NCRS). The North Carolina Retirement System has committed more than $3.2 billion to one such private equity firm, Landmark Partners, since 2014. $2.6 billion of these commitments to Landmark have been made since Dale Folwell became State Treasurer and took over responsibility for the pension fund in 2017. No other pension fund has invested more than $500 million in Landmark during the 2017-2022 time period. This matters because Landmark is a major investor in Progress Residential, the largest single family rental company in the U.S. with over 7,700 homes in North Carolina.

    Progress for Who?: Progress Residential Preys on Renters as it Buys Up Homes in Tennessee and the U.S. South

    Recent headlines have called attention to the expansion of corporate investors in the single-family rental home industry. Corporate landlords' growing acquisition of homes is particularly high in cities throughout the U.S. South, where a dire lack of renter protections has abetted rapid gentrification. In this context, the National Rental Home Council (NRHC), a real estate industry group headed by the largest single-family rental (SFR) landlords to advance their interests, is holding its national conference in Nashville, Tennessee this April 16-19, 2023. Citing these companies' exploitative business practices, renters have repeatedly demanded that the NRHC, and the corporate landlords that lead it, adopt tenant protections in the homes they own and manage. Tennessee has suffered first-hand the harms that can come from the proliferation of corporate-owned rental homes, and Nashville is a key target for the largest predatory landlords. Renters in corporate-owned properties have reported unfair rent hikes, shoddy maintenance, excessive fees, and more. Renters are organizing against evictions, as well as for limits on arbitrary rent increases and the right to bargain collectively over living conditions.

    Transformers Learn Shortcuts to Automata

    Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine. However, Transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. This raises the question: what solutions are learned by these shallow and non-recurrent models? We find that a low-depth Transformer can represent the computations of any finite-state automaton (thus, any bounded-memory algorithm), by hierarchically reparameterizing its recurrent dynamics. Our theoretical results characterize shortcut solutions, whereby a Transformer with o(T) layers can exactly replicate the computation of an automaton on an input sequence of length T. We find that polynomial-sized O(log T)-depth solutions always exist; furthermore, O(1)-depth simulators are surprisingly common, and can be understood using tools from Krohn-Rhodes theory and circuit complexity. Empirically, we perform synthetic experiments by training Transformers to simulate a wide variety of automata, and show that shortcut solutions can be learned via standard training. We further investigate the brittleness of these solutions and propose potential mitigations.
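
    A minimal sketch of why such shortcuts can exist, separate from the paper's Transformer construction: automaton state updates compose associatively, so the state after every prefix of a length-T input can be computed with an O(log T)-round parallel scan over transition tables rather than T sequential steps. The two-state parity automaton and the function names below are illustrative assumptions.

    # Sketch: compute every prefix state of a finite automaton in O(log T)
    # rounds of table composition (a parallel prefix scan) rather than T
    # sequential steps. Illustrative only; not the paper's construction.

    def compose(f, g):
        """Apply transition table f, then g (tables map state -> next state)."""
        return tuple(g[f[s]] for s in range(len(f)))

    def prefix_states(tables, start=0):
        """Hillis-Steele inclusive scan over the (associative) compose operator."""
        prefix, step = list(tables), 1
        while step < len(prefix):
            nxt = list(prefix)
            for i in range(step, len(prefix)):
                nxt[i] = compose(prefix[i - step], prefix[i])
            prefix, step = nxt, 2 * step
        return [p[start] for p in prefix]

    # Hypothetical example: a 2-state parity automaton over inputs {0, 1}.
    TRANSITIONS = {0: (0, 1), 1: (1, 0)}   # input 0 keeps the state, input 1 flips it
    inputs = [1, 0, 1, 1, 0, 1, 0, 0]
    print(prefix_states([TRANSITIONS[x] for x in inputs]))  # running parity of the inputs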

    Exposing Attention Glitches with Flip-Flop Language Modeling

    Why do large language models sometimes output factual inaccuracies and exhibit erroneous reasoning? The brittleness of these models, particularly when executing long chains of reasoning, currently seems to be an inevitable price to pay for their advanced capabilities of coherently synthesizing knowledge, pragmatics, and abstract thought. Towards making sense of this fundamentally unsolved problem, this work identifies and analyzes the phenomenon of attention glitches, in which the Transformer architecture's inductive biases intermittently fail to capture robust reasoning. To isolate the issue, we introduce flip-flop language modeling (FFLM), a parametric family of synthetic benchmarks designed to probe the extrapolative behavior of neural language models. This simple generative task requires a model to copy binary symbols over long-range dependencies, ignoring the tokens in between. We find that Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of which we can eliminate using various regularization techniques. Our preliminary mechanistic analyses show why the remaining errors may be very difficult to diagnose and resolve. We hypothesize that attention glitches account for (some of) the closed-domain hallucinations in natural LLMs. Comment: v2: NeurIPS 2023 camera-ready + data release.
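
    To make the task concrete, here is a hedged sketch of a flip-flop-style sequence generator; the token names, argument format, and sampling probabilities are assumptions for illustration and need not match the released benchmark. A model reading such sequences must emit the most recently written bit after every "read" instruction, ignoring the distractor bits introduced by "ignore".

    import random

    # Illustrative flip-flop data generator (assumed vocabulary: "write",
    # "read", "ignore", each followed by a bit). The correct continuation
    # of every "read" token is the bit stored by the latest "write".

    def flip_flop_sequence(length, p_read=0.1, p_write=0.1, seed=None):
        rng = random.Random(seed)
        memory = str(rng.randint(0, 1))
        tokens = ["write", memory]              # first instruction sets the memory
        while len(tokens) < length:
            u = rng.random()
            if u < p_read:
                tokens += ["read", memory]      # target: copy the stored bit
            elif u < p_read + p_write:
                memory = str(rng.randint(0, 1))
                tokens += ["write", memory]
            else:
                tokens += ["ignore", str(rng.randint(0, 1))]  # distractor bit
        return tokens

    print(" ".join(flip_flop_sequence(24, seed=0)))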

    Neural Active Learning on Heteroskedastic Distributions

    Models that can actively seek out the best quality training data hold the promise of more accurate, adaptable, and efficient machine learning. State-of-the-art active learning techniques tend to prefer examples that are the most difficult to classify. While this works well on homogeneous datasets, we find that it can lead to catastrophic failures when performed on multiple distributions with different degrees of label noise or heteroskedasticity. These active learning algorithms strongly prefer to draw from the distribution with more noise, even if their examples have no informative structure (such as solid color images with random labels). To this end, we demonstrate the catastrophic failure of these active learning algorithms on heteroskedastic distributions and propose a fine-tuning-based approach to mitigate these failures. Further, we propose a new algorithm that incorporates a model difference scoring function for each data point to filter out the noisy examples and sample clean examples that maximize accuracy, outperforming the existing active learning techniques on the heteroskedastic datasets. We hope these observations and techniques are immediately helpful to practitioners and can help to challenge common assumptions in the design of active learning algorithms.
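
    The filtering idea described above can be sketched as follows; this is a hypothetical rendering of a model-difference score, not the paper's exact algorithm, and the function names, thresholds, and toy data are assumptions. Points on which two independently trained models disagree most are treated as likely label noise and excluded, and the acquisition batch is then drawn from the most uncertain of the remaining pool.

    import numpy as np

    # Hedged sketch of active learning with a model-difference filter:
    # large disagreement between two models' predicted class probabilities
    # flags a pool example as probable label noise; the batch is then chosen
    # by uncertainty among the examples that pass the filter.

    def select_batch(probs_a, probs_b, batch_size, noise_quantile=0.8):
        """probs_a, probs_b: (N, C) class probabilities from two models."""
        difference = np.abs(probs_a - probs_b).sum(axis=1)      # per-example disagreement
        keep = difference <= np.quantile(difference, noise_quantile)

        uncertainty = 1.0 - probs_a.max(axis=1)                 # least-confident scoring
        uncertainty[~keep] = -np.inf                            # never pick suspected noise
        return np.argsort(-uncertainty)[:batch_size]

    # Toy usage with random "predictions", just to show the call shape.
    rng = np.random.default_rng(0)
    a = rng.dirichlet(np.ones(10), size=1000)
    b = rng.dirichlet(np.ones(10), size=1000)
    print(select_batch(a, b, batch_size=32))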