
    A Statistical Analysis of Log-Periodic Precursors to Financial Crashes

    Motivated by the hypothesis that financial crashes are macroscopic examples of critical phenomena associated with a discrete scaling symmetry, we reconsider the evidence for log-periodic precursors to financial crashes and test the prediction that log-periodic oscillations in a financial index are embedded in the mean function of this index. In particular, we examine the first differences of the logarithm of the S&P 500 prior to the October 1987 crash and find that the log-periodic component of this time series is not statistically significant if we exclude the last year of data before the crash. We also examine the claim that two separate mechanisms are responsible for drawdowns in the S&P 500 and find the evidence supporting this claim unconvincing. Comment: 26 pages, 10 figures; figures are incorporated into the paper; some changes to the text have been made.
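    For context, the log-periodicity being tested is usually written as the log-periodic power-law (LPPL) form from the critical-phenomena literature; a representative parameterization (illustrative notation, not necessarily this paper's) is:

```latex
% Log-periodic power-law (LPPL) form commonly fit to pre-crash prices.
% p(t): index price; t_c: critical (crash) time; beta: power-law exponent;
% omega: angular log-frequency; phi: phase; A, B, C: amplitudes.
\[
  \ln p(t) \;=\; A + B\,(t_c - t)^{\beta}
  \Bigl[\, 1 + C \cos\bigl(\omega \ln(t_c - t) + \phi\bigr) \Bigr]
\]
```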

    Automated census record linking: a machine learning approach

    Thanks to the availability of new historical census sources and advances in record-linking technology, economic historians are becoming big-data genealogists. Linking individuals over time and between databases has opened up new avenues for research into intergenerational mobility, assimilation, discrimination, and the returns to education. To take advantage of these new research opportunities, scholars need to be able to accurately and efficiently match historical records and produce an unbiased dataset of links for downstream analysis. I detail a standard and transparent census-matching technique for constructing linked samples that can be replicated across a variety of cases. The procedure applies insights from machine-learning classification and text comparison to the well-known problem of record linkage, with attention to the costs and benefits specific to working with historical data. I begin by extracting a subset of possible matches for each record, and then use training data to tune a matching algorithm that attempts to minimize both false positives and false negatives, taking into account the inherent noise in historical records. To make the procedure precise, I trace its application to an example from my own work, linking children from the 1915 Iowa State Census to their adult selves in the 1940 Federal Census. In addition, I provide guidance on a number of practical questions, including how large the training data needs to be relative to the sample. This research has been supported by the NSF-IGERT Multidisciplinary Program in Inequality & Social Policy at Harvard University (Grant No. 0333403).
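    A minimal sketch of the two-stage pipeline described (blocking to a candidate subset, then scoring with tuned thresholds); field names, weights, and thresholds are illustrative, not the author's actual procedure:

```python
# Illustrative two-stage record-linkage sketch (not the paper's actual code).
# Stage 1: blocking - keep only plausible candidate pairs.
# Stage 2: score candidates with string-similarity features; thresholds
# would be tuned on training data to trade off false positives/negatives.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def block(record, candidates, max_age_gap=3):
    """Keep candidates with the same first initial and a plausible birth year."""
    return [c for c in candidates
            if c["first"][:1].lower() == record["first"][:1].lower()
            and abs(c["birth_year"] - record["birth_year"]) <= max_age_gap]

def score(record, candidate):
    """Weighted similarity over fields; weights here are illustrative."""
    year_sim = 1.0 - min(abs(record["birth_year"] - candidate["birth_year"]), 3) / 3
    return (0.5 * similarity(record["last"], candidate["last"])
            + 0.3 * similarity(record["first"], candidate["first"])
            + 0.2 * year_sim)

def link(record, candidates, accept=0.85, margin=0.05):
    """Accept the best candidate only if it clears a threshold AND beats
    the runner-up by a margin (abstaining guards against false positives)."""
    cands = block(record, candidates)
    if not cands:
        return None
    ranked = sorted(cands, key=lambda c: score(record, c), reverse=True)
    best = score(record, ranked[0])
    if best < accept:
        return None
    if len(ranked) > 1 and best - score(record, ranked[1]) < margin:
        return None  # ambiguous: two near-equal candidates
    return ranked[0]

rec = {"first": "Anna", "last": "Sorensen", "birth_year": 1908}
pool = [{"first": "Anna", "last": "Sorenson", "birth_year": 1909},
        {"first": "Agnes", "last": "Smith", "birth_year": 1907}]
print(link(rec, pool))  # matches the near-identical spelling variant
```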

    On the equality of Hausdorff and box counting dimensions

    By viewing the covers of a fractal as a statistical mechanical system, the exact capacity of a multifractal is computed. The procedure can be extended to any multifractal described by a scaling function to show why the capacity and Hausdorff dimension are expected to be equal. Comment: CYCLER Paper 93mar001. LaTeX file with 3 PostScript figures (needs psfig.sty).
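    For reference, the two notions of dimension whose equality is at issue, in standard textbook notation (not taken from the paper):

```latex
% Box-counting (capacity) dimension: N(eps) is the number of
% eps-sized boxes needed to cover the set F.
\[
  \dim_B F = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}
\]
% Hausdorff dimension: the critical exponent at which the s-dimensional
% Hausdorff measure of F drops from infinity to zero.
\[
  \dim_H F = \inf\{\, s \ge 0 : \mathcal{H}^s(F) = 0 \,\},
  \qquad \dim_H F \le \dim_B F \text{ in general.}
\]
```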

    Phase shift in experimental trajectory scaling functions

    For one-dimensional maps, the trajectory scaling function is invariant under coordinate transformations and can be used to compute any ergodic average. It is the most stringent test between theory and experiment, but so far it has proven difficult to extract from experimental data. It is shown that the main difficulty is a dephasing of the experimental orbit, which can be corrected by reconstructing the dynamics from several time series. From the reconstructed dynamics the scaling function can be accurately extracted. Comment: CYCLER Paper 93mar008. LaTeX, LAUR-92-3053. Replaced with a version with all figures.
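    The reconstruction step alluded to is typically a delay-coordinate embedding of the scalar time series; a minimal sketch with illustrative parameters (the paper's actual procedure combines several time series):

```python
# Illustrative delay-coordinate (Takens-style) reconstruction of dynamics
# from a scalar time series; dim and lag here are illustrative choices.
import numpy as np

def delay_embed(x: np.ndarray, dim: int = 2, lag: int = 1) -> np.ndarray:
    """Return the delay-embedded trajectory: rows are
    (x[t], x[t+lag], ..., x[t+(dim-1)*lag])."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Example: reconstruct the logistic map in a chaotic regime.
r, x = 3.99, 0.4
series = []
for _ in range(1000):
    x = r * x * (1 - x)
    series.append(x)

traj = delay_embed(np.array(series), dim=2, lag=1)
print(traj.shape)  # (999, 2): points (x_t, x_{t+1}) tracing the map's graph
```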

    Is There a Real-Estate Bubble in the US?

    We analyze the quarterly average sale prices of new houses sold in the USA as a whole; in the Northeast, Midwest, South, and West; and in each of the 50 states and the District of Columbia, to determine whether prices have grown faster than exponentially, which we take as the diagnostic of a bubble. We find that 22 states (mostly in the Northeast and West) exhibit clear-cut signatures of a fast-growing bubble. From the analysis of the S&P 500 Home Index, we conclude that the turning point of the bubble will probably occur around mid-2006. Comment: 7 elsart LaTeX pages + 9 eps figures.
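    The faster-than-exponential diagnostic can be made concrete: exponential growth makes log-price linear in time, so convexity of log-price, e.g. a power-law run-up toward a finite critical time, signals a bubble. A representative specification (our notation, not necessarily the authors'):

```latex
% Exponential growth: log-price linear in t (no bubble signature).
\[ \ln p(t) = a + b\,t \]
% Faster-than-exponential growth: log-price convex in t, here a power-law
% approach to a finite critical time t_c (illustrative form).
\[ \ln p(t) = A - B\,(t_c - t)^{\beta}, \qquad B > 0,\; 0 < \beta < 1,\; t < t_c \]
```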

    Seeking Anonymity in an Internet Panopticon

    Obtaining and maintaining anonymity on the Internet is challenging. The state of the art in deployed tools, such as Tor, uses onion routing (OR) to relay encrypted connections on a detour passing through randomly chosen relays scattered around the Internet. Unfortunately, OR is known to be vulnerable, at least in principle, to several classes of attacks for which no solution is known or believed to be forthcoming soon. Current approaches to anonymity also appear unable to offer accurate, principled measurement of the level or quality of anonymity a user might obtain. To address these shortcomings, we offer a high-level view of the Dissent project, the first systematic effort to build a practical anonymity system based purely on foundations that offer measurable and formally provable anonymity properties. Dissent builds on two key pre-existing primitives, verifiable shuffles and dining cryptographers, but for the first time shows how to scale such techniques to offer measurable anonymity guarantees to thousands of participants. Further, Dissent represents the first anonymity system designed from the ground up to incorporate some systematic countermeasure for each of the major classes of known vulnerabilities in existing approaches, including global traffic analysis, active attacks, and intersection attacks. Finally, because no anonymity protocol alone can address risks such as software exploits or accidental self-identification, we introduce WiNon, an experimental operating system architecture to harden the uses of anonymity tools such as Tor and Dissent against such attacks. Comment: 8 pages, 10 figures.
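    Of the two primitives named here, dining cryptographers (a DC-net) fits in a few lines; a toy single-round sketch (no accountability or disruption handling, which is where Dissent's real work lies):

```python
# Toy one-round DC-net: each pair of participants shares a random pad;
# everyone announces the XOR of their pads, the sender also XORs in the
# message, and the XOR of all announcements reveals the message without
# revealing who sent it. (Dissent adds verifiable shuffles, accountability,
# and scaling on top of this primitive.)
import secrets

def dc_net_round(n: int, sender: int, message: int, bits: int = 32) -> int:
    # Pairwise shared secrets: pad[i][j] == pad[j][i].
    pad = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            pad[i][j] = pad[j][i] = secrets.randbits(bits)

    announcements = []
    for i in range(n):
        a = 0
        for j in range(n):
            if j != i:
                a ^= pad[i][j]
        if i == sender:
            a ^= message  # only the sender folds the message in
        announcements.append(a)

    combined = 0
    for a in announcements:
        combined ^= a  # pads cancel pairwise, leaving only the message
    return combined

assert dc_net_round(n=5, sender=2, message=0xCAFE) == 0xCAFE
```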

    The vicious cycle: fundraising and perceived visibility in US presidential primaries

    Scholars of presidential primaries have long posited a dynamic positive feedback loop between fundraising and electoral success. Yet existing work on both directions of this feedback remains inconclusive and is often explicitly cross-sectional, ignoring the dynamic aspect of the hypothesis. Pairing high-frequency FEC data on contributions and expenditures with Iowa Electronic Markets data on perceived probability of victory, we examine the bidirectional feedback between contributions and viability. We find robust, significant positive feedback in both directions. This might suggest multiple equilibria: a candidate initially anointed front-runner could sustain that status solely through the fundraising advantage it confers, despite possessing no advantage in quality. However, simulations suggest the feedback loop cannot, by itself, sustain advantage. Given the observed durability of front-runners, it would thus seem that either some other feedback is at work or the process by which the initial front-runner is identified is informative of candidate quality.
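    A toy version of the simulation logic (purely illustrative dynamics and coefficients, not the authors' model): when each variable feeds the other with sub-unit strength, an initial advantage decays rather than self-sustains.

```python
# Toy bidirectional feedback between fundraising (f) and perceived
# viability (v). All coefficients are illustrative, NOT the authors' model.
# With persistence rho and cross-feedback a, b chosen so the system's
# spectral radius is below 1, an initial advantage decays to zero.
rho, a, b = 0.5, 0.3, 0.3
f, v = 1.0, 0.0          # candidate starts with a pure fundraising advantage
for t in range(12):
    f, v = rho * f + a * v, rho * v + b * f   # simultaneous update
    print(f"t={t:2d}  fundraising={f:.4f}  viability={v:.4f}")
# Both series shrink geometrically: positive feedback alone cannot sustain
# front-runner status; some outside input (e.g., quality signals) is needed.
```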

    The return to education in the mid-20th century: evidence from twins

    What was the return to education in the US at mid-century? In 1940, the correlation between years of schooling and earnings was relatively low. In this paper, we estimate the causal return to schooling in 1940, constructing a large linked sample of twin brothers to account for differences in unobserved ability and family background. We find that each additional year of schooling increased labor earnings by approximately 4%, about half the return found for more recent cohorts in contemporary twins studies. These returns were evident both within and across occupations and were higher for sons from lower-SES families. First author draft.
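    The within-twin-pair design can be written as a first-difference regression that sweeps out shared family background and ability; standard textbook notation, not necessarily the paper's:

```latex
% Log earnings of twin i in pair j; mu_j absorbs family background and
% shared ability, and is constant within a pair:
\[ \ln w_{ij} = \beta\, S_{ij} + \mu_j + \varepsilon_{ij} \]
% First-differencing within the pair eliminates mu_j and identifies the
% schooling return beta from within-pair schooling differences:
\[ \ln w_{1j} - \ln w_{2j} = \beta\,(S_{1j} - S_{2j}) + (\varepsilon_{1j} - \varepsilon_{2j}) \]
```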

    Selfish Knapsack

    We consider a selfish variant of the knapsack problem. In our version, the items are owned by agents, and each agent can misrepresent the set of items she owns, either by avoiding reporting some of them (understating) or by reporting additional ones that do not exist (overstating). Each agent's objective is to maximize, within the items chosen for inclusion in the knapsack, the total valuation of her own chosen items. The knapsack problem, in this context, seeks to minimize the worst-case approximation ratio for social welfare at equilibrium. We show that a randomized greedy mechanism has attractive strategic properties: in general, it has a correlated price of anarchy of 2 (subject to a mild assumption). For overstating-only agents, it becomes strategyproof; we also provide a matching lower bound of 2 on the (worst-case) approximation ratio attainable by randomized strategyproof mechanisms, and show that no deterministic strategyproof mechanism can provide any constant approximation ratio. We also deal with more specialized environments. For the case of 2 understating-only agents, we provide a randomized strategyproof $\frac{5+4\sqrt{2}}{7} \approx 1.522$-approximate mechanism, and a lower bound of $\frac{5\sqrt{5}-9}{2} \approx 1.09$. When all agents but one are honest, we provide a deterministic strategyproof $\frac{1+\sqrt{5}}{2} \approx 1.618$-approximate mechanism with a matching lower bound. Finally, we consider a model where agents can misreport their items' properties rather than their existence. Specifically, each agent owns a single item, whose value-to-size ratio is publicly known but whose actual value and size are not. We show that an adaptation of the greedy mechanism is strategyproof and 2-approximate, and provide a matching lower bound; we also show that no deterministic strategyproof mechanism can provide a constant approximation ratio.
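    The randomized mechanism itself is not spelled out in the abstract, but the deterministic baseline that greedy knapsack mechanisms build on is the classical density-greedy 2-approximation (take the better of the greedy prefix and the single most valuable item); a sketch of that baseline only, not the paper's strategyproof mechanism:

```python
# Classical greedy 2-approximation for knapsack (the baseline underlying
# greedy mechanisms; NOT the paper's randomized strategyproof mechanism).
def greedy_knapsack(items, capacity):
    """items: list of (value, size). Returns an approximate best subset.
    Taking the better of the greedy-by-density prefix and the single most
    valuable fitting item guarantees at least half the optimal value."""
    by_density = sorted(items, key=lambda vs: vs[0] / vs[1], reverse=True)
    prefix, used = [], 0.0
    for value, size in by_density:
        if used + size <= capacity:
            prefix.append((value, size))
            used += size
    best_single = max((it for it in items if it[1] <= capacity),
                      key=lambda vs: vs[0], default=None)
    prefix_value = sum(v for v, _ in prefix)
    if best_single and best_single[0] > prefix_value:
        return [best_single]
    return prefix

print(greedy_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))
```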