
    What explains high unemployment? The aggregate demand channel

    A drop in aggregate demand driven by shocks to household balance sheets is responsible for a large fraction of the decline in U.S. employment from 2007 to 2009. The aggregate demand channel for unemployment predicts that employment losses in the non-tradable sector are higher in high-leverage U.S. counties that were most severely impacted by the balance sheet shock, while losses in the tradable sector are distributed uniformly across all counties. We find exactly this pattern from 2007 to 2009. Alternative hypotheses for job losses based on uncertainty shocks or structural unemployment related to construction do not explain our results. Using the relation between non-tradable sector job losses and demand shocks, and assuming Cobb-Douglas preferences over tradable and non-tradable goods, we quantify the effect of the aggregate demand channel on total employment. Our estimates suggest that the decline in aggregate demand driven by household balance sheet shocks accounts for almost 4 million of the lost jobs from 2007 to 2009, or 65% of the lost jobs in our data.
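    A minimal sketch of the scaling logic behind that quantification, in notation of our own choosing rather than the paper's: Cobb-Douglas preferences imply constant expenditure shares, so a balance-sheet shock that cuts total spending cuts spending on tradables and non-tradables in the same proportion, and demand-driven non-tradable job losses can be grossed up to an economy-wide figure.

```latex
% Illustrative only; \alpha, E, L_N, L_T are our notation, not the paper's.
U = C_N^{\alpha}\, C_T^{1-\alpha}
\;\Longrightarrow\;
P_N C_N = \alpha E, \qquad P_T C_T = (1-\alpha) E .
% Constant shares: a demand shock cutting total spending E by a fraction \delta
% cuts both sectors' spending by \delta. If sectoral employment falls roughly in
% proportion to sectoral spending, demand-driven losses observed in non-tradables
% (\Delta L_N) gross up to total demand-driven losses as
\Delta L \;\approx\; \Delta L_N \cdot \frac{L_N + L_T}{L_N} .
```

    The paper's actual calculation relies on estimated elasticities of employment with respect to the demand shock; the identity above only conveys the flavor of the grossing-up step.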

    Resolving Debt Overhang: Political Constraints in the Aftermath of Financial Crises

    Debtors bear the brunt of a decline in asset prices associated with financial crises, and policies aimed at partial debt relief may be warranted to boost growth in the midst of crises. Drawing on the US experience during the Great Recession of 2008-09 and historical evidence from a large panel of countries, we explore why the political system may fail to deliver such policies. We find that during the Great Recession creditors were able to use the political system more effectively to protect their interests through bailouts. More generally, we show that countries become more politically polarized and fractionalized following financial crises. This results in legislative stalemate, making it less likely that crises lead to meaningful macroeconomic reforms.

    Adaptive Channel Coding and Modulation Scheme Selection for Achieving High Throughput in Wireless Networks

    Modern wireless communication demands reliable data communication at high throughput under severe channel conditions such as narrowband interference, frequency-selective fading due to multipath, and attenuation of high frequencies. Traditional single-carrier systems address this set of problems with complex, computationally intensive equalization filters. Orthogonal Frequency Division Multiplexing (OFDM) based systems, as opposed to single-carrier systems, are considered the future of wireless communication and are used to achieve high data rates by overcoming severe channel conditions without these complex filters. This paper discusses the problem of adaptive modulation scheme selection in an OFDM-based system over parallel frequency-selective fading channels. An adaptive coding scheme is proposed using Generalized Concatenated Codes (GCC), which have a simple structure and are designed to be well suited to fading channels. GCC are based on binary cyclic codes. The goal of the proposed research is to optimize the throughput of a wireless system. Depending on the quality of the sub-channels, an adaptive modulation selection scheme and code assignment method are proposed. The proposed scheme combats channel impairments better than conventional systems by exploiting individual sub-channel conditions. Results show better performance in terms of higher throughput while minimizing the bit error rate.
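    To make the per-sub-channel adaptation concrete, here is a minimal Python sketch (not the paper's GCC-based scheme) that picks, for each sub-channel SNR, the highest-order constellation whose approximate Gray-coded AWGN bit error rate stays below a target; the SNR values and BER target below are placeholders.

```python
import numpy as np
from scipy.stats import norm


def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return norm.sf(x)


def approx_ber(snr_linear, M):
    """Approximate BER of Gray-coded square M-QAM (BPSK for M=2) in AWGN."""
    if M == 2:
        return qfunc(np.sqrt(2 * snr_linear))
    k = np.log2(M)
    return (4 / k) * (1 - 1 / np.sqrt(M)) * qfunc(np.sqrt(3 * snr_linear / (M - 1)))


def select_modulation(subchannel_snr_db, target_ber=1e-3, orders=(2, 4, 16, 64)):
    """Pick, per sub-channel, the highest-order scheme meeting the BER target."""
    choices = []
    for snr_db in subchannel_snr_db:
        snr = 10 ** (snr_db / 10)
        feasible = [M for M in orders if approx_ber(snr, M) <= target_ber]
        choices.append(max(feasible) if feasible else 0)  # 0 = leave sub-channel unused
    return choices


snrs_db = [4, 9, 15, 22, 28]        # hypothetical per-sub-channel SNRs
print(select_modulation(snrs_db))    # modulation order rises with sub-channel SNR
```

    In the paper the modulation choice is made jointly with the assignment of GCC component codes; the sketch covers only the modulation-selection half of that decision.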

    Household Leverage and the Recession of 2007 to 2009

    We show that household leverage as of 2006 is a powerful statistical predictor of the severity of the 2007 to 2009 recession across U.S. counties. Counties in the U.S. that experienced a large increase in household leverage from 2002 to 2006 showed a sharp relative decline in durable consumption starting in the third quarter of 2006 – a full year before the official beginning of the recession in the fourth quarter of 2007. Similarly, counties with the highest reliance on credit card borrowing reduced durable consumption by significantly more following the financial crisis of the fall of 2008. Overall, our statistical model shows that household leverage growth and dependence on credit card borrowing as of 2006 explain a large fraction of the overall consumer default, house price, unemployment, residential investment, and durable consumption patterns during the recession. Our findings suggest that a focus on household finance may help elucidate the sources of macroeconomic fluctuations.
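    The statistical exercise is essentially a county-level cross-sectional regression of recession-era outcomes on pre-crisis leverage growth. Below is a minimal sketch with simulated placeholder data; the column names, sample size, and coefficients are invented for illustration and are not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# simulated stand-in for a county-level data set (names and magnitudes are placeholders)
rng = np.random.default_rng(0)
df = pd.DataFrame({"leverage_growth_02_06": rng.normal(0.3, 0.1, 500)})
df["durable_consumption_growth_07_09"] = (
    -0.5 * df["leverage_growth_02_06"] + rng.normal(0, 0.05, 500)
)

# cross-sectional OLS: recession-era outcome on pre-crisis household leverage growth
X = sm.add_constant(df[["leverage_growth_02_06"]])
res = sm.OLS(df["durable_consumption_growth_07_09"], X).fit(cov_type="HC1")
print(res.summary())
```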

    House Prices, Home Equity-Based Borrowing, and the U.S. Household Leverage Crisis

    Using individual-level data on homeowner debt and defaults from 1997 to 2008, we show that borrowing against the increase in home equity by existing homeowners is responsible for a significant fraction of both the sharp rise in U.S. household leverage from 2002 to 2006 and the increase in defaults from 2006 to 2008. Employing land topology-based housing supply elasticity as an instrument for house price growth, we estimate that the average homeowner extracts 25 to 30 cents for every dollar increase in home equity. Money extracted from increased home equity is not used to purchase new real estate or pay down high credit card balances, which suggests that borrowed funds may be used for real outlays (i.e., consumption or home improvement). Home equity-based borrowing is stronger for younger households, households with low credit scores, and households with high initial credit card utilization rates. Homeowners in high house price appreciation areas experience a relative decline in default rates from 2002 to 2006 as they borrow heavily against their home equity, but experience very high default rates from 2006 to 2008. Our estimates suggest that home equity-based borrowing is equal to 2.8% of GDP every year from 2002 to 2006, and accounts for at least 34% of new defaults from 2006 to 2008.
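    The instrumental-variables step can be sketched as a manual two-stage least squares, with housing supply elasticity instrumenting house price growth; all variable names and simulated magnitudes below are placeholders, not the paper's data or estimates.

```python
import numpy as np
import statsmodels.api as sm

# placeholder county-level variables; values are simulated for illustration only
rng = np.random.default_rng(0)
n = 500
elasticity = rng.uniform(0.5, 3.0, n)                              # supply elasticity (instrument)
price_growth = 0.6 - 0.15 * elasticity + rng.normal(0, 0.05, n)    # house price growth (endogenous)
equity_extraction = 0.3 * price_growth + rng.normal(0, 0.02, n)    # home equity-based borrowing

# first stage: project the endogenous house price growth on the instrument
first = sm.OLS(price_growth, sm.add_constant(elasticity)).fit()
fitted_price_growth = first.fittedvalues

# second stage: regress borrowing on the instrumented price growth
second = sm.OLS(equity_extraction, sm.add_constant(fitted_price_growth)).fit()
print(second.params)  # slope ~ borrowing per dollar of instrumented house price growth

# note: manual two-stage standard errors are not corrected; a packaged 2SLS
# estimator should be preferred in real work
```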

    Ada-WHIPS: explaining AdaBoost classification with applications in the health sciences

    Background: Computer Aided Diagnostics (CAD) can support medical practitioners to make critical decisions about their patients’ disease conditions. Practitioners require access to the chain of reasoning behind CAD to build trust in the CAD advice and to supplement their own expertise. Yet, CAD systems might be based on black box machine learning models and high dimensional data sources such as electronic health records, magnetic resonance imaging scans, cardiotocograms, etc. These foundations make interpretation and explanation of the CAD advice very challenging. This challenge is recognised throughout the machine learning research community. eXplainable Artificial Intelligence (XAI) is emerging as one of the most important research areas of recent years because it addresses the interpretability and trust concerns of critical decision makers, including those in clinical and medical practice. Methods: In this work, we focus on AdaBoost, a black box model that has been widely adopted in the CAD literature. We address the challenge of explaining AdaBoost classification with a novel algorithm that extracts simple, logical rules from AdaBoost models. Our algorithm, Adaptive-Weighted High Importance Path Snippets (Ada-WHIPS), makes use of AdaBoost’s adaptive classifier weights. Using a novel formulation, Ada-WHIPS uniquely redistributes the weights among the individual decision nodes of the internal decision trees of the AdaBoost model. Then, a simple heuristic search of the weighted nodes finds a single rule that dominates the model’s decision. We compare the explanations generated by our novel approach with the state of the art in an experimental study. We evaluate the derived explanations with simple statistical tests of well-known quality measures, precision and coverage, and a novel measure, stability, that is better suited to the XAI setting. Results: Experiments on 9 CAD-related data sets showed that Ada-WHIPS explanations consistently generalise better (mean coverage 15%-68%) than the state of the art while remaining competitive for specificity (mean precision 80%-99%). A very small trade-off in specificity is shown to guard against over-fitting, which is a known problem in state-of-the-art methods. Conclusions: The experimental results demonstrate the benefits of using our novel algorithm for explaining the CAD AdaBoost classifiers widely found in the literature. Our tightly coupled, AdaBoost-specific approach outperforms model-agnostic explanation methods and should be considered by practitioners looking for an XAI solution for this class of models.
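    As a rough illustration of the underlying idea (attributing AdaBoost's per-classifier weights to the decision nodes a given instance traverses, then ranking the resulting split conditions), here is a simplified Python sketch built on scikit-learn. It is not the authors' weight-redistribution or rule-search procedure, and the dataset is only a stand-in.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier

X, y = load_breast_cancer(return_X_y=True)
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

x = X[0:1]   # single instance to explain
scores = {}  # (feature, threshold, direction) -> accumulated classifier weight
for est, w in zip(ada.estimators_, ada.estimator_weights_):
    tree = est.tree_
    for nid in est.decision_path(x).indices:   # nodes visited by this instance
        f = tree.feature[nid]
        if f < 0:                              # leaf node, no split condition
            continue
        direction = "<=" if x[0, f] <= tree.threshold[nid] else ">"
        key = (int(f), round(float(tree.threshold[nid]), 3), direction)
        scores[key] = scores.get(key, 0.0) + w

# the highest-weight conditions form a crude conjunctive explanation
for (f, t, d), s in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(f"feature_{f} {d} {t}   weight={s:.2f}")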

    CHIRPS: Explaining random forest classification

    Modern machine learning methods typically produce “black box” models that are opaque to interpretation. Yet, demand for them has been increasing in Human-in-the-Loop processes, that is, those processes that require a human agent to verify, approve or reason about the automated decisions before they can be applied. To facilitate this interpretation, we propose Collection of High Importance Random Path Snippets (CHIRPS), a novel algorithm for explaining random forest classification per data instance. CHIRPS extracts a decision path from each tree in the forest that contributes to the majority classification, and then uses frequent pattern mining to identify the most commonly occurring split conditions. Then a simple, conjunctive form rule is constructed where the antecedent terms are derived from the attributes that had the most influence on the classification. This rule is returned alongside estimates of the rule’s precision and coverage on the training data, along with counter-factual details. An experimental study involving nine data sets shows that classification rules returned by CHIRPS have a precision at least as high as the state of the art when evaluated on unseen data (0.91–0.99) and offer a much greater coverage (0.04–0.54). Furthermore, CHIRPS uniquely controls against under- and over-fitting solutions by maximising novel objective functions that are better suited to the local (per instance) explanation setting.
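    In the same spirit, a much-simplified Python sketch of the path-harvesting step for a random forest: collect the split conditions an instance satisfies in the trees that voted with the majority and count how often each (feature, direction) pair occurs. The real algorithm mines frequent patterns over the conditions and optimizes precision, coverage, and objective functions suited to local explanation; this sketch only conveys the shape of the idea, and the dataset is a stand-in.

```python
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)   # binary labels 0/1
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x = X[0:1]
majority = rf.predict(x)[0]

# collect split conditions from trees that agree with the majority vote
conditions = Counter()
for est in rf.estimators_:
    if est.predict(x)[0] != majority:
        continue
    tree = est.tree_
    for nid in est.decision_path(x).indices:
        f = tree.feature[nid]
        if f < 0:                            # skip leaves
            continue
        direction = "<=" if x[0, f] <= tree.threshold[nid] else ">"
        conditions[(int(f), direction)] += 1

# the most frequent (feature, direction) pairs sketch a conjunctive rule antecedent
for (f, d), count in conditions.most_common(5):
    print(f"feature_{f} {d} split-threshold   seen in {count} trees")
```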

    An architectural selection framework for data fusion in sensor platforms

    Thesis (S.M.) by Atif R. Mirza, Massachusetts Institute of Technology, System Design and Management Program, February 2007. Includes bibliographical references (leaves 97-100). The role of data fusion in sensor platforms is becoming increasingly important in various domains of science, technology and business. Fusion pertains to the merging or integration of information towards an enhanced level of awareness. This thesis provides a canonical overview of several major fusion architectures developed from the remote sensing and defense community. Additionally, it provides an assessment of current sensors and their platforms, the influence of reliability measures, and the connection to fusion applications. We present several types of architecture for managing multi-sensor data fusion, specifically as they relate to the tracking-correlation function and blackboard processing representations in knowledge engineering. Object-Process Methods are used to model the information fusion process and supporting systems. Several mathematical techniques are shown to be useful in the fusion of numerical properties, sensor data updating and the implementation of unique detection probabilities. Finally, we discuss the importance of fusion to the concept and operation of the Semantic Web, which promises new ways to exploit the synergy of multi-sensor data platforms. This requires the synthesis of fusion with ontology models for knowledge representation. We discuss the importance of fusion as a reuse process in ontological engineering, and review key lifecycle models in ontology development. The evolutionary approach to ontology development is considered the most useful and adaptable to the complexities of semantic networks. Several potential applications for data fusion are screened and ranked according to the Joint Directors of Laboratories (JDL) process model for information fusion. Based on these predetermined criteria, the case of medical diagnostic imaging was found to offer the most promising applications for fusion, on which future product platforms can be built.
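    One of the simplest numerical fusion techniques of the kind alluded to above is inverse-variance weighting of two independent sensor estimates of the same quantity; the readings and variances below are hypothetical.

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance (minimum-variance) fusion of two independent estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var


# hypothetical example: two range sensors observing the same target distance (metres)
print(fuse(10.2, 0.25, 9.8, 0.04))  # the more precise sensor dominates the fused estimate
```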

    Source Localization Using Virtual Antenna Arrays

    Using antenna arrays for direction of arrival (DoA) estimation and source localization is a well-researched topic. In this paper, we analyze virtual antenna arrays for DoA estimation, where the antenna array geometry is acquired using data from a low-cost inertial measurement unit (IMU). A performance evaluation of an unaided inertial navigation system with respect to individual IMU sensor noise parameters is provided using a state-space-based extended Kalman filter. Second, using Monte Carlo simulations, the DoA estimation performance of random 3-D antenna arrays is evaluated by computing Cramér-Rao lower bound values for a single plane wave source located in the far field of the array. Results in the paper suggest that larger antenna arrays can provide a significant gain in DoA estimation accuracy, but noise in the rate gyroscope measurements proves to be a limiting factor when building virtual antenna arrays for DoA estimation and source localization with single-antenna devices.
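    For intuition on DoA estimation with an arbitrary 3-D array geometry, here is a minimal Python sketch of a conventional (Bartlett) beamformer scan over a randomly placed "virtual" array. It is not the paper's Cramér-Rao bound analysis or IMU error model, and the carrier frequency, array size, and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
c, fc = 3e8, 2.4e9
lam = c / fc

# random 3-D "virtual" array: element positions collected along a trajectory (metres)
pos = rng.uniform(-0.5, 0.5, size=(16, 3))

def steering(az, el):
    """Array response to a far-field plane wave arriving from (azimuth, elevation)."""
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.exp(1j * 2 * np.pi / lam * pos @ u)

# simulate snapshots from a single source at 40 deg azimuth, 10 deg elevation
az0, el0 = np.deg2rad(40), np.deg2rad(10)
snapshots = 200
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((snapshots, 16)) + 1j * rng.standard_normal((snapshots, 16)))
Xmat = np.outer(s, steering(az0, el0)) + noise

R = Xmat.conj().T @ Xmat / snapshots   # sample spatial covariance

# Bartlett beamformer scan over azimuth at the true elevation; peak should sit near 40 deg
az_grid = np.deg2rad(np.arange(-180, 180))
spectrum = [np.real(steering(a, el0).conj() @ R @ steering(a, el0)) for a in az_grid]
print("estimated azimuth (deg):", np.rad2deg(az_grid[int(np.argmax(spectrum))]))
```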