
    DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic Congestion

    Non-recurring traffic congestion is caused by temporary disruptions such as accidents, sports games, and adverse weather. We use data on real-time traffic speed, jam factors (a traffic congestion indicator), and events collected over a year from Nashville, TN to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The trained network is then used to classify the real-time data and identify anomalous operations. Compared with traditional approaches using statistical or machine learning techniques, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. The image data from different timestamps is then fused with event- and time-related data. Next, a crossover operator is used as a data augmentation method to generate training datasets with more balanced classes. Finally, we use receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present analyses of the training time and the inference time separately.
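    The crossover-based augmentation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `crossover_augment` and the per-feature parent-mixing scheme are assumptions about how a genetic-style crossover operator could synthesize extra minority-class samples.

    ```python
    import numpy as np

    def crossover_augment(minority, n_new, rng=None):
        """Generate synthetic minority-class samples by a simple crossover:
        each feature of a new sample is inherited from one of two random
        parents drawn from the existing minority-class pool."""
        rng = np.random.default_rng(rng)
        idx = rng.integers(0, len(minority), size=(n_new, 2))  # parent pairs
        mask = rng.integers(0, 2, size=(n_new, minority.shape[1])).astype(bool)
        parents_a = minority[idx[:, 0]]
        parents_b = minority[idx[:, 1]]
        # Per-feature choice between the two parents
        return np.where(mask, parents_a, parents_b)

    minority = np.array([[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])
    augmented = crossover_augment(minority, n_new=5, rng=0)
    print(augmented.shape)  # (5, 2)
    ```

    Because every generated value is taken verbatim from a real sample, this variant stays inside the observed feature distribution, unlike interpolation-based methods such as SMOTE.
    
    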

    Aerospace medicine and biology. A continuing bibliography with indexes, supplement 195

    This bibliography lists 148 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1979.

    Bio-signal based control in assistive robots: a survey

    Recently, bio-signal based control has been gradually deployed in biomedical devices and assistive robots for improving the quality of life of disabled and elderly people, among which electromyography (EMG) and electroencephalography (EEG) bio-signals are the most widely used. This paper reviews the deployment of these bio-signals in state-of-the-art control systems. The main aim of this paper is to describe the techniques used for (i) collecting EMG and EEG signals and dividing them into segments (data acquisition and data segmentation stage), (ii) extracting the informative data and removing redundant data from the EMG and EEG segments (feature extraction stage), and (iii) identifying categories from the relevant data obtained in the previous stage (classification stage). Furthermore, this paper presents a summary of applications controlled through these two bio-signals and some research challenges in the creation of such control systems. Finally, a brief conclusion is given.
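    The segmentation and feature extraction stages described above can be sketched as follows. This is a minimal illustration under assumed choices, not a method from the survey: the window length, step size, and the two time-domain features (mean absolute value and root mean square, both common in EMG work) are illustrative assumptions.

    ```python
    import numpy as np

    def segment(signal, win, step):
        """Data segmentation stage: split a 1-D bio-signal into
        overlapping fixed-length windows."""
        return np.array([signal[i:i + win]
                         for i in range(0, len(signal) - win + 1, step)])

    def extract_features(windows):
        """Feature extraction stage: compute two common EMG time-domain
        features per window -- mean absolute value (MAV) and RMS."""
        mav = np.mean(np.abs(windows), axis=1)
        rms = np.sqrt(np.mean(windows ** 2, axis=1))
        return np.column_stack([mav, rms])

    rng = np.random.default_rng(0)
    emg = rng.standard_normal(1000)          # stand-in for a recorded EMG trace
    windows = segment(emg, win=200, step=100)
    features = extract_features(windows)
    print(windows.shape, features.shape)  # (9, 200) (9, 2)
    ```

    The resulting feature matrix is what the classification stage would consume, e.g. as input to an SVM or neural network trained on labeled gestures.
    
    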

    The 2007-2008 financial crisis: Is there evidence of disaster myopia?

    Working paper GATE 2011-25. The disaster myopia hypothesis is a theoretical argument that may explain why crises are a recurrent event. Under very optimistic circumstances, investors disregard relevant information concerning the increasing degree of risk. Agents' propensity to underestimate the probability of adverse outcomes from the distant past increases the longer the period since that event occurred, and at some point the subjective probability attached to this event reaches zero. This risky behaviour may contribute to the formation of a bubble that bursts into a crisis. This paper tests whether there is evidence of disaster myopia in the banking sector during the recent episode of financial crisis. Its contribution is twofold. First, it shows that the 2007 financial crisis exhibits disaster myopia in the banking sector. Second, it identifies macro and bank-specific determinants of banks' risk taking since the early 2000s.

    Consumer loans' first payment default detection: a predictive model

    A default loan (also called a nonperforming loan) occurs when there is a failure to meet bank conditions and repayment cannot be made in accordance with the terms of the loan, which has reached its maturity. In this study, we provide a predictive analysis of consumer behavior concerning a loan's first payment default (FPD) using a real dataset of approximately 600,000 consumer loan records from a bank. We use logistic regression, naive Bayes, support vector machine, and random forest on oversampled and undersampled data to build eight different models to predict FPD loans. A two-class random forest using undersampling yielded more than 86% on all performance measures: accuracy, precision, recall, and F1-score. The corresponding scores are as high as 96% for oversampling. However, when tested on the real, imbalanced dataset, the performance of oversampling deteriorates, as generating synthetic data for an extremely imbalanced dataset harms the training procedure of the algorithms. The study also provides an understanding of the reasons for nonperforming loans and helps to manage credit risks more consciously.
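    The undersampling step used in the models above can be sketched as follows. This is a generic illustration of random undersampling, not the study's code: the function name and the toy data are assumptions.

    ```python
    import numpy as np

    def undersample(X, y, rng=None):
        """Random undersampling: shrink every class to the size of the
        smallest class so the training set is balanced."""
        rng = np.random.default_rng(rng)
        classes, counts = np.unique(y, return_counts=True)
        n_min = counts.min()
        keep = np.concatenate([
            rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
            for c in classes
        ])
        return X[keep], y[keep]

    # Toy imbalanced dataset: 8 non-default (0) vs 2 FPD (1) loans
    X = np.arange(20).reshape(10, 2)
    y = np.array([0] * 8 + [1] * 2)
    Xb, yb = undersample(X, y, rng=0)
    print(sorted(yb.tolist()))  # [0, 0, 1, 1]
    ```

    A balanced sample like `Xb, yb` would then be fed to the classifiers (e.g. a random forest); unlike oversampling, no synthetic records are created, which matches the study's observation that undersampling generalizes better on the real imbalanced data.
    
    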