1,938 research outputs found

    Perceptions of Institutional Quality: Evidence of Limited Attention to Higher Education Rankings

    Rankings of colleges and universities provide information about quality and potentially affect where prospective students send applications for admission. We find evidence of limited attention to the popular U.S. News and World Report rankings of America’s Best Colleges. We estimate that applications drop discontinuously by 2%–6% when the rank moves from inside the top 50 to outside the top 50, whereas there is no evidence of a corresponding discontinuous drop in institutional quality. Notably, the rank of 50 corresponds to the first-page cutoff of the printed U.S. News guides. The choice of college is typically a one-time decision with potentially large repercussions, so students’ limited attention to rankings likely represents an irrational bias that negatively affects welfare.
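
    The estimation strategy lends itself to a short illustration. Below is a minimal sketch (not the authors' code) of a discontinuity regression of this kind: log applications are regressed on a smooth function of rank plus an indicator for falling outside the top 50. The data are synthetic, with an assumed 4% drop built in at the cutoff.

    ```python
    # Minimal sketch of a discontinuity regression at the rank-50 cutoff.
    # Synthetic data only; the true analysis uses observed applications.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    rank = np.arange(1, 101)                   # ranks 1..100
    outside_top50 = (rank > 50).astype(float)  # crosses the page-1 cutoff
    # Synthetic log-applications: smooth decline in rank, plus a ~4% drop at 51
    log_apps = 10 - 0.01 * rank - 0.04 * outside_top50 + rng.normal(0, 0.01, 100)

    X = sm.add_constant(np.column_stack([rank, outside_top50]))
    fit = sm.OLS(log_apps, X).fit()
    print(fit.params)  # coefficient on the dummy estimates the discontinuity
    ```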

    Spectral Embedding Norm: Looking Deep into the Spectrum of the Graph Laplacian

    The extraction of clusters from a dataset which includes multiple clusters and a significant background component is a non-trivial task of practical importance. In image analysis this manifests, for example, in anomaly detection and target detection. The traditional spectral clustering algorithm, which relies on the leading K eigenvectors to detect K clusters, fails in such cases. In this paper we propose the spectral embedding norm, which sums the squared values of the first I normalized eigenvectors, where I can be significantly larger than K. We prove that this quantity can be used to separate clusters from the background in unbalanced settings, including extreme cases such as outlier detection. The performance of the algorithm is not sensitive to the choice of I, and we demonstrate its application on synthetic and real-world remote sensing and neuroimaging datasets.
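
    A minimal sketch of the quantity described: for each point, sum the squared entries of the first I eigenvectors of a graph Laplacian. The Gaussian affinity, the bandwidth, and the choice of I below are illustrative assumptions, not the paper's exact construction.

    ```python
    # Spectral embedding norm on a toy dataset: two tight clusters plus a
    # diffuse background. Cluster points should score higher than background.
    import numpy as np

    rng = np.random.default_rng(1)
    cluster = np.vstack([rng.normal(0, 0.1, (30, 2)),
                         rng.normal(3, 0.1, (30, 2))])
    background = rng.uniform(-2, 5, (140, 2))
    X = np.vstack([cluster, background])

    # Gaussian affinity and symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / 0.5)
    np.fill_diagonal(W, 0.0)
    Dinv = 1.0 / np.sqrt(W.sum(1))
    L = np.eye(len(X)) - Dinv[:, None] * W * Dinv[None, :]

    I = 20  # can be much larger than the number of clusters K
    vals, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    norm = (vecs[:, :I] ** 2).sum(axis=1)    # spectral embedding norm per point
    print("mean norm, cluster vs background:", norm[:60].mean(), norm[60:].mean())
    ```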

    Forecasting loss given default with the nearest neighbor algorithm

    In recent years, forecasting Loss Given Default (LGD) has been a major challenge in the field of credit risk management. Practitioners and academic researchers have focused on the study of this particular risk dimension. Despite all the different approaches that have been developed and published so far, LGD forecasting remains an area of intense academic study for which no consensual solution exists in the banking industry. This paper presents an LGD forecasting approach based on a simple and intuitive machine learning algorithm: the nearest neighbor algorithm. In order to evaluate the performance of this non-parametric technique, appropriate evaluation metrics are used to compare it to a more classical parametric model and to the use of historical recovery rates to predict LGD.
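
    As a rough illustration of the approach, the sketch below fits a nearest neighbor regressor to synthetic loan data and benchmarks it against the historical-mean predictor. The three features and the data-generating process are hypothetical, not the thesis's dataset.

    ```python
    # Nearest neighbor regression for LGD vs. the historical-mean benchmark.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(2)
    n = 1000
    # Hypothetical loan features: e.g. loan-to-value, seniority, collateral score
    X = rng.uniform(0, 1, (n, 3))
    lgd = np.clip(0.3 * X[:, 0] - 0.2 * X[:, 1] + 0.1 * X[:, 2]
                  + rng.normal(0, 0.05, n) + 0.3, 0, 1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, lgd, random_state=0)
    knn = KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr)

    print("kNN MAE:            ", mean_absolute_error(y_te, knn.predict(X_te)))
    print("historical-mean MAE:", mean_absolute_error(y_te, np.full_like(y_te, y_tr.mean())))
    ```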

    Instantaneous failure mode remaining useful life estimation using non-uniformly sampled measurements from a reciprocating compressor valve failure

    One of the major targets in industry is the minimisation of downtime and cost and the maximisation of availability and safety, with maintenance considered a key aspect in achieving this objective. The concept of Condition Based Maintenance and Prognostics and Health Management (CBM/PHM), which is founded on the principles of diagnostics and prognostics, is a step in this direction, as it offers a proactive means for scheduling maintenance. Reciprocating compressors are vital components in the oil and gas industry, though their maintenance cost is known to be relatively high. Compressor valves are the weakest part, being the most frequently failing component and accounting for almost half of the maintenance cost. To date, there has been limited information in the open literature on estimating the Remaining Useful Life (RUL) of reciprocating compressors. This paper compares the prognostic performance of several methods (multiple linear regression, polynomial regression, Self-Organising Map (SOM), and K-Nearest Neighbours Regression (KNNR)) in terms of their accuracy and precision, using actual valve failure data captured from an operating industrial compressor. The SOM technique is employed for the first time as a standalone tool for RUL estimation. Furthermore, two variations on estimating RUL based on SOM and KNNR, respectively, are proposed. Finally, an ensemble method that combines the output of all the aforementioned algorithms is proposed and tested. Principal component analysis and statistical process control were used to create T^2 and Q metrics, which are proposed as health indicators reflecting degradation processes and were employed for direct RUL estimation for the first time. It is shown that even when RUL is relatively short due to the instantaneous nature of the failure mode, it is feasible to obtain good RUL estimates using the proposed techniques.
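
    The PCA-based health indicators can be sketched as follows: Hotelling's T^2 is computed on the retained principal components and the Q statistic (squared reconstruction error) on the residual subspace. A minimal sketch, with synthetic stand-ins for the compressor valve measurements:

    ```python
    # PCA-based T^2 and Q health indicators: fit PCA on healthy baseline
    # data, then score later measurements against it.
    import numpy as np

    rng = np.random.default_rng(3)
    healthy = rng.normal(0, 1, (500, 6))   # baseline condition data
    new = rng.normal(0.5, 1.2, (50, 6))    # later, possibly degraded data

    mu = healthy.mean(0)
    U, s, Vt = np.linalg.svd(healthy - mu, full_matrices=False)
    k = 3                                  # retained principal components
    P = Vt[:k].T                           # loadings, shape (6, k)
    var = (s[:k] ** 2) / (len(healthy) - 1)  # variance of each component

    Z = (new - mu) @ P                     # scores of the new data
    T2 = ((Z ** 2) / var).sum(axis=1)      # Hotelling's T^2
    resid = (new - mu) - Z @ P.T
    Q = (resid ** 2).sum(axis=1)           # Q / squared prediction error

    print("mean T^2:", T2.mean(), " mean Q:", Q.mean())
    ```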

    Clustering Techniques in Financial Data Analysis: Applications on the U.S. Financial Market

    In economic and financial analysis, the need frequently arises to classify companies into categories whose delimitation has to be clear and natural. The differentiation of companies by categories is performed according to the economic and financial indicators associated with them. Clustering algorithms are a very powerful tool for identifying classes of companies based on the information provided by their associated indicators. Over the last decade, economic and financial practice has adopted economic value added as a synthetic indicator of a company's entire activity. Our study uses a sample of 106 companies from four different fields of activity; each company is described by five indicators: economic value added, net income, current sales, equity and stock price. Using ascending hierarchical classification methods and partitioning classification methods, namely Ward's method and the k-means algorithm, we identified in the considered sample an information structure consisting of five rating classes.
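
    A minimal sketch of the two named methods, Ward's hierarchical clustering and k-means, applied to standardized financial indicators; the data below are synthetic, not the study's 106-company sample.

    ```python
    # Ward's hierarchical clustering and k-means on standardized indicators.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import AgglomerativeClustering, KMeans

    rng = np.random.default_rng(4)
    # Columns stand in for: EVA, net income, current sales, equity, stock price
    X = rng.lognormal(mean=1.0, sigma=0.8, size=(106, 5))
    Xs = StandardScaler().fit_transform(X)   # put indicators on a common scale

    ward = AgglomerativeClustering(n_clusters=5, linkage="ward").fit(Xs)
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Xs)

    # Compare how often the two methods agree on pairwise co-membership
    same_ward = ward.labels_[:, None] == ward.labels_[None, :]
    same_km = km.labels_[:, None] == km.labels_[None, :]
    print("pairwise agreement:", (same_ward == same_km).mean())
    ```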