    Efficiency of Financial Institutions: International Survey and Directions for Future Research

    This paper surveys 130 studies that apply frontier efficiency analysis to financial institutions in 21 countries. The primary goals are to summarize and critically review empirical estimates of financial institution efficiency and to attempt to arrive at a consensus view. We find that the various efficiency methods do not necessarily yield consistent results and suggest some ways that these methods might be improved to bring about findings that are more consistent, accurate, and useful. Secondary goals are to address the implications of efficiency results for financial institutions in the areas of government policy, research, and managerial performance. Areas needing additional research are also outlined.
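
    The survey is methodological, but one family of frontier methods it covers, data envelopment analysis (DEA), reduces to a small linear program per institution. A minimal sketch, assuming the input-oriented CCR envelopment form and toy data (the function name and example figures are illustrative assumptions, not drawn from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.

    X: (m, n) inputs, Y: (s, n) outputs for n decision-making units.
    Returns theta in (0, 1]; theta = 1 means the unit is on the frontier.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Variables z = [theta, lambda_1..lambda_n]; minimize theta.
    c = np.concatenate([[1.0], np.zeros(n)])
    # Inputs:  sum_j lambda_j * X[i, j] - theta * X[i, o] <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # Outputs: -sum_j lambda_j * Y[r, j] <= -Y[r, o]
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy example: 3 bank-like units, 1 input (cost), 1 output (loans).
X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[1.0, 2.0, 1.0]])
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])  # [1.0, 1.0, 0.667]
```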

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
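
    Most of the surveyed algorithms build on the linear mixing model, in which a pixel spectrum y is approximately Ma for an endmember matrix M and nonnegative abundances a that sum to one. A minimal sketch of fully constrained least-squares unmixing for a single pixel, assuming toy data (the soft sum-to-one weighting follows the well-known Heinz-Chang construction; names and figures here are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(M, y, delta=1e3):
    """Fully constrained least-squares abundances for one pixel.

    M: (bands, endmembers) spectral signatures; y: (bands,) pixel spectrum.
    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones; nnls enforces nonnegativity of the abundances.
    """
    n = M.shape[1]
    M_aug = np.vstack([M, delta * np.ones((1, n))])
    y_aug = np.concatenate([y, [delta]])
    a, _ = nnls(M_aug, y_aug)
    return a

# Toy scene: two endmembers mixed 70/30 with a little observation noise.
rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, size=(50, 2))
y = M @ np.array([0.7, 0.3]) + 0.01 * rng.standard_normal(50)
print(fcls_unmix(M, y))  # approximately [0.7, 0.3]
```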

    Aeronautical Engineering: A special bibliography, supplement 60

    This bibliography lists 284 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1975.

    Advanced flight control system study

    The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^-10 failures per hour of flight and scheduled maintenance at six-month intervals only. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and flight evaluation program are proposed.
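
    To put the 10^-10 criterion in perspective, a back-of-the-envelope calculation for a redundant channel set with independent failures is shown below. The channel failure rate and the 2-of-4 voting threshold are illustrative assumptions, not figures from the study:

```python
from math import comb

def p_system_failure(p_channel_fail, n, k):
    """Probability that fewer than k of n independent channels survive
    one hour, i.e. the per-hour system failure probability."""
    p_ok = 1.0 - p_channel_fail
    p_survive = sum(comb(n, i) * p_ok**i * p_channel_fail**(n - i)
                    for i in range(k, n + 1))
    return 1.0 - p_survive

# Illustrative: 1e-4/hour channels, quadruplex set needing 2 of 4 working.
print(f"{p_system_failure(1e-4, n=4, k=2):.2e}")  # ~4e-12, below the 1e-10 target
```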

    The Dynamic Process of Tax Reform

    The tax reform literature, pioneered by Guesnerie [1977], uses static models but views tax reform as a dynamic process, i.e., as a policy-maker implementing incremental reforms over time. This paper studies tax reform in a dynamic version of the Diamond-Mirrlees-Guesnerie model and focuses on a specific aspect of the dynamic process, namely, the implications for tax reform of agents leaving bequests. The main idea is that a tax reform in one period will affect bequests and therefore endowments, equilibrium, and welfare in subsequent periods. Thus, the process of tax reform cannot be analyzed as a sequence of static economies; instead, the economies are linked by bequests. The paper undertakes a tax reform analysis à la Guesnerie, but with an added focus on welfare-improving reforms for each generation. Second-best Pareto optima are then characterized, and these conditions are compared to the static optimal tax formulae derived in the literature. In particular, the key Diamond-Mirrlees result that production efficiency is desirable at second-best optima no longer holds in the presence of (effective) restrictions on the taxation of private savings. Restrictions on government savings (including balanced-budget restrictions), however, do not disturb the desirability of production efficiency. Finally, the effects of certain political constraints on the tax reform process are also considered.
    Keywords: dynamic tax reform, second-best Pareto optima, capital taxation
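
    The bequest linkage can be written out explicitly. A minimal sketch in illustrative notation (not the paper's own formulation), where q_t denotes period-t consumer prices, x_t demands, omega_t the endowment, and b_t the bequest left by generation t:

```latex
% Illustrative notation only; assumes amsmath.
\begin{align*}
  \text{budget of generation } t:&\quad
    q_t \cdot x_t + b_t \;\le\; q_t \cdot \omega_t + b_{t-1},\\
  \text{bequest response:}&\quad
    b_t = b_t(q_t,\, b_{t-1}),\\
  \text{spillover of a reform } dq_t:&\quad
    dW_{t+1} = \frac{\partial W_{t+1}}{\partial b_t}\,
               \frac{\partial b_t}{\partial q_t}\, dq_t \neq 0,
\end{align*}
```

    so successive economies cannot be treated as a sequence of unlinked static problems.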

    Limits to parallelism in scientific computing

    The goal of our research is to decrease the execution time of scientific computing applications. We exploit the application's inherent parallelism to achieve this goal. This exploitation is expensive, as we analyze sequential applications and port them to parallel computers. Many scientific computing problems appear to have considerable exploitable parallelism; however, upon implementing a parallel solution on a parallel computer, limits to the parallelism are encountered. Unfortunately, many of these limits are characteristic of a specific parallel computer. This thesis explores these limits.
    We study the feasibility of exploiting the inherent parallelism of four NASA scientific computing applications. We use simple models to predict each application's degree of parallelism at several levels of granularity. From this analysis, we conclude that it is infeasible to exploit the inherent parallelism of two of the four applications: the interprocessor communication of one application is too expensive relative to its computation cost, and the input and output costs of the other are too expensive relative to its computation cost. We exploit the parallelism of the remaining two applications and measure their performance on an Intel iPSC/2 parallel computer. We parallelize an Optimal Control Boundary Value Problem; this guidance control problem determines an optimal trajectory of a boat in a river. We parallelize the Carbon Dioxide Slicing technique, a macrophysical cloud property retrieval algorithm that computes the height of a cloud top from cloud imager measurements, and we consider the feasibility of exploiting its massive parallelism on a MasPar MP-2 parallel computer. We conclude that many limits to parallelism are surmountable while other limits are inescapable.
    From these limits, we elucidate some fundamental issues that must be considered when porting similar problems to yet-to-be-designed computers. We conclude that technological improvements that reduce the isolation of computational units free a programmer from many current concerns about the granularity of the work. We also conclude that technological improvements that relax the regimented guidance of the computational units allow a programmer to exploit the inherent heterogeneous parallelism of many applications.
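
    The communication limit described above can be illustrated with a simple performance model. A hedged sketch, assuming the work divides evenly across p processors while each processor also pays a fixed communication cost per step (the model and numbers are illustrative, not taken from the thesis):

```python
def predicted_speedup(t_comp, t_comm, p):
    """Speedup under a simple model: sequential work t_comp divides evenly
    across p processors, but each processor also pays a fixed communication
    cost t_comm, so speedup is bounded above by t_comp / t_comm."""
    return t_comp / (t_comp / p + t_comm)

# Communication-bound application: per-processor comm cost is 10% of the
# total sequential computation time.
for p in (2, 8, 32, 128):
    print(p, round(predicted_speedup(t_comp=1.0, t_comm=0.1, p=p), 2))
# Speedup saturates near 10 no matter how many processors are added.
```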