    Essays on Benchmarking and Shadow Pricing

    In the first chapter, we examine the performance of Chinese commercial banks before, during, and after the 2008 global financial crisis and China's 2008--2010 4 trillion Renminbi stimulus plan. Fully nonparametric methods are used to estimate technical efficiencies. Recently developed statistical results are used to test for changes in efficiency and productivity over time, and for changes in technology over time. We also test for differences in efficiency and productivity between big and small banks, and between domestic and foreign banks. We find evidence that banks' production set is non-convex. The data reveal that technical efficiency declined at the start of the global financial crisis (2007--2008) and after China's stimulus plan (2010--2011), recovered in the years that followed (2011--2013), and declined again from 2013 to 2014, ending lower in 2014 than in 2007. We find that productivity declined during and just after China's stimulus plan (2009--2011) but recovered later (2013--2014), ending lower in 2014 than in 2007. We also find that technology shifted downward from 2012 to 2013 and then shifted upward from 2013 to 2014; over the whole period 2007--2014, technology shifted upward. We provide evidence that big banks were, in general, more efficient and productive than small banks. Finally, domestic banks had higher efficiency and productivity than foreign banks over this period, except in 2008.

    In the second chapter, I estimate the shadow price of equity for U.S. commercial banks over 2001--2018 using nonparametric estimators of the underlying cost frontier, and test for the existence of 'Too-Big-to-Fail' (TBTF) banks. Evidence of the existence of TBTF banks is found. Specifically, I find a negative correlation between the shadow price of equity and bank size in each year, suggesting that big banks pay less for equity than small banks. In addition, in each year there are more banks with a negative shadow price of equity in the fourth quartile by total assets than in the other three quartiles. The data also reveal that, in each year, the estimated mean shadow price of equity for the top 100 largest banks is smaller than the mean price of deposits, even though equity is commonly viewed as a riskier asset than deposits. Finally, I find that the top 10 largest banks were willing to pay much more for equity at the start of the global financial crisis and after the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 than in other periods. These results imply that these regulations have been effective in reducing the implicit subsidy, at least for the top 10 largest banks. However, it is also evident that the recapitalization has imposed significant equity funding costs on the top 10 largest banks.

    In the third chapter, we examine the performance of 144 countries before, during, and after the 2007--2012 global financial crisis. Fully nonparametric methods are used to estimate technical efficiencies. Recently developed statistical results are used to test for changes in efficiency and productivity over time, and for changes in technology over time. We also test for these differences between developing and developed countries. We find evidence that countries' production set is non-convex. The data reveal that technical efficiency declined at the start of the global financial crisis (2006--2008) but recovered in the years that followed (2008--2014), ending higher in 2014 than in 2004. We also find that mean productivity decreased continually from 2004 to 2010, and that the 2004 productivity distribution first-order stochastically dominates the 2014 distribution. Statistical tests indicate that the frontier shifted downward continually from 2004 to 2010 and then shifted upward continually from 2010 to 2014; overall, technology shifted downward from 2004 to 2014. Finally, we provide evidence that developing economies had lower technical efficiency but higher productivity than developed economies over this period.
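
    To make the nonparametric approach above concrete, here is a minimal sketch of an output-oriented free-disposal-hull (FDH) efficiency estimator, the kind of envelopment estimator that does not impose convexity of the production set (relevant given the non-convexity evidence above). This is an illustration only, not the exact procedure of the thesis; the function name and the simulated data are ours.

        import numpy as np

        def fdh_output_efficiency(x0, y0, X, Y):
            """Output-oriented FDH efficiency of unit (x0, y0) relative to the
            observed sample (rows of X, Y). A score of 1 means FDH-efficient;
            a score above 1 is the feasible uniform expansion of all outputs.
            Assumes the unit is dominated by at least one observation
            (e.g., itself, when evaluating an in-sample unit)."""
            dominating = np.all(X <= x0, axis=1)         # uses no more of any input
            ratios = np.min(Y[dominating] / y0, axis=1)  # max uniform output scaling
            return ratios.max()

        # Hypothetical example: 50 units, two inputs, one output.
        rng = np.random.default_rng(0)
        X = rng.uniform(1, 2, size=(50, 2))
        Y = rng.uniform(1, 2, size=(50, 1))
        print(fdh_output_efficiency(X[0], Y[0], X, Y))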

    Methylprednisolone as Adjunct to Endovascular Thrombectomy for Large-Vessel Occlusion Stroke

    Importance: It is uncertain whether intravenous methylprednisolone improves outcomes for patients with acute ischemic stroke due to large-vessel occlusion (LVO) undergoing endovascular thrombectomy.
    Objective: To assess the efficacy and adverse events of intravenous low-dose methylprednisolone as an adjunct to endovascular thrombectomy for acute ischemic stroke secondary to LVO.
    Design, Setting, and Participants: This investigator-initiated, randomized, double-blind, placebo-controlled trial was implemented at 82 hospitals in China, enrolling 1680 patients with stroke and proximal intracranial LVO presenting within 24 hours of the time last known to be well. Recruitment took place between February 9, 2022, and June 30, 2023, with final follow-up on September 30, 2023.
    Interventions: Eligible patients were randomly assigned to intravenous methylprednisolone (n = 839) at 2 mg/kg/d or placebo (n = 841) for 3 days, adjunctive to endovascular thrombectomy.
    Main Outcomes and Measures: The primary efficacy outcome was disability level at 90 days, measured by the overall distribution of modified Rankin Scale scores (range, 0 [no symptoms] to 6 [death]). The primary safety outcomes included mortality at 90 days and the incidence of symptomatic intracranial hemorrhage within 48 hours.
    Results: Among 1680 patients randomized (median age, 69 years; 727 female [43.3%]), 1673 (99.6%) completed the trial. The median 90-day modified Rankin Scale score was 3 (IQR, 1-5) in the methylprednisolone group vs 3 (IQR, 1-6) in the placebo group (adjusted generalized odds ratio for a lower level of disability, 1.10 [95% CI, 0.96-1.25]; P = .17). The methylprednisolone group had a lower mortality rate (23.2% vs 28.5%; adjusted risk ratio, 0.84 [95% CI, 0.71-0.98]; P = .03) and a lower rate of symptomatic intracranial hemorrhage (8.6% vs 11.7%; adjusted risk ratio, 0.74 [95% CI, 0.55-0.99]; P = .04) than the placebo group.
    Conclusions and Relevance: Among patients with acute ischemic stroke due to LVO undergoing endovascular thrombectomy, adjunctive methylprednisolone did not significantly improve the degree of overall disability.
    Trial Registration: ChiCTR.org.cn Identifier: ChiCTR210005172
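
    The primary analysis above summarizes the ordinal modified Rankin Scale with a generalized odds ratio. As a rough illustration of that estimand (the trial reports a covariate-adjusted version, which this sketch omits), the unadjusted generalized odds ratio for a lower level of disability compares all treated-control pairs:

        import numpy as np

        def generalized_odds_ratio(treated, control):
            """Unadjusted generalized odds ratio favoring *lower* ordinal scores:
            (# pairs with treated < control) / (# pairs with treated > control),
            over all treated-control pairs; ties contribute to neither count."""
            t = np.asarray(treated)[:, None]
            c = np.asarray(control)[None, :]
            wins = np.sum(t < c)    # treated member of the pair has lower (better) mRS
            losses = np.sum(t > c)  # treated member has higher (worse) mRS
            return wins / losses

        # Hypothetical mRS scores; a value above 1 favors treatment.
        print(generalized_odds_ratio([2, 3, 1, 4], [3, 3, 5, 2]))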

    Evidence from Shadow Price of Equity on 'Too-Big-to-Fail' Banks

    No full text

    On Ξ-pairs for maximal subgroups

    No full text

    Further Improvements of Finite Sample Approximation of Central Limit Theorems for Envelopment Estimators

    No full text
    A simple, easy-to-implement method is proposed to further improve the finite sample approximation of the recently developed central limit theorems for aggregates of envelopment estimators. Focusing on the simple mean efficiency, we propose using the bias-corrected individual efficiency estimates to improve the variance estimator. Extensive Monte-Carlo experiments confirm that, for relatively small sample sizes (≤ 100), in both low and especially high dimensions, our new method combined with the data sharpening method generally provides better 'coverage' (of the true values by the estimated confidence intervals) than the previously developed approaches.
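
    A schematic rendering of the proposed tweak: the confidence interval for mean efficiency keeps the usual CLT form, but the sample variance is computed from bias-corrected individual efficiency estimates rather than the raw ones. This is a simplified sketch in our own notation; the bias estimation itself (e.g., by a generalized jackknife) and the rate conditions of the underlying CLTs are omitted.

        import numpy as np

        def mean_efficiency_ci(eff_hat, bias_hat, z=1.96):
            """Normal-approximation CI for mean efficiency.
            eff_hat:  individual envelopment (e.g., DEA/FDH) efficiency estimates
            bias_hat: estimates of their biases (method not specified here)
            The variance uses the bias-corrected scores -- the proposed tweak."""
            eff_bc = eff_hat - bias_hat                    # bias-corrected estimates
            center = eff_bc.mean()                         # bias-corrected sample mean
            se = eff_bc.std(ddof=1) / np.sqrt(len(eff_bc))
            return center - z * se, center + z * se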

    Statistical Inference for Hicks–Moorsteen Productivity Indices

    No full text
    The statistical framework for the Malmquist productivity index (MPI) is now well-developed, which underscores the importance of developing such a framework for its alternatives. We try to fill this gap in the literature for another popular measure, the Hicks–Moorsteen Productivity Index (HMPI). Unlike the MPI, the HMPI has a total factor productivity interpretation, in the sense of measuring productivity as the ratio of aggregated outputs to aggregated inputs, and has other useful advantages over the MPI. In this work, we develop a novel framework for statistical inference for the HMPI in various contexts: when its components are known and when they are replaced with nonparametric envelopment estimators. This is done for a particular firm's HMPI as well as for the simple mean (unweighted) HMPI and the aggregate (weighted) HMPI. Our results further enrich the recent theoretical developments of nonparametric envelopment estimators for various efficiency and productivity measures. We examine the performance of these theoretical results for the unweighted and weighted means of the HMPI using Monte-Carlo simulations and also provide an empirical illustration.
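
    For reference, one standard base-period form of the HMPI (in the spirit of Bjurek's total-factor-productivity index; the notation is ours, not necessarily the paper's) is

        \[
        \mathrm{HM}^{t}
          = \frac{D_o^{t}(x^{t}, y^{t+1}) \,/\, D_o^{t}(x^{t}, y^{t})}
                 {D_i^{t}(x^{t+1}, y^{t}) \,/\, D_i^{t}(x^{t}, y^{t})},
        \]

    where D_o^t and D_i^t are the period-t Shephard output and input distance functions. The numerator is an output quantity index and the denominator an input quantity index, which is precisely the total-factor-productivity interpretation noted above; in practice, a geometric mean of the period-t and period-(t+1) versions is often used.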

    Inference for Aggregate Efficiency: Theory and Guidelines for Practitioners

    No full text
    We expand the recently developed framework for inference for aggregate efficiency by extending the existing theory and providing guidelines for practitioners. In particular, we develop central limit theorems (CLTs) for aggregate input-oriented efficiency, analogous to the output-oriented framework established by Simar and Zelenyuk (2018). To further improve the finite sample performance of the developed CLTs, we propose a simple, easy-to-implement method that uses the bias-corrected individual efficiency estimates to improve the variance estimator. Extensive Monte-Carlo experiments confirm the developed CLTs for aggregate input-oriented efficiency and the better finite-sample performance of our proposed method. Finally, we use two well-known empirical data sets to illustrate the differences between the existing methods and to facilitate their use by practitioners.
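
    For concreteness, one common price-weighted scheme for aggregating input-oriented efficiency (a generic form, not necessarily the paper's exact weighting) uses cost shares:

        \[
        \overline{T} \;=\; \sum_{k=1}^{n} S_k\, T_k,
        \qquad
        S_k \;=\; \frac{w^{\top} x^{k}}{\sum_{j=1}^{n} w^{\top} x^{j}},
        \]

    where T_k is firm k's Farrell input-oriented efficiency, x^k its input vector, and w a common input price vector, so each firm contributes in proportion to its share of aggregate cost.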

    Covariate-Adjusted Regression for Distorted Longitudinal Data With Informative Observation Times

    No full text
    In many longitudinal studies, the repeated responses and predictors are not directly observed but can be treated as distorted by unknown functions of a common confounding covariate. Moreover, longitudinal data involve an observation process that, in practice, may be informative about the longitudinal response process. To deal with such complex data, we propose a class of flexible semiparametric covariate-adjusted joint models. The new models not only allow the longitudinal response to be correlated with observation times through latent variables and completely unspecified link functions, but also characterize the distorted longitudinal response and predictors by unknown multiplicative factors depending on time and a confounding covariate. For estimation of the regression parameters in the proposed models, we develop a novel covariate-adjusted estimating equation approach that does not rely on the forms of the link functions or the distributions of the frailties. The asymptotic properties of the resulting parameter estimators are established and examined by simulation studies. A longitudinal data example containing calcium absorption and intake measurements is provided for illustration. Supplementary materials for this article are available online.
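
    The distortion-removal step behind covariate-adjusted methods of this kind can be sketched simply: under the usual identifiability condition that the multiplicative distortion has mean one, the conditional mean of an observed variable given the confounder is proportional to the distortion, so a nonparametric smoother can estimate it and divide it out. The sketch below shows only this basic step via a Nadaraya-Watson smoother, not the paper's estimating-equation procedure; the function name and bandwidth are ours.

        import numpy as np

        def unadjust(z_obs, u, bandwidth=0.3):
            """Remove a multiplicative distortion psi(U) from z_obs = psi(U) * z,
            assuming E[psi(U)] = 1 and z independent of U (both arrays 1-D).
            Estimates E[z_obs | U = u] by a Gaussian-kernel Nadaraya-Watson
            smoother, rescales it into an estimate of psi, and divides it out."""
            diffs = (u[:, None] - u[None, :]) / bandwidth
            weights = np.exp(-0.5 * diffs ** 2)             # Gaussian kernel
            smooth = weights @ z_obs / weights.sum(axis=1)  # E[z_obs | U] estimate
            psi_hat = smooth / z_obs.mean()                 # distortion estimate
            return z_obs / psi_hat                          # adjusted values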