
    Optimal selection of reduced rank estimators of high-dimensional matrices

    We introduce a new criterion, the Rank Selection Criterion (RSC), for selecting the optimal reduced rank estimator of the coefficient matrix in multivariate response regression models. The corresponding RSC estimator minimizes the Frobenius norm of the fit plus a regularization term proportional to the number of parameters in the reduced rank model. The rank of the RSC estimator provides a consistent estimator of the rank of the coefficient matrix; in general, the rank of our estimator is a consistent estimate of the effective rank, which we define to be the number of singular values of the target matrix that are appropriately large. The consistency results are valid not only in the classic asymptotic regime, when n, the number of responses, and p, the number of predictors, stay bounded and m, the number of observations, grows, but also when either or both of n and p grow, possibly much faster than m. We establish minimax optimal bounds on the mean squared errors of our estimators. Our finite sample performance bounds for the RSC estimator show that it achieves the optimal balance between the approximation error and the penalty term. Furthermore, our procedure has very low computational complexity, linear in the number of candidate models, making it particularly appealing for large scale problems. We contrast our estimator with the nuclear norm penalized least squares (NNP) estimator, which has an inherently higher computational complexity than RSC, for multivariate regression models. We show that NNP has estimation properties similar to those of RSC, albeit under stronger conditions. However, it is not as parsimonious as RSC. We offer a simple correction of the NNP estimator which leads to consistent rank estimation. Comment: Published at http://dx.doi.org/10.1214/11-AOS876 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); some typos corrected.
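
    To make the criterion concrete, the following is a minimal Python sketch of reduced-rank estimation with the rank chosen by a penalized Frobenius criterion, enumerated over candidate ranks (linear in the number of candidates, as in the abstract). It is not the authors' implementation: the penalty form mu * k * (n + p - k), the tuning constant mu, and all function and variable names are assumptions made for illustration.

```python
import numpy as np

def rsc_select(X, Y, mu):
    """Minimal sketch of reduced-rank estimation with a penalized rank choice.

    For each candidate rank k the criterion
        ||Y - Y_hat_k||_F^2 + mu * k * (n + p - k)
    is evaluated, where Y_hat_k is the rank-k truncation of the least-squares
    fit and k * (n + p - k) counts the parameters of a rank-k coefficient
    matrix; mu is a user-chosen tuning constant.
    """
    m, p = X.shape
    n = Y.shape[1]
    Xp = np.linalg.pinv(X)                     # pseudo-inverse also handles p > m
    Y_hat = X @ (Xp @ Y)                       # projection of Y onto the column space of X
    U, s, Vt = np.linalg.svd(Y_hat, full_matrices=False)

    best_k, best_crit, best_B = 0, float(np.sum(Y ** 2)), np.zeros((p, n))
    for k in range(1, min(m, n, p) + 1):
        Y_k = (U[:, :k] * s[:k]) @ Vt[:k]      # best rank-k approximation of the fit
        crit = float(np.sum((Y - Y_k) ** 2)) + mu * k * (n + p - k)
        if crit < best_crit:
            best_k, best_crit, best_B = k, crit, Xp @ Y_k
    return best_k, best_B

# Illustrative use on data generated from a rank-2 coefficient matrix.
rng = np.random.default_rng(0)
m, p, n, r = 200, 10, 8, 2
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, n))
X = rng.normal(size=(m, p))
Y = X @ B_true + rng.normal(size=(m, n))
k_hat, B_hat = rsc_select(X, Y, mu=3.0)
print(k_hat)   # for a reasonable mu the selected rank should be close to the true rank of 2
```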

    Asymptotics in Minimum Distance from Independence Estimation

    In this paper we introduce a family of minimum distance from independence estimators, suggested by Manski's minimum mean square from independence estimator. We establish strong consistency, asymptotic normality and consistency of resampling estimates of the distribution and variance of these estimators. For Manski's estimator we derive both strong consistency and asymptotic normality. Keywords: Donsker class, empirical processes, extremum estimator, nonlinear simultaneous equations models, resampling estimators.
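
    The generic criterion behind such estimators can be written directly in terms of empirical CDFs. Below is an illustrative Python sketch under strong simplifying assumptions: a scalar regressor, a linear residual y - theta * x, and a plain grid search. The residual form, the grid, and all names are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def md_independence_objective(u, x):
    """Mean-square distance from independence between residuals u and regressors x,
    measured through empirical CDFs evaluated at the sample points."""
    joint = np.mean((u[None, :] <= u[:, None]) & (x[None, :] <= x[:, None]), axis=1)
    marg_u = np.mean(u[None, :] <= u[:, None], axis=1)
    marg_x = np.mean(x[None, :] <= x[:, None], axis=1)
    return np.mean((joint - marg_u * marg_x) ** 2)

# Illustrative model: y = theta * x + e with e independent of x (true theta = 2).
rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = 2.0 * x + rng.normal(size=400)

grid = np.linspace(0.0, 4.0, 401)
crit = [md_independence_objective(y - t * x, x) for t in grid]
theta_hat = grid[int(np.argmin(crit))]
print(theta_hat)   # should land near 2
```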

    Weighted Minimum Mean-Square Distance from Independence Estimation

    In this paper we introduce a family of semi-parametric estimators, suggested by Manski's minimum mean-square distance from independence estimator. We establish the strong consistency, asymptotic normality and consistency of bootstrap estimates of the sampling distribution and the asymptotic variance of these estimators. Keywords: Semiparametric estimation, simultaneous equations models, empirical processes, extremum estimators.
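
    The bootstrap step mentioned in the abstract is a standard nonparametric pairs bootstrap. The Python sketch below shows that step with a stand-in estimator (an OLS slope) purely so the example runs; in the paper's setting the estimator would be the weighted minimum mean-square distance from independence estimator, and the weighting is omitted here. The function names and the number of replicates are assumptions.

```python
import numpy as np

def bootstrap_variance(estimator, y, x, n_boot=200, seed=0):
    """Nonparametric bootstrap sketch: resample (y_i, x_i) pairs with replacement,
    re-estimate the parameter on each resample, and report the empirical variance
    of the replicates as the variance estimate."""
    rng = np.random.default_rng(seed)
    n = len(y)
    replicates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # indices drawn with replacement
        replicates[b] = estimator(y[idx], x[idx])
    return replicates.var(ddof=1), replicates

# Illustrative use with a stand-in estimator (an OLS slope).
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(size=300)
ols_slope = lambda y, x: float(np.dot(x, y) / np.dot(x, x))
var_hat, reps = bootstrap_variance(ols_slope, y, x)
print(var_hat)   # bootstrap estimate of the estimator's sampling variance
```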

    Sub-Chandrasekhar White Dwarf Mergers as the Progenitors of Type Ia Supernovae

    Type Ia supernovae (SNe Ia) are generally thought to be due to the thermonuclear explosions of carbon–oxygen white dwarfs (CO WDs) with masses near the Chandrasekhar mass. This scenario, however, has two long-standing problems. First, the explosions do not naturally produce the correct mix of elements, but have to be finely tuned to proceed from subsonic deflagration to supersonic detonation. Second, population models and observations give formation rates of near-Chandrasekhar WDs that are far too small. Here, we suggest that SNe Ia instead result from mergers of roughly equal-mass CO WDs, including those that produce sub-Chandrasekhar mass remnants. Numerical studies of such mergers have shown that the remnants consist of rapidly rotating cores that contain most of the mass and are hottest in the center, surrounded by dense, small disks. We argue that the disks accrete quickly, and that the resulting compressional heating likely leads to central carbon ignition. This ignition occurs at densities for which pure detonations lead to events similar to SNe Ia. With this merger scenario, we can understand the Type Ia rates and have plausible reasons for the observed range in luminosity and for the bias of more luminous supernovae toward younger populations. We speculate that explosions of WDs slowly brought to the Chandrasekhar limit, which should also occur, are responsible for some of the “atypical” SNe Ia.

    The infrared counterpart to the magnetar 1RXS J170849.0-400910

    We have analyzed both archival and new infrared imaging observations of the field of the Anomalous X-ray Pulsar 1RXS J170849.0-400910, in search of its infrared counterpart. This field has been previously investigated, and one of the sources consistent with the position of the AXP was suggested as the counterpart. We, however, find that this object is more likely a background star, while another object within the positional error circle has non-stellar colors and shows evidence for variability. These two pieces of evidence, along with a consistency argument for the X-ray-to-infrared flux ratio, point to the second source being the more likely infrared counterpart to the AXP. Comment: 19 pages, AASTeX, 4 figures. Accepted for publication in ApJ. Full resolution figures at: http://www.astro.utoronto.ca/~durant/1708.ps.g

    Aggregation for Gaussian regression

    This paper studies statistical aggregation procedures in the regression setting. A motivating factor is the existence of many different methods of estimation, leading to possibly competing estimators. We consider here three different types of aggregation: model selection (MS) aggregation, convex (C) aggregation and linear (L) aggregation. The objective of (MS) is to select the optimal single estimator from the list; that of (C) is to select the optimal convex combination of the given estimators; and that of (L) is to select the optimal linear combination of the given estimators. We are interested in evaluating the rates of convergence of the excess risks of the estimators obtained by these procedures. Our approach is motivated by recently published minimax results [Nemirovski, A. (2000). Topics in non-parametric statistics. Lectures on Probability Theory and Statistics (Saint-Flour, 1998). Lecture Notes in Math. 1738 85--277. Springer, Berlin; Tsybakov, A. B. (2003). Optimal rates of aggregation. Learning Theory and Kernel Machines. Lecture Notes in Artificial Intelligence 2777 303--313. Springer, Heidelberg]. There exist competing aggregation procedures achieving optimal convergence rates for each of the (MS), (C) and (L) cases separately. Since these procedures are not directly comparable with each other, we suggest an alternative solution. We prove that all three optimal rates, as well as those for the newly introduced (S) aggregation (subset selection), are nearly achieved via a single "universal" aggregation procedure. The procedure consists of mixing the initial estimators with weights obtained by penalized least squares. Two different penalties are considered: one of them is of the BIC type, the second one is a data-dependent ℓ1-type penalty. Comment: Published at http://dx.doi.org/10.1214/009053606000001587 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
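
    As an illustration of the mixing step, the Python sketch below computes ℓ1-penalized least squares weights over a matrix whose columns are the predictions of candidate estimators, and forms the aggregate as the corresponding linear combination. Only the ℓ1-type variant is shown; the BIC-type penalty, the penalty level lam, the use of scikit-learn's Lasso, the toy candidates, and the absence of the sample splitting used in aggregation theory are all assumptions made for this sketch, not the authors' procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso

def aggregate_l1(predictions, y, lam):
    """Sketch of aggregation by penalized least squares with an l1-type penalty.

    `predictions` is an (n_samples, n_estimators) matrix whose columns are the
    fitted values of the candidate estimators on a held-out sample; the aggregate
    is the linear combination of those columns with the penalized LS weights."""
    lasso = Lasso(alpha=lam, fit_intercept=False)
    lasso.fit(predictions, y)
    weights = lasso.coef_
    return weights, predictions @ weights

# Illustrative use: three candidate fits combined on held-out data.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=200)
candidates = np.column_stack([x[:, 0],                 # linear fit (poor)
                              np.sin(3 * x[:, 0]),     # oracle-like fit
                              np.cos(3 * x[:, 0])])    # irrelevant fit
w, y_agg = aggregate_l1(candidates, y, lam=0.01)
print(w)   # the weight should concentrate mostly on the second column
```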

    Towards a flexible service integration through separation of business rules

    Driven by dynamic market demands, enterprises continuously explore collaborations with others to add value to their services and seize new market opportunities. Enterprise collaboration is facilitated by Enterprise Application Integration and Business-to-Business approaches that employ architectural paradigms such as Service Oriented Architecture and incorporate technological advancements in networking and computing. However, flexibility remains a major challenge for enterprise collaboration: how can changes in demands and opportunities be reflected in collaboration solutions with minimum time and effort and with maximum reuse of existing applications? This paper proposes an approach towards a more flexible integration of enterprise applications in the context of service mediation. We achieve this by combining goal-based, model-driven and service-oriented approaches. In particular, we pay special attention to the separation of business rules from the business process of the integration solution. Specifying the requirements as goal models, we separate those parts which are most likely to evolve over time into business rules. These business rules are then made executable by exposing them as Web services and incorporating them into the design of the business process. Thus, should the business rules change, the business process remains unaffected. Finally, this paper also provides an evaluation of the flexibility of our solution in relation to current work in business process flexibility research.
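
    As a small illustration of the separation idea only (not of the paper's goal-based, model-driven tooling), the Python sketch below keeps a decision behind an externally deployed rule service, so the process code never embeds the rule itself and is untouched when the rule changes. The endpoint URL, the payload, and the function names are hypothetical and invented for this example.

```python
import requests

RULE_SERVICE_URL = "http://rules.example.com/discount"   # hypothetical rule endpoint

def applicable_discount(order):
    """Delegate the 'which discount applies' decision to a business rule exposed
    as a web service; rule changes are redeployed on the service side only."""
    response = requests.post(RULE_SERVICE_URL,
                             json={"customer": order["customer"], "total": order["total"]})
    response.raise_for_status()
    return response.json()["discount"]      # hypothetical response field

def handle_order(order):
    """A fixed fragment of the integration (business) process."""
    discount = applicable_discount(order)    # decision point: externalized rule
    order["total"] *= (1.0 - discount)
    # ... continue with invoicing, shipping, and other service invocations ...
    return order
```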