
    Estimation of the density of regression errors

    Estimation of the density of regression errors is a fundamental issue in regression analysis and it is typically explored via a parametric approach. This article uses a nonparametric approach with the mean integrated squared error (MISE) criterion. It solves a long-standing problem, formulated two decades ago by Mark Pinsker, about estimation of a nonparametric error density in a nonparametric regression setting with the accuracy of an oracle that knows the underlying regression errors. The solution implies that, under a mild assumption on the differentiability of the design density and regression function, the MISE of a data-driven error density estimator attains minimax rates and sharp constants known for the case of directly observed regression errors. The result holds for error densities with finite and infinite supports. Some extensions of this result for more general heteroscedastic models with possibly dependent errors and predictors are also obtained; in the latter case the marginal error density is estimated. In all considered cases a blockwise-shrinking Efromovich--Pinsker density estimate, based on plugged-in residuals, is used. The obtained results imply a theoretical justification of a customary practice in applied regression analysis to consider residuals as proxies for underlying regression errors. Numerical and real examples are presented and discussed, and the S-PLUS software is available. Comment: Published at http://dx.doi.org/10.1214/009053605000000435 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
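    The practical upshot, that residuals from a nonparametric fit can stand in for the unobserved regression errors when estimating the error density, can be illustrated with a short simulation. The sketch below is a minimal illustration of this plug-in idea, not the paper's estimator: it uses a Gaussian kernel density estimate in place of the blockwise-shrinking Efromovich--Pinsker series estimator, and the regression function, error distribution, sample size and bandwidth are all illustrative assumptions.

```python
# Minimal sketch (not the paper's estimator): regression residuals as
# proxies for the unobserved errors when estimating the error density.
# A Gaussian kernel density estimate stands in for the blockwise-shrinking
# Efromovich--Pinsker series estimator used in the paper.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated nonparametric regression Y = m(X) + eps with unknown m.
n = 2000
x = rng.uniform(0.0, 1.0, n)
eps = rng.standard_t(df=5, size=n)          # true (unobserved) errors
y = np.sin(2 * np.pi * x) + eps

# Crude nonparametric fit of m by Nadaraya-Watson local averaging.
def nw_fit(x0, x, y, h=0.05):
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

residuals = y - nw_fit(x, x, y)

# Plug-in principle: estimate the error density from the residuals.
grid = np.linspace(-5, 5, 200)
kde_resid = gaussian_kde(residuals)(grid)   # residual-based estimate
kde_true = gaussian_kde(eps)(grid)          # oracle that sees the errors

print("max abs gap between the two estimates:",
      float(np.max(np.abs(kde_resid - kde_true))))
```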

    Data modelling for emergency response

    Emergency response is one of the most demanding phases in disaster management. The fire brigade, paramedics, police and municipality are the organisations involved in the first response to an incident. They coordinate their work based on well-defined policies and procedures, but they also need the most complete and up-to-date information about the incident, which would allow reliable decision-making. There is a variety of systems answering the needs of different emergency responders, but they have many drawbacks: the systems are developed for a specific sector; it is difficult to exchange information between systems; the systems offer too much or too little information, etc. Several systems have been developed to share information during emergencies, but usually they maintain the information coming from field operations in an unstructured way. This report presents a data model for the organisation of dynamic data (operational and situational data) for emergency response. The model is developed within the RGI-239 project ‘Geographical Data Infrastructure for Disaster Management’ (GDI4DM).

    Reading a Protoevangelium in the Context of Genesis

    This article proposes that the case for a ‘messianic’ reading of Gen. 3:15 is cumulative. No individual argument is decisive, and it is virtually impossible to sustain a robust protevangelium interpretation of this text within the context of Gen. 3 alone. However, as already pointed out in the introduction, isolating Gen. 3 from its literary/historical context in the book of Genesis does not lead to a fruitful resolution of its meaning but at best creates a hypothetical reconstructed meaning behind the text which becomes difficult to sustain in light of the interpretation of the ‘seed’ in the entire book. Though the lexical evidence by itself is somewhat ambiguous, the individual meaning for the term ‘seed’ is certainly plausible, as demonstrated by its usage within the book of Genesis and in the rest of the Hebrew Bible. Further, when the text is read in the context of the first and second toledots in the Primeval History, not to mention in light of the macro-toledot structure of the entire book of Genesis, we would agree with T.D. Alexander’s statement that, “in the light of Genesis as a whole, a messianic reading of this verse is not only possible but highly probable”.

    On Network Coding Capacity - Matroidal Networks and Network Capacity Regions

    One fundamental problem in the field of network coding is to determine the network coding capacity of networks under various network coding schemes. In this thesis, we address the problem with two approaches: matroidal networks and capacity regions. In our matroidal approach, we prove the converse of the theorem which states that if a network is scalar-linearly solvable, then it is a matroidal network associated with a representable matroid over a finite field. As a consequence, we obtain a correspondence between scalar-linearly solvable networks and representable matroids over finite fields in the framework of matroidal networks. We prove a theorem about the scalar-linear solvability of networks and field characteristics. We provide a method for generating scalar-linearly solvable networks that are potentially different from the networks that we already know are scalar-linearly solvable. In our capacity region approach, we define a multi-dimensional object associated with a network, called the network capacity region, that is analogous to the rate regions of information theory. For the network routing capacity region, we show that the region is a computable rational polytope and provide exact algorithms and approximation heuristics for computing the region. For the network linear coding capacity region, we construct a computable rational polytope, with respect to a given finite field, that inner bounds the linear coding capacity region, and provide exact algorithms and approximation heuristics for computing the polytope. The exact algorithms and approximation heuristics we present are not polynomial-time schemes and may depend on the output size. Comment: Master of Engineering Thesis, MIT, September 2010, 70 pages, 10 figures.
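    A toy illustration of scalar-linear solvability, independent of the thesis's algorithms: the classic butterfly network admits a scalar-linear solution over GF(2), with the bottleneck edge carrying the XOR of the two source bits. The sketch below verifies decoding at both sinks; the network and message alphabet are the standard textbook example rather than anything specific to the thesis.

```python
# Toy illustration (not the thesis's algorithms): the classic butterfly
# network is scalar-linearly solvable over GF(2).  The source holds two
# bits a and b; the bottleneck edge carries a XOR b, and each sink
# recovers both bits from its two incoming edges.
from itertools import product

def butterfly(a: int, b: int):
    # Edge messages, all scalar-linear over GF(2) (i.e. XORs of a and b).
    left = a              # edge towards the left sink
    right = b             # edge towards the right sink
    bottleneck = a ^ b    # the single coded edge in the middle

    # Sink 1 sees (left, bottleneck); sink 2 sees (right, bottleneck).
    sink1 = (left, left ^ bottleneck)       # recovers (a, b)
    sink2 = (right ^ bottleneck, right)     # recovers (a, b)
    return sink1, sink2

# Every source message pair is decoded correctly at both sinks.
assert all(butterfly(a, b) == ((a, b), (a, b))
           for a, b in product((0, 1), repeat=2))
print("butterfly network: scalar-linear solution over GF(2) verified")
```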

    Strongly polynomial algorithm for a class of minimum-cost flow problems with separable convex objectives

    A well-studied nonlinear extension of the minimum-cost flow problem is to minimize the objective $\sum_{ij\in E} C_{ij}(f_{ij})$ over feasible flows $f$, where on every arc $ij$ of the network, $C_{ij}$ is a convex function. We give a strongly polynomial algorithm for the case when all $C_{ij}$'s are convex quadratic functions, settling an open problem raised e.g. by Hochbaum [1994]. We also give strongly polynomial algorithms for computing market equilibria in Fisher markets with linear utilities and with spending constraint utilities, that can be formulated in this framework (see Shmyrev [2009], Devanur et al. [2011]). For the latter class this resolves an open question raised by Vazirani [2010]. The running time is $O(m^4\log m)$ for quadratic costs, $O(n^4+n^2(m+n\log n)\log n)$ for Fisher's markets with linear utilities and $O(mn^3+m^2(m+n\log n)\log m)$ for spending constraint utilities. All these algorithms are presented in a common framework that addresses the general problem setting. Whereas it is impossible to give a strongly polynomial algorithm for the general problem even in an approximate sense (see Hochbaum [1994]), we show that assuming the existence of certain black-box oracles, one can give an algorithm using a strongly polynomial number of arithmetic operations and oracle calls only. The particular algorithms can be derived by implementing these oracles in the respective settings.
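    To make the objective concrete, the sketch below sets up a tiny instance with convex quadratic arc costs $C_{ij}(f_{ij}) = a_{ij} f_{ij}^2 + b_{ij} f_{ij}$ and solves it numerically with a general-purpose solver. This is a minimal illustration, not the strongly polynomial algorithm of the paper, and the network, coefficients and demands are arbitrary assumptions.

```python
# Minimal numerical sketch (not the paper's strongly polynomial algorithm):
# a tiny minimum-cost flow instance with convex quadratic arc costs
# C_ij(f_ij) = a_ij * f_ij^2 + b_ij * f_ij, solved by a generic solver.
import numpy as np
from scipy.optimize import minimize

# Directed graph on nodes {0, 1, 2, 3}: arcs and their cost coefficients.
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
a = np.array([1.0, 2.0, 1.0, 3.0, 1.0])   # quadratic coefficients
b = np.array([0.0, 1.0, 0.0, 2.0, 1.0])   # linear coefficients

# Node balances: node 0 supplies 4 units, node 3 demands 4 units.
balance = np.array([4.0, 0.0, 0.0, -4.0])

# Incidence matrix A: A @ f = balance enforces flow conservation.
A = np.zeros((4, len(arcs)))
for k, (u, v) in enumerate(arcs):
    A[u, k] = 1.0
    A[v, k] = -1.0

cost = lambda f: float(a @ f**2 + b @ f)
res = minimize(cost, x0=np.ones(len(arcs)),
               constraints=[{"type": "eq", "fun": lambda f: A @ f - balance}],
               bounds=[(0.0, None)] * len(arcs), method="SLSQP")

print("optimal flow:", np.round(res.x, 3))
print("optimal cost:", round(res.fun, 3))
```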

    Cleaning large correlation matrices: tools from random matrix theory

    This review covers recent results concerning the estimation of large covariance matrices using tools from Random Matrix Theory (RMT). We introduce several RMT methods and analytical techniques, such as the Replica formalism and Free Probability, with an emphasis on the Marchenko-Pastur equation that provides information on the resolvent of multiplicatively corrupted noisy matrices. Special care is devoted to the statistics of the eigenvectors of the empirical correlation matrix, which turn out to be crucial for many applications. We show in particular how these results can be used to build consistent "Rotationally Invariant" estimators (RIE) for large correlation matrices when there is no prior on the structure of the underlying process. The last part of this review is dedicated to some real-world applications within financial markets as a case in point. We establish empirically the efficacy of the RIE framework, which is found to be superior in this case to all previously proposed methods. The case of additively (rather than multiplicatively) corrupted noisy matrices is also dealt with in a special Appendix. Several open problems and interesting technical developments are discussed throughout the paper. Comment: 165 pages, article submitted to Physics Reports.
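    A much simpler cleaning scheme than the rotationally invariant estimators developed in the review is classical eigenvalue "clipping": eigenvalues of the sample correlation matrix below the Marchenko-Pastur edge (1 + sqrt(q))^2, with q = N/T, are treated as noise and replaced by a common value that preserves the trace. The sketch below applies this recipe to a pure-noise sample correlation matrix; the sample dimensions are arbitrary and the code only illustrates the bulk/outlier distinction, not the RIE construction.

```python
# Minimal sketch of eigenvalue "clipping" (a simpler cleaning scheme than
# the rotationally invariant estimators discussed in the review).
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 200                      # T observations of N variables
q = N / T

X = rng.standard_normal((T, N))       # pure-noise data
C = np.corrcoef(X, rowvar=False)      # sample correlation matrix (N x N)

eigval, eigvec = np.linalg.eigh(C)
edge = (1.0 + np.sqrt(q)) ** 2        # upper edge of the Marchenko-Pastur bulk

# Eigenvalues inside the bulk are treated as noise and flattened to their
# mean, which preserves the trace; data are pure noise here, so essentially
# all eigenvalues fall below the edge.
noise = eigval < edge
cleaned = eigval.copy()
cleaned[noise] = eigval[noise].mean()
C_clean = eigvec @ np.diag(cleaned) @ eigvec.T

print(f"{noise.sum()} of {N} eigenvalues treated as noise")
print("trace preserved:", np.isclose(np.trace(C_clean), np.trace(C)))
```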

    Recognising the Suzuki groups in their natural representations

    Under the assumption of a certain conjecture, for which there exists strong experimental evidence, we produce an efficient algorithm for constructive membership testing in the Suzuki groups Sz(q), where q = 2^{2m + 1} for some m > 0, in their natural representations of degree 4. It is a Las Vegas algorithm with running time O(log(q)) field operations, and a preprocessing step with running time O(log(q) loglog(q)) field operations. The latter step needs an oracle for the discrete logarithm problem in GF(q). We also produce a recognition algorithm for Sz(q) = <X>, where X is a generating set. This is a Las Vegas algorithm with running time O(|X|^2) field operations. Finally, we give a Las Vegas algorithm that, given <X>^h = Sz(q) for some h in GL(4, q), finds some g such that <X>^g = Sz(q). The running time is O(log(q) loglog(q) + |X|) field operations. Implementations of the algorithms are available for the computer algebra system MAGMA.

    “Thus Far the Words of Jeremiah” But who gets the last word?

    I'll never forget the first time a movie star talked to me. At the end of his television show, Roy Rogers looked right into the camera and sang to me, "Happy trails to you, until we meet again." A similar thing happened to my children when Mister Rogers smiled into the camera and reassured them, "I like you just the way you are." These moments stand out in our memory because it is so odd, even jarring, when an actor or a storyteller steps outside the world of the story, as it were, and enters our own. Sometimes it becomes clear that there are actually three worlds involved: the world of the viewer, the world of the story and the world of the actor. This becomes apparent whenever actors look into the camera and take off their wigs, revealing the distance between themselves and the story.

    Recognising the small Ree groups in their natural representations

    We present Las Vegas algorithms for constructive recognition and constructive membership testing of the Ree groups ^2G_2(q) = Ree(q), where q = 3^{2m + 1} for some m > 0, in their natural representations of degree 7. The input is a generating set X. The constructive recognition algorithm is polynomial time given a discrete logarithm oracle. The constructive membership testing consists of a pre-processing step, which only needs to be executed once for a given X, and a main step. The latter is polynomial time, and the former is polynomial time given a discrete logarithm oracle. Implementations of the algorithms are available for the computer algebra system MAGMA.