
    Innovative observing strategy and orbit determination for Low Earth Orbit Space Debris

    We present the results of a large-scale simulation reproducing the behavior of a data center for the build-up and maintenance of a complete catalog of space debris in the upper part of the low Earth orbit (LEO) region. The purpose is to assess the performance of a network of advanced optical sensors, using the newest orbit determination algorithms developed by the Department of Mathematics of Pisa (DM). Such a network has been proposed to ESA in the Space Situational Awareness (SSA) framework by Carlo Gavazzi Space SpA (CGS), Istituto Nazionale di Astrofisica (INAF), DM, and Istituto di Scienza e Tecnologie dell'Informazione (ISTI-CNR). The conclusion is that it is possible to use a network of optical sensors to build up a catalog containing more than 98% of the objects with perigee height between 1100 and 2000 km that would be observable by a reference radar system selected for comparison. It is also possible to maintain such a catalog within the accuracy requirements motivated by collision avoidance, and to detect catastrophic fragmentation events. However, these results depend upon specific assumptions about the sensors and the software technologies.

    Multiple solutions for asteroid orbits: Computational procedure and applications

    We describe the Multiple Solutions Method, a one-dimensional sampling of the six-dimensional orbital confidence region that is widely applicable in the field of asteroid orbit determination. In many situations there is one predominant direction of uncertainty in an orbit determination or orbital prediction, i.e., a "weak" direction. The idea is to record multiple solutions by following this, typically curved, weak direction, or Line Of Variations (LOV). In this paper we describe the method and give new insights into the mathematics behind this tool. We pay particular attention to the problem of how to ensure that the coordinate systems are properly scaled, so that the weak direction really reflects the intrinsic direction of greatest uncertainty. We also describe how the multiple solutions can be used even in the absence of a nominal orbit solution, which substantially broadens the realm of applications. There are numerous applications for multiple solutions; we discuss a few problems in asteroid orbit determination and prediction where we have had good success with the method. In particular, we show that multiple solutions can be used effectively for potential impact monitoring, preliminary orbit determination, asteroid identification, and the recovery of lost asteroids.
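    In a linear approximation, the "weak direction" of the confidence region is the eigenvector of the orbit covariance matrix with the largest eigenvalue. The following sketch (not the authors' implementation; the true LOV is curved, and all numerical values are illustrative) samples equally spaced solutions along that direction:

    ```python
    import numpy as np

    def weak_direction_samples(x_nominal, covariance, sigma_max=3.0, n=7):
        """Return n equally spaced sample solutions along the weak direction,
        covering [-sigma_max, +sigma_max] sigma. Linear approximation only:
        the true Line Of Variations is generally curved."""
        eigvals, eigvecs = np.linalg.eigh(covariance)  # eigenvalues ascending
        v = eigvecs[:, -1]            # eigenvector of the largest eigenvalue
        step = np.sqrt(eigvals[-1])   # 1-sigma length along that direction
        sigmas = np.linspace(-sigma_max, sigma_max, n)
        return [x_nominal + s * step * v for s in sigmas]

    # Toy example in 2D for readability; real orbit solutions are 6-dimensional.
    # The first coordinate is far more uncertain than the second.
    cov = np.array([[4.0, 0.0],
                    [0.0, 0.01]])
    samples = weak_direction_samples(np.zeros(2), cov)
    # The middle sample coincides with the nominal solution.
    ```

    As the abstract notes, proper scaling of the coordinates matters: the largest eigenvalue of a poorly scaled covariance need not correspond to the physically weakest direction.
    
    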

    Orbit determination of space objects based on sparse optical data

    While building up a catalog of Earth-orbiting objects, if the available optical observations are sparse rather than deliberate follow-ups of specific objects, no orbit determination is possible without first correlating observations obtained at different times. This correlation step is the most computationally intensive, and it becomes increasingly difficult as the number of objects to be discovered grows. In this paper we test two different algorithms (and the related prototype software) recently developed to solve the correlation problem for objects in geostationary orbit (GEO), including accurate orbit determination by full least-squares solutions with all six orbital elements. Because the GEO region contains a significant subpopulation of high area-to-mass objects, which are strongly affected by non-gravitational perturbations, it was also necessary to solve for dynamical parameters describing these effects, that is, to fit between 6 and 8 free parameters for each orbit. The validation was based upon a set of real data acquired with the ESA Space Debris Telescope (ESASDT) at the Teide observatory (Canary Islands). We show that it is possible to assemble a set of sparse observations into a set of objects with orbits, starting from a time distribution of observations compatible with a survey covering the region of interest in the sky just once per night. This could significantly reduce the requirements for a future telescope network, with respect to what would have been required with the previously known algorithms for correlation and orbit determination.

    Orbit Determination with the two-body Integrals

    We investigate a method to compute a finite set of preliminary orbits for solar system bodies using the first integrals of the Kepler problem. This method is intended for application to modern sets of astrometric observations, where often the information contained in the observations allows one to compute, by interpolation, only two angular positions of the observed body and their time derivatives at a given epoch; we call this set of data an attributable. Given two attributables of the same body at two different epochs, we can use the energy and angular momentum integrals of the two-body problem to write a system of polynomial equations for the topocentric distance and the radial velocity at the two epochs. We define two different algorithms for the computation of the solutions, based on different ways to perform elimination of variables and obtain a univariate polynomial. Moreover, we use the redundancy of the data to test the hypothesis that two attributables belong to the same body (the linkage problem). It is also possible to compute a covariance matrix describing the uncertainty of the preliminary orbits which results from the observation error statistics. The performance of this method has been investigated using a large set of simulated observations of the Pan-STARRS project.
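    In standard two-body notation (not necessarily the paper's exact symbols), the integrals in question are the angular momentum and energy of the geocentric orbit:

    ```latex
    % Angular momentum and energy integrals of the two-body problem,
    % for geocentric position \mathbf{r} and velocity \dot{\mathbf{r}}:
    \mathbf{c} = \mathbf{r} \times \dot{\mathbf{r}}, \qquad
    \mathcal{E} = \tfrac{1}{2}\,|\dot{\mathbf{r}}|^{2} - \frac{\mu}{|\mathbf{r}|}.
    % Writing r = q + \rho\,\hat{e} (observer position q, unit line of
    % sight \hat{e}, topocentric distance \rho), an attributable fixes
    % everything except (\rho, \dot{\rho}) at its epoch. Imposing
    % c_1 = c_2 and \mathcal{E}_1 = \mathcal{E}_2 at the two epochs
    % gives polynomial equations in (\rho_1, \dot{\rho}_1, \rho_2, \dot{\rho}_2).
    ```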

    Homogenized model for herringbone bond masonry: linear elastic and limit analysis

    A kinematic procedure to obtain in-plane elastic moduli and macroscopic masonry strength domains in the case of herringbone masonry is presented. The model is constituted by two central bricks interacting with their neighbors by means of either elastic or rigid-plastic interfaces with friction, representing mortar joints. A sub-class of possible elementary deformations is chosen a priori to describe joint cracking under in-plane loads. Suitable internal macroscopic actions are applied on the Representative Element of Volume (REV), and the power expended within the 3D brick assemblage is equated to that expended in the macroscopic 2D Cauchy continuum. The elastic and limit analysis problems at the cell level are solved by means of a quadratic and a linear programming approach, respectively. When dealing with the limit analysis approach, several computations are performed investigating the role played by (1) the direction of the load with respect to the herringbone bond pattern inclination and (2) the masonry texture.
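    The power equivalence invoked above is, in its generic form, the classical Hill-Mandel macro-homogeneity condition; in a notation that is not necessarily the paper's:

    ```latex
    % Macroscopic stress power equals the average microscopic power over
    % the Representative Element of Volume (REV) of volume |V|:
    \boldsymbol{\Sigma} : \dot{\mathbf{E}}
      = \frac{1}{|V|} \int_{V} \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}} \, dV
    % In a rigid-brick model the microscopic power is expended only at the
    % mortar interfaces, so the volume integral reduces to a sum of interface
    % integrals of traction times displacement-rate jump.
    ```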

    Efficiency of a wide-area survey in achieving short- and long-term warning for small impactors

    We consider a network of telescopes capable of scanning all the observable sky each night, targeting near-Earth objects (NEOs) in the size range of Tunguska-like asteroids, from 160 m down to 10 m. We measure the performance of this telescope network in terms of the time needed to discover at least 50% of the impactors in the considered population with a warning time large enough to undertake proper mitigation actions. The warning times are described by a trimodal distribution, and the telescope network has a 50% probability of discovering an impactor of the Tunguska class at least one week in advance already within the first 10 yr of operations of the survey. These results suggest that the studied survey would be a significant addition to the current NEO discovery efforts.

    Does Global Slack Matter More than Domestic Slack in Determining U.S. Inflation?

    This paper employs a structural model to estimate whether the global output gap has become an important determinant of U.S. inflation dynamics. The results provide support for the relevance of global slack as a determinant of U.S. inflation after 1985. The role of the domestic output gap, instead, seems to have diminished over time.

    Keywords: globalization; global slack; inflation dynamics; Phillips curve; Bayesian estimation.

    Political Business Cycles in the New Keynesian Model

    This paper tests various Political Business Cycle theories in a New Keynesian model with a monetary and fiscal policy mix. All the policy coefficients, the target levels of inflation and the budget deficit, the firms' frequency of price setting, and the standard deviations of the structural shocks are allowed to depend on 'political' regimes: a pre-election vs. post-election regime; a regime that depends on whether the President (or the Fed Chairman) is a Democrat or a Republican; and a regime under which the President and the Fed Chairman do or do not share party affiliation in pre-election quarters. The model is estimated using full-information Bayesian methods. The assumption of rational expectations is relaxed: economic agents can learn about the effect of political variables over time. The results provide evidence that several coefficients depend on political variables. The best-fitting specification is one that allows coefficients to depend on a pre-election vs. non-election regime. Monetary policy becomes considerably more inertial before elections, and fiscal policy deviations from a simple rule are more common. The results overall support the view of an independent Fed that avoids taking policy decisions right before elections. There is some evidence, however, that policies become more expansionary before elections, but this evidence seems to disappear in the post-1985 sample. The estimates also indicate that firms similarly delay their price-setting decisions until after the upcoming Presidential election.

    Keywords: political business cycles; opportunistic cycles; partisan cycles; monetary and fiscal policy; adaptive learning; Bayesian estimation.

    Expectations, Learning and Macroeconomic Persistence

    This paper presents an estimated model with learning and provides evidence that learning can improve the fit of popular monetary DSGE models and endogenously generate realistic levels of persistence. The paper starts with an agnostic view, developing a model that nests learning and some of the structural sources of persistence, such as habit formation in consumption and inflation indexation, that are typically needed in monetary models with rational expectations to match the persistence of macroeconomic variables. I estimate the model by likelihood-based Bayesian methods, which allow the estimation of the learning gain coefficient jointly with the "deep" parameters of the economy. The empirical results show that when learning replaces rational expectations, the estimated degrees of habits and indexation drop near zero. This finding suggests that persistence arises in the model economy mainly from expectations and learning. The posterior model probabilities show that the specification with learning fits significantly better than does the specification with rational expectations. Finally, if learning rather than mechanical sources of persistence provides a more appropriate representation of the economy, the implied optimal policy will be different. The policymaker will also incur substantial costs from misspecifying private expectations formation.

    Keywords: persistence; constant-gain learning; expectations; habit formation in consumption; inflation inertia; Phillips curve; Bayesian econometrics; New Keynesian model.
