11 research outputs found

    Markovian Processes, Two-Sided Autoregressions and Finite-Sample Inference for Stationary and Nonstationary Autoregressive Processes

    In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
    Keywords: time series, Markov process, autoregressive process, autocorrelation, dynamic model, distributed-lag model, two-sided autoregression, intercalary independence, exact test, finite-sample test, Ogawara-Hannan, investment
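    As a minimal illustration of the two-sided autoregression form (not the paper's full split-sample inference procedure), the sketch below simulates a Gaussian AR(1) with hypothetical parameters and regresses the odd-indexed observations on their two neighbours; for a stationary AR(1), the conditional mean of y_t given y_{t-1} and y_{t+1} puts weight phi/(1+phi^2) on each neighbour.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulate a Gaussian AR(1): y_t = phi * y_{t-1} + eps_t (hypothetical parameters).
        T, phi = 200, 0.7
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi * y[t - 1] + rng.normal()

        # Two-sided autoregression: condition each odd-indexed y_t on its neighbours
        # y_{t-1} and y_{t+1}; by intercalary independence of a Markov process, the
        # odd-indexed observations are conditionally independent given the even ones.
        idx = np.arange(1, T - 1, 2)
        X = np.column_stack([y[idx - 1], y[idx + 1]])
        coef, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        print(coef)   # both coefficients close to phi / (1 + phi**2)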

    An Exhaustive Study of Particular Cases Leading to Robust and Accurate Motion Estimation

    For decades, there has been an intensive research effort in the Computer Vision community to deal with video sequences. In this paper, we present a new method for recovering a maximum of information on displacement and projection parameters in monocular video sequences without calibration. This work follows previous studies on particular cases of displacement, scene geometry and camera analysis, and focuses on the particular forms of homographic matrices. It is already known that the number of particular cases involved in a complete study precludes an exhaustive test. To lower the algorithmic complexity, some authors propose to decompose all possible cases into a hierarchical tree data structure, but these works are still in development [26]. In this paper, we propose a new way to deal with the huge number of particular cases: (i) we use simple rules to eliminate redundant cases and physically impossible cases, and (ii) we divide the cases into subsets corresponding to particular forms determined by simple rules, leading to a computationally efficient discrimination method. Finally, experiments were performed on image sequences acquired either with a robotic system or manually, in order to demonstrate that when several models are valid, the model with the fewest parameters gives the best estimate of the free parameters of the problem. The experiments presented in this paper show that even if the selected case is an approximation of reality, the method remains robust.

    Recent Developments in Cointegration

    It is well known that inference on the cointegrating relations in a vector autoregression (CVAR) is difficult in the presence of a near unit root. The test for a given cointegration vector can have rejection probabilities under the null that vary from the nominal size to more than 90%. This paper formulates a CVAR model allowing for multiple near unit roots and analyses the asymptotic properties of the Gaussian maximum likelihood estimator. Two critical value adjustments suggested by McCloskey (2017) for the test on the cointegrating relations are then implemented for the model with a single near unit root, and it is found by simulation that they eliminate the serious size distortions while retaining reasonable power for moderate values of the near-unit-root parameter. The findings are illustrated with an analysis of a number of different bivariate DGPs.
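    As an illustration of the setting only, the sketch below simulates one common way of constructing a "nearly cointegrated" bivariate DGP with a local-to-unity root rho = 1 - c/T; all parameter values are hypothetical choices and this is not one of the paper's designs.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical DGP: x_t is a random walk; the "equilibrium error" z_t = y_t - beta*x_t
        # follows an AR(1) with near-unit root rho = 1 - c/T (local to unity).
        T, c, beta = 250, 5.0, 1.0
        rho = 1.0 - c / T
        x = np.cumsum(rng.normal(size=T))
        z = np.zeros(T)
        for t in range(1, T):
            z[t] = rho * z[t - 1] + rng.normal()
        y = beta * x + z

        # As c -> 0 the equilibrium error approaches a unit-root process, which is the
        # situation in which tests on the cointegrating vector become badly sized.
        zhat = y - beta * x
        phi_hat = np.polyfit(zhat[:-1], zhat[1:], 1)[0]   # OLS AR(1) slope, close to rho
        print(rho, phi_hat)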

    Project Scheduling Disputes: Expert Characterization and Estimate Aggregation

    Project schedule estimation continues to be a tricky endeavor. Stakeholders bring a wealth of experience to each project, but also biases which could affect their final estimates. This research proposes to study differences among stakeholders and develop a method to aggregate multiple estimates into a single estimate a project manager can defend. Chapter 1 provides an overview of the problem. Chapter 2 summarizes the literature on historical scheduling issues, scheduling best practices, decision analysis, and expert aggregation. Chapter 3 describes data collection and processing, while Chapter 4 provides the results. Chapter 5 provides a discussion of the results, and Chapter 6 provides a summary and recommendations for future work. The research consists of two major parts. The first part categorizes project stakeholders by three major demographics: "position", "years of experience", and "level of formal education". Subjects were asked to answer several questions on risk aversion, project constraints, and general opinions on scheduling struggles. Using Design of Experiments (DOE), responses were compared across the demographics to determine whether certain attitudes were concentrated within certain demographics. Subjects were then asked to provide activity duration and confidence estimates across several projects, as well as opinions on the activity list itself. DOE and Bernoulli trials were used to determine whether subjects in different demographics estimated differently from one another. Correlation coefficients among various responses were then calculated to determine whether certain attitudes affected activity duration estimates. The second part of this research dealt primarily with aggregation of opinions on activity durations. The current methodology uses the Program Evaluation and Review Technique (PERT), calculating the expected value and variance of an activity duration from three inputs and assuming the unknown duration follows a Beta distribution. This research proposes a methodology using Morris' Bayesian belief-updating methods and unbounded distributions to aggregate multiple expert opinions. Using the same three baseline estimates, this methodology combines multiple opinions into one expected value and variance which can then be used in a network schedule. This aggregated value represents the combined knowledge of the project stakeholders, which helps mitigate biases ingrained in a single expert's opinion.
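    For reference, the PERT baseline that the proposed aggregation is compared against reduces to the classical three-point formulas sketched below; this is only the textbook calculation, not the Morris-style Bayesian aggregation developed in the thesis.

        # Classical PERT three-point estimate: a = optimistic, m = most likely, b = pessimistic.
        # Assumes the activity duration follows a (scaled) Beta distribution, as noted above.
        def pert_moments(a: float, m: float, b: float) -> tuple[float, float]:
            mean = (a + 4 * m + b) / 6        # expected activity duration
            var = ((b - a) / 6) ** 2          # variance of the activity duration
            return mean, var

        print(pert_moments(3, 5, 10))         # -> (5.5, 1.3611...)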

    An Approach to the Specification and Generation of Production Processes Based on Model-Driven Engineering

    In this thesis, we present an approach to production process specification and generation based on the model-driven paradigm, with the goal of increasing the flexibility of factories and responding more efficiently to the challenges that have emerged in the era of Industry 4.0. To formally specify production processes and their variations in the Industry 4.0 environment, we created a novel domain-specific modeling language whose models are machine-readable. The language can be used to model production processes that are independent of any particular production system, enabling process models to be reused across different production systems, as well as process models tailored to a specific production system. To automatically transform production process models that depend on a specific production system into instructions to be executed by production system resources, we created an instruction generator. We also created generators for manufacturing documentation, which automatically transform production process models into manufacturing documents of different types. The proposed approach, domain-specific modeling language, and software solution contribute to introducing factories into the digital transformation process. As factories must rapidly adapt to new products and their variations in the era of Industry 4.0, production must be led dynamically and instructions must be sent automatically to factory resources, depending on the products to be created on the shop floor. The proposed approach contributes to creating such a dynamic environment in contemporary factories, as it allows instructions to be generated automatically from process models and sent to resources for execution. Additionally, since there are numerous different products and product variations, keeping the required manufacturing documentation up to date becomes challenging; the proposed approach automates this task and thus significantly reduces process designers' time.

    Statistical inference for intensity-based load sharing models with damage accumulation

    Consider a system in which a load exerted on it is shared equally among its components. Whenever one component fails, the total load is redistributed across the surviving components. This in turn increases the individual load applied to each of these components and therefore their risk of failure. Such a system is called a load sharing system. In a load sharing system, the failure rate of a surviving component grows with the number of failed components. However, the risk of failure is likely to also depend on how long the surviving components have been exposed to the shared load. This accumulation of damage within the system causes a continuous increase in the failure rate between consecutive component failures. This thesis deals with statistical inference for load sharing systems with damage accumulation that can be modelled in terms of the component failure rate. We identify the component failure rate as the stochastic intensity of a counting process, for which a parametric model can be specified: an intensity-based load sharing model with damage accumulation. The first method of inference is the minimum distance estimator introduced by Kopperschmidt and Stute. They claim strong consistency and asymptotic normality of this estimator, but we demonstrate that their proof of the asymptotic distribution is flawed. Our first important contribution is a corrected proof under slightly adjusted requirements. The second method of inference is based on the K-sign depth test, a powerful and robust generalization of the classical sign test that has so far mostly been applied to the residuals of a linear model. We present a procedure to obtain a "residual" counterpart in an intensity-based model via the hazard transformation of a point process. Moreover, we derive conditions on the model under which the 3-sign depth test is consistent. The thesis closes by comparing these two methods with the established likelihood approach. To this end, we verify the applicability of the competing methods to the Basquin load sharing model with multiplicative damage accumulation recently proposed by Müller and Meyer. In a final simulation study, we assess the robustness of the methods in the presence of contaminated data. This study confirms that, in contrast to the other two approaches, the 3-sign depth test offers a powerful and robust tool for statistical inference in intensity-based load sharing models with damage accumulation.
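    As an illustration of the hazard (random time change) transformation mentioned above: transforming event times by the fitted cumulative intensity yields increments that are i.i.d. Exp(1) under the true model, and their signs around the Exp(1) median log(2) are the kind of input a sign-depth test works with. The sketch below uses a hypothetical power-law intensity purely for illustration; it is not the thesis' Basquin load sharing model.

        import numpy as np

        def cumulative_intensity(t, theta):
            # Hypothetical parametric cumulative intensity Lambda(t; theta);
            # a simple power-law form stands in for a fitted load sharing model.
            a, b = theta
            return a * np.asarray(t, dtype=float) ** b

        def hazard_residuals(event_times, theta):
            """Time-transform event times by Lambda(.; theta).
            Under the true model the increments are i.i.d. Exp(1); centring at the
            Exp(1) median log(2) gives residual signs of +/- each with probability 1/2."""
            lam = cumulative_intensity(event_times, theta)
            increments = np.diff(np.concatenate(([0.0], lam)))
            return increments - np.log(2.0)

        res = hazard_residuals([0.4, 1.1, 1.9, 3.2, 4.0], theta=(1.0, 1.2))
        print(np.sign(res))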

    Expectations : some econometric aspects

    To express an economic model including expectations in a manageable form, econometricians frequently replace expectations with a functional form resulting from a hypothesis as to their formation. The most popular expectations hypotheses are overviewed in Chapter 2. Survey data, if available and suitable, are an alternative to expectations hypotheses for representing expectations. How to ascertain the suitability of a survey series, and of an expectations hypothesis, is discussed throughout the thesis. Identification of models involving expectations, conditional on the assumed expectations hypothesis, is examined by introducing a third class of economic variable, expectations, which enables the contribution of expectations to (or subtraction from) identifiability to be assessed. The naive, adaptive and extrapolative hypotheses in general do not affect the identifiability of a model that is identified when expectations are assumed to be exogenous. The distributed lag and weakly rational models of expectations require additional assumptions to enable a model including them to be identified. These findings are elaborated in Chapter 3. It is demonstrated that whether or not current fully rational expectations models are identified may depend upon the assumptions made regarding the information set utilized in forming the rational expectation. Conditions for identifiability given Wickens' assumptions are developed and compared with those derived by Wallis given his assumptions; the conditions are found to differ. Efficient conditional estimation of expectations models is discussed. FIML methods are appropriate in all instances; in general, nonlinear cross-equation restrictions are involved when any hypothesis (except, of course, the naive) is assumed true. Consistent estimation methods for current and future rational expectations models which have been suggested in the literature are reviewed. A generalized time series approach producing consistent estimates of future rational expectations models is developed. As identification and estimation of expectations models are conditional upon the assumed hypothesis being true, it is desirable to test the validity of this assumption. A model selection procedure is advanced as an improvement on methods which isolate and test a single hypothesis. Whether or not survey data are available, a model selection procedure is appropriate. The model selection procedure is also proposed to select an appropriate series when a number are available. All procedures extend to include a number of structural models. An example of the model selection procedure to choose a suitable series and structural model simultaneously is presented. Many and varied survey series are required, but these series should be quantitative, not qualitative. Qualitative series need to be transformed to quantitative series for use in econometric models. The behavioural models underlying transformation methods may not be representative of the survey group; a number of plausible behavioural models not consistent with the transformation methods are discussed in Chapter 5. The often-used Carlson-Parkin transformation method assumes that the distribution of the variable concerned, across respondents, is normal. This assumption is tested in Chapter 6, with respect to expected inflation, and found to be invalid. Data obtained by such a transformation method should be carefully considered before being employed.
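    For context, the Carlson-Parkin quantification tested in Chapter 6 converts the survey shares expecting a rise or a fall into a mean and standard deviation of expected inflation under the normality assumption. The sketch below is only this textbook conversion; the indifference threshold delta and the survey shares are hypothetical inputs.

        from scipy.stats import norm

        def carlson_parkin(share_up: float, share_down: float, delta: float) -> tuple[float, float]:
            """Quantify qualitative survey responses, assuming expectations are normally
            distributed across respondents and 'no change' answers lie within +/- delta."""
            a = norm.ppf(share_down)          # (-delta - mu) / sigma
            b = norm.ppf(1.0 - share_up)      # ( delta - mu) / sigma
            sigma = 2.0 * delta / (b - a)
            mu = -delta * (a + b) / (b - a)
            return mu, sigma

        # e.g. 45% expect a rise, 15% expect a fall, indifference threshold of 0.5 points
        print(carlson_parkin(share_up=0.45, share_down=0.15, delta=0.5))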

    Dark matter search in the top-quark sector with the ATLAS detector at the LHC

    Astronomical and cosmological observations support the existence of invisible matter that can only be detected through its gravitational effects, thus making it very difficult to study. This component, called dark matter, makes up about 26.8% of the known universe. Experiments at the LHC, located at CERN, search for new particles that could be dark matter candidates. A dark matter production signature can consist of an excess of events with a single final-state object X recoiling against a large amount of missing transverse momentum, called a mono-X signal. The studies presented in this thesis focus on the mono-X signature with X being a top quark, named mono-top. The topology in which the W boson from the associated top quark decays into a lepton (an electron or a muon) and a neutrino is studied. Firstly, a sensitivity search for dark matter production in an extension of the Standard Model featuring a two-Higgs-doublet model and an additional pseudo-scalar is presented. The pseudo-scalar is the mediator which decays to the dark matter particles. This analysis uses all the data collected by the ATLAS experiment at the LHC during Run 2 (2015-2018), corresponding to an integrated luminosity of 139 fb^{-1} at a centre-of-mass energy of 13 TeV. A multivariate analysis based on a boosted decision tree is performed in order to enhance the discrimination of signal events from the main background. The results are expressed as 95% confidence level limits on the parameters of the signal models considered. No significant excess is found with respect to Standard Model predictions. The region below m_{H^±} = 800 GeV (1100 GeV) with tan β = 0.3 is excluded at the 95% confidence level by the observed (expected) limit. Furthermore, the prospect of a potential discovery of the non-resonant production of an exotic state decaying into a pair of dark matter candidates in association with a right-handed top quark, in the context of an effective dark matter flavour-changing neutral interaction at the HL-LHC, is also presented. The HL-LHC project is expected to operate at a centre-of-mass energy of 14 TeV, aiming to provide a total integrated luminosity of 3000 fb^{-1}. The number of signal and background events is estimated from simulated particle-level truth information after applying smearing functions to mimic an upgraded ATLAS detector response in the HL-LHC environment. The expected exclusion limit (discovery reach) at 95% confidence level for the mass of the exotic state is calculated to be 4.6 TeV (4.0 TeV), using a multivariate analysis based on a boosted decision tree.