Improvements in medical care and technology and reductions in traffic-related fatalities in Great Britain
Traffic-related fatalities in the UK have fallen dramatically, by about 50%, over the last 30 years. Similar rates of decline have been observed in many other developed countries. Many factors have been associated with this decline, including safer vehicle design, increased seat-belt use, changing demographics, and improved infrastructure. One factor not normally considered is the role that improved medical technology may play in reducing total traffic-related fatalities. This study analyzed cross-sectional time-series data for the UK to examine this relationship. Various proxies for medical technology improvement were included in a fixed-effects negative binomial model to assess whether they are associated with reductions in traffic-related fatalities. Demographic variables, such as age cohorts, GDP, and changes in per-capita income, are also included. The statistical methods employed control for heterogeneity in the data, so other factors that may affect the dependent variable, but for which data are not available, need not be considered explicitly. Results suggest a strong relationship between improved medical technology and reductions in traffic-related fatalities, as well as the expected relationships with demographic factors. These results imply that continued reductions in UK fatalities may be harder to achieve if medical technology improvements are slowing; demographic changes, however, will likely contribute to a further downward trend.
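The abstract's core tool, a negative binomial regression for overdispersed count data, can be sketched as follows. This is an illustrative toy, not the paper's model: the data, the single "medical technology proxy" covariate, and all parameter values are invented for the example, and the likelihood is maximized directly with SciPy rather than a fixed-effects panel estimator.

```python
# Illustrative sketch (NOT the paper's model): a negative binomial
# regression of annual fatality counts on a hypothetical medical
# technology proxy, fit by direct maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 200
tech = rng.normal(size=n)              # hypothetical proxy, standardized
mu_true = np.exp(3.0 - 0.5 * tech)     # fatalities fall as the proxy improves
alpha_true = 0.3                       # overdispersion parameter
# A gamma-Poisson mixture draw is a negative binomial draw
lam = rng.gamma(shape=1 / alpha_true, scale=alpha_true * mu_true)
y = rng.poisson(lam)

def negloglik(params):
    b0, b1, log_alpha = params
    mu = np.exp(b0 + b1 * tech)
    r = 1.0 / np.exp(log_alpha)        # NB "size" parameter
    p = r / (r + mu)
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(p) + y * np.log1p(-p))
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000, "maxfev": 4000})
b0_hat, b1_hat, _ = res.x
print(round(b1_hat, 2))   # expected negative: fatalities fall as the proxy improves
```

In a real application one would use a panel estimator with year and region fixed effects, as the abstract describes; the point here is only the gamma-Poisson likelihood.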
Identifying the period of a step change in high-yield processes
Quality control charts have proven very effective in detecting out-of-control states. When a signal is detected, a search begins to identify and eliminate its source(s). A critical issue for the process engineer at this point is determining when the process first changed; knowing this helps focus efforts on eliminating the source(s) of the signal. The time at which a change in the process takes place is referred to as the change point. This paper provides an estimator for the period in which a step change in the process non-conformity proportion occurs in high-yield processes. In such processes, the number of items inspected until the occurrence of the first non-conforming item can be modeled by a geometric distribution. The performance of the proposed model is investigated through several numerical examples. The results indicate that the proposed estimator provides a reasonable estimate of the period in which the step change in the process non-conformity level occurred. Copyright © 2009 John Wiley & Sons, Ltd.
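The geometric-distribution setup above can be illustrated with a simple change-point estimator. This is a minimal sketch, not the paper's estimator: it assumes the in-control proportion p0 is known, profiles the likelihood over candidate change points, and uses the post-change sample mean as the MLE of the shifted proportion.

```python
# Hedged sketch (not the paper's estimator): maximum likelihood change
# point for a step change in the non-conformity proportion p, where run
# lengths between non-conforming items are geometric with mean 1/p.
import numpy as np

rng = np.random.default_rng(1)
p0, p1, tau, n = 0.001, 0.005, 60, 100   # invented example values
x = np.concatenate([rng.geometric(p0, tau), rng.geometric(p1, n - tau)])

def geom_loglik(x, p):
    x = np.asarray(x, dtype=float)
    return np.sum((x - 1) * np.log1p(-p) + np.log(p))

def mle_change_point(x, p0):
    n = len(x)
    best_t, best_ll = 0, -np.inf
    for t in range(n - 1):              # candidate change after sample t
        post = x[t:]
        p1_hat = 1.0 / post.mean()      # MLE of p after the change
        ll = geom_loglik(x[:t], p0) + geom_loglik(post, p1_hat)
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t

cp = mle_change_point(x, p0)
print(cp)   # estimate of the change point (true value: 60)
```

With a five-fold shift in p, the profile likelihood typically localizes the change to within a handful of samples of the true point.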
Change-Point Methods for Overdispersed Count Data
A control chart is often used to detect a change in a process. Following a control chart signal, knowledge of the time and magnitude of the change would simplify the search for and identification of the assignable cause. In this research, emphasis is placed on count processes where overdispersion has occurred. Overdispersion is common in practice and occurs when the observed variance is larger than the theoretical variance of the assumed model. Although the Poisson model is often used to model count data, the two-parameter gamma-Poisson mixture parameterization of the negative binomial distribution is often a more adequate model for overdispersed count data. In this research effort, maximum likelihood estimators for the time of a step change in each of the parameters of the gamma-Poisson mixture model are derived. Monte Carlo simulation is used to evaluate the root mean square error performance of these estimators to determine their utility in estimating the change point following a control chart signal. Results show that the estimators provide process engineers with accurate and useful estimates of the time of step change. In addition, an approach for estimating a confidence set for the process change point is presented.
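The Monte Carlo RMSE evaluation described above can be sketched for a simplified estimator. This is not the authors' derivation: for brevity the overdispersion parameter is held fixed and the candidate change points are scored with a Poisson-style quasi-likelihood in the mean, rather than the full gamma-Poisson likelihood the paper derives.

```python
# Hedged sketch: Monte Carlo root-mean-square-error evaluation of a
# simplified change-point estimator for a step change in the mean of
# gamma-Poisson (negative binomial) counts. All values are invented.
import numpy as np

def simulate(rng, n, tau, mu0, mu1, alpha):
    mu = np.where(np.arange(n) < tau, mu0, mu1)
    lam = rng.gamma(1 / alpha, alpha * mu)   # gamma-Poisson mixture
    return rng.poisson(lam)

def estimate_tau(y, mu0):
    # Profile over candidate change points; Poisson quasi-likelihood
    # in the mean, with the in-control mean mu0 assumed known.
    n = len(y)
    best_t, best = 1, -np.inf
    for t in range(1, n - 1):
        mu1_hat = max(y[t:].mean(), 1e-9)
        ll = y[:t].sum() * np.log(mu0) - t * mu0
        ll += y[t:].sum() * np.log(mu1_hat) - (n - t) * mu1_hat
        if ll > best:
            best_t, best = t, ll
    return best_t

rng = np.random.default_rng(2)
n, tau, mu0, mu1, alpha = 100, 50, 5.0, 9.0, 0.2
errs = [estimate_tau(simulate(rng, n, tau, mu0, mu1, alpha), mu0) - tau
        for _ in range(200)]
rmse = float(np.sqrt(np.mean(np.square(errs))))
print(round(rmse, 1))   # RMSE of the change-point estimate, in samples
```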
A General Framework for Observation Driven Time-Varying Parameter Models
We propose a new class of observation driven time series models that we refer to as Generalized Autoregressive Score (GAS) models. The driving mechanism of the GAS model is the scaled likelihood score. This provides a unified and consistent framework for introducing time-varying parameters in a wide class of non-linear models. The GAS model encompasses other well-known models such as the generalized autoregressive conditional heteroskedasticity, autoregressive conditional duration, autoregressive conditional intensity and single source of error models. In addition, the GAS specification gives rise to a wide range of new observation driven models. Examples include non-linear regression models with time-varying parameters, observation driven analogues of unobserved components time series models, multivariate point process models with time-varying parameters and pooling restrictions, new models for time-varying copula functions and models for time-varying higher order moments. We study the properties of GAS models and provide several non-trivial examples of their application.
Keywords: dynamic models, time-varying parameters, non-linearity, exponential family, marked point processes, copulas
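The scaled-score driving mechanism can be made concrete with a toy filter. The sketch below is an assumed minimal instance, not code from the paper: a Poisson intensity with log link f_t = log(lambda_t), where the score of the log-likelihood with respect to f_t is (y_t - lambda_t) and scaling by the inverse Fisher information lambda_t gives the update; for simplicity the recursion is run in integrated (random-walk) form.

```python
# Minimal illustrative GAS filter (a sketch, not the paper's code):
# time-varying Poisson intensity, log link f_t = log(lambda_t).
import numpy as np

def gas_poisson_filter(y, omega=0.0, alpha=0.1, beta=1.0):
    # beta = 1, omega = 0 gives an integrated (random-walk) GAS recursion
    f = np.log(np.mean(y) + 1e-9)         # initialize at the sample mean
    lams = []
    for yt in y:
        lam = np.exp(f)
        lams.append(lam)
        score = yt - lam                  # d loglik / d f_t under log link
        s = score / lam                   # inverse-Fisher-information scaling
        f = omega + beta * f + alpha * s  # GAS(1,1) update
    return np.array(lams)

rng = np.random.default_rng(3)
true_lam = np.where(np.arange(300) < 150, 4.0, 10.0)
y = rng.poisson(true_lam)
lam_hat = gas_poisson_filter(y)
print(round(float(lam_hat[-50:].mean()), 1))  # tracks the higher late-sample rate
```

The same template, with a different density and scaling choice, recovers GARCH-type recursions, which is the sense in which the abstract calls GAS a unifying framework.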
A unified approach to mortality modelling using state-space framework: characterisation, identification, estimation and forecasting
This paper explores and develops alternative statistical representations and estimation approaches for dynamic mortality models. The framework we adopt is to reinterpret popular mortality models, such as the Lee-Carter class of models, in a general state-space modelling methodology, which allows modelling, estimation and forecasting of mortality under a unified framework. Furthermore, we propose an alternative class of model identification constraints which is more suited to statistical inference in filtering and parameter estimation settings based on maximization of the marginalized likelihood or on Bayesian inference. We then develop a novel class of Bayesian state-space models which incorporate a priori beliefs about the mortality model characteristics as well as more flexible and appropriate assumptions relating to the heteroscedasticity present in observed mortality data. We show that multiple period and cohort effects can be cast under a state-space structure. To study long-term mortality dynamics, we introduce stochastic volatility to the period effect. The estimation of the resulting stochastic volatility model of mortality is performed using a recent class of Monte Carlo procedures specifically designed for state and parameter estimation in Bayesian state-space models, known as particle Markov chain Monte Carlo methods. We illustrate the framework using Danish male mortality data, and show that incorporating heteroscedasticity and stochastic volatility markedly improves model fit despite the increase in model complexity. Forecasting properties of the enhanced models are examined with long-term and short-term calibration periods on the reconstruction of life tables.
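The state-space reinterpretation of Lee-Carter can be illustrated with its simplest linear-Gaussian special case. This is an assumed simplification, not the paper's Bayesian stochastic-volatility model: a single period effect kappa_t following a random walk with drift, observed through one log-mortality series, filtered with a textbook Kalman filter (the paper's richer models require particle MCMC precisely because they leave this linear-Gaussian setting).

```python
# Sketch: Lee-Carter period effect kappa_t as a random walk with drift
# in a linear-Gaussian state space, tracked with a scalar Kalman filter.
# All parameter values here are invented for illustration.
import numpy as np

def kalman_rw_drift(y, a, b, drift, q, r):
    """Filter y_t = a + b*kappa_t + eps_t,  kappa_t = kappa_{t-1} + drift + eta_t."""
    kappa, P = 0.0, 1e6              # diffuse initialization
    out = []
    for yt in y:
        kappa_p, P_p = kappa + drift, P + q        # predict
        S = b * P_p * b + r                        # innovation variance
        K = P_p * b / S                            # Kalman gain
        kappa = kappa_p + K * (yt - (a + b * kappa_p))
        P = (1.0 - K * b) * P_p
        out.append(kappa)
    return np.array(out)

rng = np.random.default_rng(4)
T, drift = 60, -0.3
kappa_true = np.cumsum(drift + 0.2 * rng.normal(size=T))
y = 1.0 + 0.5 * kappa_true + 0.1 * rng.normal(size=T)
kappa_hat = kalman_rw_drift(y, a=1.0, b=0.5, drift=drift, q=0.04, r=0.01)
mae = float(np.mean(np.abs(kappa_hat - kappa_true)))
print(round(mae, 2))   # filtered tracking error in kappa units
```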
Anomaly Detection in Activities of Daily Living with Linear Drift
Anomaly detection in Activities of Daily Living (ADL) plays an important role in e-health applications. An abrupt change in the ADL performed by a subject might indicate that he/she needs help. Another important issue in e-health applications is the case where the change in ADL follows a linear drift, as occurs in cognitive decline, Alzheimer's disease or dementia. This work presents a novel method for detecting a linear drift in ADL modelled as circular normal distributions. The method is based on techniques commonly used in Statistical Process Control and, through the selection of a convenient threshold, is able to detect and estimate the point in time at which a linear drift started. Public datasets have been used to assess whether ADL can be modelled by a mixture of circular normal distributions. Exhaustive experimentation was performed on simulated data to assess the validity of the change detection algorithm; the results show that the difference between the real change point and the estimated change point was 4.90 (+3.17, −1.98) days on average. In summary: ADL can be modelled using a mixture of circular normal distributions; a new method to detect anomalies following a linear drift is presented; and exhaustive experiments show that this method is able to estimate the change point in time for processes following a linear drift.
Does the support of innovative clusters sustainably foster R&D activity? Evidence from the German BioRegio and BioProfile contests
In this paper, we evaluate the R&D-enhancing effects of two large public grant schemes aimed at encouraging the performance of firms organized in clusters: Germany's well-known BioRegio and BioProfile contests, for which we compare the research performance of winning regions with that of non-winning and non-participating comparison regions. We apply difference-in-differences estimation techniques in a generalized linear model framework, which allows us to control for different initial regional conditions in biotechnology-related R&D activity. Our econometric findings support the view that winners generally outperform non-winning participants during the treatment period, indicating that exclusive funding, as well as the stimulating effect of being a "winner", seems to work in the short term. In contrast, no indirect impacts stemming from a potential mobilizing effect of the contest approach were detected. We also find only limited evidence for long-term effects of public R&D grants in the post-treatment period. The results of our analysis remain stable when we augment the model to account for the particular role of spatial dependence in the R&D outcome variables.
Keywords: Biotechnology, R&D Policies, Cluster, Diff-in-Diff Estimation
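The difference-in-differences logic can be shown in its canonical two-period form. This is a toy, not the paper's generalized linear model with regional controls: "treated" stands in for a contest-winning region and "post" for the funding period, both invented, and the effect is recovered two equivalent ways.

```python
# Minimal sketch (not the paper's specification): the canonical 2x2
# difference-in-differences estimate, as a difference of group means
# and, equivalently, as the interaction coefficient in OLS.
import numpy as np

rng = np.random.default_rng(6)
n = 400
treated = rng.integers(0, 2, n)      # hypothetical: contest-winning region
post = rng.integers(0, 2, n)         # hypothetical: after the funding period
effect = 2.0                         # invented treatment effect
y = (1.0 + 0.5 * treated + 0.8 * post
     + effect * treated * post + rng.normal(size=n))

# (1) Difference of the four group means
did = ((y[(treated == 1) & (post == 1)].mean()
        - y[(treated == 1) & (post == 0)].mean())
       - (y[(treated == 0) & (post == 1)].mean()
          - y[(treated == 0) & (post == 0)].mean()))

# (2) OLS with an interaction term; in this saturated 2x2 design the
# interaction coefficient equals the difference of means exactly.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(round(did, 2), round(beta[3], 2))
```

The regression form is what generalizes to the paper's setting, where covariates and a non-Gaussian link replace the plain OLS.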
Commodity R&D and Promotion
Considerable evidence exists of high returns to public and private investment in commodity research and development programs. This study investigates the potential returns to product research, development, and marketing in a dynamic commodity-market model. Theoretical hypotheses derived from the solution to this model are tested in an empirical example of Washington apples. Estimation results show that, despite significant spillovers to research and promotion expenditure in this industry, there is nonetheless considerable latitude to increase annual sales.
Keywords: advertising, commodity, innovation, optimal control, Poisson model, research and development, Marketing, Research and Development/Tech Change/Emerging Technologies. JEL codes: L15, M37, Q13, Q16