
    A Bayesian spatial random effects model characterisation of tumour heterogeneity implemented using Markov chain Monte Carlo (MCMC) simulation

    The focus of this study is the development of a statistical modelling procedure for characterising intra-tumour heterogeneity, motivated by recent clinical literature indicating that a variety of tumours exhibit a considerable degree of genetic spatial variability. A formal spatial statistical model has been developed and used to characterise the structural heterogeneity of a number of supratentorial primitive neuroectodermal tumours (PNETs), based on diffusion-weighted magnetic resonance imaging. Particular attention is paid to the spatial dependence of diffusion close to the tumour boundary, in order to determine whether the data provide statistical evidence to support the proposition that water diffusivity in the boundary region of some tumours exhibits a deterministic dependence on distance from the boundary, in excess of an underlying random 2D spatial heterogeneity in diffusion. Tumour spatial heterogeneity measures were derived from the diffusion parameter estimates obtained using a Bayesian spatial random effects model. The analyses were implemented using Markov chain Monte Carlo (MCMC) simulation. Posterior predictive simulation was used to assess the adequacy of the statistical model. The main observations are that the previously reported relationship between diffusion and boundary proximity remains observable and achieves statistical significance after adjusting for an underlying random 2D spatial heterogeneity in the diffusion model parameters. A comparison of the magnitude of the boundary-distance effect with the underlying random 2D boundary heterogeneity suggests that both are important sources of variation in the vicinity of the boundary. No consistent pattern emerges from a comparison of the boundary and core spatial heterogeneity, with no indication of a consistently greater level of heterogeneity in one region compared with the other.
The results raise the possibility that DWI might provide a surrogate marker of intra-tumour genetic regional heterogeneity, which would provide a powerful tool with applications in both patient management and cancer research.
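
The posterior sampling machinery described above can be illustrated with a toy example. The following is a minimal random-walk Metropolis sketch for a single mean parameter under a Gaussian likelihood and prior; it is not the paper's full spatial random effects model, and all data and settings are made up.

```python
import numpy as np

# Minimal Metropolis-Hastings sketch (hypothetical toy model, not the
# paper's spatial model): sample the posterior of a mean diffusivity mu
# given y ~ N(mu, sigma^2) and a weak N(0, 10^2) prior on mu.
rng = np.random.default_rng(0)
y = rng.normal(1.2, 0.3, size=50)        # synthetic "diffusion" data
sigma, prior_sd = 0.3, 10.0

def log_post(mu):
    # log prior + log likelihood (up to an additive constant)
    return -mu**2 / (2 * prior_sd**2) - np.sum((y - mu)**2) / (2 * sigma**2)

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + rng.normal(0, 0.1)       # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                        # accept with MH probability
    chain.append(mu)

posterior_mean = np.mean(chain[1000:])   # discard burn-in
```

With a diffuse prior the posterior mean should sit close to the sample mean; posterior predictive checks, as used in the paper, would then simulate replicate data from such draws.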

    Extracting a Robust U.S. Business Cycle Using a Time-Varying Multivariate Model-Based Bandpass Filter

    In this paper we investigate whether the dynamic properties of the U.S. business cycle have changed in the last fifty years. For this purpose we develop a flexible business cycle indicator that is constructed from a moderate set of macroeconomic time series. The coincident economic indicator is based on a multivariate trend-cycle decomposition model that accounts for time variation in macroeconomic volatility, known as the Great Moderation. In particular, we consider an unobserved components time series model with a common cycle that is shared across different time series but adjusted for phase shift and amplitude. The extracted cycle can be interpreted as the result of a model-based bandpass filter and is designed to emphasize the business cycle frequencies that are of interest to applied researchers and policymakers. Stochastic volatility processes and mixture distributions for the irregular components and the common cycle disturbances enable us to account for all the heteroskedasticity present in the data. The empirical results are based on a Bayesian analysis and show that time-varying volatility is present in only a selection of idiosyncratic components, while the coefficients driving the dynamic properties of the business cycle indicator have remained stable over the last fifty years.
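
The bandpass idea underlying the indicator can be sketched with a much simpler "ideal" frequency-domain filter that keeps the conventional business-cycle band of 6 to 32 quarters. This is only an illustration of bandpass extraction on synthetic data, not the paper's model-based filter, which arises from a fitted unobserved components model.

```python
import numpy as np

# Ideal frequency-domain bandpass: detrend, then keep only Fourier
# components with periods between low_period and high_period.
def bandpass(x, low_period=6, high_period=32):
    n = len(x)
    t = np.arange(n)
    x = x - np.polyval(np.polyfit(t, x, 1), t)   # remove linear trend
    freqs = np.fft.rfftfreq(n)                   # cycles per observation
    X = np.fft.rfft(x)
    keep = (freqs >= 1 / high_period) & (freqs <= 1 / low_period)
    X[~keep] = 0.0                               # zero out-of-band frequencies
    return np.fft.irfft(X, n)

rng = np.random.default_rng(1)
t = np.arange(200)
cycle = np.sin(2 * np.pi * t / 20)               # 20-quarter cycle
series = 0.05 * t + cycle + rng.normal(0, 0.3, 200)  # trend + cycle + noise
extracted = bandpass(series)
corr = np.corrcoef(extracted, cycle)[0, 1]       # how well the cycle is recovered
```

A model-based filter instead derives the band emphasis from estimated trend, cycle and irregular components, which avoids the endpoint problems of ideal filters.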

    Hyperspectral image unmixing using a multiresolution sticky HDP

    This paper is concerned with joint Bayesian endmember extraction and linear unmixing of hyperspectral images using a spatial prior on the abundance vectors. We propose a generative model for hyperspectral images in which the abundances are sampled from a Dirichlet distribution (DD) mixture model, whose parameters depend on a latent label process. The label process is then used to enforce a spatial prior that encourages adjacent pixels to have the same label. A Gibbs sampling framework is used to generate samples from the posterior distributions of the abundances and the parameters of the DD mixture model. The spatial prior that is used is a tree-structured sticky hierarchical Dirichlet process (SHDP) and, when used to determine the posterior endmember and abundance distributions, results in a new unmixing algorithm called spatially constrained unmixing (SCU). The directed Markov model facilitates the use of scale-recursive estimation algorithms, and is therefore more computationally efficient than standard Markov random field (MRF) models. Furthermore, the proposed SCU algorithm estimates the number of regions in the image in an unsupervised fashion. The effectiveness of the proposed SCU algorithm is illustrated using synthetic and real data.
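
The linear mixing model at the heart of the unmixing problem says that each pixel spectrum is a nonnegative, sum-to-one combination of endmember spectra. The paper infers abundances by Gibbs sampling under the Dirichlet mixture prior; the sketch below shows only the simpler deterministic baseline, a sum-to-one-constrained nonnegative least squares solve, on made-up data.

```python
import numpy as np
from scipy.optimize import nnls

# Fully constrained unmixing baseline: append a row of a large constant
# delta to M and y so that NNLS approximately enforces sum(a) = 1.
def unmix(y, M, delta=100.0):
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)    # nonnegative least squares
    return a

rng = np.random.default_rng(2)
M = rng.uniform(0, 1, size=(50, 3))          # 3 endmembers, 50 spectral bands
a_true = np.array([0.6, 0.3, 0.1])           # ground-truth abundances
y = M @ a_true + rng.normal(0, 0.001, 50)    # noisy mixed pixel
a_hat = unmix(y, M)
```

The Bayesian approach replaces this point estimate with full posterior distributions over abundances and endmembers, with the spatial label process tying neighbouring pixels together.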

    Damage Detection in Largely Unobserved Structures under Varying Environmental Conditions: An AutoRegressive Spectrum and Multi-Level Machine Learning Methodology

    Vibration-based damage detection in civil structures using data-driven methods requires sufficient vibration responses acquired with a sensor network. Due to technical and economic reasons, it is not always possible to deploy a large number of sensors. This limitation may lead to partial information being handled for damage detection purposes, under environmental variability. To address this challenge, this article proposes an innovative multi-level machine learning method that employs the autoregressive spectrum as the main damage-sensitive feature. The proposed method consists of three levels: (i) distance calculation by the log-spectral distance, to increase damage detectability and generate distance-based training and test samples; (ii) feature normalization by an improved factor analysis, to remove environmental variations; and (iii) decision-making for damage localization by means of the Jensen-Shannon divergence. The major contributions of this research are the development of the aforementioned multi-level machine learning method and the proposal of the new factor analysis for feature normalization. Limited vibration datasets relevant to a truss structure, consisting of acceleration time histories induced by shaker excitation in a passive system, have been used to validate the proposed method and to compare it with alternative state-of-the-art strategies.
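
Level (i) of the method can be made concrete: an AR model's power spectrum follows directly from its coefficients, and two spectra are compared with the log-spectral distance. The AR coefficients below are illustrative values standing in for models fitted to measured sensor responses.

```python
import numpy as np

# Power spectrum of an AR(p) process: S(f) = sigma^2 / |1 - sum_k a_k e^{-i 2 pi f k}|^2
def ar_spectrum(ar_coefs, sigma2=1.0, n_freq=256):
    freqs = np.linspace(0.0, 0.5, n_freq)                  # normalized frequency
    k = np.arange(1, len(ar_coefs) + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ ar_coefs) ** 2
    return sigma2 / denom

# Log-spectral distance: RMS difference of the log spectra, in dB.
def log_spectral_distance(s1, s2):
    d = 10 * np.log10(s1 / s2)
    return np.sqrt(np.mean(d ** 2))

s_ref = ar_spectrum(np.array([0.8, -0.2]))    # "undamaged" AR(2) spectrum
s_test = ar_spectrum(np.array([0.6, -0.2]))   # "damaged" AR(2) spectrum
d_same = log_spectral_distance(s_ref, s_ref)  # identical spectra -> distance 0
d_diff = log_spectral_distance(s_ref, s_test) # shifted dynamics -> positive distance
```

A distance of zero for identical spectra and a strictly positive distance under changed dynamics is exactly the property that makes this a damage-sensitive feature.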

    Extracting the Cyclical Component in Hours Worked: a Bayesian Approach

    The series on average hours worked in the manufacturing sector is a key leading indicator of the U.S. business cycle. The paper deals with robust estimation of the cyclical component for the seasonally adjusted time series. This is achieved by an unobserved components model featuring an irregular component that is represented by a Gaussian mixture with two components. The mixture aims at capturing the kurtosis which characterizes the data. After presenting a Gibbs sampling scheme, we illustrate that the Gaussian mixture model provides a satisfactory representation of the data, allowing for the robust estimation of the cyclical component of per capita hours worked. Another important piece of evidence is that the outlying observations are not scattered randomly throughout the sample, but have a distinctive seasonal pattern. Therefore, seasonal adjustment plays a role. We finally show that, if a flexible seasonal model is adopted for the unadjusted series, the level of outlier contamination is drastically reduced.
    Keywords: Gaussian mixtures, robust signal extraction, state space models, Bayesian model selection, seasonality
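
The two-component Gaussian mixture for the irregular, a low-variance "regular" component plus a high-variance "outlier" component, can be illustrated on synthetic data. The paper estimates the model by Gibbs sampling; this sketch uses plain EM instead, purely to show how the mixture captures kurtosis, and all numbers are made up.

```python
import numpy as np

# Simulate a heavy-tailed irregular: 90% N(0,1) plus 10% N(0,3) outliers.
rng = np.random.default_rng(3)
n = 2000
is_outlier = rng.uniform(size=n) < 0.1
eps = np.where(is_outlier, rng.normal(0, 3.0, n), rng.normal(0, 1.0, n))

# EM for a zero-mean two-component scale mixture of Gaussians.
w, s1, s2 = 0.5, 0.5, 2.0                    # initial weight and std devs
for _ in range(200):
    # E-step: responsibility of the high-variance (outlier) component
    p1 = (1 - w) * np.exp(-eps**2 / (2 * s1**2)) / s1
    p2 = w * np.exp(-eps**2 / (2 * s2**2)) / s2
    r = p2 / (p1 + p2)
    # M-step: update outlier weight and both standard deviations
    w = r.mean()
    s1 = np.sqrt(np.sum((1 - r) * eps**2) / np.sum(1 - r))
    s2 = np.sqrt(np.sum(r * eps**2) / np.sum(r))
```

The fitted mixture should recover roughly the true weight and scales; in the paper's state space setting, the responsibilities additionally reveal when the outliers occur, which is what exposes their seasonal pattern.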

    Speech Modeling and Robust Estimation for Diagnosis of Parkinson’s Disease


    Audio-visual football video analysis, from structure detection to attention analysis

    Sport video is an important video genre. Content-based sports video analysis attracts great interest from both industry and academia. A sports video is characterised by repetitive temporal structures, relatively plain contents, and strong spatio-temporal variations, such as quick camera switches and swift local motions. It is necessary to develop specific techniques for content-based sports video analysis that utilise these characteristics. For an efficient and effective sports video analysis system, there are three fundamental questions: (1) what are the key stories of sports videos; (2) what attracts viewers’ interest; and (3) how to identify game highlights. This thesis is developed around these questions. We approached them from two different perspectives, and in turn three research contributions are presented, namely, replay detection, attack temporal structure decomposition, and attention-based highlight identification.
Replay segments convey the most important contents in sports videos, so detecting them is an efficient way to collect game highlights. However, replay is an artefact of editing, which evolves with advances in video editing tools. The composition of replay is complex, including logo transitions, slow motions, viewpoint switches and normal-speed video clips. Since logo transition clips are pervasive in game collections of FIFA World Cup 2002, FIFA World Cup 2006 and UEFA Championship 2006, we take logo transition detection as an effective proxy for replay detection. A two-pass system was developed, consisting of a five-layer AdaBoost classifier and logo template matching across an entire video. The five-layer AdaBoost classifier utilises shot duration, average game pitch ratio, average motion, sequential colour histogram and shot frequency between two neighbouring logo transitions to filter out logo transition candidates. Subsequently, a logo template is constructed and employed to find all logo transition sequences. The precision and recall of this system in replay detection are 100% on a five-game evaluation collection.
An attack structure is a team competition for a score, and hence a conceptually fundamental unit of a football video as well as of other sports videos. We review the literature on content-based temporal structures, such as the play-break structure, and develop a three-step system for automatic attack structure decomposition. Four content-based shot classes, namely play, focus, replay and break, are identified by low-level visual features. A four-state hidden Markov model was trained to simulate transition processes among these shot classes. Since attack structures are the longest repetitive temporal unit in a sports video, a suffix tree is proposed to find the longest repeated substring in the label sequence of shot class transitions. Occurrences of this substring are regarded as the kernel of an attack hidden Markov process; the decomposition of attack structure therefore becomes a boundary likelihood comparison between two Markov chains.
Highlights are what attract notice, and attention is a psychological measurement of “notice”. A brief survey of the psychological background of attention, attention estimation from visual and auditory cues, and multi-modality attention fusion is presented. We propose two attention models for sports video analysis, namely the role-based attention model and the multiresolution autoregressive (MAR) framework. The role-based attention model is based on the perception structure during video watching; it removes reflection bias among modality salient signals and combines these signals by reflectors. The MAR framework treats salient signals as a group of smooth random processes that follow a similar trend but are filled with noise, and tries to estimate a noiseless signal from these coarse noisy observations by multiple-resolution analysis. Related algorithms are developed, such as event segmentation on a MAR tree and real-time event detection. Experiments show that these attention-based approaches can find goal events with high precision. Moreover, the results of MAR-based highlight detection on the final games of the 2002 and 2006 FIFA World Cups are highly similar to the highlights professionally labelled by BBC and FIFA.
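
The longest-repeated-substring step in the attack decomposition can be shown concretely. A suffix tree finds it in linear time; the sketch below uses a simpler binary search over candidate lengths instead, applied to a made-up shot-class label sequence (P = play, F = focus, R = replay, B = break).

```python
# Find the longest repeated substring of a shot-class label string.
# Repeated substrings are monotone in length (a length-L repeat implies a
# length-(L-1) repeat), so binary search over the length is valid.
def longest_repeated_substring(s):
    def repeated(length):
        seen = set()
        for i in range(len(s) - length + 1):
            sub = s[i:i + length]
            if sub in seen:
                return sub          # found a second occurrence
            seen.add(sub)
        return None

    lo, hi, best = 1, len(s) - 1, ""
    while lo <= hi:
        mid = (lo + hi) // 2
        hit = repeated(mid)
        if hit:
            best, lo = hit, mid + 1  # try longer
        else:
            hi = mid - 1             # try shorter
    return best

labels = "PFRBPFRBPPFRB"             # hypothetical shot-class transitions
kernel = longest_repeated_substring(labels)
# kernel == "PFRBP": the repeated pattern taken as the attack kernel
```

A suffix tree replaces the set-based scan with deepest-internal-node lookup, giving the linear-time behaviour the thesis relies on for long label sequences.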