    Model Selection Criteria for Segmented Time Series from a Bayesian Approach to Information Compression

    The principle that the simplest model capable of describing observed phenomena should also correspond to the best description has long been a guiding rule of inference. In this paper a Bayesian approach to formally implementing this principle is employed to develop model selection criteria for detecting structural change in financial and economic time series. Model selection criteria are derived which allow for multiple structural breaks and which seek the optimal model order and parameter choices within regimes. Comparative simulations against other popular information-based model selection criteria are performed. The derived criteria are also applied to example financial and economic time series.

    Keywords: complexity theory; segmentation; break points; change points; model selection; model choice.
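The flavor of such a criterion can be sketched as a two-part score: a per-regime fit cost plus a complexity penalty that grows with the number of segments. The sketch below assumes independent Gaussian regimes and a BIC-style penalty; it is an illustration of the idea, not the paper's Bayesian criterion.

```python
import math

def segment_cost(x):
    """Negative log-likelihood of one segment under a Gaussian model
    (an assumed stand-in for the per-regime models described above)."""
    n = len(x)
    mean = sum(x) / n
    var = max(sum((v - mean) ** 2 for v in x) / n, 1e-12)
    return 0.5 * n * (math.log(2 * math.pi * var) + 1)

def bic_score(series, breaks):
    """Two-part score for a segmentation: twice the fit cost plus a
    log(n) penalty per parameter (mean and variance per segment)."""
    bounds = [0] + sorted(breaks) + [len(series)]
    fit = sum(segment_cost(series[a:b]) for a, b in zip(bounds, bounds[1:]))
    n_params = 2 * (len(bounds) - 1)
    return 2 * fit + n_params * math.log(len(series))

# A series with a clear regime change at index 50:
x = [0.0, 1.0] * 25 + [5.0, 6.0] * 25
print(bic_score(x, [50]) < bic_score(x, []))  # → True
```

The single true break beats both no break and spurious extra breaks, because the penalty term charges each additional segment.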

    PRESEE: An MDL/MML Algorithm to Time-Series Stream Segmenting

    Time-series streams are one of the most common data types in the data mining field. They are prevalent in areas such as stock markets, ecology, and medical care. Segmentation is a key step in accelerating the processing of time-series stream mining. Previous segmentation algorithms mainly focused on improving precision rather than efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmentation. PRESEE is based on both the MDL (minimum description length) and MML (minimum message length) principles, which allow it to segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient on real-time stream datasets, improving segmentation speed by nearly ten times. The novelty of this algorithm is further demonstrated by applying PRESEE to segment real-time stream datasets from the ChinaFLUX sensor network data stream.
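The parameter-free idea behind MDL-based segmentation can be illustrated with a small sketch: accept a split only when it shortens the total two-part description length, so no user-set threshold is needed. This is an illustrative top-down splitter, not the PRESEE algorithm; the piecewise-constant model and split-cost terms are assumptions.

```python
import math

def desc_len(x):
    """Two-part code length (in nats) for one segment: cost of the
    mean parameter plus a residual cost (fixed-scale Gaussian)."""
    n = len(x)
    mean = sum(x) / n
    rss = sum((v - mean) ** 2 for v in x)
    return 0.5 * math.log(n) + 0.5 * rss  # parameter cost + data cost

def mdl_split(x, lo=0):
    """Recursively split while a split shortens the description length.
    MDL itself decides when to stop: no thresholds to tune."""
    whole = desc_len(x)
    best, best_i = whole, None
    for i in range(1, len(x)):
        cost = desc_len(x[:i]) + desc_len(x[i:]) + math.log(len(x))
        if cost < best:
            best, best_i = cost, i
    if best_i is None:
        return []
    left = mdl_split(x[:best_i], lo)
    right = mdl_split(x[best_i:], lo + best_i)
    return left + [lo + best_i] + right

print(mdl_split([1.0] * 20 + [9.0] * 20))  # → [20]
```

On a flat series the same code returns no breaks, since every candidate split costs more bits than it saves.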

    Improved Depth Map Estimation from Stereo Images based on Hybrid Method

    In this paper, a stereo matching algorithm based on image segments is presented. We propose a hybrid segmentation algorithm based on a combination of the Belief Propagation and Mean Shift algorithms, with the aim of refining the disparity and depth map obtained from a stereo pair of images. The algorithm utilizes image filtering and a modified SAD (Sum of Absolute Differences) stereo matching method. First, a color-based segmentation method is applied to segment the left image of the input stereo pair (the reference image) into regions. The aim of the segmentation is to simplify the representation of the image into a form that is easier to analyze and makes it possible to locate objects in the image. Second, the results of the segmentation are used as input to a local window-based matching method that determines the disparity estimate of each image pixel. The experimental results demonstrate that the final depth map can be obtained by applying the segment disparities to the original images. Experiments with standard stereo test images show that the proposed hybrid HSAD algorithm gives good performance.
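The local window-based SAD matching step can be sketched as follows. This is a minimal illustration on grayscale pixel arrays; the window size and disparity search range are assumptions, and none of the segmentation or Belief Propagation refinement from the paper is included.

```python
def sad(left, right, row, col, d, w=1):
    """Sum of absolute differences between a (2w+1)x(2w+1) window in
    the left image and the same window shifted d pixels in the right."""
    total = 0
    for r in range(row - w, row + w + 1):
        for c in range(col - w, col + w + 1):
            total += abs(left[r][c] - right[r][c - d])
    return total

def disparity(left, right, row, col, max_d, w=1):
    """Pick the disparity with the minimal SAD cost for one pixel
    (the local window-based matching step described above)."""
    costs = [sad(left, right, row, col, d, w) for d in range(max_d + 1)]
    return costs.index(min(costs))

# A toy pair where the right view is the left view shifted by 2 pixels:
left = [[(r * 7 + c * 13) % 50 for c in range(10)] for r in range(5)]
right = [[left[r][min(c + 2, 9)] for c in range(10)] for r in range(5)]
print(disparity(left, right, 2, 5, max_d=3))  # → 2
```

In the hybrid scheme described above, this per-pixel estimate would then be aggregated over the color segments rather than used raw.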

    Content analysis: What are they talking about?

    Quantitative content analysis is increasingly used to move beyond surface-level analyses in Computer-Supported Collaborative Learning (e.g., counting messages), but critical reflection on accepted practice has generally not been reported. A review of CSCL conference proceedings revealed a general vagueness in definitions of units of analysis. In general, arguments for choosing a unit were lacking, and decisions made while developing the content analysis procedures were not made explicit. In this article, it is illustrated that the currently accepted practices concerning the ‘unit of meaning’ are not generally applicable to quantitative content analysis of electronic communication. Such analysis is affected by ‘unit boundary overlap’ and by contextual constraints arising from the technology used. The analysis of e-mail communication required a different unit of analysis and segmentation procedure. This procedure proved to be reliable, and the subsequent coding of these units for quantitative analysis yielded satisfactory reliabilities. These findings carry implications and recommendations for current content analysis practice in CSCL research.

    Extended BIC Criterion for Model Selection

    Model selection is commonly based on some variation of the BIC or minimum message length criteria, such as MML and MDL. In either case the criterion is split into two terms: one for the model (model code length/model complexity) and one for the data given the model (message length/data likelihood). For problems such as change detection, unsupervised segmentation, or data clustering, it is common practice for the model term to comprise only a sum of sub-model terms. In this paper it is shown that the full model complexity must also take into account the number of sub-models and the labels which assign data points to each sub-model. From this analysis we derive an extended BIC approach (EBIC) for this class of problem. Results with artificial data are given to illustrate the properties of this procedure.
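The point about the model term can be made concrete with a small sketch: alongside the usual per-parameter BIC cost, charge for stating the number of sub-models and for the labels that assign each point to a sub-model. The log k and entropy-coded label costs below are illustrative choices, not the paper's exact EBIC penalty.

```python
import math
from collections import Counter

def bic_term(n, p, k):
    """Plain BIC model cost: k sub-models with p parameters each."""
    return 0.5 * k * p * math.log(n)

def ebic_term(n, p, k, labels):
    """Extended model cost: the BIC term plus the cost of stating k
    and of encoding each point's label (entropy-coded from the label
    frequencies). Both extra terms are illustrative assumptions."""
    counts = Counter(labels)
    label_cost = -sum(c * math.log(c / n) for c in counts.values())
    return bic_term(n, p, k) + math.log(k) + label_cost

labels = [0] * 60 + [1] * 40          # 100 points, two sub-models
print(ebic_term(100, 2, 2, labels) > bic_term(100, 2, 2))  # → True
```

With a single sub-model the label cost vanishes and the extended term reduces to the plain BIC term, so the extra charge only bites when data are divided among several sub-models.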