256 research outputs found

    Spatial Distribution of Surface Soil Moisture under a Cornfield

    Autocorrelation within surface soil moisture (SSM) data may be used to produce high-resolution spatial maps of SSM from point samples. The objective of this study was to characterize the temporal and spatial properties of SSM (0-5 cm) in a Beltsville, MD cornfield using capacitance probes. The range of spatial autocorrelation was approximately 10 m, and the highest sill values were found at water contents (theta) between 20% and 27%. Nugget values represented a significant portion of the total variance (up to 50% for theta > 20% and 73% for theta < 20%). At longer time scales (> 80 days), forecasts improved to 0.46-0.65. Forecasts were improved by autoregressive coefficients and additional SSM datasets.
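
    The range, sill, and nugget values discussed above come from a semivariogram of the point samples. The sketch below is a minimal, illustrative way to compute an empirical semivariogram from scattered SSM readings; the coordinate array, bin width, and synthetic data are assumptions for demonstration, not the study's data or code.

```python
# Minimal empirical semivariogram sketch (illustrative only, not the study's code).
# 'coords' are point-sample locations (m) and 'values' their SSM readings (theta, %).
import numpy as np

def empirical_semivariogram(coords, values, bin_width=2.0, max_lag=20.0):
    """Return lag-bin centers and semivariance gamma(h) for point samples."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # Pairwise separation distances and squared value differences.
    diff = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    sq_diffs = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # unique pairs only
    dists, sq_diffs = dists[iu], sq_diffs[iu]

    edges = np.arange(0.0, max_lag + bin_width, bin_width)
    centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(0.5 * sq_diffs[mask].mean())  # semivariance for this lag bin
    return np.array(centers), np.array(gamma)

# Synthetic example: gamma(h) typically rises from a nugget value and levels off
# at the sill near the autocorrelation range (on the order of 10 m in the abstract).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(200, 2))
values = 23 + 3 * np.sin(coords[:, 0] / 5.0) + rng.normal(0, 1, 200)
lags, gamma = empirical_semivariogram(coords, values)
print(np.round(lags, 1), np.round(gamma, 2))
```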

    Patch Autocorrelation Features: A translation and rotation invariant approach for image classification.

    The autocorrelation is often used in signal processing as a tool for finding repeating patterns in a signal. In image processing, there are various image analysis techniques that use the autocorrelation of an image in a broad range of applications, from texture analysis to grain density estimation. This paper provides an extensive review of two recently introduced and related frameworks for image representation based on autocorrelation, namely Patch Autocorrelation Features (PAF) and Translation and Rotation Invariant Patch Autocorrelation Features (TRIPAF). The PAF approach stores a set of features obtained by comparing pairs of patches from an image. More precisely, each feature is the Euclidean distance between a particular pair of patches. The PAF approach is successfully evaluated in a series of handwritten digit recognition experiments on the popular MNIST data set. However, the PAF approach has limited applications, because it is not invariant to affine transformations. More recently, the PAF approach was extended to become invariant to image transformations, including (but not limited to) translation and rotation changes. In the TRIPAF framework, several features are extracted from each image patch. Based on these features, a vector of similarity values is computed between each pair of patches. Then, the similarity vectors are clustered together such that the spatial offset between the patches of each pair is roughly the same. Finally, the mean and the standard deviation of each similarity value are computed for each group of similarity vectors. These statistics are concatenated to obtain the TRIPAF feature vector. The TRIPAF vector essentially records information about the repeating patterns within an image at various spatial offsets. After presenting the two approaches, several optical character recognition and texture classification experiments are conducted to evaluate them. Results are reported on the MNIST (98.93%), Brodatz (96.51%), and UIUCTex (98.31%) data sets. Both PAF and TRIPAF are fast to compute and produce compact representations in practice, while reaching accuracy levels similar to other state-of-the-art methods.
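
    To make the PAF construction concrete, the sketch below computes the kind of feature vector the abstract describes: one feature per pair of patches, each feature being the Euclidean distance between the two patches. The patch size, the non-overlapping grid, and the function name are illustrative assumptions, not the authors' implementation.

```python
# Illustrative PAF-style feature extraction (assumed patch size and grid, not the paper's code).
import numpy as np

def paf_features(image, patch_size=7, stride=7):
    """Return Euclidean distances between every pair of non-overlapping patches."""
    h, w = image.shape
    patches = []
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patches.append(image[r:r + patch_size, c:c + patch_size].ravel())
    patches = np.stack(patches).astype(float)

    feats = []
    n = len(patches)
    for i in range(n):
        for j in range(i + 1, n):               # one feature per unordered pair of patches
            feats.append(np.linalg.norm(patches[i] - patches[j]))
    return np.array(feats)

# A 28x28 MNIST-sized image with 7x7 patches gives 16 patches -> 120 pairwise features.
img = np.random.rand(28, 28)
print(paf_features(img).shape)   # (120,)
```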

    Multispectral texture synthesis

    Synthesizing texture involves the ordering of pixels in a 2D arrangement so as to display certain known spatial correlations, generally as described by a sample texture. In an abstract sense, these pixels could be gray-scale values, RGB color values, or entire spectral curves. The focus of this work is to develop a practical synthesis framework that maintains this abstract view while synthesizing texture with high spectral dimension, effectively achieving spectral invariance. The principal idea is to use a single monochrome texture synthesis step to capture the spatial information in a multispectral texture. The first step is to use a global color space transform to condense the spatial information in a sample texture into a principal luminance channel. Then, a monochrome texture synthesis step generates the corresponding principal band in the synthetic texture. This spatial information is then used to condition the generation of spectral information. A number of variants of this general approach are introduced. The first uses a multiresolution transform to decompose the spatial information in the principal band into an equivalent scale/space representation. This information is encapsulated into a set of low-order statistical constraints that are used to iteratively coerce white noise into the desired texture. The residual spectral information is then generated using a non-parametric Markov random field (MRF) model. The remaining variants use a non-parametric MRF to generate the spatial and spectral components simultaneously. In this approach, multispectral texture is grown from a seed region by sampling from the set of nearest neighbors in the sample texture, as identified by a template matching procedure in the principal band. The effectiveness of both algorithms is demonstrated on a number of texture examples ranging from greyscale to RGB textures, as well as 16-, 22-, 32- and 63-band spectral images. In addition to the standard visual test that predominates in the literature, effort is made to quantify the accuracy of the synthesis using informative and effective metrics. These include first- and second-order statistical comparisons as well as statistical divergence tests.
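
    The "global color space transform" that condenses a multispectral sample into a principal luminance channel can be illustrated with a PCA across bands, with the first principal component serving as the principal band. This is a hedged sketch of that single step under assumed array shapes; the monochrome synthesis and MRF stages are not shown, and the abstract does not specify that PCA is the transform used.

```python
# Sketch: condensing a multispectral sample texture into a principal band via PCA.
# Assumes 'cube' is a (rows, cols, bands) array; only the transform step is illustrated.
import numpy as np

def principal_band(cube):
    """Project each pixel's spectrum onto the first principal component across bands."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X_centered = X - X.mean(axis=0)
    # Eigen-decomposition of the band-to-band covariance matrix.
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    first_pc = eigvecs[:, -1]                    # direction of largest variance
    band = X_centered @ first_pc
    return band.reshape(rows, cols), first_pc

# Example: a synthetic 64x64 texture with 22 spectral bands.
cube = np.random.rand(64, 64, 22)
luminance, direction = principal_band(cube)
print(luminance.shape, direction.shape)          # (64, 64) (22,)
```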

    Machine learning-based algorithms to knowledge extraction from time series data: A review

    To predict the future behavior of a system, we can exploit the information collected in the past, trying to identify recurring structures in what has already happened in order to predict what could happen, if the same structures repeat themselves in the future. A time series is a time-ordered sequence of numerical values of a measurable variable observed in the past. The values are sampled at equidistant time intervals, according to an appropriate granularity such as the day, week, or month, and measured in physical units. In machine learning-based algorithms, the information underlying the knowledge is extracted from the data themselves, which are explored and analyzed in search of recurring patterns or to discover hidden causal associations or relationships. The prediction model extracts knowledge through an inductive process: the input is the data and, possibly, a first example of the expected output; the machine then learns the procedure to follow to obtain the same result. This paper reviews the most recent work that has used machine learning-based techniques to extract knowledge from time series data.
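
    As one concrete instance of the inductive process described above, a series sampled at equidistant intervals can be reframed as (lag-window, next-value) pairs and fed to an off-the-shelf regressor. The window length, the choice of model, and the synthetic series below are illustrative assumptions, not recommendations drawn from the review.

```python
# Minimal sketch: framing time series forecasting as supervised learning over lag windows.
import numpy as np
from sklearn.linear_model import LinearRegression

def make_windows(series, window=12):
    """Build (X, y) where each row of X holds 'window' past values and y the next value."""
    X, y = [], []
    for t in range(window, len(series)):
        X.append(series[t - window:t])
        y.append(series[t])
    return np.array(X), np.array(y)

# Synthetic monthly-like series with a yearly cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(240)
series = 10 + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)

X, y = make_windows(series, window=12)
model = LinearRegression().fit(X[:-24], y[:-24])     # hold out the last 24 points
preds = model.predict(X[-24:])
print(np.round(np.abs(preds - y[-24:]).mean(), 3))   # mean absolute error on the holdout
```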

    A neural network model to forecast and describe bond ratings

    Keywords: Neural Network; Bond Ratings; accountancy

    Modeling And Dynamic Resource Allocation For High Definition And Mobile Video Streams

    Video streaming traffic has been surging in the last few years, which has resulted in an increase of its share of Internet traffic on a daily basis. The importance of video streaming management has been emphasized with the advent of High Definition (HD) video streaming, as it by its nature requires more network resources. In this dissertation, we provide better support for managing HD video traffic over both wireless and wired networks through several contributions. We present a simple, general, and accurate video source model: the Simplified Seasonal ARIMA Model (SAM). SAM is capable of capturing the statistical characteristics of video traces with less than 5% difference from their calculated optimal models. SAM is shown to be capable of modeling video traces encoded with the MPEG-4 Part 2, MPEG-4 Part 10, and Scalable Video Codec (SVC) standards, using various encoding settings. We also provide a large and publicly available collection of HD video traces along with their analysis results. These analyses include a full statistical analysis of HD videos, in addition to modeling, factor, and cluster analyses. These results show that by using SAM, we can achieve up to 50% improvement in video traffic prediction accuracy. In addition, we developed several video tools, including an HD video traffic generator based on our model. Finally, to improve HD video streaming resource management, we present a SAM-based delay-guaranteed dynamic resource allocation (DRA) scheme that can provide up to 32.4% improvement in bandwidth utilization.
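
    SAM is a simplified seasonal ARIMA model, so a seasonal ARIMA fit to a frame-size trace gives a feel for the modeling step. The sketch below uses statsmodels' SARIMAX with assumed (p, d, q)(P, D, Q, s) orders and a synthetic GOP-like trace; the actual SAM parameters and the real traces come from the dissertation, not from this code.

```python
# Illustrative seasonal ARIMA fit for a video frame-size trace.
# The model orders and the synthetic trace are assumptions, not the dissertation's SAM settings.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
# Synthetic trace: a 12-frame GOP-like cycle of frame sizes (bytes) plus noise.
gop = np.array([9000] + [3000] * 11)                  # large I-frame, smaller P/B-frames
trace = np.tile(gop, 100) + rng.normal(0, 300, 1200)

# Fit a low-order seasonal model whose seasonal period matches the GOP length.
model = SARIMAX(trace, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12))
result = model.fit(disp=False)

# One-GOP-ahead forecast, e.g. as an input to a bandwidth allocation decision.
forecast = result.forecast(steps=12)
print(np.round(forecast, 0))
```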

    Periodicity detection and its application in lifelog data

    Wearable sensors are attracting attention not only in industry but also in the consumer market. We can now acquire sensor data from different types of health-tracking devices such as smart watches, smart bands and lifelog cameras, and most smart phones are capable of tracking and logging information using built-in sensors. As data is generated and collected from various sources constantly, researchers have focused on interpreting and understanding the semantics of this longitudinal multi-modal data. One challenge is the fusion of multi-modal data and achieving good performance on tasks such as activity recognition, event detection and event segmentation. The classical approach to processing the data generated by wearable sensors has three main parts: 1) event segmentation, 2) event recognition, and 3) event retrieval. Many papers have been published in each of these three fields. This thesis focuses on the longitudinal aspect of the data from wearable sensors, instead of concentrating on the data over a short period of time. The following are several key research questions addressed in the thesis. Does longitudinal sensor data have unique features that can distinguish the subject generating the data from other subjects? In other words, from the longitudinal perspective, does the data from different subjects share so much common structure, similarity, or so many identical patterns that it is difficult to identify a subject using the data? If this is the case, what are those common patterns? If we are able to eliminate those similarities among all the data, does the data show more specific features that we can use to model the data series and predict future values? If there are repeating patterns in longitudinal data, we can use different methods to compute the periodicity of the recurring patterns and, furthermore, to identify and extract those patterns. Following that, we would be able to compare local data over a short time period with more global patterns in order to show the regularity of the local data. Some case studies are included in the thesis to show the value of longitudinal lifelog data in correlating health conditions with training performance.
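
    One common way to estimate the kind of periodicity the thesis refers to is to look for the strongest peak in the autocorrelation of the signal. The sketch below applies that idea to a synthetic hourly step-count-like series; the signal, the candidate-lag range, and the function name are illustrative assumptions, not the thesis' method.

```python
# Sketch: estimating the dominant period of a lifelog-like signal via autocorrelation.
import numpy as np

def dominant_period(signal, min_lag=2, max_lag=None):
    """Return the lag (in samples) with the highest autocorrelation value."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # keep non-negative lags
    acf /= acf[0]                                        # normalize so acf[0] == 1
    if max_lag is None:
        max_lag = len(x) // 2
    lags = np.arange(min_lag, max_lag)
    return lags[np.argmax(acf[min_lag:max_lag])]

# Hourly step counts over 30 days with a 24-hour cycle plus noise.
rng = np.random.default_rng(3)
hours = np.arange(24 * 30)
steps = 500 * (1 + np.sin(2 * np.pi * hours / 24)) + rng.normal(0, 100, hours.size)
print(dominant_period(steps))    # expected to be close to 24
```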