621 research outputs found

    Particle detection and tracking in fluorescence time-lapse imaging: a contrario approach

    This paper proposes a probabilistic approach for the detection and tracking of particles in fluorescence time-lapse imaging. In the presence of very noisy, poor-quality data, particles and trajectories can be characterized by an a contrario model, which estimates the probability of observing the structures of interest in random data. This approach, first introduced in the modeling of human visual perception and then successfully applied to many image processing tasks, leads to algorithms that require neither a prior learning stage nor tedious parameter tuning, and that are very robust to noise. Comparative evaluations against a well-established baseline show that the proposed approach outperforms the state of the art. Comment: Published in Journal of Machine Vision and Application.
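
    The a contrario principle described above can be illustrated with a minimal sketch (not the paper's exact algorithm): a candidate spot is declared meaningful when its number of false alarms (NFA), i.e. the expected count of equally bright configurations in pure noise, falls below a threshold. The binomial background model, the tail probability, and the number of tests below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom

def nfa_spot(patch, background, p_tail=0.1, n_tests=1_000_000):
    """Number of false alarms (NFA) for a candidate particle patch.

    Counts pixels brighter than a high quantile of the background noise and
    asks how likely that count is under an i.i.d. noise model (binomial tail),
    then multiplies by the number of tests (candidate patches in the image).
    Illustrative a contrario sketch, not the paper's exact model.
    """
    threshold = np.quantile(background, 1.0 - p_tail)  # bright-pixel threshold from noise
    k = int(np.sum(patch > threshold))                 # bright pixels observed in the patch
    tail_prob = binom.sf(k - 1, patch.size, p_tail)    # P[Binomial(n, p_tail) >= k]
    return n_tests * tail_prob

# A detection is epsilon-meaningful when NFA <= epsilon (e.g. epsilon = 1).
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=10_000)
particle_patch = rng.normal(0.0, 1.0, size=(5, 5)) + 3.0  # bright blob on noise
print(nfa_spot(particle_patch, background) <= 1.0)        # expected: True
```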

    Development and evaluation of low cost 2-d lidar based traffic data collection methods

    Traffic data collection is one of the essential components of a transportation planning exercise. Granular traffic data, such as volume counts, vehicle classification, speed measurements, and occupancy, allow transportation systems to be managed more effectively. For effective traffic operation and management, authorities need to deploy many sensors across the network. Moreover, growing efforts toward smart transportation put immense pressure on planning authorities to deploy more sensors to cover an extensive network. This research focuses on the development and evaluation of an inexpensive data collection methodology based on two-dimensional (2-D) Light Detection and Ranging (LiDAR) technology. LiDAR is adopted since it is an economical and easily accessible technology. Moreover, its 360-degree visibility and accurate distance information make it more reliable. To collect traffic count data, the proposed method integrates a Continuous Wavelet Transform (CWT) and a Support Vector Machine (SVM) into a single framework. A Proof-of-Concept (POC) test is conducted in three different places in Newark, New Jersey, to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance, with 83% to 94% accuracy. It is discovered that the proposed method's accuracy is affected by the color of the exterior surface of a vehicle, since some colored surfaces do not return enough reflected rays. It is noticed that blue and black surfaces are less reflective, while white surfaces produce highly reflective returns. A methodology comprising K-means clustering, an inverse sensor model, and a Kalman filter is proposed to obtain the trajectories of vehicles at intersections. The primary purpose of vehicle detection and tracking is to obtain the turning movement counts at an intersection. K-means clustering is an unsupervised machine learning technique that partitions the data into groups by assigning each data point to its nearest cluster centroid. The ultimate objective of applying K-means clustering is to distinguish pedestrians from vehicles. An inverse sensor model is a state model of occupancy grid mapping that localizes the detected vehicles on the grid map. A Kalman filter based on a constant velocity model is defined to track the vehicle trajectories. The data are collected from two intersections located in Newark, New Jersey, to study the accuracy of the proposed method. The results show that the proposed method has an average accuracy of 83.75%. Furthermore, the obtained R-squared value for localization of the vehicles on the grid map ranges from 0.87 to 0.89. In addition, a preliminary cost comparison is made to study the cost efficiency of the developed methodology. The cost comparison shows that the proposed methodology based on 2-D LiDAR technology can achieve acceptable accuracy at a low price and can support smart city applications requiring large-scale data collection.
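
    The tracking step described above, a Kalman filter with a constant velocity model applied to cluster centroids, can be sketched roughly as follows. The time step, noise covariances, and toy measurements are illustrative assumptions rather than the thesis' calibrated values.

```python
import numpy as np

dt = 0.1  # LiDAR scan interval in seconds (assumed)

# State [x, y, vx, vy]: constant-velocity motion model, position-only measurements.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)   # process noise (assumed)
R = 0.20 * np.eye(2)   # measurement noise of the cluster centroid (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for a vehicle centroid measurement z = [x, y]."""
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the K-means centroid observed by the 2-D LiDAR.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]]):  # toy centroid track
    x, P = kf_step(x, P, z)
print(x[:2])  # filtered position estimate
```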

    Density forecasting in financial risk modelling

    As a result of increasingly stringent regulation aimed at monitoring financial risk exposures, risk measurement systems nowadays play a crucial role in all banks. In this thesis we tackle a variety of problems related to density forecasting that are fundamental to market risk managers. The computation of risk measures (e.g. Value-at-Risk) for any portfolio of financial assets requires the generation of density forecasts for the driving risk factors. Appropriate testing procedures must then be identified for an accurate appraisal of these forecasts. We start our research by assessing whether option-implied densities, which constitute the most obvious forecasts of the distribution of the underlying asset at expiry, actually represent unbiased forecasts. We first extract densities from options on currency and equity index futures, by means of both traditional and original specifications. We then appraise them via rigorous density forecast evaluation tools, and we find evidence of the presence of biases. In the second part of the thesis, we focus on modelling the dynamics of the volatility curve in order to measure the vega risk exposure for various delta-hedged option portfolios. We propose to use a linear Kalman filter approach, which gives more precise forecasts of the vega risk exposure than alternative, well-established models. In the third part, we derive a continuous-time model for the dynamics of equity index returns from a data set of 5-minute returns; the model inferred from high-frequency data is then used over the horizons typical of risk measure calculations. The last part of our work deals with evaluating density forecasts of the joint distribution of the risk factors. We find that, given certain specifications for the multivariate density forecast, a goodness-of-fit procedure based on the Empirical Characteristic Function displays good statistical properties in detecting misspecifications of various kinds in the forecasts.
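
    As a rough illustration of the last part, one way to build a goodness-of-fit statistic from the Empirical Characteristic Function is to measure a weighted distance between the empirical characteristic function of the realized risk factors and the characteristic function implied by the density forecast. The standard normal forecast, the frequency grid, and the Gaussian weighting below are assumptions for concreteness, not the thesis' actual procedure.

```python
import numpy as np

def ecf_distance(x, model_cf, t_grid):
    """Weighted L2 distance between the empirical CF of sample x and a model CF.

    ecf(t) = mean(exp(i * t * x)); the statistic sums |ecf - model_cf|^2 over a
    grid of frequencies with a Gaussian weight. Illustrative sketch only.
    """
    ecf = np.array([np.mean(np.exp(1j * t * x)) for t in t_grid])
    weights = np.exp(-0.5 * t_grid**2)                 # downweight high frequencies
    return float(np.sum(weights * np.abs(ecf - model_cf(t_grid))**2))

# Example: test realized returns against a standard normal density forecast.
normal_cf = lambda t: np.exp(-0.5 * t**2)              # CF of N(0, 1)
t_grid = np.linspace(-5.0, 5.0, 101)

rng = np.random.default_rng(1)
well_specified = rng.standard_normal(2_000)
misspecified = rng.standard_t(df=3, size=2_000)        # fat tails the forecast misses

print(ecf_distance(well_specified, normal_cf, t_grid))  # small
print(ecf_distance(misspecified, normal_cf, t_grid))    # noticeably larger
```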

    Application of data fusion to fluid dynamic data

    In recent years, there have been improvements in the methods of obtaining fluid dynamic data, which has led to the generation of vast amounts of data. Extracting the useful information from large data sets can be a challenging task when investigating data from a single source. However, most experiments use data from multiple sources, such as particle image velocimetry (PIV), pressure sensors, acoustic measurements, and computational fluid dynamics (CFD), to name a few. Knowing the strengths and weaknesses of each measurement technique, one can fuse the data together to improve the understanding of the problem being studied. Concepts from the data fusion community are used to combine fluid dynamic data from the different data sources. The data are fused using techniques commonly used by the fluid dynamics community, such as proper orthogonal decomposition (POD), linear stochastic estimation (LSE), and wavelet analysis. This process can generate large quantities of data, and a method of handling all of the data and the techniques in an efficient manner is required. To accomplish this, a framework was developed that is capable of tracking, storing, and manipulating data. With this framework and these techniques, data fusion can be applied. Data fusion is first applied to a synthetic data set to determine the best methods of fusing data. Data fusion is then applied to airfoil data obtained from PIV, CFD, and pressure measurements to test the ideas from the synthetic data. With the knowledge gained from applying fusion to the synthetic and airfoil data, these techniques are ultimately applied to data for a Mach 0.6 jet obtained from large-window PIV (LWPIV), time-resolved PIV (TRPIV), and pressure. Through the fusion of the different data sets, occlusions in the jet data were estimated within 6% error using a new POD-based technique called Fused POD. In addition, a technique called Dynamic Gappy POD was created to fuse TRPIV and LWPIV to generate a large-window, time-resolved data set. This technique had less error than other standard techniques for accomplishing this, such as pressure-based stochastic estimation. The work presented in this document lays the groundwork for future applications of data fusion to fluid dynamic data. With the success of the work in this document, one can begin to apply the ideas from data fusion to other types of fluid dynamic problems, such as bluff bodies, unsteady aerodynamics, and others. These ideas could be used to help improve understanding in the field of fluid dynamics, given the current limitations of obtaining data and the need to better understand flow phenomena.
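
    Proper orthogonal decomposition, the building block behind the Fused POD and Dynamic Gappy POD techniques mentioned above, reduces to a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below is a generic snapshot POD on synthetic data, not the thesis' fusion framework; the toy travelling-wave data set is an assumption.

```python
import numpy as np

def snapshot_pod(snapshots, n_modes):
    """Snapshot POD of a (n_points, n_snapshots) data matrix via the SVD.

    Returns the leading spatial modes, their temporal coefficients, and the
    full modal energy spectrum (squared singular values).
    """
    fluctuations = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove the mean flow
    U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
    modes = U[:, :n_modes]                                 # spatial POD modes
    coeffs = (np.diag(s[:n_modes]) @ Vt[:n_modes]).T       # temporal coefficients
    return modes, coeffs, s**2

# Toy example: 500 "PIV" points, 40 snapshots of a travelling structure plus noise.
rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 500)[:, None]
t = np.linspace(0, 2 * np.pi, 40)[None, :]
data = np.sin(x - t) + 0.05 * rng.standard_normal((500, 40))
modes, coeffs, energy = snapshot_pod(data, n_modes=2)
print(energy[:2].sum() / energy.sum())   # two leading modes carry nearly all the energy
```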

    Detection and compensation of anomalous conditions in a wind turbine

    Anomalies in the wind field and structural anomalies can cause unbalanced loads on the components and structure of a wind turbine. For example, large unbalanced rotor loads could arise from blades sweeping through low-level jets, which produce wind shear, one such anomaly. The lifespan of the blades could be increased if wind shear could be detected and appropriately compensated. The work presented in this paper proposes a novel anomaly detection and compensation scheme based on the Extended Kalman Filter. Simulation results are presented demonstrating that it can successfully be used to facilitate the early detection of various anomalous conditions, including wind shear, mass imbalance, aerodynamic imbalance and extreme gusts, and also that the wind turbine controllers can subsequently be modified to take appropriate action to compensate for such anomalous conditions.
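
    A heavily simplified sketch of how an Extended Kalman Filter can flag anomalous conditions is given below: the filter propagates a nonlinear state model and raises a flag when the measurement innovation becomes statistically inconsistent with its predicted covariance. The one-state rotor-speed model, noise levels, and gating threshold are assumptions, not the paper's turbine model.

```python
import numpy as np

def ekf_innovation_monitor(x, P, z, f, h, F_jac, H_jac, Q, R, gate=9.0):
    """One EKF step that also returns an anomaly flag based on the innovation.

    f, h         : nonlinear state-transition and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    gate         : threshold on the normalized innovation squared (assumed)
    """
    F = F_jac(x)
    x_pred = f(x)                              # predict through the nonlinear model
    P_pred = F @ P @ F.T + Q
    H = H_jac(x_pred)
    nu = z - h(x_pred)                         # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    d2 = float(nu @ np.linalg.inv(S) @ nu)     # normalized innovation squared
    anomaly = d2 > gate                        # e.g. wind shear or imbalance suspected
    K = P_pred @ H.T @ np.linalg.inv(S)        # update
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, anomaly

# Toy usage with a one-state rotor-speed deviation model (purely illustrative).
f = lambda x: 0.98 * x                         # slow decay of the speed deviation
h = lambda x: x                                # the deviation is measured directly
F_jac = lambda x: np.array([[0.98]])
H_jac = lambda x: np.array([[1.0]])
Q, R = np.array([[1e-3]]), np.array([[1e-2]])
x, P = np.array([0.0]), np.eye(1)
for z in [0.01, 0.02, 0.9]:                    # the last measurement is a gust-like jump
    x, P, flag = ekf_innovation_monitor(x, P, np.array([z]), f, h, F_jac, H_jac, Q, R)
print(flag)                                    # True: innovation inconsistent with the model
```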

    Statistical field estimation and scale estimation for complex coastal regions and archipelagos

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2009. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 153-158). A fundamental requirement in realistic computational geophysical fluid dynamics is the optimal estimation of gridded fields and of spatial-temporal scales directly from the spatially irregular and multivariate data sets that are collected by varied instruments and sampling schemes. In this work, we derive and utilize new schemes for the mapping and dynamical inference of ocean fields in complex multiply-connected domains, study the computational properties of our new mapping schemes, and derive and investigate new schemes for adaptive estimation of spatial and temporal scales. Objective Analysis (OA) is the statistical estimation of fields using the Bayesian-based Gauss-Markov theorem, i.e. the update step of the Kalman Filter. The existing multi-scale OA approach of the Multidisciplinary Simulation, Estimation and Assimilation System consists of the successive utilization of Kalman update steps, one for each scale and for each correlation across scales. In the present work, the approach is extended to field mapping in complex, multiply-connected, coastal regions and archipelagos. A reasonably accurate correlation function often requires an estimate of the distance between data and model points that does not cross complex land forms. New methods for OA based on estimating the length of optimal shortest sea paths using the Level Set Method (LSM) and Fast Marching Method (FMM) are derived, implemented and utilized in general idealized and realistic ocean cases. Our new methodologies could improve widely-used gridded databases such as the climatological gridded fields of the World Ocean Atlas (WOA), since these oceanic maps were computed without accounting for coastline constraints. A new FMM-based methodology for the estimation of absolute velocity under geostrophic balance in complicated domains is also outlined. Our new schemes are compared with other approaches, including the use of stochastically forced differential equations (SDE). We find that our FMM-based scheme for complex, multiply-connected, coastal regions is more efficient and accurate than the SDE approach. We also show that the field maps obtained using our FMM-based scheme do not require postprocessing (smoothing) of fields. The computational properties of the new mapping schemes are studied in detail. We find that higher-order schemes improve the accuracy of distance estimates. We also show that the covariance matrices we estimate are not necessarily positive definite, because the Wiener-Khinchin and Bochner relationships for positive definiteness are only valid for convex simply-connected domains. Several approaches to overcome this issue are discussed and qualitatively evaluated. The solutions we propose include introducing a small process noise or reducing the covariance matrix based on its dominant singular value decomposition. We have also developed and utilized novel methodologies for the adaptive estimation of spatial-temporal scales from irregularly spaced ocean data. The three novel methodologies are based on the use of structure functions, the short-term Fourier transform, and second generation wavelets. To our knowledge, this is the first time that adaptive methodologies for spatial-temporal scale estimation have been proposed. The ultimate goal of all these methods would be to create maps of spatial and temporal scales that evolve as new ocean data are fed to the scheme. This would potentially be a significant advance for the ocean community in better understanding and sampling ocean processes. by Arpit Agarwal.
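
    The Gauss-Markov (Kalman update) step at the heart of Objective Analysis can be sketched as follows. The exponential correlation function, length scale, and the toy one-dimensional example are assumptions; in the thesis the distance matrices would be built from FMM/LSM shortest sea paths rather than straight-line distances.

```python
import numpy as np

def objective_analysis(d_grid_obs, d_obs_obs, obs, obs_var, signal_var=1.0, length_scale=50.0):
    """Gauss-Markov (Kalman update) field estimate on a grid from scattered data.

    The d_* arguments are distance matrices; in complex coastal domains they
    would hold shortest sea-path distances (e.g. from the Fast Marching Method)
    rather than straight-line distances. Exponential covariance is assumed.
    """
    cov = lambda d: signal_var * np.exp(-d / length_scale)
    C_gd = cov(d_grid_obs)                       # grid-to-data covariance
    C_dd = cov(d_obs_obs) + np.diag(obs_var)     # data-to-data covariance plus noise
    weights = C_gd @ np.linalg.inv(C_dd)
    field = weights @ obs                        # a priori mean assumed zero
    err_var = signal_var - np.einsum('ij,ij->i', weights, C_gd)  # posterior variance (diagonal)
    return field, err_var

# Toy 1-D example: three observations mapped onto a 5-point grid.
grid = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
obs_loc = np.array([10.0, 60.0, 90.0])
obs = np.array([1.0, -0.5, 0.3])
d_grid_obs = np.abs(grid[:, None] - obs_loc[None, :])
d_obs_obs = np.abs(obs_loc[:, None] - obs_loc[None, :])
field, err = objective_analysis(d_grid_obs, d_obs_obs, obs, obs_var=np.full(3, 0.1))
print(np.round(field, 2), np.round(err, 2))
```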

    Signal processing techniques for the enhancement of marine seismic data

    This thesis presents several signal processing techniques applied to the enhancement of marine seismic data. Marine seismic exploration provides an image of the Earth's subsurface from reflected seismic waves. Because the recorded signals are contaminated by various sources of noise, minimizing their effects with new attenuation techniques is necessary. A statistical analysis of background noise is conducted using Thomson's multitaper spectral estimator and Parzen's amplitude density estimator. The results provide a statistical characterization of the noise which we use for the derivation of signal enhancement algorithms. Firstly, we focus on single-azimuth stacking methodologies and propose novel stacking schemes using either enhanced weighted sums or a Kalman filter. It is demonstrated that the enhanced methods yield superior results through their ability to exhibit cleaner and better-defined reflected events, as well as a larger number of reflections in deep water. A comparison of the proposed stacking methods with existing ones is also discussed. We then address the problem of random noise attenuation and present an innovative application of sparse code shrinkage and independent component analysis. Sparse code shrinkage is a valuable method when a noise-free realization of the data is generated to provide data-driven shrinkages. Several models of distribution are investigated, but the normal inverse Gaussian density yields the best results. Other acceptable choices of density are discussed as well. Finally, we consider the attenuation of flow-generated nonstationary coherent noise and seismic interference noise. We suggest a multiple-input adaptive noise canceller that utilizes a normalized least mean squares algorithm with a variable normalized step size derived as a function of instantaneous frequency. This filter attenuates the coherent noise successfully when used either by itself or in combination with a time-frequency median filter, depending on the noise spectrum and its distribution along the data. Its application to seismic interference attenuation is also discussed.
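
    A bare-bones normalized LMS noise canceller in the spirit of the final chapter is sketched below: a reference noise-only channel is adaptively filtered to predict and subtract the coherent noise from the primary seismic trace. The fixed normalized step size stands in for the thesis' variable step size derived from instantaneous frequency, and the tap count, wavelet, and synthetic data are assumptions.

```python
import numpy as np

def nlms_canceller(primary, reference, n_taps=32, mu=0.5, eps=1e-8):
    """Adaptive noise canceller for primary = signal + coherent noise.

    The reference channel records (a filtered version of) the coherent noise
    only; the NLMS filter learns that relationship and the error signal is the
    enhanced trace. A fixed normalized step mu stands in for the thesis'
    frequency-dependent variable step size.
    """
    w = np.zeros(n_taps)
    enhanced = np.zeros_like(primary)
    for i in range(n_taps, len(primary)):
        u = reference[i - n_taps:i][::-1]        # most recent reference samples
        y = w @ u                                # estimate of the coherent noise
        e = primary[i] - y                       # enhanced sample (signal estimate)
        w += (mu / (eps + u @ u)) * e * u        # normalized LMS weight update
        enhanced[i] = e
    return enhanced

# Toy example: a reflection-like wavelet buried in coherent noise from a reference channel.
rng = np.random.default_rng(3)
n_samples = 2_000
reference = rng.standard_normal(n_samples)
coherent_noise = np.convolve(reference, [0.6, -0.3, 0.2], mode='same')
signal = np.zeros(n_samples)
signal[1_000:1_005] = [0.0, 1.0, -1.0, 0.5, 0.0]
trace = nlms_canceller(signal + coherent_noise, reference)
```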

    Relationships of earthquakes (and earthquake-associated mass movements) and polar motion as determined by Kalman filtered, Very-Long-Baseline-Interferometry

    A Kalman filter was designed to yield optimal estimates of geophysical parameters from Very Long Baseline Interferometry (VLBI) group delay data. The geophysical parameters are the polar motion components, adjustments to nutation in obliquity and longitude, and a change in the length-of-day parameter. The VLBI clock (and clock rate) parameters and atmospheric zenith delay parameters are estimated simultaneously. Filter background is explained. The IRIS (International Radio Interferometric Surveying) VLBI data are Kalman filtered, and the resulting polar motion estimates are examined. There are polar motion signatures at the times of three large earthquakes occurring between 1984 and 1986: Mexico, 19 September 1985 (magnitude M_s = 8.1); Chile, 3 March 1985 (M_s = 7.8); and Taiwan, 14 November 1986 (M_s = 7.8). Breaks in polar motion occurring about 20 days after the earthquakes appear to correlate well with the onset of increased regional seismic activity and with a return to more normal seismicity, respectively. While the contribution of these three earthquakes to polar motion excitation is small, the cumulative excitation due to earthquakes or seismic phenomena over a Chandler wobble damping period may be significant. Mechanisms for polar motion excitation due to solid earth phenomena are examined. Excitation functions are computed, but the data spans are too short to draw conclusions based on these data.
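
    A schematic of this kind of filter is sketched below (illustrative only; the real VLBI delay model, partial derivatives, and noise statistics are far richer): the geophysical and clock parameters evolve as random walks and are updated from group-delay observations through a design matrix of partial derivatives, which here is randomly generated.

```python
import numpy as np

# Random-walk Kalman filter for a few VLBI-style parameters (polar motion x_p,
# y_p and a clock offset). The design-matrix rows (partial derivatives of the
# group delay with respect to the parameters) are randomly generated here.
Q = np.diag([1e-6, 1e-6, 1e-5])   # random-walk process noise (assumed)
R = 1e-2                          # group-delay measurement variance (assumed)

def kf_step(x, P, a_row, delay):
    """Fold one group-delay observation into the random-walk state estimate."""
    P = P + Q                                  # random-walk prediction
    S = a_row @ P @ a_row + R                  # scalar innovation variance
    K = P @ a_row / S                          # Kalman gain
    x = x + K * (delay - a_row @ x)
    P = P - np.outer(K, a_row @ P)
    return x, P

rng = np.random.default_rng(4)
truth = np.array([0.1, -0.05, 0.02])           # "true" parameter values
x, P = np.zeros(3), np.eye(3)
for _ in range(200):
    a_row = rng.standard_normal(3)             # stand-in partial derivatives
    delay = a_row @ truth + np.sqrt(R) * rng.standard_normal()
    x, P = kf_step(x, P, a_row, delay)
print(np.round(x, 3))                          # settles near the true values
```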