    Markovian Image Models: Applications in Unsupervised Image Segmentation

    1) We have proposed a monogrid MRF model that combines color and texture features in order to improve the quality of segmentation results, and we have also solved the estimation of the model parameters. This work has been published in the Image and Vision Computing journal. 2) We have proposed an RJMCMC sampling method that can identify multi-dimensional Gaussian mixtures. Using this technique, we have developed a fully automatic color image segmentation algorithm. Our results have been published at the BMVC 2004 international conference and in the Image and Vision Computing journal. 3) A new multilayer MRF model has been proposed that can segment an image based on multiple cues (such as color, texture, or motion). This work has been published at the HACIPPR 2005 and ACCV 2006 international conferences. The work on optic flow computation and color-, texture-, and motion-based GVF active contours, done with my student, Mr. Peter Horvath, won first prize at the local Student Research Competition in 2004; the results were presented at the KEPAF 2004 conference. 4) A new shape prior, called 'gas of circles', has been introduced using active contour models. This work was done in collaboration with the Ariana group of INRIA, France, and my PhD student, Mr. Peter Horvath. Results are published at the ICPR 2006 and ICCVGIP 2006 conferences. A preliminary study on active contour models using shape moments has also been carried out; these results are published at the HACIPPR 2005 conference
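
    The MRF segmentation work above rests on a label field with a smoothness prior balanced against per-class color likelihoods. The sketch below is only a minimal illustration of that idea (a Potts prior with a squared-distance color data term, optimised by ICM); it is not the published monogrid or RJMCMC algorithm, and the class means are assumed known here rather than estimated as in the papers.

```python
import numpy as np

def icm_potts_segmentation(image, class_means, beta=1.0, n_iter=5):
    """Toy MRF segmentation: Potts smoothness prior + color data term, ICM optimisation.

    image       : (H, W, 3) float array of color features
    class_means : (K, 3) per-class mean colors (assumed known; the published
                  model estimates all parameters automatically)
    beta        : weight of the Potts smoothness term
    """
    H, W, _ = image.shape
    K = class_means.shape[0]
    # Data term: squared distance of every pixel to every class mean, shape (H, W, K)
    data = ((image[:, :, None, :] - class_means[None, None, :, :]) ** 2).sum(-1)
    labels = data.argmin(-1)  # initial labelling from the data term alone
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                cost = data[y, x].copy()
                # Potts term: penalise labels that disagree with the 4-neighbours
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        cost += beta * (np.arange(K) != labels[ny, nx])
                labels[y, x] = int(cost.argmin())
    return labels
```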

    Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral images

    Linear spectral unmixing is a challenging problem in hyperspectral imaging that consists of decomposing an observed pixel into a linear combination of pure spectra (or endmembers) with their corresponding proportions (or abundances). Endmember extraction algorithms can be employed for recovering the spectral signatures while abundances are estimated using an inversion step. Recent works have shown that exploiting spatial dependencies between image pixels can improve spectral unmixing. Markov random fields (MRF) are classically used to model these spatial correlations and partition the image into multiple classes with homogeneous abundances. This paper proposes to define the MRF sites using similarity regions. These regions are built using a self-complementary area filter that stems from the morphological theory. This kind of filter divides the original image into flat zones where the underlying pixels have the same spectral values. Once the MRF has been clearly established, a hierarchical Bayesian algorithm is proposed to estimate the abundances, the class labels, the noise variance, and the corresponding hyperparameters. A hybrid Gibbs sampler is constructed to generate samples according to the corresponding posterior distribution of the unknown parameters and hyperparameters. Simulations conducted on synthetic and real AVIRIS data demonstrate the good performance of the algorithm
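
    As a concrete illustration of the linear mixing model described above, the hedged sketch below estimates the abundances of a single pixel with a fully constrained least-squares step: non-negativity via NNLS and an approximate sum-to-one constraint via a heavily weighted augmentation row. This is a common baseline, not the hierarchical Bayesian algorithm with MRF class labels proposed in the paper, and the endmember matrix is assumed to be already extracted.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, delta=1e3):
    """Fully constrained least-squares unmixing of one pixel (baseline sketch).

    pixel      : (L,) observed spectrum
    endmembers : (L, R) matrix whose columns are the pure spectra
    delta      : weight of the row enforcing the sum-to-one constraint
    """
    L, R = endmembers.shape
    # Augment the system with a weighted row of ones so the abundances sum to ~1
    M_aug = np.vstack([endmembers, delta * np.ones((1, R))])
    y_aug = np.append(pixel, delta)
    abundances, _ = nnls(M_aug, y_aug)  # non-negativity enforced by NNLS
    return abundances
```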

    Hybridization of neural network models for the prediction of Extreme Significant Wave Height segments

    This work proposes a hybrid methodology for the detection and prediction of Extreme Significant Wave Height (ESWH) periods in oceans. In a first step, the wave height time series is approximated by a labeled sequence of segments, obtained using a genetic algorithm in combination with a likelihood-based segmentation (GA+LS). Then, an artificial neural network classifier with hybrid basis functions is trained with a multiobjective evolutionary algorithm (MOEA) in order to predict the occurrence of future ESWH segments based on past values. The methodology is applied to a buoy in the Gulf of Alaska and another one in Puerto Rico. The results show that the GA+LS is able to segment and group the ESWH values, and the neural network models obtained by the MOEA make good predictions, maintaining a balance between global accuracy and minimum sensitivity for the detection of ESWH events. Moreover, hybrid neural networks are shown to lead to better results than pure models
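
    As a rough illustration of the prediction task described above, the sketch below cuts an SWH series into fixed-length segments, labels a segment as extreme when its maximum exceeds an assumed threshold, and trains an off-the-shelf MLP on simple summary features. The fixed windows and the plain MLP stand in for the GA+LS segmentation and the MOEA-trained hybrid-basis-function network of the paper, so this is only a baseline sketch under assumed parameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def eswh_dataset(swh, window=24, threshold=4.0):
    """Build a toy dataset for extreme-wave-segment prediction.

    swh       : 1-D array of significant wave height values
    window    : fixed segment length (the paper segments with GA+LS instead)
    threshold : SWH value above which a segment counts as extreme (assumed)
    """
    X, y = [], []
    for start in range(0, len(swh) - 2 * window, window):
        past = swh[start:start + window]
        future = swh[start + window:start + 2 * window]
        X.append([past.mean(), past.std(), past.max()])   # simple summary features
        y.append(int(future.max() > threshold))           # 1 = upcoming ESWH segment
    return np.array(X), np.array(y)

# Synthetic series used only to exercise the code; a plain MLP stands in for the
# hybrid-basis-function network trained by the MOEA in the paper.
swh = np.abs(np.random.randn(5000)) * 2.0
X, y = eswh_dataset(swh)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X, y)
```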

    Time series data mining: preprocessing, analysis, segmentation and prediction. Applications

    Currently, the amount of data produced by any information system is increasing exponentially. This motivates the development of automatic techniques to process and mine these data correctly. Specifically, this Thesis tackles these problems for time series data, that is, temporal data collected chronologically. This kind of data can be found in many fields of science, such as palaeoclimatology, hydrology, and finance. Time series data mining (TSDM) comprises several tasks with different objectives, such as classification, segmentation, clustering, prediction, and analysis; this Thesis focuses on time series preprocessing, segmentation, and prediction. Time series preprocessing is a prerequisite for subsequent tasks: for example, the reconstruction of missing values in incomplete parts of a time series can be essential for clustering. In this Thesis, we tackled the problem of massive missing data reconstruction in significant wave height (SWH) time series from the Gulf of Alaska. Buoys commonly stop working for certain periods, which is usually related to malfunctioning or bad weather conditions. The relation between the time series of the different buoys is analysed and exploited to reconstruct the missing time series. In this context, evolutionary artificial neural networks (EANNs) with product units (PUs) are trained, showing that the resulting models are simple and able to recover these values with high precision.
    In the case of time series segmentation, the procedure consists of dividing the time series into different subsequences to achieve different purposes. This segmentation can be done with the aim of finding useful patterns in the time series, and in this Thesis we have developed novel bio-inspired algorithms in this context. For instance, for palaeoclimate data, an initial genetic algorithm (GA) was proposed to discover early warning signals of tipping points (TPs), whose detection was supported by expert opinions. However, given that the expert had to individually evaluate every solution given by the algorithm, the evaluation of the results was very tedious; this led to an improvement of the GA so that the procedure could be evaluated automatically. For significant wave height time series, the objective was the detection of groups which contain extreme waves, i.e. those which are relatively large with respect to other waves close in time, the main motivation being the design of alert systems. This was done using a hybrid algorithm (HA) that includes a likelihood-based segmentation (LS) process, assuming that the points follow a beta distribution. Finally, the analysis of similarities in different periods of European stock markets was also tackled with the aim of evaluating the influence of different markets in Europe.
    When segmenting time series with the aim of reducing the number of points, different techniques have been proposed; however, this remains an open challenge given the difficulty of operating with large amounts of data in different applications. In this work, we propose a novel statistically-driven coral reef optimisation algorithm (SCRO), which automatically adapts its parameters during the evolution, taking into account the statistical distribution of the population fitness. This algorithm improves on the state of the art with respect to accuracy and robustness. The problem has also been tackled using an improvement of the barebones particle swarm optimisation (BBPSO) algorithm, which includes a dynamic update of the cognitive and social components during the evolution, combined with mathematical tricks for obtaining the fitness of the solutions, which significantly reduce the computational cost of the previously proposed coral reef methods. Moreover, the joint optimisation of the two conflicting objectives (clustering quality and approximation quality) is an interesting open challenge, which is also tackled in this Thesis: a multi-objective evolutionary algorithm (MOEA) for time series segmentation is developed, improving both the clustering quality of the solutions and their approximation quality.
    Time series prediction is the estimation of future values by observing and studying previous ones. In this context, we address this task by applying prediction over high-order representations of the elements of the time series, i.e. the segments obtained by time series segmentation. This is applied to two challenging problems: the prediction of extreme wave height and fog prediction. On the one hand, the number of extreme values in SWH time series is much smaller than the number of standard values, so these values cannot be predicted using standard algorithms without taking into account the imbalanced nature of the dataset. For that, an algorithm that automatically finds the set of segments and then applies EANNs is developed, showing the high ability of the algorithm to detect and predict these special events. On the other hand, fog prediction is affected by the same problem, that is, the number of fog events is much lower than that of non-fog events, requiring special treatment too. Data coming from sensors located in different parts of the Valladolid airport are preprocessed and used to build a simple ANN model, which is physically corroborated and discussed.
    The last challenge, which opens new horizons, is the estimation of the statistical distribution of the time series to guide different methodologies. For this, the estimation of a mixture distribution for SWH time series is used to fix the threshold of peaks-over-threshold (POT) approaches. Also, the determination of the best-fitting distribution for the time series is used to discretise it and to make a prediction which treats the problem as ordinal classification. The work developed in this Thesis is supported by twelve papers in international journals, seven papers in international conferences, and four papers in national conferences
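
    The segmentation-for-point-reduction task mentioned above can be illustrated with a generic bottom-up piecewise-linear segmentation, sketched below. It is a plain baseline rather than the SCRO or BBPSO variants developed in the Thesis, and the segment budget `max_segments` is an assumed parameter.

```python
import numpy as np

def bottom_up_segmentation(series, max_segments=20):
    """Generic bottom-up piecewise-linear segmentation (baseline sketch only).

    Repeatedly merges the pair of adjacent segments whose merged linear fit has
    the smallest error until only `max_segments` segments remain.
    """
    series = np.asarray(series, dtype=float)
    x = np.arange(len(series))

    def fit_error(lo, hi):
        # Sum of squared residuals of a least-squares line over series[lo:hi+1]
        coeffs = np.polyfit(x[lo:hi + 1], series[lo:hi + 1], 1)
        return float(((np.polyval(coeffs, x[lo:hi + 1]) - series[lo:hi + 1]) ** 2).sum())

    # Start from fine segments of two points each
    bounds = list(range(0, len(series) - 1, 2)) + [len(series) - 1]
    while len(bounds) - 1 > max_segments:
        costs = [fit_error(bounds[i], bounds[i + 2]) for i in range(len(bounds) - 2)]
        del bounds[int(np.argmin(costs)) + 1]  # merge the cheapest adjacent pair
    return bounds  # indices of the retained cut points
```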

    Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery

    This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by an additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with an unknown number of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data
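
    To make the constrained mixing model above concrete, here is a small generative sketch: abundances are drawn from a flat Dirichlet prior, which satisfies the positivity and additivity constraints by construction, and pixels are formed as noisy linear mixtures. This is only the forward model under assumed parameters; the paper's contribution is the hierarchical Bayesian inversion via Gibbs sampling, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixels(endmembers, n_pixels=100, noise_std=0.01):
    """Generative sketch of the linear mixing model with constrained abundances.

    endmembers : (L, R) matrix whose columns are the pure spectra
    """
    L, R = endmembers.shape
    # Flat Dirichlet prior: rows are non-negative and sum to one by construction
    abundances = rng.dirichlet(np.ones(R), size=n_pixels)      # (n_pixels, R)
    pixels = abundances @ endmembers.T                          # linear mixing
    pixels += noise_std * rng.standard_normal(pixels.shape)     # additive Gaussian noise
    return pixels, abundances
```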

    Afromontane forest ecosystem studies with multi-source satellite data

    The Afromontane forest of north-eastern Nigeria is an important ecosystem rich in flora and fauna species. The main goal of this thesis was to explore the potential of multi-source satellite remote sensing for the assessment of this biodiversity-rich Afromontane Forest ecosystem, using different methods and algorithms to retrieve two major remote sensing essential biodiversity variables (RS-EBVs), which are related and are also major determinants of biological and ecosystem stability