
    A proposal for the implementation of a parallel watershed algorithm


    Implementation of watershed based image segmentation algorithm in FPGA

    The watershed algorithm is a commonly used method for solving the image segmentation problem. However, not all of its many variants are equally well suited to hardware implementation. Several algorithms are studied, and the watershed algorithm based on connected components is selected for implementation because it exhibits the lowest computational complexity, offers good segmentation quality, and can be implemented in an FPGA. It also has simpler memory access than the other watershed-based image segmentation algorithms. This thesis proposes a new hardware implementation of the selected watershed algorithm. The main aim of the thesis is to implement an image segmentation algorithm in an FPGA that requires minimal hardware resources, has a low execution time, and is suitable for real-time applications. A pipelined architecture for the algorithm is designed, implemented in VHDL, and synthesized for a Xilinx Virtex-4 FPGA. In the implementation, the image is loaded into external memory and the algorithm is applied to it repeatedly. To overcome the problem of over-segmentation, a pre-processing step is applied before segmentation and implemented in the pipelined architecture. The pipelined pre-processing stage can operate at up to 228 MHz. The computation time for a 512 × 512 image is about 35 to 45 ms using one pipelined segmentation unit. A parallel architecture is also proposed that uses multiple segmentation units and is fast enough for real-time applications. The implemented and proposed architectures are excellent candidates for applications where high-speed performance is needed.
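
    Not part of the abstract: as a rough illustration of the connected-components idea behind the selected watershed variant, here is a minimal serial Python sketch (all names are ours, plateau handling is omitted, and the thesis's pipelined VHDL design streams these passes in hardware instead):

    ```python
    import numpy as np

    def watershed_connected_components(image):
        """Toy watershed by connected components: point each pixel at its
        lowest 8-neighbour, then follow pointers to a regional minimum;
        pixels draining to the same minimum form one catchment basin.
        (Plateau handling, which a real implementation needs, is omitted.)"""
        h, w = image.shape
        pointer = np.arange(h * w).reshape(h, w)  # each pixel points at itself
        for y in range(h):
            for x in range(w):
                best = image[y, x]
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and image[ny, nx] < best:
                            best = image[ny, nx]
                            pointer[y, x] = ny * w + nx
        # Pointer jumping: repeatedly replace each pointer by its target's
        # pointer until every pixel points directly at its regional minimum.
        flat = pointer.ravel()
        while True:
            nxt = flat[flat]
            if np.array_equal(nxt, flat):
                return nxt.reshape(h, w)  # the minimum's index labels the basin
            flat = nxt
    ```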

    Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing, parameter, and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, called differential evolution adaptive Metropolis (DREAM), that is especially designed to efficiently estimate the posterior probability density function of hydrologic model parameters in complex, high-dimensional sampling problems. This MCMC scheme adaptively updates the scale and orientation of the proposal distribution during sampling while maintaining detailed balance and ergodicity. It is then demonstrated how DREAM can be used to analyze forcing-data error during watershed model calibration, using a five-parameter rainfall-runoff model with streamflow data from two different catchments. Explicit treatment of precipitation error during hydrologic model calibration not only results in more appropriate prediction uncertainty bounds but also significantly alters the posterior distribution of the watershed model parameters. This has significant implications for regionalization studies. The approach also provides important new ways to estimate areal average watershed precipitation, information that is of utmost importance for testing hydrologic theory, diagnosing structural errors in models, and appropriately benchmarking rainfall measurement devices.
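
    Not part of the abstract: DREAM builds its proposals from differences between parallel chains. A minimal sketch of that core idea follows (a single chain pair, without the subspace crossover and outlier handling of the full algorithm; `log_post` and all other names are placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def de_proposal(chains, i, eps=1e-6):
        """Differential-evolution proposal: move chain i along the difference
        of two other randomly chosen chains, plus a little noise so the whole
        parameter space stays reachable."""
        n, d = chains.shape  # needs n >= 3 chains
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        gamma = 2.38 / np.sqrt(2 * d)  # classic DE-MC jump-rate scaling
        return chains[i] + gamma * (chains[r1] - chains[r2]) + eps * rng.standard_normal(d)

    def metropolis_sweep(chains, log_post):
        """One accept/reject pass over all chains."""
        for i in range(len(chains)):
            cand = de_proposal(chains, i)
            if np.log(rng.random()) < log_post(cand) - log_post(chains[i]):
                chains[i] = cand
        return chains
    ```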

    Improving Simulation Efficiency of MCMC for Inverse Modeling of Hydrologic Systems with a Kalman-Inspired Proposal Distribution

    Bayesian analysis is widely used in science and engineering for real-time forecasting, decision making, and to help unravel the processes that explain observed data. These data are deterministic and/or stochastic transformations of the underlying parameters. A key task is then to summarize the posterior distribution of these parameters. When models become too difficult to analyze analytically, Monte Carlo methods can be used to approximate the target distribution. Of these, Markov chain Monte Carlo (MCMC) methods are particularly powerful. Such methods generate a random walk through the parameter space and, under strict conditions of reversibility and ergodicity, successively visit solutions with frequency proportional to the underlying target density. This requires a proposal distribution that generates candidate solutions starting from an arbitrary initial state. The speed at which the sampled chains converge to the target distribution, however, deteriorates rapidly with increasing parameter dimensionality. In this paper, we introduce a new proposal distribution that significantly enhances the efficiency of MCMC simulation for highly parameterized models. This proposal distribution exploits the cross-covariance of model parameters, measurements, and model outputs, and generates candidate states much like the analysis step in the Kalman filter. We embed the Kalman-inspired proposal distribution in the DREAM algorithm during burn-in and present several numerical experiments with complex, high-dimensional, or multimodal target distributions. The results demonstrate that this new proposal distribution can greatly improve the simulation efficiency of MCMC. Specifically, we observe a speed-up on the order of 10-30 times for groundwater models with more than one hundred parameters.
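
    Not part of the abstract: the Kalman-inspired proposal resembles the analysis step of an ensemble Kalman filter, pulling each chain toward the data through the ensemble cross-covariance of parameters and simulated outputs. A hedged sketch of that mechanism (not the authors' exact scheme; `obs_var` is assumed scalar, candidates would still pass through a Metropolis accept/reject, and the abstract notes the proposal is used only during burn-in):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def kalman_proposal(chains, sims, y_obs, obs_var):
        """EnKF-style candidate states. chains: (n, d) parameter vectors,
        sims: (n, m) corresponding model outputs, y_obs: (m,) observations."""
        n = len(chains)
        X = chains - chains.mean(axis=0)   # parameter anomalies (n, d)
        Y = sims - sims.mean(axis=0)       # output anomalies    (n, m)
        Cxy = X.T @ Y / (n - 1)            # cross-covariance    (d, m)
        Cyy = Y.T @ Y / (n - 1)            # output covariance   (m, m)
        K = Cxy @ np.linalg.inv(Cyy + obs_var * np.eye(len(y_obs)))
        # Perturbed observations keep the candidate ensemble spread out.
        innov = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=sims.shape) - sims
        return chains + innov @ K.T        # candidate states, shape (n, d)
    ```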

    Equifinality of formal (DREAM) and informal (GLUE) Bayesian approaches in hydrologic modeling?

    In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. In particular, there is strong disagreement over whether an uncertainty framework should have its roots in a proper statistical (Bayesian) context, or whether it should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive Metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input, and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and the ongoing debate might suggest. The main advantage of formal approaches, however, is that they attempt to disentangle the effects of forcing, parameter, and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understanding and predicting the flow of water through catchments.
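
    Not part of the abstract: for contrast with DREAM's formal likelihood, GLUE's informal recipe is simple enough to sketch in a few lines (the names and the Nash-Sutcliffe score are our illustrative choices; `model` maps a parameter set to a simulated series):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def glue(model, prior_sampler, y_obs, n=10000, threshold=0.0):
        """Minimal GLUE: Monte Carlo sample the prior, score each draw with
        an informal likelihood (Nash-Sutcliffe efficiency here), keep the
        'behavioural' sets above a threshold, and weight their simulations
        into 5-95% prediction bounds."""
        thetas = [prior_sampler(rng) for _ in range(n)]
        sims = np.array([model(t) for t in thetas])
        nse = 1.0 - np.sum((sims - y_obs) ** 2, axis=1) / np.sum((y_obs - y_obs.mean()) ** 2)
        keep = nse > threshold                   # behavioural sets (NSE > 0)
        w = nse[keep] / nse[keep].sum()          # informal likelihood weights
        lower, upper = [], []
        for t in range(sims.shape[1]):           # weighted quantiles per step
            s = sims[keep][:, t]
            order = np.argsort(s)
            cw = np.cumsum(w[order])
            lower.append(s[order][np.searchsorted(cw, 0.05)])
            upper.append(s[order][np.searchsorted(cw, 0.95)])
        return np.array(lower), np.array(upper)
    ```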

    Spectral-spatial classification of n-dimensional images in real-time based on segmentation and mathematical morphology on GPUs

    The objective of this thesis is to develop efficient schemes for spectral-spatial n-dimensional image classification. By efficient schemes, we mean schemes that produce good classification results in terms of accuracy and that can be executed in real time on low-cost computing infrastructures, such as the Graphics Processing Units (GPUs) shipped in personal computers. The n-dimensional images include two- and three-dimensional images, such as those from the medical domain, as well as images with tens to hundreds of dimensions, such as the multi- and hyperspectral images acquired in remote sensing. In image analysis, classification is a regularly used method for information retrieval in areas such as medical diagnosis, surveillance, manufacturing, and remote sensing, among others. In addition, as hyperspectral images have become widely available in recent years owing to the reduction in the size and cost of sensors, the number of lab-scale applications, such as food quality control, art forgery detection, disease diagnosis, and forensics, has also increased. Although many spectral-spatial classification schemes exist, most are computationally inefficient in terms of execution time. In addition, the need for efficient computation on low-cost computing infrastructures is growing as technology is incorporated into everyday applications. In this thesis we propose two spectral-spatial classification schemes: one based on segmentation and the other based on wavelets and mathematical morphology. These schemes were designed to produce good classification results, and they outperform other segmentation- and morphology-based schemes in the literature in terms of accuracy. Additionally, it was necessary to develop techniques and strategies for efficient GPU computing, for example a block-asynchronous strategy, resulting in an efficient GPU implementation of the aforementioned spectral-spatial classification schemes. The optimal GPU parameters were analyzed, and different data partitionings and thread block arrangements were studied to exploit the GPU resources. The results show that the GPU is an adequate computing platform for on-board processing of hyperspectral information.
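
    Not part of the abstract: the thesis does not spell out its segmentation-based scheme here, but a common pattern in this family, which the sketch below merely illustrates, is to regularize a pixel-wise spectral classification with a spatial segmentation map via per-segment majority voting:

    ```python
    import numpy as np

    def spectral_spatial_classify(pixel_labels, segments):
        """Give every segment the majority label of its pixels.
        pixel_labels: (h, w) non-negative ints from a spectral classifier;
        segments: (h, w) segment ids from e.g. a watershed segmentation."""
        out = np.empty_like(pixel_labels)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            out[mask] = np.bincount(pixel_labels[mask]).argmax()
        return out
    ```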

    Three-Dimensional GPU-Accelerated Active Contours for Automated Localization of Cells in Large Images

    Cell segmentation in microscopy is a challenging problem, since cells are often asymmetric and densely packed. It becomes particularly difficult for extremely large images, where manual intervention and processing time can make segmentation intractable. In this paper, we present an efficient and highly parallel formulation for symmetric three-dimensional (3D) contour evolution that extends previous work on fast two-dimensional active contours. We provide a formulation for optimization on 3D images, as well as a strategy for accelerating computation on consumer graphics hardware. The proposed software takes advantage of Monte Carlo sampling schemes to speed up convergence and reduce thread divergence. Experimental results show that this method provides superior performance for large 2D and 3D cell segmentation tasks compared to existing methods on large 3D brain images.
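
    Not part of the abstract: the paper's contribution is the parallel 3D formulation and the Monte Carlo subsampling of update points that reduces GPU thread divergence; neither is reproduced here. As orientation only, a serial 2D sketch of one region-based contour-evolution step (a Chan-Vese-style level-set update, our choice of flavor):

    ```python
    import numpy as np

    def evolve_contour(phi, image, dt=0.1, nu=0.5):
        """One explicit update of a level-set contour: pixels push the
        interface toward whichever region mean (inside/outside) they match
        better, while a Laplacian term approximates curvature smoothing."""
        inside, outside = phi > 0, phi <= 0
        c_in = image[inside].mean() if inside.any() else 0.0
        c_out = image[outside].mean() if outside.any() else 0.0
        force = (image - c_out) ** 2 - (image - c_in) ** 2
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        return phi + dt * (force + nu * lap)
    ```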