
    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and of edge detection in particular, has, as a result of the increased use of imagery, drawn growing attention not only from computer science but also from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., failure to mark true edges), good accuracy, and a consistent response to a single edge. Moreover, most work in feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where needs constantly change, we must equally change the way we think. In a digital world where images are used for an ever-wider variety of purposes, researchers who are serious about addressing the aforementioned limitations must think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, methodology for detecting digital image features using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its two-dimensional intrinsic mode functions, known as bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
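The sifting process at the heart of BEMD can be illustrated in one dimension. The sketch below is a minimal, hypothetical implementation, not the dissertation's algorithm: it extracts a single intrinsic mode function by repeatedly subtracting the mean of cubic-spline envelopes through the local extrema. BEMD replaces these splines with two-dimensional envelope surfaces over an image's extrema, and the simple `tol` threshold here stands in for the dissertation's more elaborate stopping criteria and boundary adjustment.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(signal, t, max_iter=50, tol=0.05):
    """Extract one intrinsic mode function (IMF) by sifting (1D sketch)."""
    h = signal.astype(float).copy()
    for _ in range(max_iter):
        # interior local maxima and minima of the current component
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build stable envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)  # upper envelope
        lower = CubicSpline(t[minima], h[minima])(t)  # lower envelope
        mean_env = 0.5 * (upper + lower)
        if np.mean(mean_env**2) < tol * np.mean(h**2):
            break  # stand-in stopping criterion: envelope mean is small
        h = h - mean_env
    return h

t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf1 = sift(x, t)       # fastest oscillation is extracted first
residue = x - imf1      # slower component remains in the residue
```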

    Edge-preserving Multiscale Image Decomposition based on Local Extrema

    We propose a new model for detail that inherently captures oscillations, a key property that distinguishes textures from individual edges. Inspired by techniques in empirical data analysis and morphological image analysis, we use the local extrema of the input image to extract information about oscillations: we define detail as oscillations between local minima and maxima. Building on the key observation that the spatial scale of oscillations is characterized by the density of local extrema, we develop an algorithm for decomposing images into multiple scales of superposed oscillations. Current edge-preserving image decompositions assume image detail to be low-contrast variation. Consequently, they apply filters that extract features of increasing contrast as successive layers of detail. As a result, they are unable to distinguish between high-contrast, fine-scale features and edges of similar contrast that are to be preserved. We compare our results with existing edge-preserving image decomposition algorithms and demonstrate exciting applications that are made possible by our new notion of detail.
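The extrema-based notion of detail can be sketched as follows. This is a simplified, hypothetical illustration rather than the paper's edge-preserving algorithm: local maxima and minima are located with morphological filters, surfaces are interpolated through them, and the mean of the two surfaces serves as the coarse layer, leaving the residual oscillation as detail.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata

def extrema_layer(img, k=5):
    """Split an image into a coarse layer and oscillatory detail
    using envelopes through its local extrema (simplified sketch)."""
    is_max = img == maximum_filter(img, size=k)  # local maxima mask
    is_min = img == minimum_filter(img, size=k)  # local minima mask
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    pts = np.column_stack([yy.ravel(), xx.ravel()])
    upper = griddata(pts[is_max.ravel()], img[is_max], (yy, xx), method='nearest')
    lower = griddata(pts[is_min.ravel()], img[is_min], (yy, xx), method='nearest')
    base = 0.5 * (upper + lower)  # coarse layer: mean of the two envelopes
    return base, img - base       # detail = oscillation around the base

rng = np.random.default_rng(0)
img = np.outer(np.sin(np.linspace(0, 3, 64)), np.ones(64))
img += 0.1 * rng.standard_normal((64, 64))
base, detail = extrema_layer(img)
```

Note that the extrema density (controlled here by the neighborhood size `k`) sets the spatial scale of the extracted oscillations, echoing the paper's key observation.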

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry, while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those involving mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm, and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool.
The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The totality of the research has been applied towards detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.

    A Study of Biomedical Time Series Using Empirical Mode Decomposition: Extracting event-related modes from EEG signals recorded during visual processing of contour stimuli

    Noninvasive neuroimaging techniques like functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) allow researchers to investigate and analyze brain activity during visual processing. EEG offers a high temporal resolution at the level of submilliseconds, which can be combined favorably with fMRI, which has good spatial resolution on small spatial scales in the millimeter range. These neuroimaging techniques were, and still are, instrumental in the diagnosis and treatment of neurological disorders in clinical applications. In this PhD thesis we concentrate on electrophysiological signatures within EEG recordings of a combined EEG-fMRI data set which were taken while participants performed a contour integration task. The estimation, from surface recordings, of the location and distribution of the electrical sources in the brain that are responsible for interesting EEG waves has drawn the attention of many EEG/MEG researchers. However, this process, called brain source localization, remains one of the major problems in EEG. It consists of solving two modeling problems: forward and inverse. In the forward problem, one is interested in predicting the expected potential distribution on the scalp from given electrical sources that represent active neurons in the head. These evaluations are necessary to solve the inverse problem, which can be defined as the problem of estimating the brain sources that generated the measured electrical potentials. This thesis presents a data-driven analysis of EEG data recorded during a combined EEG/fMRI study of visual processing during a contour integration task. The analysis is based on ensemble empirical mode decomposition (EEMD) and discusses characteristic features of event-related modes (ERMs) resulting from the decomposition.
    We identify clear differences in certain ERMs in response to contour vs. non-contour Gabor stimuli, mainly for response amplitudes peaking around 100 ms (called P100) and 200 ms (called N200) after stimulus onset, respectively. We observe early P100 and N200 responses at electrodes located in the occipital area of the brain, while late P100 and N200 responses appear at electrodes located in frontal brain areas. Signals at electrodes in central brain areas show bimodal early/late response signatures in certain ERMs. Head topographies clearly localize statistically significant response differences to both stimulus conditions. Our findings provide independent support for recent models which suggest that contour integration depends on distributed network activity within the brain. Next, building on the previous analysis, a new approach for source localization of EEG data is presented, which combines ERMs, extracted with EEMD, with inverse models. As a first step, 64-channel EEG recordings are pooled according to six brain areas and decomposed, by applying EEMD, into their underlying ERMs. Then, based upon the problem at hand, the most closely related ERM, in terms of frequency and amplitude, is combined with inverse modeling techniques for source localization. More specifically, the standardized low resolution brain electromagnetic tomography (sLORETA) procedure is employed in this work. The accuracy and robustness of the results indicate that this approach is highly promising among source localization techniques for EEG data. Given the results of the analyses above, it can be said that EMD is able to extract intrinsic signal modes, ERMs, which contain decisive information about responses to contour and non-contour stimuli. Hence, we introduce a new toolbox, called EMDLAB, which serves the growing interest of the signal processing community in applying EMD as a decomposition technique.
    EMDLAB can be used to perform, easily and effectively, four common types of EMD on EEG data: plain EMD, ensemble EMD (EEMD), weighted sliding EMD (wSEMD) and multivariate EMD (MEMD). The main goal of the EMDLAB toolbox is to extract characteristics of the EEG signal via either intrinsic mode functions (IMFs) or ERMs. While IMFs reflect characteristics of the original EEG signal, ERMs reflect characteristics of the ERPs of the original signal. The new toolbox is provided as a plug-in to the well-known EEGLAB, which enables it to exploit the advantageous visualization capabilities of EEGLAB, as well as the statistical data analysis techniques provided there, for the extracted IMFs and ERMs of the signal.
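The ensemble EMD underlying the analysis can be sketched compactly: many noisy copies of the signal are decomposed and the resulting modes are averaged, so that the added white noise cancels in the mean while mode mixing is reduced. Below is a minimal, hypothetical 1D sketch of the principle; EMDLAB's actual implementations are considerably more elaborate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def first_imf(x, t, n_sift=10):
    """One IMF via a fixed number of sifting iterations (sketch)."""
    h = x.astype(float).copy()
    for _ in range(n_sift):
        mx = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        mn = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(mx) < 4 or len(mn) < 4:
            break  # too few extrema to continue sifting
        env = 0.5 * (CubicSpline(t[mx], h[mx])(t) + CubicSpline(t[mn], h[mn])(t))
        h = h - env  # subtract the mean envelope
    return h

def eemd_first_imf(x, t, n_ens=50, eps=0.2, seed=0):
    """Ensemble EMD: average the first IMF over noisy copies of x."""
    rng = np.random.default_rng(seed)
    scale = eps * x.std()
    imfs = [first_imf(x + scale * rng.standard_normal(x.size), t)
            for _ in range(n_ens)]
    return np.mean(imfs, axis=0)

t = np.linspace(0.0, 1.0, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
mode = eemd_first_imf(x, t)  # noise-stabilized first mode
```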

    Image Simulation in Remote Sensing

    Remote sensing is being actively researched in the fields of environment, military and urban planning, through technologies such as monitoring of natural climate phenomena on the Earth, land cover classification, and object detection. Recently, satellites equipped with observation cameras of various resolutions have been launched, and remote sensing images are acquired by various observation methods, including cluster satellites. However, the atmospheric and environmental conditions present in the observed scene degrade the quality of images or interrupt the capture of the Earth's surface information. One method to overcome this is to generate synthetic images through image simulation. Synthetic images can be generated by using statistical or knowledge-based models, or by using spectral and optics-based models, to create a simulated image in place of the unobtained image at a required time. The various proposed methodologies will provide economical utility in the generation of image learning materials and time series data through image simulation. The six published articles cover various topics and applications central to remote sensing image simulation. Although submission to this Special Issue is now closed, the need remains for further in-depth research and development related to image simulation with high spatial and spectral resolution, sensor fusion, and colorization. I would like to take this opportunity to express my most profound appreciation to the MDPI Book staff, the editorial team of the Applied Sciences journal, especially Ms. Nimo Lang, the assistant editor of this Special Issue, the talented authors, and the professional reviewers.

    Binary Technetium Halides

    In this work, the synthetic and coordination chemistry as well as the physico-chemical properties of binary technetium (Tc) chlorides, bromides, and iodides were investigated. Resulting from these studies was the discovery of five new binary Tc halide phases: α/β-TcCl3, α/β-TcCl2, and TcI3, and the reinvestigation of the chemistries of TcBr3 and TcX4 (X = Cl, Br). Prior to 2009, the chemistry of binary Tc halides was poorly studied and defined by only three compounds, i.e., TcF6, TcF5, and TcCl4. Today, ten phases are known (i.e., TcF6, TcF5, TcCl4, TcBr4, TcBr3, TcI3, α/β-TcCl3 and α/β-TcCl2), making the binary halide system of Tc comparable to those of its neighboring elements. Technetium binary halides were synthesized using three methods: reactions of the elements in sealed tubes, reactions of flowing HX(g) (X = Cl, Br, and I) with Tc2(O2CCH3)4Cl2, and thermal decompositions of TcX4 (X = Cl, Br) and α-TcCl3 in sealed tubes under vacuum. Binary Tc halides can be found in various dimensionalities, such as molecular solids (TcF6), extended chains (TcF5, TcCl4, α/β-TcCl2, TcBr3, TcI3), infinite layers (β-TcCl3), and bidimensional networks of clusters (α-TcCl3); eight structure types with varying degrees of metal-metal interactions are now known. The coordination chemistry of Tc binary halides can resemble that of the adjacent elements molybdenum and ruthenium (β-TcCl3, TcBr3, TcI3), rhenium (TcF5, α-TcCl3), and platinum (TcCl4, TcBr4), or can be unique (α-TcCl2 and β-TcCl2) with respect to other known transition metal binary halides. Technetium binary halides display a range of interesting physical properties that arise from their electronic and structural configurations. The thermochemistry of binary Tc halides is extensive. These compounds can selectively volatilize, decompose, disproportionate, or convert to other phases. Ultimately, binary Tc halides may find application in the nuclear fuel cycle and as precursors in inorganic and organometallic chemistry.

    Multi-dimensional characterization of soil surface roughness for microwave remote sensing applications


    High Performance Video Stream Analytics System for Object Detection and Classification

    Due to recent advances in cameras, cell phones and camcorders, particularly the resolution at which they can record an image/video, large amounts of data are generated daily. This video data is often so large that manually inspecting it for object detection and classification can be time consuming and error prone, and therefore requires automated analysis to extract useful information and metadata. The automated analysis of video streams also comes with numerous challenges, such as blurred content and variations in illumination conditions and poses. In this thesis we investigate an automated video analytics system which takes into account characteristics from both shallow and deep learning domains. We propose the fusion of features from the spatial frequency domain to perform highly accurate blur- and illumination-invariant object classification using deep learning networks. We also propose the tuning of the hyper-parameters associated with the deep learning network through a mathematical model. The mathematical model used to support hyper-parameter tuning improved the performance of the proposed system during training. The effects of various hyper-parameters on the system's performance are compared, and the parameters that contribute towards the most optimal performance are selected for video object classification. The proposed video analytics system has been demonstrated to process a large number of video streams, and the underlying infrastructure is able to scale based on the number and size of the video stream(s) being processed. Extensive experimentation on publicly available image and video datasets reveals that the proposed system is significantly more accurate and scalable, and can be used as a general-purpose video analytics system.
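A spatial-frequency feature of the kind fused with a deep network can be sketched as a radially averaged log power spectrum; subtracting the image mean removes the DC term, which gives some robustness to global illumination changes. This is a hypothetical illustration of the general idea, not the thesis's exact feature set.

```python
import numpy as np

def freq_features(img, n_bins=16):
    """Radially averaged log power spectrum of an image patch (sketch)."""
    img = img - img.mean()  # drop the DC term (global illumination level)
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # radial frequency per pixel
    edges = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    feats = np.empty(n_bins)
    for i in range(n_bins):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        feats[i] = np.log1p(spec[mask].mean())  # mean power in the annulus
    return feats

rng = np.random.default_rng(1)
patch = rng.standard_normal((64, 64))
v = freq_features(patch)  # compact 16-dim frequency descriptor
```

Blur suppresses high frequencies, so the tail of such a descriptor shifts in a characteristic way, which is one reason frequency-domain features are a natural complement to CNN activations for blur-invariant classification.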