
    Visible and Near Infrared Image Fusion Based on Texture Information

    Multi-sensor fusion is widely used in the environment perception systems of autonomous vehicles. It mitigates the interference caused by environmental changes and makes the whole driving system safer and more reliable. In this paper, a novel visible and near-infrared fusion method based on texture information is proposed to enhance images of unstructured environments. It addresses the artifacts, information loss, and noise that affect traditional visible and near-infrared image fusion methods. First, the structure information of the visible image (RGB) and the near-infrared image (NIR) after texture removal is obtained by relative total variation (RTV) computation and used as the base layer of the fused image; second, a Bayesian classification model is established to calculate the noise weight, and the noise in the visible image is adaptively filtered by a joint bilateral filter; finally, the fused image is obtained by color space conversion. The experimental results demonstrate that the proposed algorithm preserves the spectral characteristics and the unique information of the visible and near-infrared images without artifacts or color distortion, retains their unique texture, and is robust. Comment: 10 pages, 11 figures.
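
    A minimal sketch of this kind of RGB-NIR fusion pipeline is given below. It is my own illustration under simplifying assumptions, not the authors' code: the RTV texture removal is approximated by Gaussian smoothing, the Bayesian noise weighting and joint bilateral filtering are reduced to a plain bilateral filter, and the color space conversion uses YCrCb.

```python
# Illustrative RGB-NIR fusion sketch (assumes OpenCV + NumPy; RTV and the
# Bayesian noise model are replaced by simple stand-ins).
import cv2
import numpy as np

def fuse_rgb_nir(rgb_path, nir_path, detail_gain=0.6):
    rgb = cv2.imread(rgb_path)                        # visible image (BGR)
    nir = cv2.imread(nir_path, cv2.IMREAD_GRAYSCALE)  # near-infrared image

    # Work on luminance so the color (spectral) information is preserved.
    ycrcb = cv2.cvtColor(rgb, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]

    # Stand-in for RTV texture removal: smoothed "structure" (base) layers.
    base_y = cv2.GaussianBlur(y, (0, 0), sigmaX=5)
    base_nir = cv2.GaussianBlur(nir.astype(np.float32), (0, 0), sigmaX=5)

    # Detail (texture) layer of the NIR image = image minus its base layer.
    detail_nir = nir.astype(np.float32) - base_nir

    # Denoise the visible base layer (the paper uses a joint bilateral filter
    # guided by a Bayesian noise weight; a plain bilateral filter stands in).
    base_y = cv2.bilateralFilter(base_y, d=9, sigmaColor=30, sigmaSpace=9)

    # Inject NIR texture into the visible luminance and convert back to BGR.
    ycrcb[:, :, 0] = np.clip(base_y + detail_gain * detail_nir, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```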

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N−1)) constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (which is linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as traditional GA algorithms require. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning for feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
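
    As a point of reference, the discrete Choquet integral the dissertation builds on can be computed in a few lines. The sketch below is my own illustration, not code from the dissertation; the frozenset encoding of the fuzzy measure and the toy values are assumptions. For N = 3 it also shows where the 2^N measure values and N(2^(N−1)) monotonicity constraints come from.

```python
# Discrete Choquet integral with respect to a fuzzy measure g.
def choquet_integral(h, g):
    """h: list of N input values; g: fuzzy measure as a dict
    frozenset(indices) -> [0, 1], with g[empty set] = 0 and g[all] = 1."""
    n = len(h)
    order = sorted(range(n), key=lambda i: h[i])   # indices by ascending value
    result, prev = 0.0, 0.0
    for k, i in enumerate(order):
        subset = frozenset(order[k:])              # inputs with value >= h[i]
        result += (h[i] - prev) * g[subset]
        prev = h[i]
    return result

# Toy example with N = 3: 2^3 = 8 measure values; the 3 * 2^2 = 12
# monotonicity constraints are all satisfied by this (invented) measure.
g = {frozenset(): 0.0,
     frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({2}): 0.5,
     frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.7, frozenset({1, 2}): 0.8,
     frozenset({0, 1, 2}): 1.0}
print(choquet_integral([0.2, 0.9, 0.5], g))        # -> 0.6
```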

    Change detection by fusion of remote sensing images of different resolutions and modalities

    Change detection in a scene is one of the most challenging problems in remote sensing. It consists of detecting changes that have occurred in a given geographical area by comparing images of that area acquired at different times. The comparison is easier when the images come from the same type of sensor, i.e., correspond to the same modality (most often multi-band optical) and have identical spatial and spectral resolutions. Most unsupervised change detection techniques are designed specifically for this scenario. In this case, the images can be compared directly by computing the difference between homologous pixels, i.e., pixels corresponding to the same ground location. However, in some specific cases such as emergency situations, one-off missions, defence, and security, it may be necessary to exploit images of different modalities and resolutions. This heterogeneity in the processed images introduces additional problems for change detection, which most state-of-the-art methods do not address. When the modality is identical but the resolutions differ, it is possible to return to the favourable scenario by applying preprocessing such as resampling operations designed to reach the same spatial and spectral resolutions. Nevertheless, such preprocessing can lead to a loss of information relevant to change detection. In particular, it is applied independently to the two images and therefore does not account for the strong relationships that exist between them. The objective of this thesis is to develop change detection methods that make the best use of the information contained in a pair of observed images, without conditions on their modality or their spatial and spectral resolutions. The restrictions classically imposed in the state of the art are lifted through an approach based on the fusion of the two observed images. The first proposed strategy applies to images of identical modality but different resolutions. It consists of three steps. The first step fuses the two observed images, which yields a high-resolution image of the scene carrying the information about possible changes. The second step predicts two unobserved images, with resolutions identical to those of the observed images, by spatial and spectral degradation of the fused image. Finally, the third step performs classical change detection between observed and predicted images of the same resolutions. A second strategy models the observed images as degraded versions of two unobserved images characterized by identical, high spectral and spatial resolutions. It relies on a robust fusion step that exploits a sparsity prior on the observed changes. Finally, the fusion principle is extended to images of different modalities. In this case, where the pixels are not directly comparable because they correspond to different physical quantities, the comparison is carried out in a transformed domain. The two images are represented by sparse linear combinations of the elements of two coupled dictionaries learned from the data. Change detection is then performed by estimating a coupled code under a spatial sparsity condition on the difference between the codes estimated for each image. Experiments with these methods, conducted on realistically simulated changes and on real changes, demonstrate the advantages of the developed methods and, more generally, the benefit of fusion for change detection.
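
    As a rough illustration of the final step of the first strategy (my own sketch under simplifying assumptions, not the thesis implementation), once observed and predicted images share the same resolution, a change map can be obtained from their pixel-wise difference:

```python
# Pixel-wise change detection between an observed image and its prediction.
import numpy as np

def change_map(observed, predicted, k=3.0):
    """observed, predicted: arrays of shape (H, W, bands) at the same spatial
    and spectral resolution; returns a boolean H x W change mask."""
    diff = np.linalg.norm(observed.astype(float) - predicted.astype(float),
                          axis=-1)                 # per-pixel difference magnitude
    threshold = diff.mean() + k * diff.std()       # simple global threshold
    return diff > threshold
```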

    Blind restoration of images with penalty-based decision making: a consensus approach

    In this thesis we show a relationship between fuzzy decision making and image processing. Various applications of the consensus methodology to image noise reduction are introduced. A new approach is introduced to deal with non-stationary Gaussian noise and spatially non-stationary noise in MRI.
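
    The thesis's penalty-based decision machinery is not reproduced here, but the general consensus idea can be illustrated by fusing the outputs of several candidate denoising filters. The sketch below is a hypothetical illustration of that general idea, not the method from the thesis.

```python
# Consensus of several denoising filters via a pixel-wise median.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

def consensus_denoise(noisy):
    candidates = np.stack([
        gaussian_filter(noisy, sigma=1.0),   # linear smoothing
        median_filter(noisy, size=3),        # impulse-robust smoothing
        uniform_filter(noisy, size=3),       # box averaging
    ])
    return np.median(candidates, axis=0)     # pixel-wise consensus estimate
```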

    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, in which an IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for representing and combining evidential information, they make the decision process rather complicated and call for more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the dimensionally large data into smaller, more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
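
    One common concrete instance of such a pair of set-theoretic functions is the belief/plausibility interval derived from a basic probability assignment over subsets of the classes. The sketch below is my own illustration of that idea, with invented class names and masses; it is not the evidential reasoning method itself.

```python
# Lower/upper probability interval [Bel, Pl] from a basic probability assignment.
def belief_plausibility(mass, hypothesis):
    """mass: dict frozenset(classes) -> probability mass summing to 1;
    hypothesis: frozenset of classes; returns the interval (Bel, Pl)."""
    bel = sum(m for A, m in mass.items() if A and A <= hypothesis)  # subsets of H
    pl = sum(m for A, m in mass.items() if A & hypothesis)          # sets meeting H
    return bel, pl

# Toy example over the classes {"forest", "water", "urban"}.
mass = {frozenset({"forest"}): 0.5,
        frozenset({"forest", "water"}): 0.3,
        frozenset({"forest", "water", "urban"}): 0.2}
print(belief_plausibility(mass, frozenset({"forest"})))   # -> (0.5, 1.0)
```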

    Neuro-fuzzy control modelling for gas metal arc welding process

    Weld quality features are difficult or impossible to measure and control directly during welding, so indirect methods are necessary. Penetration is the most important geometric feature since, in most applications, it is the most significant factor affecting joint strength. Observation of penetration is only possible from the back face of a full-penetration weld. In all other cases, since direct measurement of the depth of penetration is not possible, real-time control of penetration in the Gas Metal Arc Welding (GMAW) process must rely on sensing conditions at the top surface of the joint. This continues to be a major area of interest for automation of the process. The objective of this research has been to develop an on-line intelligent process control model for GMAW that can monitor and control the welding process. The model uses the measurement of the temperature at a point on the surface of the workpiece to predict the depth of penetration being achieved and to provide feedback for corrective adjustment of the welding variables. Neural network and fuzzy logic technologies have been used to achieve a reliable neuro-fuzzy control model for GMAW of a typical closed butt joint with a 60° Vee edge preparation. The neural network model predicts the surface temperature expected for a set of fixed and adjustable welding variables when a prescribed level of penetration is achieved. This predicted temperature is compared with the actual surface temperature occurring during welding, as measured by an infrared sensor. If there is a difference between the measured temperature and the temperature predicted by the neural network, a fuzzy logic model recommends the changes to the adjustable welding variables necessary to achieve the desired weld penetration. Large-scale experiments to obtain data for modelling and model validation, and various other modelling studies, are described. The results are used to establish the relationships between the output surface temperature measurement, the welding variables, and the corresponding achieved weld quality criteria. The effectiveness of the modelling methodology in dealing with a fixed or variable root gap has also been tested. The results show that the neuro-fuzzy models are capable of providing control of penetration to an acceptable degree of accuracy, with a potential control response time, using modestly powerful computing hardware, of the order of one hundred milliseconds. This is more than adequate for real-time control of GMAW. The application potential for control using these models is significant since, unlike many other top-surface monitoring methods, it does not require sensing of the highly transient weld pool shape or surface.
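
    The feedback loop the abstract describes can be summarized schematically as follows. This is my own sketch with hypothetical function and variable names (predict_temperature, read_ir_sensor, wire_feed_speed), not the authors' controller; the rule base in the thesis is a fuzzy logic model rather than the crisp thresholds used here.

```python
# One step of a temperature-feedback control loop for GMAW penetration.
def control_step(predict_temperature, read_ir_sensor, settings):
    expected = predict_temperature(settings)   # neural-network prediction for target penetration
    measured = read_ir_sensor()                # infrared surface-temperature measurement
    error = measured - expected                # positive: running hotter than expected

    # Crude stand-in for the fuzzy rule base: "small", "too hot", "too cold".
    if abs(error) < 5.0:                       # error is small: leave settings alone
        adjustment = 0.0
    else:                                      # scale the correction with the error, capped
        adjustment = -0.02 * max(-50.0, min(error, 50.0))

    settings["wire_feed_speed"] += adjustment  # one example adjustable welding variable
    return settings
```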

    A REVIEW OF MACHINE VISION TECHNIQUES FOR THE INSPECTION OF GMAW WELDING PROCESSES

    The GMAW welding process is widely studied because of its high productivity and low cost. In this work, research aimed at inspecting the GMAW process with machine vision systems has been reviewed, with the goal of establishing the main elements used in these systems, highlighting two categories: computational methods (software and general algorithms), and materials and mathematical models (statistical and numerical methods). These categories overlap in the study and have been used to assess cost in terms of human and economic resources. The research reviewed was conducted in the last decade, with the exception of a few studies that played a key role in the development of inspection systems for GMAW processes. Finally, possible fields of research are highlighted for those who intend to explore machine vision systems for GMAW process inspection. Keywords: GMAW, welding, machine vision, inspection.

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field in which important physical characteristics of an area are extracted from emitted radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions for sensing and detecting various characteristics such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches, and has led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.