
    THE DEVELOPMENT OF A HOLISTIC EXPERT SYSTEM FOR INTEGRATED COASTAL ZONE MANAGEMENT

    Coastal data and information comprise a massive and complex resource that is vital to the practice of Integrated Coastal Zone Management (ICZM), an increasingly important application. ICZM is equally complex, but uses the holistic paradigm to deal with this sophistication. The application domain and its resource therefore require a tool with matching characteristics, something made practical by the current wide availability of high-performance computing. An object-oriented expert system, COAMES, has been constructed to prove this concept. The application of expert systems to ICZM in particular has been flagged as a viable challenge, yet very few have taken it up. COAMES uses the Dempster-Shafer theory of evidence to reason with uncertainty and, importantly, introduces the power of ignorance and integration to model the holistic approach. In addition, object orientation enables a modular approach, embodied in the separation of inference engine and knowledge base. Two case studies have been developed to test COAMES. In both, knowledge has been successfully used to drive data and actions through metadata, achieving a holism of data, information and knowledge. A technological holism has also been demonstrated through the effective classification of landforms on the rapidly eroding Holderness coast, and a holism across disciplines and CZM institutions has been effected by intelligent metadata management of a Fal Estuary dataset. Finally, the differing spatial and temporal scales at which the two case studies operate implicitly demonstrate a holism of scale, although explicit means of managing scale were also suggested. In all cases, the same knowledge structure was used to effectively manage and disseminate coastal data, information and knowledge.
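
    As a concrete illustration of the evidential reasoning mentioned above, the following is a minimal sketch of Dempster's rule of combination over a small frame of discernment, using dictionary-based mass functions in Python. The landform hypotheses and mass values are purely illustrative and are not taken from the COAMES knowledge base.

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two basic mass assignments over frozenset focal elements
            using Dempster's rule; returns the normalized combined masses."""
            combined = {}
            conflict = 0.0
            for (a, w1), (b, w2) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
            if conflict >= 1.0:
                raise ValueError("total conflict: sources cannot be combined")
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Illustrative landform hypotheses (not COAMES's actual knowledge base)
        CLIFF, BEACH = frozenset({"cliff"}), frozenset({"beach"})
        THETA = CLIFF | BEACH                      # full frame: ignorance mass lives here
        m_sensor = {CLIFF: 0.6, THETA: 0.4}        # evidence from one source
        m_expert = {CLIFF: 0.3, BEACH: 0.3, THETA: 0.4}
        print(dempster_combine(m_sensor, m_expert))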

    Context Exploitation in Data Fusion

    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to utilize any available information that could help to constrain, improve or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest and whose knowledge may help in understanding the (estimated) situation and in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been exploited mainly as a parameter in system and measurement models, which has led to numerous approaches for linear and non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, utilized for recursive enhancement of the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a particular mode of the filter. Common practice in multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of forming a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. probability of detection, false alarms and clutter noise, which can be further enhanced by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model, we can predict the target's future actions based on its past observations. The probability of a future action can then be used in the fusion process to adjust the tracker's confidence in measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, into the filter measurement update step, we have been able to reduce the uncertainty of the tracking solution and improve the consistency of the track. The promising results demonstrate that fusing CI brings a significant performance improvement compared to regular tracking approaches.
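
    As a hedged illustration of injecting a context likelihood into the measurement update, the sketch below shows a particle-filter style update in which the usual measurement likelihood is multiplied by a context likelihood (here a simple corridor constraint). The functions, thresholds and noise values are assumptions chosen for illustration, not the models used in the work described above.

        import numpy as np

        def context_aided_update(particles, weights, z, meas_lik, context_lik):
            """One measurement-update step in which the usual measurement likelihood
            is multiplied by a context likelihood (e.g. an off-corridor penalty).
            particles: (N, d) states, weights: (N,) prior weights."""
            w = weights * meas_lik(particles, z) * context_lik(particles)
            s = w.sum()
            if s == 0:
                return np.full_like(weights, 1.0 / len(weights))
            return w / s

        # Illustrative 1-D example: position measurement with Gaussian noise,
        # context = target is believed to stay inside a corridor [0, 10].
        meas_lik = lambda x, z: np.exp(-0.5 * ((x[:, 0] - z) / 1.0) ** 2)
        context_lik = lambda x: np.where((x[:, 0] >= 0) & (x[:, 0] <= 10), 1.0, 0.1)

        particles = np.random.uniform(-5, 15, size=(1000, 1))
        weights = np.full(1000, 1e-3)
        posterior = context_aided_update(particles, weights, z=4.2,
                                         meas_lik=meas_lik, context_lik=context_lik)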

    Decision fusion in healthcare and medicine: a narrative review

    Objective: To provide an overview of the decision fusion (DF) technique and describe its applications in healthcare and medicine at the prevention, diagnosis, treatment and administrative levels. Background: The rapid development of technology over the past 20 years has led to an explosion in data growth in various industries, including healthcare. Big data analysis within healthcare systems is essential for arriving at value-based decisions over time. Diversity and uncertainty in big data analytics have made it impossible to analyze such data using conventional data mining techniques, so alternative solutions are required. DF is a data fusion technique that can increase the accuracy of diagnosis and facilitate the interpretation, summarization and sharing of information. Methods: We conducted a review of articles published between January 1980 and December 2020 in databases such as Google Scholar, IEEE, PubMed, Science Direct, Scopus and Web of Science, using the keywords decision fusion (DF), information fusion, healthcare, medicine and big data. A total of 141 articles were included in this narrative review. Conclusions: Given the importance of big data analysis in reducing costs and improving the quality of healthcare, along with the potential role of DF in that analysis, the full potential of this technique, including its advantages, challenges and applications, should be understood before it is used. Future studies should focus on describing the methodology and the types of data used in its applications within the healthcare sector.
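
    The review surveys many decision fusion schemes; as one minimal, illustrative sketch, the snippet below implements weighted majority voting over categorical classifier decisions. The labels and reliability weights are hypothetical and are not drawn from the reviewed studies.

        from collections import defaultdict

        def weighted_majority_vote(decisions, weights=None):
            """Fuse categorical decisions from several classifiers.
            decisions: list of labels; weights: optional per-classifier reliabilities."""
            if weights is None:
                weights = [1.0] * len(decisions)
            scores = defaultdict(float)
            for label, w in zip(decisions, weights):
                scores[label] += w
            return max(scores, key=scores.get)

        # Illustrative example: three diagnostic models voting on a case
        print(weighted_majority_vote(["benign", "malignant", "malignant"],
                                     weights=[0.9, 0.7, 0.8]))  # -> "malignant"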

    Extracting Physical and Environmental Information of Irish Roads Using Airborne and Mobile Sensors

    Airborne sensors, including LiDAR and digital cameras, are now used extensively for capturing topographical information, as they are often more economical and efficient than traditional photogrammetric and land surveying techniques. Data captured using airborne sensors can be used to extract 3D information important for, inter alia, city modelling, land use classification and urban planning. According to the EU noise directive (2002/49/EC), the National Road Authority (NRA) in Ireland is responsible for generating noise models for all roads used by more than 8,000 vehicles per day. Accordingly, the NRA has to cover approximately 4,000 km of road, 500 m on each side, and these noise models have to be updated every 5 years. Important inputs to the noise model are the digital terrain model (DTM), 3D building data, road width, road centre line, ground surface type and noise barriers. The objective of this research was to extract these objects and topographical information using nationally available datasets acquired from the Ordnance Survey of Ireland (OSI). The OSI uses ALS50-II LiDAR and ADS40 digital sensors for capturing ground information. Both sensors rely on direct georeferencing, minimizing the need for ground control points. Before exploiting the complementary nature of both datasets for information extraction, their planimetric and vertical accuracies were evaluated using independent ground control points, and a new method was developed for registration in case of any mismatch. DSMs from LiDAR and aerial images were used to find common points to determine the parameters of a 2D conformal transformation. The developed method was also evaluated by EuroSDR in a project involving a number of partners. These measures were taken to ensure that the inputs to the noise model were of acceptable accuracy, as recommended in the report (Assessment of Exposure to Noise, 2006) by the European Working Group. A combination of image classification techniques was used to extract information by fusing LiDAR and aerial images. The developed method has two phases, viz. object classification and object reconstruction. Buildings and vegetation were classified based on the Normalized Difference Vegetation Index (NDVI) and a normalized digital surface model (nDSM). Holes in building segments were filled by object-oriented multiresolution segmentation. Vegetation that remained amongst buildings was classified using cues obtained from LiDAR, and the shortcomings therein were overcome by developing an additional classification cue using multiple returns. The building extents were extracted and assigned a single height value generated from the LiDAR nDSM; the extracted height was verified against ground truth data acquired using terrestrial survey techniques. Vegetation was further classified into three categories, viz. trees, hedges and tree clusters, based on a shape parameter (for hedges) and the distance from neighbouring trees (for clusters). The ground was classified into three surface types, i.e. roads and parking areas, exposed surfaces and grass, using LiDAR intensity, NDVI and the nDSM. Mobile Laser Scanning (MLS) data was used to extract walls and purpose-built noise barriers, since these objects were not extractable from the available airborne sensor data. Principal Component Analysis (PCA) was used to filter points belonging to such objects, and a line was then fitted to these points using robust least squares fitting.
    The developed object extraction method was tested objectively in two independent areas, namely Test Area-1 and Test Area-2. The results were thoroughly investigated by three different accuracy assessment methods using the OSI vector data. The acceptance of any developed method for commercial applications requires completeness and correctness values of 85% and 70%, respectively. The accuracy measures obtained using the developed object extraction method support its applicability for noise modelling.
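
    To make the classification step more tangible, the following is a minimal sketch of per-pixel rule-based classification that fuses an NDVI cue from the image bands with an nDSM height cue from LiDAR, in the spirit of the approach described above. The thresholds and the tiny input arrays are illustrative assumptions, not values from the thesis.

        import numpy as np

        def classify_pixels(red, nir, ndsm, ndvi_thr=0.3, height_thr=2.5):
            """Toy per-pixel classification by fusing image and LiDAR cues.
            red, nir: reflectance bands; ndsm: object height above ground (m).
            Returns an array of labels: ground / vegetation / building."""
            ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
            labels = np.full(red.shape, "ground", dtype=object)
            elevated = ndsm > height_thr
            labels[elevated & (ndvi >= ndvi_thr)] = "vegetation"
            labels[elevated & (ndvi < ndvi_thr)] = "building"
            return labels

        # Illustrative 2x2 tile
        red  = np.array([[0.2, 0.1], [0.3, 0.25]])
        nir  = np.array([[0.6, 0.5], [0.35, 0.3]])
        ndsm = np.array([[8.0, 0.2], [6.0, 0.1]])
        print(classify_pixels(red, nir, ndsm))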

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of the book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, together with their Matlab codes. Because more applications of DSmT have emerged since the fourth volume appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of the Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
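
    As a brief illustration of the PCR-style rules that recur throughout the volume, the sketch below implements a PCR5-style combination of two mass functions: a conjunctive combination followed by proportional redistribution of each partial conflict back to the two focal elements that produced it. The dictionary-based representation and the example masses are assumptions for illustration and do not reproduce the book's Matlab code.

        from itertools import product

        def pcr5_combine(m1, m2):
            """PCR5 fusion of two mass functions with frozenset focal elements:
            conjunctive combination, then proportional redistribution of each
            partial conflict back to the two elements that produced it."""
            keys = set(m1) | set(m2)
            g1 = {k: m1.get(k, 0.0) for k in keys}
            g2 = {k: m2.get(k, 0.0) for k in keys}
            out = {}
            for (a, w1), (b, w2) in product(g1.items(), g2.items()):
                inter = a & b
                if inter:
                    out[inter] = out.get(inter, 0.0) + w1 * w2
                elif w1 + w2 > 0:  # redistribute the partial conflict w1*w2 back to a and b
                    out[a] = out.get(a, 0.0) + w1 ** 2 * w2 / (w1 + w2)
                    out[b] = out.get(b, 0.0) + w2 ** 2 * w1 / (w1 + w2)
            return out

        A, B = frozenset({"A"}), frozenset({"B"})
        print(pcr5_combine({A: 0.6, B: 0.4}, {A: 0.2, B: 0.8}))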

    Robust perceptual organization techniques for analysis of color images

    This thesis focuses on the development of new robust image analysis techniques more closely related to the way the human visual system behaves. One of the pillars of the thesis is the so-called tensor voting technique, a robust perceptual organization technique that propagates and aggregates information encoded by means of tensors through a convolution-like process. Its robustness and adaptability have been key reasons for using tensor voting in this thesis. These two properties are verified by applying tensor voting to three applications where it had not been applied so far: image structure estimation, edge detection, and segmentation of images acquired through stereo vision. The most important drawback of tensor voting is that its usual implementations are highly time-consuming. In this line, the thesis proposes two new efficient implementations of tensor voting, both derived from an in-depth analysis of the technique. Despite its adaptability, the thesis shows that the original formulation of tensor voting (hereafter, classical tensor voting) is not adequate for some applications, since the hypotheses on which it is based do not suit all of them. This is particularly true for color image denoising. Thus, the thesis shows that, more than a method, tensor voting can be thought of as a methodology in which the encoding and the voting process can be tailored to every specific application while maintaining the tensor voting spirit. Following this reasoning, the thesis proposes a unified framework for both image denoising and robust edge detection. This framework is an extension of classical tensor voting in which both color and edginess (the likelihood of finding an edge at every pixel of the image) are encoded through tensors, and where the voting process takes into account a set of plausible perceptual criteria related to the way the human visual system processes visual information. Recent advances in the perception of color have been essential for designing such a voting process. This new approach has proven effective, yielding excellent results for both applications. In particular, the new method applied to image denoising performs better than state-of-the-art methods for real noise, which makes it more adequate for real applications, where an image denoiser is indeed required. In addition, the method applied to edge detection yields more robust results than state-of-the-art techniques and has a competitive performance in recall, discriminability, precision, and false alarm rejection.
    Moreover, the thesis shows how the results of this new framework can be combined with other techniques to tackle the problem of robust color image segmentation. The tensors obtained by applying the new framework are used to classify pixels as likely homogeneous or likely inhomogeneous, and those pixels are then sequentially segmented through a variation of an efficient graph-based image segmentation algorithm. Experiments show that the proposed segmentation algorithm yields better scores in three of the five applied evaluation metrics when compared to state-of-the-art techniques, at a competitive computational cost. The thesis also proposes new evaluation techniques in the scope of image processing. First, two new metrics are proposed in the field of image denoising: one to measure how well an algorithm preserves edges, and another to measure how well a method avoids introducing undesirable artifacts. Second, a new methodology for assessing edge detectors that avoids possible bias introduced by post-processing is proposed; it consists of five new metrics for assessing recall, discriminability, precision, false alarm rejection and robustness. Finally, two new non-parametric metrics are proposed for estimating the degree of over- and under-segmentation produced by image segmentation algorithms.
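
    For readers unfamiliar with tensor voting, the sketch below shows only the decoding step of the classical formulation: a 2x2 symmetric tensor is decomposed into a stick saliency (evidence of an oriented edge or curve) and a ball saliency (evidence of a junction or homogeneous region). It does not implement the voting process itself or the color and edginess extensions proposed in the thesis; the example tensor is an illustrative assumption.

        import numpy as np

        def decode_tensor(T):
            """Decompose a 2x2 symmetric tensor into stick and ball saliencies.
            stick = lambda1 - lambda2 (evidence of an oriented edge/curve),
            ball  = lambda2           (evidence of a junction or homogeneous region)."""
            vals, vecs = np.linalg.eigh(T)          # eigenvalues in ascending order
            l2, l1 = vals
            stick, ball = l1 - l2, l2
            orientation = vecs[:, 1]                # eigenvector of the largest eigenvalue
            return stick, ball, orientation

        # Illustrative tensor: strong orientation along the x axis
        T = np.array([[0.9, 0.0],
                      [0.0, 0.1]])
        print(decode_tensor(T))                     # stick ~0.8, ball ~0.1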

    Connected Attribute Filtering Based on Contour Smoothness

    A new attribute measuring the contour smoothness of 2-D objects is presented in the context of morphological attribute filtering. The attribute is based on the ratio of circularity to non-compactness, and has a maximum of 1 for a perfect circle; it decreases as the object boundary becomes irregular. Computation on hierarchical image representation structures relies on five auxiliary data members and is rapid. Contour smoothness is a suitable descriptor for detecting and discriminating man-made structures from other image features. An example is demonstrated on a very-high-resolution satellite image using connected pattern spectra and the switchboard platform.
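
    The exact attribute combines circularity and non-compactness as defined in the paper; as a loosely related, purely illustrative sketch, the snippet below computes the simple isoperimetric circularity 4*pi*A/P^2, which also equals 1 for a perfect circle and decreases as the boundary becomes irregular. It is not the paper's formulation.

        import math

        def circularity(area, perimeter):
            """Isoperimetric circularity 4*pi*A / P**2: equals 1 for a perfect circle
            and decreases as the boundary becomes irregular (illustrative only)."""
            return 4.0 * math.pi * area / (perimeter ** 2)

        r = 5.0
        print(circularity(math.pi * r * r, 2.0 * math.pi * r))   # circle -> 1.0
        print(circularity(100.0, 60.0))                          # irregular shape -> ~0.35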