
    Evidentialist Foundationalist Argumentation in Multi-Agent Systems

    This dissertation focuses on the explicit grounding of reasoning in evidence sensed directly from the physical world. Evidence from successful human problem solving suggests this is a straightforward basis for reasoning: to solve problems in the physical world, the information required to solve them must also come from the physical world. What is less straightforward is how to structure the path from evidence to conclusions. Many approaches have been applied to evidence-based reasoning, including probabilistic graphical models and Dempster-Shafer theory. With some exceptions, however, these traditional approaches are employed to establish confidence in a single binary conclusion, such as whether or not there is a blizzard, rather than to develop complex groups of scalar conclusions, such as where a blizzard's center is, what area it covers, how strong it is, and what components it has. To form conclusions of the latter kind, we employ and further develop the approach of Computational Argumentation. Specifically, this dissertation develops a novel approach to evidence-based argumentation called Evidentialist Foundationalist Argumentation (EFA), a formal instantiation of the well-established Argumentation Service Platform with Integrated Components (ASPIC) framework. There are two primary approaches to Computational Argumentation: structured argumentation, in which arguments are built from premises, inference rules, and conclusions, and may rest on the conclusions of other arguments, creating a tree-like structure; and abstract argumentation, in which arguments interact at a higher level through an attack relation. ASPIC unifies the two. EFA instantiates ASPIC specifically for reasoning about physical evidence in the form of sensor data. Restricting ASPIC to sensor data yields particular philosophical and computational advantages: all premises in the system (evidence) can be treated as firmly grounded axioms, and every argument's conclusion can be calculated numerically directly from its premises. EFA could serve as the basis for well-justified, transparent reasoning in many domains, including engineering, law, business, medicine, politics, and education.
    To test its utility as a basis for Computational Argumentation, we apply EFA to a Multi-Agent System working in the problem domain of Sensor Webs, on the specific problem of Decentralized Sensor Fusion. In this problem, groups of individual agents are assigned to sensor stations distributed across a geographical area, forming a Sensor Web. The goal of the system is to share sensor readings strategically between agents so as to increase the accuracy of each individual agent's model of the geophysical sensing situation. For example, during a severe storm, a goal may be for each agent to hold an accurate model of the storm's heading, severity, and focal points of activity. Since the agents control a Sensor Web, a further goal is to communicate judiciously so as to use power efficiently. To meet these goals, we design a Multi-Agent System called Investigative Argumentation-based Negotiating Agents (IANA). Agents in IANA use EFA to establish arguments that model geophysical situations. Upon gathering evidence in the form of sensor readings, the agents form evidence-based arguments using EFA and systematically compare the conclusions of their arguments with those of other agents. If the agents sufficiently agree on the geophysical situation, they end communication; if they disagree, they share the evidence behind their conclusions, consuming communication resources with the goal of increasing accuracy. They execute this interaction using a Share on Disagreement (SoD) protocol.
    IANA is evaluated against two other Multi-Agent System approaches on the basis of accuracy and communication cost, using historical real-world weather data. The first is an all-to-all communication approach, Complete Data Sharing (CDS), in which agents share all observations, maximizing accuracy at a high communication cost. The second, Conclusion Negotiation Only (CNO), is based on Kalman filtering of conclusions: agents share no observations and instead try to infer the geophysical state from each other's conclusions alone, saving communication costs but sacrificing accuracy. The results were statistically analyzed using omega-squared effect sizes produced by ANOVA with p-values < 0.05. IANA outperformed CDS on message cost with high effect sizes, while CDS outperformed IANA on accuracy with only small effect sizes. IANA outperformed CNO on accuracy with mostly high and medium effect sizes, while CNO outperformed IANA on message cost with only small effect sizes. Given these results, the IANA system is preferable for most of the testing scenarios for the problem solved in this dissertation.
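    As a rough illustration of the Share on Disagreement idea described above, the following self-contained Python sketch shows two agents exchanging conclusions and sharing raw evidence only when they disagree. The Agent class, the mean-based conclusion, the threshold, and the message accounting are illustrative assumptions, not the dissertation's EFA/IANA implementation.

```python
# Self-contained sketch of one Share on Disagreement (SoD) round between two
# agents. The Agent class, the mean-based conclusion, and the 0.5 threshold are
# illustrative assumptions, not the dissertation's actual EFA/IANA code.
from statistics import mean

class Agent:
    def __init__(self, readings):
        self.evidence = list(readings)        # sensor readings, treated as axiomatic premises

    def conclude(self):
        return mean(self.evidence)            # conclusion computed directly from the evidence

    def add_evidence(self, readings):
        self.evidence.extend(readings)

def share_on_disagreement(a, b, threshold=0.5):
    """Compare conclusions; ship raw evidence only if the agents disagree."""
    messages = 2                              # each agent sends its conclusion
    if abs(a.conclude() - b.conclude()) <= threshold:
        return a.conclude(), b.conclude(), messages   # sufficient agreement: stop here
    evidence_a, evidence_b = list(a.evidence), list(b.evidence)
    a.add_evidence(evidence_b)                # disagreement: share the underlying observations
    b.add_evidence(evidence_a)
    messages += 2
    return a.conclude(), b.conclude(), messages

# Example: two agents observing the same storm cell with differing readings.
print(share_on_disagreement(Agent([21.0, 22.5, 23.0]), Agent([27.0, 28.0])))
```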

    An ASIFT-based local registration method for satellite imagery

    Imagery registration is a fundamental step that greatly affects later processes such as image mosaicking, multi-spectral image fusion, and digital surface modelling, where the final solution requires blending pixel information from more than one image. It is highly desirable to identify registration regions in input stereo image pairs with high accuracy, particularly in remote sensing applications where ground control points (GCPs) are not always available, such as when selecting a landing zone on another planet. In this paper, a framework for localization in image registration is developed. It strengthens local registration accuracy in two respects: lower reprojection error and better feature point distribution. Affine scale-invariant feature transform (ASIFT) is used to acquire feature points and correspondences on the input images. A homography matrix is then estimated as the transformation model by an improved random sample consensus (IM-RANSAC) algorithm. To identify a registration region with a better spatial distribution of feature points, the Euclidean distance between feature points is applied (named the S criterion). Finally, the parameters of the homography matrix are optimized by the Levenberg–Marquardt (LM) algorithm using selected feature points from the chosen registration region. In the experiments, Chang’E-2 satellite remote sensing imagery is used to evaluate the performance of the proposed method. The results demonstrate that the proposed method can automatically locate a specific region with high registration accuracy between the input images, achieving a lower root mean square error (RMSE) and a better distribution of feature points.
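    The overall pipeline can be approximated with off-the-shelf tools. The sketch below uses plain SIFT as a stand-in for ASIFT and OpenCV's standard RANSAC homography estimation in place of IM-RANSAC and the S-criterion region selection; those substitutions, and the thresholds, are assumptions rather than the paper's method.

```python
# Sketch of the registration pipeline: SIFT matching, RANSAC homography, and an
# inlier reprojection RMSE. SIFT replaces ASIFT and standard RANSAC replaces
# IM-RANSAC here; thresholds are assumptions, not the paper's settings.
import cv2
import numpy as np

def register_pair(path_ref, path_sen, ratio=0.75):
    img1 = cv2.imread(path_ref, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_sen, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]              # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Homography via RANSAC; OpenCV then refines the inlier fit with Levenberg-Marquardt.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

    inliers = mask.ravel().astype(bool)
    residuals = cv2.perspectiveTransform(src[inliers], H) - dst[inliers]
    rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=2)))  # reprojection RMSE on inliers
    return H, rmse
```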

    Hierarchical structure-and-motion recovery from uncalibrated images

    This paper addresses the structure-and-motion problem, which requires recovering camera motion and 3D structure from point matches. A new pipeline, dubbed Samantha, is presented that departs from the prevailing sequential paradigm and instead embraces a hierarchical approach. This method has several advantages, such as a provably lower computational complexity, which is necessary to achieve true scalability, and better error containment, leading to more stability and less drift. Moreover, a practical autocalibration procedure allows images to be processed without ancillary information. Experiments with real data assess the accuracy and the computational efficiency of the method. Comment: Accepted for publication in CVI
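    To make the hierarchical idea concrete, the sketch below organises images into a dendrogram from pairwise match counts; each leaf would then be reconstructed independently and partial models merged bottom-up at internal nodes rather than grown sequentially. The distance definition and linkage choice are placeholders, not Samantha's actual pipeline.

```python
# Sketch of the hierarchical organisation only: cluster images by how many point
# matches they share, producing the merge order for bottom-up reconstruction.
# The distance definition and linkage method are placeholder assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage

def image_dendrogram(match_counts):
    """match_counts: symmetric (n x n) array of point-match counts between images."""
    dist = 1.0 - match_counts / (match_counts.max() + 1e-9)   # more matches -> closer
    condensed = dist[np.triu_indices_from(dist, k=1)]          # condensed distance vector
    return linkage(condensed, method="average")                # merge tree for reconstruction

# Each leaf is reconstructed independently; every internal node of the returned
# tree marks a merge of two partial models, which helps contain error propagation.
```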

    Sensor Fusion For Cooperative Driving

    The aim of this project is to modify, adapt, correct, and test two target tracking algorithms to check their feasibility for future implementation in Advanced Driving Assistance Systems (ADAS). These systems, which range from automatic brake action to direct intervention in vehicle steering, require constant real-time monitoring of the environment (other cars, pedestrians, wild animals, etc.), and tracking algorithms play a crucial role here, as they allow continuous, accurate, and efficient estimation of a target's trajectory. This project continues work initiated by the Wireless Communications Research Unit of the Institute of Telecommunications at TU Wien. As a starting point, two algorithms designed and implemented by the researchers on the original project have been used. The first is a Particle Filter (PF) implemented in Python, developed to track a single target; the second, also implemented in Python, couples a Multiple Hypothesis Tracking (MHT) algorithm to a Particle Filter with the intention of performing multiple-target tracking. The project was developed as follows. First, a random target trajectory was simulated in Matlab using a random walk. Then, a Frequency Modulated Continuous Wave (FMCW) radar simulator, implemented in Matlab and developed by researchers at TU Wien, was used to perform the corresponding measurements. For the measurement process, a system of four FMCW radars placed in a square arrangement was simulated. Finally, all data coming from the four radars was fed into the two algorithms and combined by means of sensor fusion techniques to improve the quality of the trajectory estimates.
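    As a minimal illustration of the fusion step, the following Python sketch runs a single-target particle filter on range measurements from four radars in a square arrangement. The sensor positions, noise levels, and random-walk motion model are assumptions for the example, not the project's actual FMCW simulator or tracking code.

```python
# Minimal single-target particle filter fusing range measurements from four
# simulated radars in a square layout. Positions, noise levels, and the
# random-walk motion model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
RADARS = np.array([[0.0, 0.0], [0.0, 50.0], [50.0, 0.0], [50.0, 50.0]])  # square layout

def particle_filter(measurements, n_particles=2000, step_std=1.0, meas_std=2.0):
    """measurements: (T, 4) array of target ranges, one column per radar."""
    particles = rng.uniform(0.0, 50.0, size=(n_particles, 2))    # initial position hypotheses
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, step_std, particles.shape)  # random-walk prediction
        ranges = np.linalg.norm(particles[:, None, :] - RADARS[None, :, :], axis=2)
        # Fusion step: multiply the Gaussian range likelihoods of all four radars.
        weights *= np.prod(np.exp(-0.5 * ((ranges - z) / meas_std) ** 2), axis=1)
        weights /= weights.sum()
        estimates.append(weights @ particles)                    # posterior-mean position
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample against degeneracy
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```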

    Machine learning methods for discriminating natural targets in seabed imagery

    The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to assist human analysts by expediting these tedious interpretative tasks using machine methods. Novel approaches are devised and investigated for solving the application problems; the investigations are compartmentalised into four coherent case studies linked by common underlying technical themes and methods.
    The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds, a task that would take days for a human analyst to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation. A comparison of several textural feature creation methods on sonar waterfall imagery shows that Gabor filter banks yield some of the best results. A further empirical investigation into the filter bank features created on sonar mosaic imagery identifies a useful configuration and filter parameter ranges for discriminating the target textures.
    Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world sonar mosaic imagery. A number of technical challenges arose and were surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach on the problem of image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should generalise to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation of pockmark and Sabellaria discrimination is feasible within this framework.
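    For a flavour of the Gabor filter-bank texture features compared in the second case study, the sketch below computes the mean and variance of filter-response magnitudes over a small frequency and orientation grid; the grid and the choice of statistics are illustrative, not the thesis's tuned configuration.

```python
# Sketch of Gabor filter-bank texture features: for each frequency/orientation
# pair, keep the mean and variance of the response magnitude. The grid and the
# statistics are illustrative choices, not the thesis's tuned parameters.
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Return one feature vector per image (or per texture patch)."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            mag = np.hypot(real, imag)            # magnitude of the complex filter response
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)                        # 2 * len(frequencies) * n_orientations values
```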

    Assistive telehealth systems for neurorehabilitation

    Telehealth is an evolving field within the broader domain of Biomedical Engineering, situated within the context of the Internet of Medical Things (IoMT). The importance of Telehealth systems is increasingly recognized, as they enable physicians to treat patients remotely. One significant application in neurorehabilitation is Transcranial Direct Current Stimulation (tDCS), which has, over several years, demonstrated effectiveness in modulating mental function and learning, and is widely accepted as a safe approach in the field. This presentation focuses on the development of a non-invasive wearable tDCS device with integrated Internet connectivity. This IoMT device enables remote configuration of treatment parameters, such as session duration, current level, and placebo status. Clinicians can remotely access the device and define these parameters within the approved safety ranges for tDCS treatments. In addition to the wearable tDCS device, a prototype web portal is being developed to collect performance data during neurorehabilitation exercises conducted by individuals at home; the portal also facilitates remote interaction between patients and clinicians. To provide a platform-independent solution for accessing up-to-date healthcare information, a Progressive Web Application (PWA) is being developed. The PWA enables real-time communication between patients and doctors through text chat and video conferencing. The primary objective is to create a cross-platform web application with PWA features that can function effectively as a native application on various operating systems.
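    A minimal sketch of how remotely submitted treatment parameters might be validated before reaching the device is shown below. The field names and numeric limits are hypothetical placeholders, not the clinically approved ranges enforced by the actual system.

```python
# Hypothetical sketch of server-side validation of remotely configured tDCS
# session parameters. Field names and limits (2.0 mA, 30 min) are placeholders,
# not the clinically approved ranges used by the real device.
from dataclasses import dataclass

@dataclass
class SessionConfig:
    duration_min: float   # session duration in minutes
    current_ma: float     # stimulation current in milliamps
    placebo: bool         # sham (placebo) session flag

def validate(cfg: SessionConfig, max_current_ma=2.0, max_duration_min=30.0):
    """Reject any remotely submitted configuration outside the allowed window."""
    if not 0.0 < cfg.current_ma <= max_current_ma:
        raise ValueError(f"current {cfg.current_ma} mA outside allowed range")
    if not 0.0 < cfg.duration_min <= max_duration_min:
        raise ValueError(f"duration {cfg.duration_min} min outside allowed range")
    return cfg

# Example: validate(SessionConfig(duration_min=20, current_ma=1.5, placebo=False))
```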