
    Geometric uncertainty models for correspondence problems in digital image processing

    Many recent advances in technology rely heavily on the correct interpretation of enormous amounts of visual information. All available sources of visual data (e.g. cameras in surveillance networks, smartphones, game consoles) must be adequately processed to retrieve the information most relevant to users. Computer vision and image processing techniques therefore attract significant interest today and will continue to do so in the near future. Most commonly applied image processing algorithms require a reliable solution to correspondence problems. The solution involves, first, localizing corresponding points (those depicting the same 3D point in the observed scene) in images from distinct sources and, second, computing consistent geometric transformations relating correspondences on scene objects. This PhD thesis presents a theoretical framework for solving correspondence problems with geometric features (such as points and straight lines) representing rigid objects in image sequences of complex scenes with static and dynamic cameras. The research focuses on localization uncertainty due to errors in feature detection and measurement, and on its effect on each step in the solution of a correspondence problem. Whereas most other recent methods apply statistics-based models of spatial localization uncertainty, this work takes a novel geometric approach: localization uncertainty is modeled as a convex polygonal region in the image space. This model can be propagated efficiently throughout the correspondence-finding procedure, extends easily toward transformation uncertainty models, and yields confidence measures for verifying the reliability of the outcome of the correspondence framework. Our procedure aims at finding reliable consistent transformations in sets of few and poorly localized features, possibly containing a large fraction of false candidate correspondences. Evaluation in practical correspondence problems shows that correct consistent correspondence sets are returned in over 95% of the experiments for small sets of 10-40 features contaminated with up to 400% false positives and 40% false negatives. The presented techniques prove beneficial in typical image processing applications such as image registration and rigid object tracking.
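
    The abstract hinges on one concrete device: representing a feature's localization uncertainty as a convex polygon that can be pushed through candidate transformations and then used as a consistency check. The sketch below illustrates that idea in minimal form; it is an assumption-laden illustration, not the thesis's actual framework, and the names propagate_polygon and contains are invented for the example.

```python
# Minimal sketch: convex polygonal uncertainty regions propagated through
# an affine map, plus a point-in-polygon consistency test. Illustrative
# only; not the thesis's implementation.
import numpy as np

def propagate_polygon(vertices, A, t):
    """Propagate a convex polygonal region through the affine map
    x -> A @ x + t. Affine maps preserve convexity, so transforming
    the vertices transforms the whole region."""
    return vertices @ A.T + t

def contains(vertices, p):
    """Test whether point p lies inside a convex polygon whose vertices
    are ordered counter-clockwise (a simple confidence check: is a
    candidate match consistent with the propagated region?)."""
    n = len(vertices)
    for i in range(n):
        a, b = vertices[i], vertices[(i + 1) % n]
        edge, rel = b - a, p - a
        if edge[0] * rel[1] - edge[1] * rel[0] < 0:  # p lies right of edge
            return False
    return True

# Example: a square uncertainty region around a detected point, moved by
# a candidate rigid transformation (30-degree rotation plus translation).
region = np.array([[9.0, 9.0], [11.0, 9.0], [11.0, 11.0], [9.0, 11.0]])
theta = np.deg2rad(30)
A = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t = np.array([5.0, -2.0])
moved = propagate_polygon(region, A, t)
print(contains(moved, A @ np.array([10.0, 10.0]) + t))  # True
```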

    A comparative analysis of terrestrial laser scanning (TLS) and structure from motion (SfM) photogrammetry for measuring fluvial sediments

    A precise, time-efficient, cost-effective method for quantifying riverbed roughness and sediment size distribution has hitherto eluded river scientists. Traditional techniques (e.g., Wolman counts) have a high potential for error brought about by operator bias and subjectivity when presented with complex facies assemblages, poor spatial coverage, insufficient sample sizes, and misrepresentation of bedforms. The application of LiDAR has facilitated accurate observation of micro-scale habitats and has been successfully employed in quantifying sediment grain size at the local level. However, despite the considerable success of LiDAR instruments in remotely sensing riverine landscapes, and the obvious benefits they offer (very high spatial and temporal resolution, rapid data acquisition, and minimal disturbance in the field), procuring these apparatus and their respective computer software comes at high financial cost, and extensive user training is generally necessary to operate such devices. Recent developments in computer software have led to advances in digital photogrammetry over a broad range of scales, with Structure from Motion (SfM) techniques enabling the production of precise DEMs based on point clouds analogous to, and even denser than, those produced by LiDAR, at significantly reduced cost and complexity during post-processing. This study employed both SfM photogrammetry and Terrestrial Laser Scanning (TLS) in a comparative analysis of sediment grain size, where LiDAR-derived data has previously provided a reliable reference for grain size. A Total Station EDM theodolite provided the parent coordinate system for both the SfM reconstruction and the meshing of the TLS point clouds. For each data set, a 0.19 m moving window (consistent with the largest sediment clast b axis) was applied to the resulting point clouds, and twice the standard deviation of elevation was calculated within each window to provide a surrogate measure of grain protrusion, from which sediment frequency distribution curves were drawn. Semi-variance analyses elucidated the continuity of each data set: where univariate statistics failed to reveal disparity between the two data sets, semi-variance analysis exposed considerable variability in roughness, revealing a greater degree of detail in the SfM-derived data.
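
    The roughness metric described above (twice the standard deviation of elevation within a 0.19 m moving window) is simple enough to sketch. The code below is a hypothetical reconstruction of that computation on a gridded point cloud, not the study's actual processing chain; the cell spacing and the minimum point count per window are assumptions.

```python
# Minimal sketch, assuming a point cloud in metres: grid the XY plane and
# compute 2 * std of elevation inside a square moving window at each node.
import numpy as np

def roughness_2sigma(points, window=0.19, cell=0.05):
    """points: (N, 3) array of x, y, z. Returns a grid of 2-sigma
    elevation values (NaN where a window holds too few points)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xs = np.arange(x.min(), x.max(), cell)
    ys = np.arange(y.min(), y.max(), cell)
    half = window / 2.0
    grid = np.full((len(ys), len(xs)), np.nan)
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            mask = (np.abs(x - cx) <= half) & (np.abs(y - cy) <= half)
            if mask.sum() >= 3:            # assumed minimum sample size
                grid[i, j] = 2.0 * z[mask].std()
    return grid

# Frequency distributions of the grid values can then be compared
# between SfM- and TLS-derived clouds, as in the study above.
cloud = np.random.rand(5000, 3) * [2.0, 2.0, 0.1]  # synthetic example
print(np.nanmedian(roughness_2sigma(cloud)))
```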

    The Matter of Future Heritage

    In 2018, for the first time, the University of Bologna's Board of the PhD in Architecture and Design Culture assigned second-year PhD students the task of developing and managing an international conference and publishing its proceedings. The organisers of the first edition of this initiative (Giacomo Corda, Pamela Lama, Viviana Lorenzo, Sara Maldina, Lia Marchi, Martina Massari and Giulia Custodi) chose to leverage the solid relationship between the Department of Architecture and the Municipality of Bologna to publish a call connected with the European Year of Cultural Heritage 2018, in which the Municipality was involved. The theme chosen for the call, The Matter of Future Heritage, set the ambitious goal of questioning the future of a field of research, Cultural Heritage (CH), that is constantly being redefined. The task was made particularly complex in Europe by the development of the H2020 programme, in which the topic entered, surprisingly, not as a protagonist but rather as an articulation of other subjects that, in the vision of the programme, seemed evidently more urgent and, one might say, dominant. The resulting tensions have been considerable, with both negative and positive implications, all the more evident with regard to the issues closest to us, namely the city and the landscape.

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4

    The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics. The contributions (see the List of Articles published in this book, at the end of the volume) have been published or presented, after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf), in international conferences, seminars, workshops and journals. The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, the 2-tuple linguistic label, the Electre Tri method, hierarchical proportional redistribution, basic belief assignments, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer Theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others. More applications of DSmT have emerged in the years since the appearance of the third book in 2009. Accordingly, the second part of this volume concerns applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple target tracking, Vehicle-Borne Improvised Explosive Devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, the adjustment of secure communication trust analysis, and so on. Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.
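
    As a point of reference for readers new to the field, the classical Dempster-Shafer combination rule, which DSmT's rules (e.g. proportional conflict redistribution) generalize and whose handling of conflicting belief DSmT is designed to improve, can be written in a few lines. The sketch below implements only that classical rule, not DSmT itself.

```python
# Minimal sketch of Dempster's rule of combination for two basic belief
# assignments (bba's). Focal elements are frozensets over a frame of
# discernment; mass on the empty intersection is the conflict, which the
# classical rule renormalizes away (the step DSmT rules revisit).
from itertools import product

def dempster_combine(m1, m2):
    """Combine two bba's (dicts mapping frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two partially conflicting sources over the frame {A, B}.
A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.8, frozenset({"A", "B"}): 0.2}
m2 = {B: 0.6, frozenset({"A", "B"}): 0.4}
print(dempster_combine(m1, m2))  # conflict 0.48 redistributed
```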

    Compréhension de contenus visuels par analyse conjointe du contenu et des usages

    In this thesis we address the understanding of visual contents, be they images, videos or 3D contents. By understanding we mean the ability to infer semantic information about the visual content. The goal of this work is to study methods that combine two types of approaches: 1) automatic content analysis and 2) analysis of the interactions tied to the use of these contents (usage analysis, for short). We start by reviewing the state of the art from both the Computer Vision and Multimedia communities. Twenty years ago, the dominant approach aimed at a fully automatic understanding of images. Today that approach leaves more room for various forms of human intervention: constituting an annotated training set, solving problems interactively (for example detection or segmentation), or collecting implicit information derived from content usage. Rich and complex links exist between the human supervision of automatic algorithms and the adaptation of human contributions through automatic algorithms, and these links are at the origin of modern research questions: how to motivate human contributors? How to design interactive scenarios whose interactions contribute to understanding the manipulated content? How to check the quality of the collected traces? How to aggregate usage data? How to fuse usage data with the more classical outputs of automatic analysis? Our literature review addresses these questions and positions the contributions of this thesis, which fall into two main parts. The first part of our work revisits the detection of important or salient regions through implicit feedback from users who view or capture visual contents. In 2D, several interactive video interfaces (in particular zoomable video) are designed to coordinate content-based analyses with usage-based ones. We generalize these results to 3D with the introduction of a new salient-region detector derived from the simultaneous capture of videos of the same public artistic performance (dance or singing shows, etc.) by many users. The second contribution of our work targets the semantic understanding of still images. We exploit data collected through a game, Ask'nSeek, that we created. Elementary interactions (such as clicks) and the textual data entered by players are, as before, combined with automatic analyses of the images. We show in particular the value of interactions that reveal the spatial relations between different objects detectable in the same scene. After detecting the objects of interest in a scene, we also address the more ambitious problem of segmentation.
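
    A recurring pattern in this abstract is late fusion of a usage-derived map with a content-derived one. The sketch below illustrates that pattern in its simplest form: interaction positions are accumulated into a smoothed heatmap and combined with a content-based saliency map by convex combination. It is a simplified assumption about the approach, not the thesis's implementation; usage_saliency, fuse and the alpha weighting are invented for the example.

```python
# Minimal sketch: turn implicit usage traces (click/zoom positions) into
# a smoothed heatmap, then fuse it with a content-based saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter

def usage_saliency(clicks, shape, sigma=15.0):
    """Accumulate (row, col) interaction positions into a smoothed,
    normalized heatmap over an image of the given shape."""
    heat = np.zeros(shape)
    for r, c in clicks:
        heat[r, c] += 1.0
    heat = gaussian_filter(heat, sigma)
    return heat / heat.max() if heat.max() > 0 else heat

def fuse(content_map, usage_map, alpha=0.5):
    """Late fusion by convex combination; alpha balances automatic
    content analysis against usage analysis."""
    return alpha * content_map + (1.0 - alpha) * usage_map

shape = (120, 160)
clicks = [(60, 80), (62, 78), (58, 83), (30, 120)]  # synthetic traces
content = np.random.rand(*shape)                    # stand-in saliency map
fused = fuse(content / content.max(), usage_saliency(clicks, shape))
print(fused.shape)
```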

    Proceedings of the 9th Arab Society for Computer Aided Architectural Design (ASCAAD) international conference 2021 (ASCAAD 2021): architecture in the age of disruptive technologies: transformation and challenges.

    The ASCAAD 2021 conference theme is Architecture in the age of disruptive technologies: transformation and challenges. The theme addresses the gradual shift in computational design away from the prototypical, morphogenesis-centered associations of the architectural discourse. This imminent shift of focus is increasingly stirring debate in the architectural community and is provoking a much-needed critical questioning of the role of computation in architecture: from a sole embodiment and enactment of technical dimensions into one that deliberately pursues and embraces the humanities as an ultimate aspiration.

    Artwork as Network: A Reconceptualization of the Work of Art and its Exhibition

    As it reshapes the world we inhabit, the concept of the network has emerged as the dominant cultural paradigm across numerous fields and disciplines. Whether biological, social, political, global, communicational, or computational, networks are constituted by decentered, distributed, multiplicitous, nonlinear systems of nodes, plateaus, and edges that are endlessly interconnected and interdependent. Networks prioritize relationships between things over the things themselves, suggesting a reconfiguring of binary oppositions including digital/tactile, virtual/material, private/public, and past/present. As networks rapidly change our world, it is logical to assume that contemporary artistic practices are affected as well. In fact, works of art are uniquely situated to discover and reveal new ways of understanding social and cultural phenomena, including that of the network. Several questions arise: How do contemporary works of art relate to network culture? Alternately, how do networks redefine our understanding of specific works of art? How, in turn, are these works expanding our understanding of the network? As a way of focusing these questions, the dissertation addresses works by four contemporary artists: Franklin Evans, Simon Starling, Jenny Odell, and Pablo Helguera. Based on close art-historical analysis, I argue that instead of depicting, illustrating or referring to networks as context, the works discussed are constituted or composed in and as networks. They are dynamic relational forms in which the work of art and the network are rendered indissociable from one another. I further claim that components previously considered as existing outside the work of art (the gallery, the studio, references to texts, histories, artworks, historic objects, other artists, place, and even public programs and participants) are now part of what constitutes the work, indicating a profound shift in what we consider the “work of art” and in the ways it is addressed and interpreted.

    Combining content analysis with usage analysis to better understand visual contents

    This thesis focuses on the problem of understanding visual contents, which can be images, videos or 3D contents. Understanding means that we aim at inferring semantic information about the visual content. The goal of our work is to study methods that combine two types of approaches: 1) automatic content analysis and 2) analysis of how humans interact with the content (in other words, usage analysis). We start by reviewing the state of the art from both the Computer Vision and Multimedia communities. Twenty years ago, the main approach aimed at a fully automatic understanding of images. Today that approach gives way to different forms of human intervention, whether through the constitution of annotated datasets, through interactive problem solving (e.g. detection or segmentation), or through the implicit collection of information gathered from content usage. These different types of human intervention are at the heart of modern research questions: how to motivate human contributors? How to design interactive scenarios that generate interactions contributing to content understanding? How to check or ensure the quality of human contributions? How to aggregate human contributions? How to fuse inputs obtained from usage analysis with traditional outputs from content analysis? Our literature review addresses these questions and allows us to position the contributions of this thesis. In our first set of contributions we revisit the detection of important (or salient) regions through implicit feedback from users who either consume or produce visual contents. In 2D, we develop several interactive video interfaces (e.g. zoomable video) in order to coordinate content analysis and usage analysis. We also generalize these results to 3D by introducing a new detector of salient regions that builds upon simultaneous video recordings of the same public artistic performance (dance or singing shows, etc.) by multiple users. The second contribution of our work aims at a semantic understanding of still images. With this goal in mind, we use data gathered through a game, Ask'nSeek, that we created. Elementary interactions (such as clicks), together with textual input from players, are, as before, combined with automatic analysis of the images. In particular, we show the usefulness of interactions that help reveal spatial relations between different objects in a scene. After studying the problem of detecting objects in a scene, we also address the more ambitious problem of segmentation.
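
    To make the spatial-relations idea concrete: feedback of the kind a guessing game like Ask'nSeek can elicit ("the target is to the left of your click") can be read as half-plane constraints that progressively shrink the region where a hidden object can lie. The sketch below is a hypothetical illustration of that geometry, not the game's actual logic; the relation vocabulary and the shrink helper are invented for the example.

```python
# Minimal sketch: each spatial-relation hint about a hidden target,
# relative to a guess click, intersects the candidate bounding box
# with a half-plane, narrowing down the target's location.
def shrink(box, click, relation):
    """box = (left, top, right, bottom) in pixels, y growing downward;
    click = (x, y). Returns the box intersected with the half-plane
    implied by the target's relation to the click."""
    l, t, r, b = box
    x, y = click
    if relation == "left":      # target lies left of the click
        r = min(r, x)
    elif relation == "right":
        l = max(l, x)
    elif relation == "above":
        b = min(b, y)
    elif relation == "below":
        t = max(t, y)
    return (l, t, r, b)

box = (0, 0, 640, 480)  # whole image
for click, rel in [((400, 100), "left"), ((120, 300), "above"),
                   ((250, 40), "below")]:
    box = shrink(box, click, rel)
print(box)  # (0, 40, 400, 300): remaining candidate region
```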