
    Featureless visual processing for SLAM in changing outdoor environments

    Vision-based SLAM is mostly a solved problem provided clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM SLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
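
    The abstract describes matching heavily downsampled, featureless imagery rather than extracted visual features. The sketch below illustrates that general idea only: whole-image comparison of tiny, patch-normalised templates with a small horizontal-shift tolerance. Function names, template size and shift range are illustrative assumptions, not the authors' RatSLAM implementation.

```python
# Minimal sketch of whole-image matching on heavily downsampled frames,
# in the spirit of the low-resolution comparison described above.
import numpy as np

def to_template(image, size=(32, 24)):
    """Downsample a grayscale image to a tiny template and patch-normalise it."""
    h, w = image.shape
    ys = np.linspace(0, h - 1, size[1]).astype(int)
    xs = np.linspace(0, w - 1, size[0]).astype(int)
    small = image[np.ix_(ys, xs)].astype(np.float32)
    return (small - small.mean()) / (small.std() + 1e-6)

def scene_difference(template_a, template_b, max_shift=4):
    """Smallest mean absolute difference over small horizontal shifts,
    giving some tolerance to camera yaw between visits."""
    best = np.inf
    for shift in range(-max_shift, max_shift + 1):
        a = template_a[:, max(0, shift):template_a.shape[1] + min(0, shift)]
        b = template_b[:, max(0, -shift):template_b.shape[1] + min(0, -shift)]
        best = min(best, float(np.mean(np.abs(a - b))))
    return best  # low values suggest the same place
```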

    Visual sequence-based place recognition for changing conditions and varied viewpoints

    Correctly identifying previously visited locations is essential for robotic place recognition and localisation. This thesis presents training-free solutions to vision-based place recognition under changing environmental conditions and camera viewpoints. Using vision as the primary sensor, the proposed approaches combine image segmentation and rescaling techniques over sequences of visual imagery to enable successful place recognition across a range of challenging environments where prior techniques have failed.
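
    The key idea of sequence-based recognition is to accumulate evidence over several consecutive frames rather than trusting a single image. The sketch below shows one generic way to do that: sum per-frame differences along a diagonal alignment and pick the stored segment with the lowest total cost. It is an illustration of the sequence principle under assumed names and a fixed one-to-one alignment; the thesis's segmentation and rescaling steps are not reproduced.

```python
# Generic sequence matching: score stored segments against the most recent
# frames and return the best-matching start index.
import numpy as np

def best_matching_segment(diff_matrix, seq_len=10):
    """diff_matrix[i, j] = difference between current frame i and stored frame j.
    Returns (start_index, score) of the stored segment that best matches the
    last `seq_len` current frames under a one-to-one diagonal alignment."""
    n_current, n_stored = diff_matrix.shape
    window = diff_matrix[-seq_len:]                     # most recent frames
    best_start, best_score = -1, np.inf
    for start in range(n_stored - seq_len + 1):
        score = float(np.trace(window[:, start:start + seq_len]))
        if score < best_score:
            best_start, best_score = start, score
    return best_start, best_score
```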

    Is the Drone Becoming a New "Apparatus of Domination"? Battlefield Surveillance in Twenty-First-Century Warfare

    The paper looks at the military use of burgeoning Fourth Industrial Revolution technologies in designing the visual regime of the drone as a tool for controlling combat efficiency in twenty-first-century warfare. The author situates the analysis in critical theory and critical war/military studies, with a focus on the operationally relevant use of the technical properties of the drone's visual regime, observed through a wealth of video material uploaded to YouTube and related to the ongoing war in Ukraine. While many analyses delve into the combined practices of intelligence gathering, targeting, and killing aimed at the enemy, the author investigates how recent combat practices unveil the potential for an emerging role of drone surveillance: the scrutinization of the combat performance of one's own soldiers. In the age of highly professionalized and industrialized warfare, inherent to the politics of military interventionism aimed at maintaining liberal peace across the globe, the shift towards pervasive control over the combat "assembly line" reconstitutes the technological character of the drone so that it becomes an apparatus of domination. The author concludes that the drone, as a mobile platform for surveillance, displays hidden potential to reinforce existing relations of domination, and cautions that the advent of nano-drones could socially constitute a far more intrusive and intimate control of ground troops.

    Global Wheat Head Detection (GWHD) dataset: a large and diverse dataset of high resolution RGB labelled images to develop and benchmark wheat head detection methods

    Detection of wheat heads is an important task that allows the estimation of pertinent traits, including head population density and head characteristics such as sanitary state, size, maturity stage and the presence of awns. Several studies have developed methods for wheat head detection from high-resolution RGB imagery. They are based on computer vision and machine learning and are generally calibrated and validated on limited datasets. However, variability in observational conditions, genotypic differences, development stages and head orientation represents a challenge in computer vision. Further, possible blurring due to motion or wind and overlap between heads in dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse and well-labelled dataset, the Global Wheat Head Detection (GWHD) dataset. It contains 4,700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world at different growth stages and with a wide range of genotypes. Guidelines for image acquisition, for associating minimum metadata in line with FAIR principles, and for consistent head labelling are proposed for the development of new head detection datasets. The GWHD dataset is publicly available at http://www.global-wheat.com/ and is aimed at developing and benchmarking methods for wheat head detection.
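
    A dataset of this kind is typically consumed by parsing per-image bounding-box labels into a structure a detector can train on. The loader below is a minimal sketch under assumed conventions: the column names ("image_id", "bbox") and the [x_min, y_min, width, height] box encoding are assumptions, so adjust them to match the actual files distributed at http://www.global-wheat.com/.

```python
# Illustrative loader for per-image bounding-box labels in a GWHD-style CSV.
import ast
import csv
from collections import defaultdict

def load_boxes(csv_path):
    """Return {image_id: [(x_min, y_min, width, height), ...]}."""
    boxes = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed box string format, e.g. "[834.0, 222.0, 56.0, 36.0]".
            x, y, w, h = ast.literal_eval(row["bbox"])
            boxes[row["image_id"]].append((x, y, w, h))
    return boxes

# Example: count labelled heads per image.
# counts = {img: len(bs) for img, bs in load_boxes("train.csv").items()}
```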

    "Enriching 360-degree technologies through human-computer interaction: psychometric validation of two memory tasks"

    This doctoral dissertation explores the domain of neuropsychological assessment, with the objective of gaining a comprehensive understanding of an individual's cognitive functioning and detecting possible impairments. Traditional assessment tools, while valuable, frequently lack ecological validity when evaluating memory, as they predominantly concentrate on short-term, regulated tasks. To overcome this constraint, immersive technologies, specifically virtual reality and 360° videos, have emerged as promising instruments for augmenting the ecological validity of cognitive assessments. This work examines the potential advantages of immersive technologies, particularly 360° videos, in enhancing memory evaluation. First, a comprehensive overview is provided of contemporary virtual reality tools employed in the assessment of memory, as well as their convergence with conventional assessment measures. The study then applies cluster and network analysis techniques to categorize 360° videos according to their content and applications, offering significant insights into the potential of this nascent medium. It then introduces a novel platform, Mindscape, that aims to close the existing technological gap and make it easier for clinicians and researchers to develop cognitive tasks within immersive environments. The thesis concludes with the psychometric validation of two memory tasks, developed with Mindscape to assess episodic and spatial memory. The findings demonstrate disparities in cognitive performance between individuals diagnosed with Mild Cognitive Impairment and those without cognitive impairments, underscoring the interrelated nature of cognitive processes and the promising prospects of virtual reality technology in improving the authenticity of real-world experiences. Overall, this dissertation responds to the demand for practical and ecologically valid neuropsychological assessments by integrating user-friendly platforms and immersive cognitive tasks into its methodology. By highlighting a shift in neuropsychology towards prioritizing functional and practical assessments over theoretical frameworks, it signals a changing perspective within the discipline and emphasizes the potential of comprehensive, purpose-oriented assessment methods, as well as the ongoing significance of research in fully comprehending the capabilities of immersive technologies.
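
    The categorization step mentioned above (cluster analysis over coded 360° video content and applications) can be illustrated generically with k-means over per-video descriptor vectors. This is only a sketch of that kind of analysis, not the dissertation's pipeline; the feature matrix, number of clusters and library choice are placeholders.

```python
# Generic illustration of clustering per-video descriptor vectors with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder: one row per video; columns would encode content/application codes.
video_features = rng.random((60, 5))

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(video_features)
for cluster_id in np.unique(labels):
    print(f"cluster {cluster_id}: {np.sum(labels == cluster_id)} videos")
```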

    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have evolved a long way from the huge vacuum tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for the purpose of object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently due to inherent assumptions of constancy of attribute values. Our approach assumes a variable environment in which the attribute values recorded for an object are prone to variability. Variations in the accuracy of object attribute values have been addressed by incorporating weights for each attribute that vary according to the local conditions at a sensor location. This ensures that attribute values with higher accuracy can be accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) were also addressed by applying error corrections such as shadow elimination to the detected object profile. Experiments were conducted to verify our hypothesis, and the results established the validity of our approach, as higher matching accuracy was obtained with the multi-dimensional approach than with a single-attribute comparison.
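
    The weighted matching idea described above can be sketched as a per-sensor weighted distance between signatures: attributes measured reliably at a given camera dominate the score, while unreliable ones (for example colour under heavy shadow) are downweighted. The attribute names, weight values and scoring form below are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch of weighted multi-attribute signature matching.
import numpy as np

def signature_distance(sig_a, sig_b, weights):
    """Weighted distance between two object signatures (dicts of attribute vectors).
    `weights` reflects local sensing conditions at the matching camera."""
    total, weight_sum = 0.0, 0.0
    for name, w in weights.items():
        diff = np.linalg.norm(np.asarray(sig_a[name], float) - np.asarray(sig_b[name], float))
        total += w * diff
        weight_sum += w
    return total / weight_sum if weight_sum else float("inf")

# Example: colour downweighted at a camera with strong shadows.
sig_cam1 = {"colour": [120, 80, 60], "height": [1.70], "speed": [1.2]}
sig_cam2 = {"colour": [90, 70, 55], "height": [1.68], "speed": [1.3]}
weights_cam2 = {"colour": 0.3, "height": 1.0, "speed": 0.8}
print(signature_distance(sig_cam1, sig_cam2, weights_cam2))
```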