
    An Approach to Automatic Selection of the Optimal Local Feature Detector

    Feature matching techniques have significantly contributed to making vision applications more reliable by solving the image correspondence problem. The feature matching process requires an effective feature detection stage capable of providing high-quality interest points. The efforts of the research community in this field have produced a wide range of approaches to the problem of feature detection. However, imaging conditions influence the performance of a feature detector, making it suitable only for a limited range of applications. This thesis aims to improve the reliability and effectiveness of feature detection by proposing an approach for the automatic selection of the optimal feature detector in relation to the input image characteristics. Knowledge of how imaging conditions influence a feature detector's performance is fundamental to this research; thus, the behaviour of feature detectors under varying image changes and in relation to scene content is investigated. The results of this analysis represent a first but important step towards a fully adaptive method for selecting the optimal feature detector for any given operating condition.

    Data Efficient Visual Place Recognition Using Extremely JPEG-Compressed Images

    Visual Place Recognition (VPR) is the ability of a robotic platform to correctly interpret visual stimuli from its on-board cameras in order to determine whether it is currently located in a previously visited place, despite viewpoint, illumination and appearance changes. JPEG is a widely used image compression standard that can significantly reduce the size of an image at the cost of image clarity. For applications where several robotic platforms are deployed simultaneously, the visual data gathered must be transmitted between the robots. JPEG compression can therefore be employed to drastically reduce the amount of data sent over the communication channel, as working with limited bandwidth can prove challenging for VPR. However, the effects of JPEG compression on the performance of current VPR techniques have not been previously studied. For this reason, this paper presents an in-depth study of JPEG compression in VPR-related scenarios. We apply a selection of well-established VPR techniques to 8 datasets with various amounts of compression. We show that introducing compression drastically reduces VPR performance, especially at the higher end of the compression spectrum. To overcome these negative effects, we present a fine-tuned CNN which is optimized for JPEG-compressed data and show that it performs more consistently under the image transformations found in extremely compressed JPEG images.
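    The compression sweep described above can be sketched as follows. This is a minimal illustration using Pillow rather than the paper's pipeline; the image, quality factors, and function name are assumptions for demonstration only.

```python
from io import BytesIO
import random

from PIL import Image  # Pillow

def jpeg_sizes(img, qualities=(90, 50, 10, 5)):
    """Re-encode `img` at several JPEG quality factors and return the
    resulting byte counts, emulating a compression sweep like the one
    studied above."""
    sizes = {}
    for q in qualities:
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        sizes[q] = buf.tell()
    return sizes

# A noisy synthetic frame stands in for a real camera image.
random.seed(0)
img = Image.new("L", (64, 64))
img.putdata([random.randrange(256) for _ in range(64 * 64)])
sizes = jpeg_sizes(img)
```

    Lower quality factors yield smaller payloads, which is the bandwidth saving the paper exploits; the open question it studies is how much VPR accuracy that saving costs.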

    Performance comparison of image feature detectors utilizing a large number of scenes

    Selecting the most suitable local invariant feature detector for a particular application has rendered the task of evaluating feature detectors a critical issue in vision research. No state-of-the-art image feature detector works satisfactorily under all types of image transformations. Although the literature offers a variety of comparison works focusing on performance evaluation of image feature detectors under several types of image transformation, the influence of the scene content on the performance of local feature detectors has received little attention so far. This paper aims to bridge this gap with a new framework for determining the types of scenes that maximize and minimize the performance of detectors in terms of repeatability rate. Several state-of-the-art feature detectors have been assessed utilizing a large database of 12936 images generated by applying uniform light and blur changes to 539 scenes captured from the real world. The results obtained provide new insights into the behaviour of feature detectors.
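    The repeatability rate used as the evaluation criterion above can be sketched in a few lines: a reference keypoint counts as repeated if, after projecting it through the known transformation, a keypoint is detected nearby in the transformed image. This is a simplified, pure-Python sketch; the tolerance and the normalisation by the smaller keypoint count follow the common convention, and the sample points are invented.

```python
def repeatability(kps_ref, kps_trans, warp, tol=2.0):
    """Fraction of reference keypoints re-detected within `tol` pixels
    after mapping them through the known transformation `warp`."""
    repeated = 0
    for p in kps_ref:
        px, py = warp(p)
        if any((px - qx) ** 2 + (py - qy) ** 2 <= tol ** 2
               for qx, qy in kps_trans):
            repeated += 1
    return repeated / min(len(kps_ref), len(kps_trans))

# Toy data: a pure 5-pixel horizontal shift; two of four points survive.
kps_ref = [(10, 10), (20, 20), (30, 30), (40, 40)]
kps_trans = [(15, 10), (25, 20), (90, 90), (70, 70)]
rate = repeatability(kps_ref, kps_trans, lambda p: (p[0] + 5, p[1]))
```

    A detector with a higher repeatability rate under a given transformation is the better choice for scenes where that transformation dominates.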

    Expression of μ-protocadherin is negatively regulated by the activation of the β-catenin signaling pathway in normal and cancer colorectal enterocytes.

    Mu-protocadherin (MUCDHL) is an adhesion molecule predominantly expressed by colorectal epithelial cells which is markedly downregulated upon malignant transformation. Notably, treatment of colorectal cancer (CRC) cells with mesalazine leads to increased expression of MUCDHL and is associated with sequestration of β-catenin on the plasma membrane and inhibition of its transcriptional activity. To better characterize the causal relationship between β-catenin and MUCDHL expression, we performed various experiments in which CRC cell lines and normal colonic organoids were subjected to culture conditions inhibiting (FH535 treatment, transcription factor 7-like 2 siRNA inactivation, Wnt withdrawal) or stimulating (LiCl treatment) β-catenin activity. We show here that expression of MUCDHL is negatively regulated by functional activation of the β-catenin signaling pathway. This finding was observed in cell culture systems representing conditions of physiological stimulation and upon constitutive activation of β-catenin in CRC. The ability of MUCDHL to sequester and inhibit β-catenin appears to provide a positive feedback reinforcing the effect of β-catenin inhibitors, rather than serving as the primary mechanism responsible for β-catenin inhibition. Moreover, MUCDHL might have a role as a biomarker in the development of CRC chemoprevention drugs endowed with β-catenin inhibitory activity.

    Automatic Selection of the Optimal Local Feature Detector

    A large number of different local feature detectors have been proposed in the last few years. However, each feature detector has its own strengths and weaknesses that limit its use to a specific range of applications. This paper presents a tool capable of quickly analysing input images to determine the type and amount of transformation applied to them, and of then selecting the optimal feature detector, i.e. the one expected to perform best. The results show that its performance and fast execution time render the proposed tool suitable for real-world vision applications.
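    The selection stage can be pictured as a mapping from estimated image conditions to a detector choice. The sketch below is purely illustrative: the thresholds, the normalised condition scores, and the detector-to-condition pairings are hypothetical stand-ins, not the tool's actual decision rules.

```python
def select_detector(estimated_blur, estimated_light_change):
    """Toy rule-based selector. Both condition estimates are assumed to
    be normalised to [0, 1]; thresholds and detector choices here are
    hypothetical, for illustration only."""
    if estimated_blur > 0.5:
        return "SIFT"   # hypothetically the most blur-tolerant option
    if estimated_light_change > 0.5:
        return "BRISK"  # hypothetical pick for strong light changes
    return "FAST"       # cheap default when conditions are benign
```

    In the tool itself, such a mapping would be derived from measured detector performance under each transformation rather than hand-written rules.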

    Exploring Performance Bounds of Visual Place Recognition Using Extended Precision

    Recent advances in image description and matching have allowed significant improvements in Visual Place Recognition (VPR). The wide variety of methods proposed so far and the increasing interest in the field have rendered the problem of evaluating VPR methods an important task. As part of the localization process, VPR is a critical stage for many robotic applications and it is expected to perform reliably in any location of the operating environment. To design more reliable and effective localization systems, this letter presents a generic evaluation framework based on the new Extended Precision performance metric for VPR. The proposed framework allows assessment of the upper and lower bounds of VPR performance and finds statistically significant performance differences between VPR methods. The proposed evaluation method is used to assess several state-of-the-art techniques under a variety of imaging conditions that an autonomous navigation system commonly encounters on long-term runs. The results provide new insights into the behaviour of different VPR methods under varying conditions and help to decide which technique is more appropriate to the nature of the task assigned to an autonomous robot.
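    A minimal sketch of the Extended Precision metric, as we understand it from the letter: EP averages the precision at the lowest observed recall (P_R0) with the highest recall achieved at 100% precision (R_P100, taken as 0 when perfect precision is never reached). The sample curve below is invented for illustration.

```python
def extended_precision(precisions, recalls):
    """EP = (P_R0 + R_P100) / 2, where P_R0 is the precision at the
    lowest observed recall and R_P100 is the highest recall reached
    while precision is still perfect (0 if 100% precision never occurs)."""
    pairs = sorted(zip(recalls, precisions))  # order points by recall
    p_r0 = pairs[0][1]
    r_p100 = max((r for r, p in pairs if p == 1.0), default=0.0)
    return (p_r0 + r_p100) / 2

# Invented precision-recall curve for a hypothetical VPR method.
ep = extended_precision([1.0, 1.0, 0.8, 0.6], [0.1, 0.3, 0.5, 0.9])
```

    Unlike area under the precision-recall curve alone, this pair of operating points captures both the best-case and the guaranteed-correct behaviour of a method, which is what the framework's bounds analysis builds on.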

    Visual Place Recognition for Aerial Robotics: Exploring Accuracy-Computation Trade-off for Local Image Descriptors

    Visual Place Recognition (VPR) is a fundamental yet challenging task for small Unmanned Aerial Vehicles (UAVs). The core reasons are the extreme viewpoint changes and the limited computational power on board a UAV, which restricts the applicability of robust but computation-intensive state-of-the-art VPR methods. In this context, a viable approach is to use local image descriptors for performing VPR, as these can be computed relatively efficiently without the need for any special hardware, such as a GPU. However, the choice of a local feature descriptor is not trivial and calls for a detailed investigation, as there is a trade-off between VPR accuracy and the required computational effort. To fill this research gap, this paper examines the performance of several state-of-the-art local feature descriptors, from both accuracy and computational perspectives, specifically for VPR applications utilizing standard aerial datasets. The presented results confirm that a trade-off between accuracy and computational effort is inevitable while executing VPR on resource-constrained hardware.

    An Efficient and Scalable Collection of Fly-Inspired Voting Units for Visual Place Recognition in Changing Environments

    State-of-the-art visual place recognition performance is currently achieved using deep learning based approaches. Despite recent efforts in designing lightweight convolutional neural network based models, these can still be too expensive for the most hardware-restricted robotic applications. Low-overhead visual place recognition techniques would not only enable platforms equipped with low-end, cheap hardware, but also reduce computation on more powerful systems, allowing these resources to be allocated to other navigation tasks. In this work, our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations. Our first contribution is DrosoNet, an exceptionally compact model inspired by the odor processing abilities of the fruit fly, Drosophila melanogaster. Our second and main contribution is a voting mechanism that leverages multiple small and efficient classifiers to achieve more robust and consistent visual place recognition than a single one. We use DrosoNet as the baseline classifier for the voting mechanism and evaluate our models on five benchmark datasets, assessing moderate to extreme appearance changes and small to moderate viewpoint variations. We then compare the proposed algorithms to state-of-the-art methods, both in terms of area under the precision-recall curve and computational efficiency.
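    The voting idea can be sketched as follows: each compact classifier scores every reference place, each casts a vote for its top-scoring place, and the place with the most votes wins, with summed scores breaking ties. This is a simplified stand-in, not the paper's exact DrosoNet voting mechanism, and the score vectors below are invented.

```python
def vote(unit_scores):
    """Combine per-place score vectors from several small classifiers.
    Each unit votes for its argmax place; the most-voted place wins,
    with total score as the tie-breaker."""
    n_places = len(unit_scores[0])
    tally = [0] * n_places
    total = [0.0] * n_places
    for scores in unit_scores:
        tally[scores.index(max(scores))] += 1
        for i, s in enumerate(scores):
            total[i] += s
    return max(range(n_places), key=lambda i: (tally[i], total[i]))

# Three invented units scoring three reference places: two prefer place 0.
winner = vote([[0.9, 0.1, 0.2], [0.2, 0.8, 0.1], [0.7, 0.1, 0.3]])
```

    Aggregating many cheap, individually noisy units in this way is what lets the ensemble match far heavier models while keeping the per-query cost low.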

    Aggregating Multiple Bio-Inspired Image Region Classifiers for Effective and Lightweight Visual Place Recognition

    Visual place recognition (VPR) enables autonomous systems to localize themselves within an environment using image information. While VPR techniques built upon a Convolutional Neural Network (CNN) backbone dominate state-of-the-art VPR performance, their high computational requirements make them unsuitable for platforms equipped with low-end hardware. Recently, a lightweight VPR system based on multiple bio-inspired classifiers, dubbed DrosoNets, has been proposed, achieving great computational efficiency at the cost of reduced absolute place retrieval performance. In this letter, we propose a novel multi-DrosoNet localization system, dubbed RegionDrosoNet, with significantly improved VPR performance while preserving a low computational profile. Our approach relies on specializing distinct groups of DrosoNets on differently sliced partitions of the original images, increasing model differentiation. Furthermore, we introduce a novel voting module that combines the outputs of all DrosoNets into the final place prediction, considering multiple top reference candidates from each DrosoNet. RegionDrosoNet outperforms other lightweight VPR techniques when dealing with both appearance changes and viewpoint variations. Moreover, it competes with computationally expensive methods on some benchmark datasets at a small fraction of their online inference time.