5 research outputs found

    REAL-TIME COLOR-BASED TRACKING OF MOVING OBJECTS WITH A WEBCAM USING THE BACKGROUND SUBTRACTION METHOD

    Tracking objects in real-time video is an important topic in the study of surveillance systems (Dhananjaya, Rama, and Thimmaiah 2015). Detecting, extracting information from, and tracking moving objects is one application of computer vision. Applications that use moving-object tracking include UAV (Unmanned Aerial Vehicle) surveillance, better known as unmanned vehicles; indoor monitoring systems, which observe conditions inside a room; and traffic monitoring, which can follow the movement of all objects in real time. Real-time object tracking requires accounting for many parameters as well as noise: surrounding objects that we do not wish to observe but that share the scene with the object of interest. In this study, the background subtraction method is used for real-time, color-based detection and tracking of moving objects with a webcam and the open-source OpenCV library.
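The core of background subtraction can be sketched in a few lines: each frame is differenced against a background model and thresholded into a foreground mask. The sketch below uses plain NumPy on a synthetic frame; the study itself works on webcam frames through OpenCV (e.g. `cv2.VideoCapture`, `cv2.absdiff`), and all function names and parameter values here are illustrative, not the paper's implementation.

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Label pixels whose absolute difference from the background
    model exceeds `threshold` as foreground (moving object)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean foreground mask

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly adapt to scene changes
    so lighting drift is not flagged as motion."""
    return (1 - alpha) * background + alpha * frame

# Synthetic 8x8 grayscale scene: static background plus a bright moving blob.
background = np.full((8, 8), 50, dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200  # the "object"

mask = background_subtract(frame, background)
```

In a real pipeline the mask would be cleaned with morphological operations and combined with a color filter (e.g. an HSV range) before extracting the object's bounding box per frame.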

    Shadow Estimation Method for "The Episolar Constraint: Monocular Shape from Shadow Correspondence"

    Recovering shadows is an important step for many vision algorithms. Current approaches that work with time-lapse sequences are limited to simple thresholding heuristics. We show these approaches only work with very careful tuning of parameters, and do not work well for long-term time-lapse sequences taken over the span of many months. We introduce a parameter-free expectation maximization approach which simultaneously estimates shadows, albedo, surface normals, and skylight. This approach is more accurate than previous methods, works over both very short and very long sequences, and is robust to the effects of nonlinear camera response. Finally, we demonstrate that the shadow masks derived through this algorithm substantially improve the performance of sun-based photometric stereo compared to earlier shadow mask estimation methods.
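The paper's EM jointly estimates shadows, albedo, surface normals, and skylight; that joint model is beyond a short snippet. As a toy illustration of the E-step/M-step alternation it relies on, the sketch below fits a two-component 1-D Gaussian mixture to a single pixel's intensity history and labels samples from the darker component as shadowed. Everything here (function names, initialization, the synthetic data) is an assumption for illustration only.

```python
import numpy as np

def em_shadow_labels(intensities, iters=50):
    """Fit a 2-component 1-D Gaussian mixture via EM and label
    samples from the darker component as shadowed."""
    x = np.asarray(intensities, dtype=float)
    # Initialise the two component means at the data's quartiles.
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        lik = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        n = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    dark = np.argmin(mu)
    return resp.argmax(axis=1) == dark  # True where shadowed

# Synthetic time-lapse pixel: shadowed (dark) and sunlit (bright) samples.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(40, 5, 100), rng.normal(180, 10, 100)])
shadow = em_shadow_labels(samples)
```

Unlike a fixed intensity threshold, the mixture adapts to each pixel's own bright/dark statistics, which is the property that makes the EM formulation parameter-free.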

    Sample4Geo: Hard Negative Sampling For Cross-View Geo-Localisation

    Cross-View Geo-Localisation is still a challenging task where additional modules, specific pre-processing or zooming strategies are necessary to determine accurate positions of images. Since different views have different geometries, pre-processing like polar transformation helps to merge them. However, this results in distorted images which then have to be rectified. Adding hard negatives to the training batch could improve the overall performance, but with the default loss functions in geo-localisation it is difficult to include them. In this article, we present a simplified but effective architecture based on contrastive learning with symmetric InfoNCE loss that outperforms current state-of-the-art results. Our framework consists of a narrow training pipeline that eliminates the need for aggregation modules, avoids further pre-processing steps and even increases the generalisation capability of the model to unknown regions. We introduce two types of sampling strategies for hard negatives. The first explicitly exploits geographically neighboring locations to provide a good starting point. The second leverages the visual similarity between the image embeddings in order to mine hard negative samples. Our work shows excellent performance on common cross-view datasets like CVUSA, CVACT, University-1652 and VIGOR. A comparison between cross-area and same-area settings demonstrates the good generalisation capability of our model.
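The symmetric InfoNCE loss mentioned above can be sketched compactly: within a batch, each ground/aerial pair is a positive, every other pairing is a negative, and the cross-entropy is averaged over both matching directions. The NumPy sketch below is a minimal illustration under that reading; the function name, temperature value, and batch construction are assumptions, and the paper's hard-negative mining (geographic neighbors, embedding similarity) only changes which samples end up in the batch, not this loss.

```python
import numpy as np

def symmetric_infonce(ground_emb, aerial_emb, temperature=0.1):
    """Symmetric InfoNCE: matching pairs sit on the diagonal of the
    similarity matrix; all other batch entries act as negatives."""
    g = ground_emb / np.linalg.norm(ground_emb, axis=1, keepdims=True)
    a = aerial_emb / np.linalg.norm(aerial_emb, axis=1, keepdims=True)
    logits = g @ a.T / temperature      # cosine similarities, scaled
    labels = np.arange(len(g))          # positive pairs on the diagonal

    def cross_entropy(z):
        # Numerically stable log-softmax over each row.
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the ground->aerial and aerial->ground directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Because mismatched batch entries raise the loss, placing geographically or visually similar locations in the same batch automatically makes the denominator terms "hard", which is the effect the sampling strategies exploit.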

    Leveraging Overhead Imagery for Localization, Mapping, and Understanding

    Ground-level and overhead images provide complementary viewpoints of the world. This thesis proposes methods which leverage dense overhead imagery, in addition to sparsely distributed ground-level imagery, to advance traditional computer vision problems, such as ground-level image localization and fine-grained urban mapping. Our work focuses on three primary research areas: learning a joint feature representation between ground-level and overhead imagery to enable direct comparison for the task of image geolocalization, incorporating unlabeled overhead images by inferring labels from nearby ground-level images to improve image-driven mapping, and fusing ground-level imagery with overhead imagery to enhance understanding. The ultimate contribution of this thesis is a general framework for estimating geospatial functions, such as land cover or land use, which integrates visual evidence from both ground-level and overhead image viewpoints.

    Webcam Geo-localization using Aggregate Light Levels

    We consider the problem of geo-locating static cameras from long-term time-lapse imagery. This problem has received significant attention recently, with most methods making strong assumptions on the geometric structure of the scene. We explore a simple, robust cue that relates overall image intensity to the zenith angle of the sun (which need not be visible). We characterize the accuracy of geolocation based on this cue as a function of different models of the zenith-intensity relationship and the amount of imagery available. We evaluate our algorithm on a dataset of more than 60 million images captured from outdoor webcams located around the globe. We find that using our algorithm with images sampled every 30 minutes yields localization errors of less than 100 km for the majority of cameras.
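The geometric side of this cue is standard solar geometry: for a candidate latitude, the sun's zenith angle at a given time follows from the declination and hour angle, and overall image brightness can then be compared against a zenith-intensity model. The sketch below uses the common low-precision approximations; the function names and the simple cosine intensity model are illustrative assumptions, not the paper's specific models.

```python
import math

def solar_zenith_deg(lat_deg, day_of_year, hour_utc, lon_deg=0.0):
    """Approximate solar zenith angle (degrees) from latitude, day of
    year, and UTC time (simple declination/hour-angle formula)."""
    # Declination: ~ -23.44 deg at the December solstice (day ~355).
    decl = math.radians(-23.44) * math.cos(
        2 * math.pi * (day_of_year + 10) / 365)
    # Hour angle: 15 degrees per hour from local solar noon.
    hour_angle = math.radians(15 * (hour_utc + lon_deg / 15 - 12))
    lat = math.radians(lat_deg)
    cos_z = (math.sin(lat) * math.sin(decl)
             + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

def predicted_intensity(zenith_deg):
    """Toy zenith-intensity model: proportional to cos(zenith) above
    the horizon, zero below it."""
    return max(0.0, math.cos(math.radians(zenith_deg)))
```

Localization then amounts to searching over candidate (latitude, longitude) pairs for the one whose predicted intensity curve best matches the camera's observed brightness series.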