Sub-MeV Particles Detection and Identification in the MUNU Detector ((1) ISN, IN2P3/CNRS-UJF, Grenoble, France, (2) Institut de Physique, Neuchâtel, Switzerland, (3) INFN, Padova, Italy, (4) Physik-Institut, Zürich, Switzerland)
We report on the performance of a 1 m³ TPC filled with CF₄ at 3 bar, immersed in a liquid scintillator and viewed by photomultipliers. Particle detection, event identification, and localization, achieved by measuring both the current signal and the scintillation light, are presented. Particular features of alpha-particle detection are also discussed. Finally, the ⁵⁴Mn photopeak, reconstructed from the Compton electron energy and recoil angle, is shown. Comment: LaTeX, 19 pages, 20 figures
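The photopeak reconstruction mentioned above follows standard Compton kinematics: the electron's kinetic energy and recoil angle together fix the incident gamma energy. A minimal sketch in Python (function name and example values are ours, for illustration; this is not the collaboration's code):

import math

M_E = 0.511  # electron rest energy in MeV

def gamma_energy(t_e, phi):
    # Invert the Compton relation between the electron kinetic energy t_e (MeV)
    # and its recoil angle phi (rad) to recover the incident photon energy:
    #   E = m*(T + cos(phi)*sqrt(T*(T + 2m))) / (cos(phi)^2*(T + 2m) - T)
    c = math.cos(phi)
    s = math.sqrt(t_e * (t_e + 2.0 * M_E))
    return M_E * (t_e + c * s) / (c * c * (t_e + 2.0 * M_E) - t_e)

# An electron at the Compton edge (phi ~ 0) with T = 0.639 MeV points back
# to a photon near the 0.835 MeV line of 54Mn.
print(gamma_energy(0.639, 0.0))  # ~0.835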
Localization Recall Precision (LRP): A New Performance Metric for Object Detection
Average precision (AP), the area under the recall-precision (RP) curve, is the standard performance measure for object detection. Despite its wide acceptance, it has a number of shortcomings, the most important of which are (i) the inability to distinguish very different RP curves, and (ii) the lack of directly measuring bounding box localization accuracy. In this paper, we propose “Localization Recall Precision (LRP) Error”, a new metric specifically designed for object detection. LRP Error is composed of three components related to localization, false negative (FN) rate and false positive (FP) rate. Based on LRP, we introduce the “Optimal LRP” (oLRP), the minimum achievable LRP error, representing the best achievable configuration of the detector in terms of recall-precision and the tightness of the boxes. In contrast to AP, which considers precisions over the entire recall domain, oLRP determines the “best” confidence score threshold for a class, which balances the trade-off between localization and recall-precision. In our experiments, we show that oLRP provides richer and more discriminative information than AP. We also demonstrate that the best confidence score thresholds vary significantly among classes and detectors. Moreover, we present LRP results of a simple online video object detector and show that the class-specific optimized thresholds increase accuracy over the common approach of using a general threshold for all classes. Our experiments demonstrate that LRP captures detector performance better than AP. Our source code for the PASCAL VOC and MSCOCO datasets is provided at https://github.com/cancam/LRP
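To make the metric's structure concrete, here is a minimal sketch of a normalized LRP-style error, assuming detections have already been matched to ground truth at IoU threshold tau; the authoritative implementation is in the repository linked above:

import numpy as np

def lrp_error(ious, n_fp, n_fn, tau=0.5):
    # ious: IoUs of the true-positive matches (each >= tau);
    # n_fp / n_fn: false-positive and false-negative counts at one
    # confidence-score threshold.
    ious = np.asarray(ious, dtype=float)
    n_tp = len(ious)
    total = n_tp + n_fp + n_fn
    if total == 0:
        return 0.0
    # Localization component: normalized IoU deficit of the matched boxes.
    loc = np.sum((1.0 - ious) / (1.0 - tau))
    return (loc + n_fp + n_fn) / total

def optimal_lrp(stats_per_threshold, tau=0.5):
    # oLRP: the minimum LRP error over candidate score thresholds, each
    # given as a tuple (ious, n_fp, n_fn).
    return min(lrp_error(i, fp, fn, tau) for i, fp, fn in stats_per_threshold)

Sweeping the confidence threshold and taking the minimum is what yields the class-specific "best" thresholds discussed above.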
DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels
The impact of soiling on solar panels is an important and well-studied problem in the renewable energy sector. In this paper, we present the first convolutional neural network (CNN) based approach for solar panel soiling and defect analysis. Our approach takes an RGB image of a solar panel and environmental factors as inputs to predict power loss, soiling localization, and soiling type. In computer vision, localization is a complex task which typically requires manually labeled training data such as bounding boxes or segmentation masks. Our proposed approach consists of four specialized stages that completely avoid localization ground truth and only need panel images with power loss labels for training. The impacted regions obtained from the predicted localization masks are classified into soiling types using webly supervised learning. To improve the localization capabilities of CNNs, we introduce a novel bi-directional input-aware fusion (BiDIAF) block that reinforces the input at different levels of the CNN to learn input-specific feature maps. Our empirical study shows that BiDIAF improves power loss prediction accuracy by about 3% and localization accuracy by about 4%. Our end-to-end model yields a further improvement of about 24% on localization when learned in a weakly supervised manner. Our approach is generalizable and showed promising results on web-crawled solar panel images. Our system runs at 22 fps (including all steps) on an NVIDIA TitanX GPU. Additionally, we collected a first-of-its-kind dataset for solar panel image analysis consisting of 45,000+ images. Comment: Accepted for publication at WACV 2018
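As one plausible reading of the input-aware fusion idea (our interpretation for illustration, not the authors' released code), a block can re-inject a resized copy of the raw input at an intermediate stage so later layers see input-conditioned features:

import torch
import torch.nn as nn
import torch.nn.functional as F

class InputAwareFusion(nn.Module):
    # Fuses a resized, 1x1-projected copy of the network input into an
    # intermediate feature map (a sketch; BiDIAF's exact wiring may differ).
    def __init__(self, in_channels, feat_channels):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, feat_channels, kernel_size=1)

    def forward(self, x_input, feat):
        # Match the feature map's spatial size, then fuse additively.
        x = F.interpolate(x_input, size=feat.shape[-2:], mode='bilinear',
                          align_corners=False)
        return feat + self.proj(x)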
LocNet: Improving Localization Accuracy for Object Detection
We propose a novel object localization methodology with the purpose of
boosting the localization accuracy of state-of-the-art object detection
systems. Our model, given a search region, aims at returning the bounding box
of an object of interest inside this region. To accomplish its goal, it relies
on assigning conditional probabilities to each row and column of this region,
where these probabilities provide useful information regarding the location of
the boundaries of the object inside the search region and allow the accurate
inference of the object bounding box under a simple probabilistic framework.
For implementing our localization model, we make use of a convolutional
neural network architecture that is properly adapted for this task, called
LocNet. We show experimentally that LocNet achieves a very significant improvement in mAP at high IoU thresholds on the PASCAL VOC2007 test set and that it can easily be coupled with recent state-of-the-art object detection systems, helping them boost their performance. Finally, we demonstrate that our detection approach can achieve high detection accuracy even when it is given a set of sliding windows as input, thus proving that it is independent of box proposal methods. Comment: Extended technical report -- short version to appear as an oral paper at CVPR 2016. Code: https://github.com/gidariss/LocNet
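To illustrate the row/column inference step, here is a sketch of maximum-likelihood interval selection from per-cell "inside the box" probabilities (our simplification; LocNet also supports boundary probabilities and other variants we omit):

import numpy as np

def infer_interval(p_inside):
    # Choose [start, end) maximizing the likelihood that cells inside the
    # interval are 'in' and all remaining cells are 'out'.
    eps = 1e-6
    log_in = np.log(np.clip(p_inside, eps, 1 - eps))
    log_out = np.log(np.clip(1 - p_inside, eps, 1 - eps))
    base = log_out.sum()  # log-likelihood of the empty interval
    gain = np.cumsum(np.concatenate([[0.0], log_in - log_out]))
    n = len(p_inside)
    best, best_score = (0, 1), -np.inf
    for s in range(n):
        for e in range(s + 1, n + 1):
            score = base + gain[e] - gain[s]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

def infer_box(p_cols, p_rows):
    # Combine the per-column and per-row intervals into (x0, y0, x1, y1).
    x0, x1 = infer_interval(p_cols)
    y0, y1 = infer_interval(p_rows)
    return x0, y0, x1, y1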
A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry
The demand for autonomous vehicles is increasing gradually owing to their enormous potential benefits. However, their development involves several challenges, such as vehicle localization. A simple and secure algorithm for vehicle positioning is proposed herein that does not require massive modification of the existing transportation infrastructure. For vehicle localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate other vehicles' positions, and forwarding vehicles (FVs), which move in front of the HVs. The FV transmits modulated data from its tail (or back) light, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are considered to ensure the position accuracy of the HV; determining the HV position minimizes the relative position variation between the HV and the FV. Using photogrammetry, the distance between the FV (or SL) and the camera of the HV is calculated by measuring the image area the object occupies on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and the FV, the positions of FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant performance improvement. The experimental distance measurements validated the feasibility of the proposed scheme.
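The photogrammetric range estimate reduces to pinhole-camera geometry; a generic sketch with illustrative numbers (the paper's area-based formulation may differ in detail):

def distance_from_image_extent(focal_px, real_size_m, image_size_px):
    # Pinhole model: an object of known physical size S appearing s pixels
    # across at focal length f (in pixels) lies at roughly d = f * S / s.
    return focal_px * real_size_m / image_size_px

# Example: a 0.15 m tail light imaged 30 px tall with f = 1400 px gives an
# estimated HV-to-FV range of 7.0 m.
print(distance_from_image_extent(1400.0, 0.15, 30.0))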