Three dimensional information estimation and tracking for moving objects detection using two cameras framework
Calibration, matching and tracking are the major concerns in obtaining 3D information consisting of depth, direction and velocity. In estimating depth, camera parameters and matched points are the two necessary inputs. Depth, direction and matched points can be obtained accurately if the cameras are well calibrated using traditional manual calibration. However, most traditional manual calibration methods are inconvenient because markers or the real size of an object in the real world must be provided or known. Self-calibration removes this limitation of traditional calibration, but does not by itself provide depth or matched points. Other approaches attempt to match corresponding objects using 2D visual information without calibration, but they suffer from low matching accuracy under severe perspective distortion. This research focuses on obtaining 3D information using a self-calibrated tracking system, in which matching and tracking are performed under self-calibrated conditions. Three contributions are introduced to achieve this objective. Firstly, orientation correction is introduced to obtain better relationship matrices for matching during tracking. Secondly, a post-processing method, status-based matching, is applied to these relationship matrices to improve object matching; the proposed matching algorithm achieves a matching rate of almost 90%. Depth is estimated after status-based matching. Thirdly, tracking is performed based on x-y coordinates and the estimated depth under self-calibrated conditions. Results show that the proposed self-calibrated tracking system successfully differentiates the locations of objects even under occlusion in the field of view, and is able to determine the direction and velocity of multiple moving objects.
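The abstract does not spell out how depth follows from camera parameters and matched points, but for a calibrated, rectified two-camera rig the standard triangulation relation Z = f·B/d illustrates the dependency. A minimal sketch (the function name and sample values are illustrative, not from the paper):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of one matched point pair from a rectified stereo camera pair.

    Assumes calibrated, rectified cameras, so depth Z = f * B / d,
    where f is the focal length (pixels), B the baseline (metres),
    and d the horizontal disparity (pixels) of the matched points.
    """
    disparity = x_left - x_right  # pixel disparity between the matched points
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    return focal_px * baseline_m / disparity

# A point matched at x=420 px (left) and x=380 px (right),
# with a 700 px focal length and a 0.12 m baseline:
z = depth_from_disparity(420.0, 380.0, focal_px=700.0, baseline_m=0.12)
print(round(z, 3))  # 2.1 (metres)
```

This is exactly why both calibration (which supplies f and B) and matching (which supplies d) must succeed before depth, direction and velocity can be estimated.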
Efficient Privacy Preserving Viola-Jones Type Object Detection via Random Base Image Representation
A cloud server spends considerable time, energy, and money to train a Viola-Jones
type object detector with high accuracy. Clients can upload their photos to the
cloud server to find objects, but a client does not want to leak the content of
his or her photos. Meanwhile, the cloud server is also reluctant to leak any
parameters of the trained object detectors. Ten years ago, Avidan & Butman
introduced Blind Vision, a method for securely evaluating a Viola-Jones type
object detector. Blind Vision uses standard cryptographic tools and is painfully
slow to compute, taking a couple of hours to scan a single image. The purpose of
this work is to explore an efficient method that can speed up the process. We
propose the Random Base Image (RBI) representation. The original image is
divided into random base images, and only the base images are submitted, in
random order, to the cloud server; thus, the content of the image cannot be
leaked. Meanwhile, a random vector and the secure Millionaire protocol are
leveraged to protect the parameters of the trained object detector. The RBI
makes the integral image usable again, which enables great acceleration. The
experimental results reveal that our method retains the detection accuracy of
the plain vision algorithm and is significantly faster than traditional blind
vision, with only a theoretically very low probability of information leakage.

Comment: 6 pages, 3 figures. To appear in the proceedings of the IEEE
International Conference on Multimedia and Expo (ICME), Jul 10-14, 2017,
Hong Kong, Hong Kong
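The abstract only names the RBI representation; one plausible reading of "divided into random base images" is an additive secret-sharing-style split, where the bases sum back to the original image. A minimal sketch under that assumption (the paper's exact construction may differ; function name and parameters are illustrative):

```python
import numpy as np

def random_base_images(image, n_bases=4, rng=None):
    """Split an image into n random base images that sum back to the original.

    The first n-1 bases are uniform random noise; the last is the residual,
    so sum(bases) == image exactly. Any single base in isolation is dominated
    by noise and reveals essentially nothing about the image content.
    """
    rng = np.random.default_rng(rng)
    image = image.astype(np.int64)
    bases = [rng.integers(-255, 256, size=image.shape) for _ in range(n_bases - 1)]
    bases.append(image - np.sum(bases, axis=0))  # residual makes the sum exact
    return bases

img = np.arange(16).reshape(4, 4)
bases = random_base_images(img, n_bases=4, rng=0)
assert (np.sum(bases, axis=0) == img).all()
```

A decomposition of this additive kind also explains the integral-image remark: the integral image is a linear operator, so it can be computed on each base separately and the results summed, recovering the fast cascaded evaluation that makes Viola-Jones efficient.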
RADA: Robust Adversarial Data Augmentation for Camera Localization in Challenging Conditions
Camera localization is a fundamental problem for many applications in computer vision, robotics, and autonomy. Despite recent deep learning-based approaches, a lack of robustness persists in challenging conditions due to appearance changes caused by texture-less planes, repeating structures, reflective surfaces, motion blur, and illumination changes. Data augmentation is an attractive solution, but standard image perturbation methods fail to improve localization robustness. To address this, we propose RADA, which concentrates on perturbing the most vulnerable pixels, generating comparatively small image perturbations that still perplex the network. Our method outperforms previous augmentation techniques, achieving up to twice the accuracy of state-of-the-art models even under 'unseen' challenging weather conditions. Videos of our results can be found at https://youtu.be/niOv7-fJeCA. The source code for RADA is publicly available at https://github.com/jialuwang123321/RAD
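The abstract does not define "most vulnerable pixels"; a common way to realize that idea is to take the gradient of the localization loss with respect to the input and perturb only the top-k pixels by gradient magnitude, a sparse FGSM-style step. A minimal sketch under that assumption (RADA's actual selection criterion is in the paper; names and values here are illustrative):

```python
import numpy as np

def perturb_most_vulnerable(image, grad, k, eps):
    """Perturb only the k pixels where the loss gradient magnitude is largest.

    grad: d(loss)/d(pixel), e.g. obtained by backprop through the pose network.
    Applies a sparse, sign-based step eps * sign(grad) at the top-k pixels
    and leaves every other pixel untouched.
    """
    flat = np.abs(grad).ravel()
    topk = np.argpartition(flat, -k)[-k:]           # indices of the k most vulnerable pixels
    mask = np.zeros(grad.size, dtype=bool)
    mask[topk] = True
    mask = mask.reshape(grad.shape)
    perturbed = image + eps * np.sign(grad) * mask  # sparse perturbation
    return np.clip(perturbed, 0.0, 1.0)            # keep valid pixel range

img = np.full((4, 4), 0.5)
grad = np.zeros((4, 4))
grad[0, 0] = 1.0
grad[3, 3] = -2.0
out = perturb_most_vulnerable(img, grad, k=2, eps=0.1)
# Only pixels (0, 0) and (3, 3) change; the rest stay at 0.5.
```

Restricting the step to a few high-gradient pixels is what yields "relatively less image perturbation" while still confusing the network, compared with dense noise or global colour jitter.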
Self-supervised Interest Point Detection and Description for Fisheye and Perspective Images
Keypoint detection and matching is a fundamental task in many computer vision
problems, from shape reconstruction to structure from motion, AR/VR
applications, and robotics. It is a well-studied problem with remarkable
successes such as SIFT, as well as more recent deep learning approaches. While
these techniques exhibit great robustness to noise, illumination variation, and
rigid motion transformations, less attention has been paid to their sensitivity
to image distortion. In this work, we focus on the case in which this distortion
is caused by the geometry of the cameras used for image acquisition, and
consider the keypoint detection and matching problem in the hybrid scenario of
a fisheye and a projective image. We build on a state-of-the-art approach and
derive a self-supervised procedure that enables training an interest point
detector and descriptor network. We also collected two new datasets for
additional training and testing in this unexplored scenario, and we demonstrate
that current approaches are suboptimal because they are designed to work in
traditional projective conditions, while the proposed approach turns out to be
the most effective.

Comment: CVPR Workshop on Omnidirectional Computer Vision, 202
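The abstract covers the detector/descriptor network but not the matching step used to score hybrid fisheye/perspective pairs. A standard choice, and a minimal sketch of it assuming L2-normalized descriptors (the paper's exact matching protocol may differ):

```python
import numpy as np

def mutual_nearest_matches(desc_a, desc_b):
    """Match L2-normalized descriptors by mutual nearest neighbour.

    Returns (i, j) pairs where a[i]'s best match is b[j] AND b[j]'s best
    match is a[i] -- the symmetric check commonly used to suppress the
    one-sided mismatches that distortion between views tends to produce.
    """
    sim = desc_a @ desc_b.T      # cosine similarity matrix (rows: a, cols: b)
    ab = sim.argmax(axis=1)      # best b index for each a descriptor
    ba = sim.argmax(axis=0)      # best a index for each b descriptor
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

a = np.eye(3)                    # three toy unit descriptors
b = np.eye(3)[[2, 0, 1]]         # the same descriptors, permuted
print(mutual_nearest_matches(a, b))  # [(0, 1), (1, 2), (2, 0)]
```

Under heavy fisheye distortion the descriptors of true correspondences drift apart, which is why a distortion-aware detector/descriptor, rather than a cleverer matcher alone, is needed in this hybrid scenario.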
The Secret Lives of Ebooks: A Paratextual Analysis Illuminates a Veil of Usage Statistics
This study applies the method of paratextual analysis to six electronic books, or ebooks, in an academic library collection at a small liberal arts college. Two books are selected from each of three platforms: ebrary, EBSCO, and SpringerLink. The characteristics of each book are described, including design and readership, along with two years of usage statistics from the specific library and altmetrics where available. The paratextual study leads to a closer investigation of the usage statistics themselves and concludes that, despite industry standards, they are not calculated consistently across vendor platforms; while these data are invisible to researchers outside the library, there are also essential elements that librarians mistakenly take at face value when comparing ebook usage from multiple vendors.