
    ANALYSIS OF REAL-TIME OBJECT DETECTION METHODS FOR ANDROID SMARTPHONE

    This paper presents an analysis of real-time object detection methods for embedded systems, in particular the Android smartphone. Object detection algorithms are computationally demanding and traditionally require high-performance hardware to run in real time; however, advances in both embedded hardware and detection algorithms mean that current embedded devices may be able to execute them in real time. In this study, we identify the best object detection algorithm with respect to efficiency, quality, and robustness of the detection. Many algorithms are compared, including Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Center Surround Extremas (CenSurE), Good Features To Track (GFTT), Maximally Stable Extremal Regions (MSER), Oriented FAST and Rotated BRIEF (ORB), and Features from Accelerated Segment Test (FAST), all running on a Samsung Galaxy S Android smartphone. The results show that FAST offers the best combination of speed and object detection performance.
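The FAST detector singled out above is simple enough to sketch in a few lines. The following is a toy, unoptimized version of its segment test; the threshold `t=20` and arc length `n=12` are illustrative defaults, not the paper's settings, and a real application would use an optimized implementation such as OpenCV's `FastFeatureDetector`.

```python
import numpy as np

# Offsets (row, col) of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def fast_corner(img, r, c, t=20, n=12):
    """Segment test: (r, c) is a corner if at least n contiguous circle
    pixels are all brighter than img[r, c] + t, or all darker than img[r, c] - t."""
    p = int(img[r, c])
    vals = [int(img[r + dr, c + dc]) for dr, dc in CIRCLE]
    for sign in (1, -1):
        flags = [(v - p) * sign > t for v in vals]
        flags = flags + flags  # duplicate so wrap-around runs are counted
        run = best = 0
        for f in flags:
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

img = np.zeros((16, 16), dtype=np.uint8)
img[8, 8] = 255
print(fast_corner(img, 8, 8))  # True: an isolated bright pixel passes the segment test
```

The per-pixel test is cheap (only comparisons, no gradients), which is why FAST wins on speed in comparisons like the one above.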

    Stereo Visual Odometry for Indoor Localization of Ship Model

    Typically, ships are designed for open-sea navigation, and research on autonomous ships is therefore mostly done for that setting. This paper explores the possibility of using low-cost sensors for localization inside a small navigation area. The localization system is based on the technology used for developing autonomous cars. Its main part is visual odometry using stereo cameras, fused with Inertial Measurement Unit (IMU) data through Kalman and particle filters, to reach decimetre-level accuracy inside a basin under different surface conditions. The visual odometry uses cropped stereo frames and the Good Features to Track algorithm to extract features, obtaining a depth for each feature that is then used to estimate the ship model's movement. Experimental results showed that the proposed system could localize the model with decimetre accuracy, implying a real possibility of ships using visual odometry for autonomous navigation on narrow waterways, which could have a significant impact on future transportation.
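The per-feature depth step described above reduces, for a rectified stereo pair, to the standard disparity relation Z = f·B/d. A minimal sketch, with made-up calibration values (the paper's actual camera parameters are not given here):

```python
# Illustrative stereo calibration values (assumptions, not the paper's):
f_px = 700.0        # focal length in pixels
baseline_m = 0.12   # distance between the two camera centres in metres

def depth_from_disparity(x_left, x_right):
    """Depth Z = f * B / d for a rectified pair, where d = x_left - x_right
    is the horizontal disparity of the same feature in the two images."""
    d = x_left - x_right
    if d <= 0:
        return float("inf")  # feature at infinity, or a bad match
    return f_px * baseline_m / d

print(depth_from_disparity(320.0, 300.0))  # 700 * 0.12 / 20 = 4.2 metres
```

Larger disparities mean closer features, which is why cropping the frames to the region of interest (as the paper does) does not hurt nearby-depth estimation.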

    Comparison of Feature Extractors for Real-Time Object Detection on Android Smartphone

    This paper presents an analysis of real-time object detection methods for embedded systems, particularly the Android smartphone. Object detection algorithms are computationally demanding and traditionally require high-performance hardware to run in real time; however, advances in embedded hardware and detection algorithms mean that current embedded devices may be able to execute them in real time. In this study, we identify the best object detection algorithm with respect to its efficiency, quality, and robustness. Several object detection algorithms are compared, including Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Center Surround Extremas (CenSurE), Good Features To Track (GFTT), Maximally Stable Extremal Regions (MSER), Oriented FAST and Rotated BRIEF (ORB), and Features from Accelerated Segment Test (FAST), running on a Samsung Galaxy S Android smartphone. The results show that FAST offers the best combination of speed and detection performance.

    Cloud tracking with optical flow for short-term solar forecasting

    A method for tracking and predicting cloud movement using ground-based sky imagery is presented. Sequences of partial sky images, each taken one second apart at a size of 640 by 480 pixels, were processed to determine the time taken for clouds to reach a user-defined region in the image, or the Sun. Clouds were first identified by segmenting each image on the difference between the blue and red colour channels, producing a binary detection image. Good features to track were then located in the image and tracked using the Lucas-Kanade method for optical flow. From the trajectories of the tracked features and the binary detection image, cloud signals were generated, and the trajectories of individual features were used to determine the risky cloud signals (those that pass over the user-defined region or the Sun). Time-to-collision estimates were produced by merging these risky cloud signals. Estimates of up to 40 seconds were achieved, with the error in the estimate increasing as the estimated time grows. The method presented has the potential to track clouds travelling in different directions and at different velocities.
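The blue-minus-red segmentation described above can be sketched in a few lines; the threshold value here is an illustrative assumption, not the paper's tuned setting.

```python
import numpy as np

def cloud_mask(rgb, threshold=30):
    """Clear sky scatters strongly in blue, so a large blue-minus-red
    difference indicates sky; a small one indicates (grey/white) cloud."""
    b = rgb[..., 2].astype(np.int16)  # blue channel (RGB order assumed)
    r = rgb[..., 0].astype(np.int16)  # red channel
    return (b - r) < threshold        # True where cloud is detected

# One sky-blue pixel and one grey cloud pixel:
frame = np.array([[[50, 100, 255], [200, 200, 210]]], dtype=np.uint8)
print(cloud_mask(frame).tolist())  # [[False, True]]
```

The resulting binary image is what the tracked Lucas-Kanade feature trajectories are intersected with to decide which cloud signals are "risky".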

    Pointcatcher software: analysis of glacial time-lapse photography and integration with multi-temporal digital elevation models

    Terrestrial time-lapse photography offers insight into glacial processes through high spatial and temporal resolution imagery. However, oblique camera views complicate measurement in geographic coordinates and lead to reliance on specific imaging geometries or simplifying assumptions for calculating parameters such as ice velocity. We develop a novel approach that integrates time-lapse imagery with multi-temporal digital elevation models to derive full 3D coordinates for natural features tracked throughout a monoscopic image sequence. This enables daily independent measurement of horizontal (ice flow) and vertical (ice melt) velocities. By combining two terrestrial laser scanner surveys with a 73-day sequence from Sólheimajökull, Iceland, variations in horizontal ice velocity of ~10% were identified over timescales of ~25 days. An overall surface elevation decrease of ~3.0 m showed rate changes asynchronous with the horizontal velocity variations, demonstrating a temporal disconnect between the processes of ice surface lowering and the mechanisms of glacier movement. Our software, 'Pointcatcher', is freely available for user-friendly interactive processing of general time-lapse sequences and includes Monte Carlo error analysis and uncertainty projection onto DEM surfaces. It is particularly suited to the analysis of challenging oblique glacial imagery, and we discuss good features to track, both for correction of camera motion and for deriving ice velocities.

    Learning Articulated Motions From Visual Demonstration

    Many functional elements of human homes and workplaces consist of rigid components which are connected through one or more sliding or rotating linkages. Examples include doors and drawers of cabinets and appliances; laptops; and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion. We envision that in the future, a machine newly introduced to an environment could be shown by its human user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user. Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, a robot can observe an object being exercised, infer a kinematic model incorporating rigid, prismatic and revolute joints, then use the model to predict the object's motion from a novel vantage point. We evaluate the method's performance, and compare it to that of a previously published technique, for a variety of household objects.

    Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN: 978-0-9923747-0-

    Learning Rank Reduced Interpolation with Principal Component Analysis

    In computer vision, most iterative optimization algorithms, both sparse and dense, rely on a coarse but reliable dense initialization to bootstrap their optimization procedure. For example, dense optical flow algorithms profit massively in speed and robustness if they are initialized well within the basin of convergence of the loss function used. The same holds true for methods such as sparse feature tracking, when initial flow or depth information is needed for new features at arbitrary positions. This makes it extremely important to have techniques at hand that can obtain, from only very few available measurements, a dense but still approximate sketch of a desired 2D structure (e.g. depth maps, optical flow, disparity maps). The 2D map is regarded as a sample from a 2D random process. The method presented here exploits the complete information given by the principal component analysis (PCA) of that process: the principal basis and its prior distribution. The method is able to determine a dense reconstruction from sparse measurements. When facing situations with only very sparse measurements, the number of principal components is typically reduced further, which results in a loss of expressiveness of the basis. We overcome this problem by injecting prior knowledge in a maximum a posteriori (MAP) approach. We test our approach on the KITTI and virtual KITTI datasets, focusing on the interpolation of depth maps for driving scenes. The results show good agreement with the ground truth and are clearly better than interpolation by the nearest-neighbour method, which disregards statistical information.

    Comment: Accepted at Intelligent Vehicles Symposium (IV), Los Angeles, USA, June 201
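A toy sketch of the core idea above: reconstructing a dense map from sparse samples by solving for coefficients in a truncated PCA basis. The MAP prior weighting the paper describes is omitted; a plain least-squares fit in the reduced basis is shown instead, on synthetic low-rank data rather than depth maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" maps: a low-rank random process on an 8x8 grid (64 pixels).
n, k = 64, 4
gen_basis = rng.normal(size=(n, k))
train = gen_basis @ rng.normal(size=(k, 200))   # 200 training samples
mean = train.mean(axis=1)

# PCA of the training set; keep the k strongest principal components.
U, _, _ = np.linalg.svd(train - mean[:, None], full_matrices=False)
U_k = U[:, :k]

# A new dense map from the same process, observed at only 12 positions.
x_true = gen_basis @ rng.normal(size=k)
idx = rng.choice(n, size=12, replace=False)
y = x_true[idx]

# Least-squares coefficients from the sparse measurements,
# then dense reconstruction in the principal basis.
c, *_ = np.linalg.lstsq(U_k[idx], y - mean[idx], rcond=None)
x_hat = mean + U_k @ c

print(float(np.abs(x_hat - x_true).max()))  # near zero: the dense map is recovered
```

Because the synthetic data truly lies in a k-dimensional subspace, 12 measurements suffice to pin down the k coefficients exactly; for real depth maps the fit is only approximate, which is where the paper's MAP prior on the coefficients becomes important.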