3,617 research outputs found

    Design and development of a vision based leather trimming machine

    The objective of the work described in this paper is to demonstrate a laboratory prototype for trimming the external part of a hide, assuming that the resulting machine would eventually form part of a completely automatic system in which hides are uploaded, inspected, and parts for assembly are downloaded without manual intervention or prior sorting. A detailed literature review and relevant international standards are included. The expected advantages of integrating all vision-based functions in a single machine, whose basic architecture is proposed in the paper, are also discussed. The developed system is based on a monochrome camera following the leather contour. This work focuses on the image-processing algorithms for defect detection on leather and on the NC programming issues related to path-following optimization, which have been successfully tested with different leather types.
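    As a rough illustration of the defect-detection step (a sketch, not the authors' actual algorithm), dark blemishes on an otherwise bright hide can be flagged by simple intensity thresholding; the threshold value and the toy image below are assumptions:

    ```python
    import numpy as np

    def defect_mask(gray, threshold=60):
        """Return a boolean mask of candidate defect pixels:
        pixels darker than `threshold` on an otherwise bright hide."""
        return gray < threshold

    # Toy 5x5 "leather patch": bright background (200) with one dark defect (30).
    patch = np.full((5, 5), 200, dtype=np.uint8)
    patch[2, 2] = 30
    mask = defect_mask(patch)
    print(mask.sum())  # number of flagged defect pixels -> 1
    ```

    A real system would follow this with connected-component filtering to discard isolated noise pixels before planning the trimming path.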

    Cost-effective HPC clustering for computer vision applications

    We present a cost-effective and flexible realization of high-performance computing (HPC) clustering and its potential for solving computationally intensive problems in computer vision. The software foundation supporting the parallel programming is the GNU parallel Knoppix package, with message passing interface (MPI) based Octave, Python, and C interface capabilities. The implementation is of particular interest in applications where the main objectives are to reuse the existing hardware infrastructure and to keep the overall budget down. We present benchmark results and compare and contrast the performance of Octave and MATLAB.
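    The scatter/gather pattern behind such an MPI cluster can be sketched in miniature; the sketch below uses Python threads as a stand-in for MPI ranks, and `process_frame` is a hypothetical placeholder for a per-image vision task:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def process_frame(frame_id):
        # Placeholder for a per-frame vision task (e.g. filtering one image).
        return frame_id * frame_id

    def run_cluster(frames, workers=4):
        # Scatter the frames across workers and gather the results,
        # mirroring the scatter/gather structure of an MPI job.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(process_frame, frames))

    print(run_cluster(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
    ```

    In the actual cluster, each "worker" would be an MPI process on a separate node, but the split-process-gather structure is the same.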

    Motion analysis report

    Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing de-couples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-degree-of-freedom tracking systems, and image-processing systems based on multiple views and photogrammetric calculations.

    A Gesture-based Recognition System for Augmented Reality

    With the geometric growth of Information Technology, current conventional input devices are becoming increasingly obsolete and limiting. Experts in Human-Computer Interaction (HCI) are convinced that input devices remain the bottleneck of information acquisition, specifically when using Augmented Reality (AR) technology. Current input mechanisms are unable to keep up with the trend towards naturalness and expressivity, which allows users to perform natural gestures or operations and have them converted into input. Hence, a more natural and intuitive input device is imperative, specifically gestural input, which has been widely perceived by HCI experts as the next big input modality. To address this gap, this project set out to develop a prototype hand gesture recognition system based on computer vision for modeling basic human-computer interactions. The main motivation in this work is a technology that requires no outfitting of additional equipment whatsoever by the users. The gesture-based hand recognition system was implemented using the Rapid Application Development (RAD) methodology and was evaluated in terms of its usability and performance through five levels of testing: unit testing, integration testing, system testing, recognition accuracy testing, and user acceptance testing. Unit, integration, and system testing, as well as user acceptance testing, produced favorable results. In conclusion, current conventional input devices will continue to bottleneck this advancement in technology; therefore, a better alternative input technique should be investigated, in particular a gesture-based input technique, which offers users more natural and intuitive control.
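    A vision-based gesture recognizer of this kind ultimately maps extracted hand features to a known gesture. A minimal nearest-template sketch is shown below; the feature vectors and gesture names are invented for illustration and are not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical gesture templates: 4-value feature vectors
    # (e.g. normalized fingertip extensions) -- illustrative only.
    TEMPLATES = {
        "open_palm": np.array([1.0, 1.0, 1.0, 1.0]),
        "fist":      np.array([0.1, 0.1, 0.1, 0.1]),
        "point":     np.array([1.0, 0.1, 0.1, 0.1]),
    }

    def classify(features):
        # Nearest-template classification by Euclidean distance.
        return min(TEMPLATES, key=lambda g: np.linalg.norm(TEMPLATES[g] - features))

    print(classify(np.array([0.9, 0.2, 0.1, 0.2])))  # -> point
    ```

    A production system would replace the hand-crafted templates with features extracted from the camera image, but the classify-by-similarity step is the same.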

    Retinal Area Segmentation using Adaptive Superpixalation and its Classification using RBFN

    Retinal disease is a very important issue in the medical field. To diagnose such disease, the true retinal area must first be detected. Artefacts like eyelids and eyelashes come along with the retinal part, so artefact removal is a major task for better diagnosis of disease in the retinal area. In this paper, we propose a segmentation method and use machine learning approaches to detect the true retinal part. Preprocessing is performed on the original image using Gamma Normalization, which enhances the image and brings out detailed information. Segmentation is then performed on the Gamma-Normalized image by a superpixel method. Superpixelation groups pixels into regions based on compactness and regional size; it reduces the complexity of image-processing tasks and provides suitable primitive image patterns. Features are then generated, and a machine learning approach (a Radial Basis Function Network, RBFN) extracts the true retinal area. The experimental evaluation gives good results, with an accuracy of 96%.
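    A minimal sketch of superpixel assignment (one SLIC-like labeling pass, not the paper's adaptive method) shows how pixels are grouped by a combined intensity and spatial distance; the grid size and the compactness weight `m` below are assumptions:

    ```python
    import numpy as np

    def superpixel_labels(gray, grid=2, m=10.0):
        """Seed cluster centers on a regular grid, then label each pixel by
        its nearest seed under a combined intensity + spatial distance
        (`m` weights spatial compactness against intensity similarity)."""
        h, w = gray.shape
        step_y, step_x = h // grid, w // grid
        seeds = []
        for i in range(grid):
            for j in range(grid):
                y = i * step_y + step_y // 2
                x = j * step_x + step_x // 2
                seeds.append((y, x, float(gray[y, x])))
        labels = np.zeros((h, w), dtype=int)
        for y in range(h):
            for x in range(w):
                best, best_d = 0, np.inf
                for k, (sy, sx, sv) in enumerate(seeds):
                    d_color = (float(gray[y, x]) - sv) ** 2
                    d_space = (y - sy) ** 2 + (x - sx) ** 2
                    d = d_color + m * d_space
                    if d < best_d:
                        best, best_d = k, d
                labels[y, x] = best
        return labels

    toy = np.arange(16, dtype=float).reshape(4, 4)
    print(len(np.unique(superpixel_labels(toy))))  # 4 superpixels
    ```

    Real SLIC iterates this assignment and re-centers the seeds; each resulting superpixel then contributes one feature vector to the RBFN classifier.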