
    Segmentasi dan pengesanan objek bergerak dalam keadaan cuaca berjerebu dan berkabus (Segmentation and detection of moving objects in hazy and foggy weather conditions)

    Segmentation and detection of moving objects are essential in navigation applications that rely on computer vision. The key challenge is how these two tasks cope with hazy and foggy weather, which degrades the video data used to detect moving objects: light scattered by fog and haze prevents clear imaging and leads to over-segmentation. Various methods have been used to improve accuracy and sensitivity under over-segmentation, but further work is needed to improve moving-object detection performance. In this research, a new method is proposed to overcome over-segmentation by combining a Gaussian Mixture Model with other filters chosen for their individual strengths: a Median Filter and an Average Filter to address over-segmentation, a Morphology Filter and a Gaussian Filter to rebuild the structuring elements of object pixels, and a combination of Blob Analysis, Bounding Box and Kalman Filter to reduce false-positive detections. This combination of filters is called Object of Interest Movement (OIM). Qualitative and quantitative comparisons were made against previous methods, using haze recordings obtained from YouTube and an open dataset from Karlsruhe; comparative analysis of images and object-detection measurements was performed. Results showed that the combined filters improve the accuracy and sensitivity of segmentation and detection, reaching 72.24% for foggy videos and 76.73% in hazy weather. Based on these findings, the OIM method improves the accuracy of segmentation and object detection without requiring contrast enhancement of the image
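
    The pipeline described above maps naturally onto standard OpenCV building blocks. The following is a minimal sketch of such a GMM-plus-filters pipeline, not the authors' implementation: the kernel sizes, thresholds, the single shared Kalman filter and the input path "hazy_traffic.mp4" are assumptions for illustration only.

```python
# Sketch of a GMM + combined-filter pipeline in the spirit of the OIM method.
# All parameters and the video path are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("hazy_traffic.mp4")          # hypothetical input video
gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

# A single constant-velocity Kalman filter is shared here to keep the sketch
# short; a real tracker would maintain one filter per object.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 1. GMM foreground mask (over-segmented under haze/fog).
    fg = gmm.apply(frame)

    # 2. Median + average filtering to suppress speckle caused by scattered light.
    fg = cv2.medianBlur(fg, 5)
    fg = cv2.blur(fg, (5, 5))

    # 3. Morphological closing + Gaussian smoothing to rebuild object structure.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
    fg = cv2.GaussianBlur(fg, (5, 5), 0)
    _, fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)

    # 4. Blob analysis + bounding boxes; Kalman prediction smooths detections
    #    and helps reject isolated false positives.
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 200:                 # assumed minimum blob size
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2, y + h / 2
        kf.correct(np.array([[cx], [cy]], np.float32))
        px, py = kf.predict()[:2].flatten()
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.circle(frame, (int(px), int(py)), 3, (0, 0, 255), -1)

    cv2.imshow("OIM-style detection (sketch)", frame)
    if cv2.waitKey(30) & 0xFF == 27:                 # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

    In a complete tracker, one Kalman filter per detected object would be maintained and associated across frames; the shared filter above only illustrates how prediction can be combined with blob analysis and bounding boxes.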

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model describes face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. These social signals have to be interpreted through a recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction with statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered
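
    As a rough illustration of the kind of statistical evaluation the model calls for, the sketch below simulates a gaze-based recognition phase and measures its effectiveness with simple correlation and accuracy statistics. The simulated cues, the noise level and the threshold are assumptions for illustration only; this is not Berrick's recognition pipeline.

```python
# Toy evaluation of a recognition phase in a Brunswick-style setting:
# correlate what the human intends to convey through gaze with what the
# robot recognizes. All data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Distal state: whether the human is actually attending to the robot (0/1).
intended = rng.integers(0, 2, size=500)

# Proximal cue: the externalized gaze signal, corrupted by sensing noise.
cue = intended + rng.normal(0.0, 0.4, size=intended.size)

# Recognition phase: threshold the noisy cue back into a judgment.
recognized = (cue > 0.5).astype(int)

# Simple effectiveness measures for the recognition phase.
accuracy = (recognized == intended).mean()
validity = np.corrcoef(intended, cue)[0, 1]             # cue validity
achievement = np.corrcoef(intended, recognized)[0, 1]   # overall achievement

print(f"cue validity     r = {validity:.2f}")
print(f"achievement      r = {achievement:.2f}")
print(f"recognition acc.   = {accuracy:.2%}")
```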

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by different sources of perturbation (e.g. sensor noise in low-light conditions). The main challenge arises from the fact that image quality directly affects the reliability and consistency of classification, and the problem has therefore attracted wide interest within the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of the input images are computed with the CORF push-pull inhibition operator, which transforms an input image into a representation that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classifier without CORF delineation maps, but it consistently achieved significantly better performance on test images perturbed with different levels of Gaussian and uniform noise
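
    A minimal sketch of the "delineate, then classify" pipeline is given below. Since the CORF push-pull operator is not part of the standard Python stack, a Sobel edge map stands in for the delineation step; the dataset, the AlexNet model and the noise applied only at test time follow the setup described in the abstract, while the kernel, noise level and resizing choices are assumptions.

```python
# Sketch: transform images into delineation-like maps before a CNN.
# A Sobel magnitude map is used as a stand-in for the CORF push-pull operator.
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms, models

def delineate(img: torch.Tensor) -> torch.Tensor:
    """Stand-in delineation map (Sobel gradient magnitude), not CORF itself."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    x = img.unsqueeze(0)                         # (1, 1, H, W)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2).squeeze(0)

def add_gaussian_noise(img: torch.Tensor, sigma: float = 0.2) -> torch.Tensor:
    """Perturbation applied to test images only, as in the evaluation."""
    return (img + sigma * torch.randn_like(img)).clamp(0.0, 1.0)

# AlexNet expects 3-channel 224x224 input, so the grayscale delineation map
# is resized and replicated across channels.
to_alexnet = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Lambda(lambda x: x.repeat(3, 1, 1)),
])
train_tf = transforms.Compose([transforms.ToTensor(), delineate, to_alexnet])
test_tf = transforms.Compose([transforms.ToTensor(), add_gaussian_noise,
                              delineate, to_alexnet])

train_set = datasets.FashionMNIST("data", train=True, download=True, transform=train_tf)
test_set = datasets.FashionMNIST("data", train=False, download=True, transform=test_tf)

model = models.alexnet(num_classes=10)           # trained from scratch in practice

# Shape check with a single example: (1, 3, 224, 224) -> (1, 10) logits.
img, label = train_set[0]
logits = model(img.unsqueeze(0))
print(logits.shape)
```

    The usual training and evaluation loops are omitted; the point of the sketch is only that the delineation transform sits in the data pipeline, so clean training images and noisy test images are both mapped into the same, more noise-robust representation before reaching the CNN.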