2 research outputs found

    Haar-like Feature Extraction for Object Detection in Real-time Video Using OpenCV and Its Implementation on the BeagleBoard-xM Embedded System

    A robot requires appropriate actions to respond to the state of its environment. Environmental conditions can be obtained from sensors connected to a processing unit; one such sensor is a visual sensor. Actions toward objects that can only be recognized visually depend heavily on the robot's ability to process the color or shape information of the object. This study uses object detection based on feature extraction, an approach widely applied in automatic face recognition. These features are then processed with the AdaBoost supervised learning method to learn a specific object. The training stage uses a positive-to-negative sample ratio of 5:3 to build a look-up table. Three sets of training tables were created with different numbers of samples and training stages. The three table sets were tested on two video files of an environment recorded from two different viewpoints. The study finds that the detection rates on both videos show the same tendency. A simple rule is presented to improve the accuracy of the look-up table. The OpenCV library is used because it can be deployed on a wide range of hardware, including mobile devices and embedded systems. The object detection speed on the BeagleBoard-xM embedded system is also reported.
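
    The detection stage described above corresponds to the standard OpenCV Haar-cascade workflow: load a trained cascade (the look-up table produced by AdaBoost training) and scan each video frame. The sketch below is an illustration of that workflow rather than the authors' code; the cascade XML file, the video file name, and the detectMultiScale parameters are assumed values for the example.

```python
import cv2

# Minimal sketch (not the authors' pipeline): run a trained Haar cascade
# over a video file. "trained_object_cascade.xml", "environment_view1.avi",
# and the detectMultiScale parameters are illustrative assumptions.
cascade = cv2.CascadeClassifier("trained_object_cascade.xml")
video = cv2.VideoCapture("environment_view1.avi")

detections = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each hit is an (x, y, w, h) bounding box for the learned object.
    objects = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    detections += len(objects)

video.release()
print("total detections:", detections)
```

    Counting hits per frame in this way is one simple means of comparing detection rates between the two viewpoints, as the abstract describes.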

    Multi-modal association learning using spike-timing dependent plasticity (STDP)

    We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. Through this approach, the learning rules involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor–choice pairs. Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the spike-timing-dependent plasticity (STDP) algorithm, which depends on the timing and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. We implement learning for response-group association by following reward-modulated STDP in the RL setting, wherein the firing rate of the response groups determines the reward that will be given. We perform a number of experiments using existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations performed to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning using real data: an experiment is conducted on a sample of face–speech data collected in a manner similar to the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model produces high learning performance in combining heterogeneous data (face–speech). This finding opens possibilities for expanding RL in the field of biometric authentication.
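
    The reward-modulated STDP rule described above can be illustrated with a pair-based weight update in which STDP accumulates into an eligibility trace that is later scaled by a reward signal. This is a minimal sketch under assumed constants (amplitudes, time constants, learning rate); the paper's actual implementation runs on Izhikevich spiking neurons and response-group firing rates, which are not reproduced here.

```python
import numpy as np

# Minimal sketch of reward-modulated pair-based STDP with an eligibility trace.
# All constants below are illustrative assumptions, not values from the paper.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS = TAU_MINUS = 20.0        # STDP time constants (ms)
TAU_ELIG = 200.0                   # eligibility-trace time constant (ms)

def stdp_window(dt):
    """Weight-change kernel for a spike-time difference dt = t_post - t_pre."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # pre before post: potentiate
    return -A_MINUS * np.exp(dt / TAU_MINUS)      # post before pre: depress

def update_weight(w, eligibility, dt_spikes, reward, dt_step=1.0, lr=0.1):
    """One step: decay the eligibility trace, add the STDP contribution,
    then apply the trace to the weight scaled by the reward signal."""
    eligibility = eligibility * np.exp(-dt_step / TAU_ELIG) + stdp_window(dt_spikes)
    w = np.clip(w + lr * reward * eligibility, 0.0, 1.0)
    return w, eligibility

# Example: a pre spike 5 ms before a post spike, followed by a positive reward.
w, elig = 0.5, 0.0
w, elig = update_weight(w, elig, dt_spikes=5.0, reward=1.0)
print(round(w, 4))
```

    In the study's setting, the reward term would be derived from the firing rate of the winning response group, so that correct face–speech associations are reinforced and incorrect ones are weakened.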