30 research outputs found

    Comparing Computing Platforms for Deep Learning on a Humanoid Robot

    The goal of this study is to test two computing platforms with respect to their suitability for running deep networks as part of a humanoid robot software system. One platform is the CPU-centered Intel NUC7i7BNH; the other is an NVIDIA Jetson TX2 system that puts more emphasis on GPU processing. The experiments addressed a number of benchmarking tasks, including pedestrian detection using deep neural networks. Some of the results were unexpected, but they demonstrate that both platforms exhibit advantages and disadvantages once the computational performance and electrical power requirements of such a system are taken into account.
    Comment: 12 pages, 5 figures
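    Comparisons like this typically come down to timing a representative workload on each platform and relating the result to power draw. A minimal, hardware-agnostic sketch of the timing half (the workload, warm-up count, and run count below are illustrative assumptions, not the paper's protocol):

```python
import time
import numpy as np

def mean_runtime(fn, warmup=3, runs=20):
    """Average wall-clock time of fn(): warm up first, then time several runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Stand-in for a network forward pass: one dense layer on a small batch.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 512))
w = rng.standard_normal((512, 128))
t = mean_runtime(lambda: x @ w)
print(f"mean 'inference' time: {t * 1e3:.3f} ms")
```

    Dividing such a latency by the platform's measured power consumption gives the performance-per-watt figure that usually decides between a CPU-centric and a GPU-centric board.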

    STROKE DISEASE PREDICTION USING SUPPORT VECTOR MACHINE (SVM)

    According to data from the Indonesian Ministry of Health, the number of stroke cases increased by 3.9% between 2013 and 2018. Nationally, stroke cases occur most often in the 55-64 age group and least often in the 15-24 age group. A stroke (cerebrovascular accident) is a condition in which blood flow to the brain is suddenly disrupted or reduced. It can be caused by a blockage or rupture of a blood vessel, so that cells in the affected brain area no longer receive the blood supply that carries nutrients and oxygen. Early detection is needed to reduce the number of potential deaths from stroke. Stroke prediction remains a challenge in medicine, partly because medical data are highly heterogeneous and complex. Machine learning is a data-analysis approach that can be used to predict stroke, and various machine learning models have been proposed by previous researchers, including the Support Vector Machine. This study re-applies the SVM algorithm and obtains better performance than previous work, with an accuracy of 100% and a ROC-AUC of 100%. Further examination of how the results reach 100% is still needed.
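    The abstract names SVM but not its configuration, so here is a generic linear SVM trained on synthetic, linearly separable two-class data (Pegasos-style hinge-loss subgradient descent; the data, features, and hyper-parameters are all invented for illustration and have nothing to do with the paper's stroke dataset):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the hinge loss.
    X: (n, d) features, y: labels in {-1, +1}. Returns weight vector w."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1:          # margin violated: hinge gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                               # only the regularizer shrinks w
                w = (1 - eta * lam) * w
    return w

# Toy, linearly separable two-feature data (purely synthetic).
rng = np.random.default_rng(1)
pos = rng.normal(loc=2.0, size=(50, 2))
neg = rng.normal(loc=-2.0, size=(50, 2))
X = np.hstack([np.vstack([pos, neg]), np.ones((100, 1))])  # constant bias feature
y = np.array([1] * 50 + [-1] * 50)

w = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

    A perfect score on data this easy is expected; on real, imbalanced medical data a 100% accuracy and ROC-AUC is exactly the kind of result the abstract itself says needs re-examination (e.g. for data leakage between train and test splits).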

    Super-resolution of synthetic aperture radar complex data by deep-learning

    One of the greatest limitations of Synthetic Aperture Radar (SAR) imagery is the capability to obtain an arbitrarily high spatial resolution. Unlike optical sensors, this capability is not limited only by the sensor technology. Instead, improving the SAR spatial resolution requires a large transmitted bandwidth and relatively long synthetic apertures, requirements that for regulatory and practical reasons are impossible to meet. This issue is particularly relevant when dealing with Stripmap-mode acquisitions and with relatively low carrier-frequency sensors (where relatively large-bandwidth signals are more difficult to transmit). To overcome this limitation, this paper proposes a deep-learning-based framework that enhances the SAR image spatial resolution while retaining the accuracy of the complex image. Results on simulated and real SAR data demonstrate the effectiveness of the proposed framework.
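    The bandwidth limitation the abstract refers to follows from the standard SAR resolution relations: slant-range resolution is c/(2B), and Stripmap azimuth resolution is roughly half the physical antenna length. A textbook sketch (not the paper's model):

```python
C = 299_792_458.0  # speed of light, m/s

def slant_range_resolution(bandwidth_hz):
    """delta_r = c / (2B): finer range resolution needs a wider transmitted bandwidth."""
    return C / (2.0 * bandwidth_hz)

def stripmap_azimuth_resolution(antenna_length_m):
    """In Stripmap mode, azimuth resolution is about half the antenna length."""
    return antenna_length_m / 2.0

# Example: a 150 MHz chirp gives roughly 1 m slant-range resolution,
# so sub-metre resolution forces a bandwidth that regulation may not allow.
print(slant_range_resolution(150e6))
print(stripmap_azimuth_resolution(10.0))
```

    These relations are why the paper turns to learned super-resolution: the only other routes to finer pixels are more bandwidth or a longer synthetic aperture, both of which the abstract says are off the table.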

    Early Classifying Multimodal Sequences

    Often pieces of information are received sequentially over time. When has one collected enough such pieces to classify? Trading wait time for decision certainty leads to early-classification problems, which have recently gained attention as a means of adapting classification to more dynamic environments. However, so far results have been limited to unimodal sequences. In this pilot study, we expand into early classification of multimodal sequences by combining existing methods. We show that our new method yields experimental AUC advantages of up to 8.7%.
    Comment: 7 pages, 5 figures
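    The wait-time-versus-certainty trade-off can be illustrated with a simple probabilistic stopping rule: accumulate evidence step by step and stop as soon as one class is confident enough. This is our own minimal sketch of the problem setting, not the paper's learned method:

```python
import numpy as np

def early_classify(step_likelihoods, threshold=0.95):
    """Accumulate per-step class likelihoods into a posterior (uniform prior)
    and stop as soon as one class exceeds the confidence threshold.
    step_likelihoods: (T, K) array; returns (predicted class, steps consumed)."""
    K = step_likelihoods.shape[1]
    log_post = np.zeros(K)
    post = np.full(K, 1.0 / K)
    for t, lik in enumerate(step_likelihoods, start=1):
        log_post += np.log(lik)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:
            break                      # confident enough: classify early
    return int(post.argmax()), t

# A stream whose evidence mildly favors class 0 at every step.
stream = np.tile([0.9, 0.1], (10, 1))
label, used = early_classify(stream)
print(label, used)  # prints "0 2": confident after 2 of 10 steps
```

    Raising the threshold buys certainty at the cost of wait time; a multimodal version would fuse per-modality evidence into the same running posterior.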

    Spoken Language Identification System for English-Mandarin Code-Switching Child-Directed Speech

    This work focuses on improving a Spoken Language Identification (LangId) system for a challenge on developing robust language-identification systems that are reliable for non-standard, accented (Singaporean accent), spontaneous, code-switched, and child-directed speech collected via Zoom. We propose a two-stage Encoder-Decoder-based E2E model. The encoder module consists of 1D depth-wise separable convolutions with Squeeze-and-Excitation (SE) layers with a global context. The decoder module uses an attentive temporal pooling mechanism to obtain a fixed-length, time-independent feature representation. The total number of parameters in the model is around 22.1 M, which is relatively light compared to some large-scale pre-trained speech models. We achieved an EER of 15.6% in the closed track and 11.1% in the open track (baseline system: 22.1%). We also curated additional LangId data from YouTube videos featuring Singaporean speakers, which will be released for public use.
    Comment: Accepted by Interspeech 2023; 5 pages, 1 figure, 4 tables
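    Attentive temporal pooling collapses a variable-length sequence of frame embeddings into one fixed-length vector by letting a learned scorer decide which frames matter. A generic single-head version in numpy (the paper's exact parameterization may differ, e.g. it may also pool weighted standard deviations; all shapes and names here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attentive_temporal_pooling(H, w_att):
    """Collapse a (T, D) sequence of frame embeddings into one (D,) vector:
    score each frame, softmax over time, return the weighted average."""
    scores = np.tanh(H) @ w_att   # (T,) one relevance score per frame
    alpha = softmax(scores)       # attention weights over time, sum to 1
    return alpha @ H              # (D,) fixed-length representation

rng = np.random.default_rng(0)
T, D = 50, 16                     # 50 frames, 16-dim embeddings
H = rng.standard_normal((T, D))
w_att = rng.standard_normal(D)
pooled = attentive_temporal_pooling(H, w_att)
print(pooled.shape)               # (16,) regardless of the number of frames
```

    The time-independence the abstract mentions falls out directly: whatever T is, the pooled vector has dimension D, so a fixed-size classifier head can follow.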

    Intent recognition in smart living through deep recurrent neural networks

    Electroencephalography (EEG) based intent recognition has recently attracted much attention in both academia and industry, because it can help elderly or motor-disabled people control smart devices and communicate with the outside world. However, the use of EEG signals is hampered by low accuracy and by arduous, time-consuming feature extraction. This paper proposes a 7-layer deep learning model that classifies raw EEG signals to recognize subjects' intents, avoiding the time spent on pre-processing and feature extraction. The hyper-parameters are selected efficiently with an Orthogonal Array experiment method. Our model is applied to an open EEG dataset provided by PhysioNet and achieves an accuracy of 0.9553 on intent recognition. The applicability of the proposed model is further demonstrated by two smart-living use cases (assisted living with robotics and home automation).
    Comment: 10 pages, 5 figures, 5 tables, 21 conference
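    The end-to-end idea in the title (raw multichannel EEG in, intent class out, via a recurrent network) can be sketched as a single vanilla-RNN pass followed by a softmax head. The actual model has 7 layers with hyper-parameters tuned by orthogonal-array search; every size, name, and weight below is an illustrative assumption:

```python
import numpy as np

def rnn_classify(x, params):
    """Vanilla RNN over raw EEG frames, then a softmax intent head.
    x: (T, C) sequence of T time steps with C channels."""
    h = np.zeros(params["w_hh"].shape[0])
    for frame in x:                # consume the raw signal one sample at a time
        h = np.tanh(params["w_xh"] @ frame + params["w_hh"] @ h + params["b_h"])
    logits = params["w_hy"] @ h + params["b_y"]
    e = np.exp(logits - logits.max())
    return e / e.sum()             # probability per intent class

rng = np.random.default_rng(0)
C, H, K = 4, 16, 5                 # channels, hidden units, intent classes
params = {
    "w_xh": rng.standard_normal((H, C)) * 0.1,
    "w_hh": rng.standard_normal((H, H)) * 0.1,
    "b_h": np.zeros(H),
    "w_hy": rng.standard_normal((K, H)) * 0.1,
    "b_y": np.zeros(K),
}
x = rng.standard_normal((160, C))  # 160 raw samples from 4 EEG channels
probs = rnn_classify(x, params)
print(probs.shape)                 # (5,) one probability per intent
```

    The point of the end-to-end design is visible even in this toy: no hand-crafted spectral features are computed anywhere between the raw samples and the class probabilities.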