30 research outputs found

    Implementasi Convolutional Neural Network Untuk Klasifikasi Obyek Pada Citra

    Deep Learning is a new branch of Machine Learning that has developed rapidly in recent years due to advances in GPU acceleration technology. Deep Learning has strong capabilities in computer vision; one of these is object classification in images, a problem that remained unsolved for a long time. This final project implements a form of Deep Learning designed for two-dimensional structured data such as images: the Convolutional Neural Network (CNN). The CIFAR-10 dataset is used because it has long been a classic benchmark for image classification. The CNN model is implemented with the Python Theano library, which is designed to exploit GPU acceleration. During model construction, hyperparameter tuning and memory-usage analysis are also carried out to build better intuition about the behaviour of CNN models. Three CNN architectures are compared, namely DeepCNet, NagadomiNet, and Network in Network, each with a maximum convolutional-layer depth of five. From the experiments, the lowest classification error obtained is 17.69%, achieved by a model using the NagadomiNet architecture, which consists of convolutional layers with 3x3 kernels, Global Average Pooling before the Softmax layer, and inverted dropout with incremental drop rates of 0.1, 0.2, 0.3, 0.4, and 0.5 on the successive convolutional layers.
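The inverted-dropout scheme described above can be sketched as follows. This is a minimal NumPy illustration, not the thesis code: the function name and the toy activations are assumptions, but the rescaling by 1/(1 - drop_rate) at training time and the incremental per-layer rates 0.1 through 0.5 follow the abstract.

```python
import numpy as np

def inverted_dropout(x, drop_rate, rng, train=True):
    """Inverted dropout: zero units with probability drop_rate and rescale
    survivors by 1/(1 - drop_rate), so no rescaling is needed at test time."""
    if not train or drop_rate == 0.0:
        return x
    keep = 1.0 - drop_rate
    mask = rng.random(x.shape) < keep   # True for units that survive
    return x * mask / keep

# Incremental drop rates per convolutional layer, as described in the abstract.
DROP_RATES = [0.1, 0.2, 0.3, 0.4, 0.5]

rng = np.random.default_rng(0)
h = np.ones((4, 8))                     # stand-in for a layer's activations
for rate in DROP_RATES:
    h = inverted_dropout(h, rate, rng)  # applied after each conv layer
```

Because the rescaling happens during training, inference simply runs the network unchanged (`train=False` is the identity), which is the practical appeal of the inverted form.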

    Deep learning for the early detection of harmful algal blooms and improving water quality monitoring

    Climate change will affect how water sources are managed and monitored. The frequency of algal blooms will increase with climate change, as it creates favourable conditions for the reproduction of phytoplankton. During monitoring, sensor failures in monitoring systems result in partially filled data, which may affect critical systems. Imputation therefore becomes necessary to decrease error and increase data quality. This work investigates two issues in water quality data analysis: improving data quality and anomaly detection. It consists of three main topics: data imputation, early algal bloom detection using in-situ data, and early algal bloom detection using multiple modalities. The data imputation problem is addressed by experimenting with various methods on a water quality dataset covering four locations around the North Sea and the Irish Sea with different characteristics and high miss rates, testing model generalisability. A novel neural network architecture with self-attention is proposed in which imputation is done in a single pass, reducing execution time. The self-attention components increase the interpretability of the imputation process at each stage of the network, providing knowledge to domain experts. After data curation, algal activity is predicted using transformer networks, between 1 and 7 days ahead, and the importance of the input with regard to the output of the prediction model is explained using SHAP, aiming to explain model behaviour to domain experts, which is overlooked in previous approaches. The prediction model improves bloom detection performance by 5% on average, and the explanation summarises the complex structure of the model as input-output relationships. Performance improvements on the initial unimodal bloom detection model are made by incorporating multiple modalities into the detection process, which were previously used only for validation. The problem of missing data is also tackled by using coordinated representations, replacing low-quality in-situ data with satellite data and vice versa, instead of imputation, which may produce biased results.
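The idea of single-pass, attention-based imputation can be illustrated with a minimal NumPy sketch. This is not the thesis architecture (which is a full self-attention network); it only shows the core mechanism under simplifying assumptions: missing entries are filled with an attention-weighted combination of the observed time steps, with function names and toy data invented for the example.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_impute(x, mask):
    """Single-pass attention-style imputation sketch.
    x:    (T, D) multivariate series with NaN at missing points
    mask: (T, D) boolean, True where a value was observed
    Each time step attends over the others; missing entries are replaced by
    the attention-weighted average of the (zero-filled) observed rows."""
    filled = np.where(mask, x, 0.0)                  # zero out missing values
    scores = filled @ filled.T / np.sqrt(x.shape[1]) # scaled dot-product similarity
    np.fill_diagonal(scores, -np.inf)                # a step cannot attend to itself
    w = softmax(scores, axis=1)                      # (T, T) attention weights
    est = w @ filled                                 # attention-weighted reconstruction
    return np.where(mask, x, est)                    # keep observed values untouched

x = np.array([[1.0, 2.0, 3.0],
              [np.nan, 2.1, np.nan],
              [1.2, np.nan, 3.1]])
out = attention_impute(x, ~np.isnan(x))
```

Because the whole series is reconstructed in one matrix product rather than iteratively, the imputation is a single forward pass, which is the execution-time advantage the abstract describes; the attention weights `w` are also inspectable, giving the kind of interpretability the work aims for.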

    Alumni Quarterly, Volume 50 Number 2, May 1961

    The Alumni Quarterly of Illinois State Normal University. https://ir.library.illinoisstate.edu/aq/1193/thumbnail.jp

    Consideration of selected social theories of aging as evidenced by patterns of adjustment to retirement among professional football players

    The purpose of this study was to determine whether or not the characteristic patterns of adjustment to retirement among professional football players supported one or more of three current social theories of aging. Data for the study were derived primarily from a structured interview which incorporated questions representative of disengagement theory, identity crisis theory, and activity theory, and questions regarding the individual's professional career in general. Additional data were obtained from three standardized scales given as pencil-paper tests which assessed life satisfaction, morale, and self-esteem. In order to provide an overview of the sample, a written questionnaire was designed to elicit biographical information on each subject. During May and June of 1980, interviews were conducted with five retired professional football players. Rosenberg's Self-esteem Scale (1965), the Life Satisfaction Index B (Neugarten, Havighurst, and Tobin, 1961) and The Revised Philadelphia Geriatric Center Morale Scale (Lawton, 1975) were administered to each subject following the interview session. The biographical questionnaire was given prior to each interview

    Media and Education in the Digital Age

    This book is an invitation to informed and critical participation in the current debate on the role of digital technology in education and a comprehensive introduction to the most relevant issues in this debate. After an early wave of enthusiasm about the emancipative opportunities of the digital «revolution» in education, recent contributions invite caution, if not scepticism. This collection rejects extreme interpretations and establishes a conceptual framework for the critical questioning of this role in terms of concepts, assessments and subversions. This book offers conceptual tools, ideas and insights for further research. It also provides motivation and information to foster active participation in debates and politics and encourages teachers, parents and learners to take part in the making of the future of our societies

    Defect Detection and Classification in Sewer Pipeline Inspection Videos Using Deep Neural Networks

    Sewer pipelines, a critical civil infrastructure, are becoming a concern for municipalities as they near the end of their service lives. Meanwhile, new environmental laws and regulations, city expansion, and budget constraints make these networks harder to maintain. At the same time, accessing and inspecting sewer pipelines with human-entry methods is problematic and risky. Current practice for sewer pipeline assessment uses various types of equipment to inspect the condition of pipelines; one of the most widely used technologies is Closed Circuit Television (CCTV). However, applying the CCTV method to extensive sewer networks requires certified operators to inspect hours of video, which is time-consuming, labor-intensive, and error-prone. The main objective of this research is to develop a framework for automated defect detection and classification in sewer CCTV inspection videos using computer vision techniques and deep neural networks. This study presents innovative algorithms to deal with the complexity of feature extraction and pattern recognition in sewer inspection videos caused by lighting conditions, illumination variations, and the unknown patterns of various sewer defects. The research therefore includes two main sub-models: the first identifies and localizes anomalies in sewer inspection videos, and the second detects and classifies the defects among the recognized anomalous frames. In the first phase, an innovative approach is proposed for identifying frames with potential anomalies and localizing them within the pipe segment being inspected. Normal and anomalous frames are classified using a one-class support vector machine (OC-SVM). The proposed approach employs 3D Scale Invariant Feature Transform (SIFT) to extract spatio-temporal features and capture scene dynamic statistics in sewer CCTV videos. The OC-SVM is trained on frame features considered normal, and outliers to this model are treated as abnormal frames. In the next step, the identified anomalous frames are located by recognizing the text information present in them using an end-to-end text recognition approach. The proposed localization approach works in two steps: first the text regions are detected using the maximally stable extremal regions (MSER) algorithm, then the text characters are recognized using a convolutional neural network (CNN). The performance of the proposed model was tested on videos from real-world sewer inspection reports, where accuracies of 95% and 86% were achieved for anomaly detection and frame localization, respectively. Identifying the anomalous frames and excluding the normal frames from further analysis can reduce the time and cost of detection. It also ensures the accuracy and quality of assessment by reducing the number of anomalous frames missed through operator error. In the second phase, a defect detection framework is proposed to detect and classify defects among the identified anomalous frames. First, a deep Convolutional Neural Network (CNN), pre-trained using transfer learning, is used as a feature extractor. Next, the remaining convolutional layers of the constructed model are trained on a dataset of various types of sewer defects to detect and classify defects in the anomalous frames. The proposed methodology was validated against the ground truth of a dataset including four defect types, and a mAP of 81.3% was achieved. The developed model is expected to help sewer inspectors perform much faster and more accurate pipeline inspections; the whole framework would decrease condition assessment time and increase the accuracy of sewer assessment reports.
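The first-phase anomaly detector, training a one-class SVM on frames assumed normal and flagging outliers, can be sketched with scikit-learn's `OneClassSVM`. The random feature vectors below are a hypothetical stand-in for the 3D SIFT spatio-temporal descriptors used in the thesis; hyperparameters are illustrative, not the thesis settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
# Stand-in per-frame features: in the thesis these would be 3D SIFT descriptors.
normal_frames = rng.normal(0.0, 1.0, size=(200, 16))  # frames assumed normal
odd_frames = rng.normal(6.0, 1.0, size=(10, 16))      # far-away "anomalous" frames

# Train only on normal frames; outliers to this model are treated as anomalies.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(normal_frames)

labels = ocsvm.predict(odd_frames)  # +1 = normal frame, -1 = anomalous frame
```

The `nu` parameter upper-bounds the fraction of training frames allowed outside the learned boundary, which is how the detector trades false alarms against missed anomalies.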

    A survey of the application of soft computing to investment and financial trading

    Footsteps through sacred heart college: surfacing archival heritage through walking and mapping

    Submitted in partial fulfilment of the degree of Master of Arts by Coursework and Research Report, University of the Witwatersrand, Johannesburg, 2017. MT 201

    Kelowna Courier
