12 research outputs found

    Comparison of Sensors for Fault Detection and Replacement of Temperature Sensors in Temporary Wheat Flour Storage

    Get PDF
    This study addresses the diagnosis of faulty temperature-sensor readings in a wheat storage facility using a structural analysis technique. Data-based structural analysis is used to analyze the condition of the system, and the performance and reading speed of the sensors are also compared. The method used in this study is redundancy analysis and comparison of data from the sensor system. Readings from the primary sensor are compared with data stored in the system; when the sensor data is similar to the stored data, the sensor is considered normal. If the data does not match, the backup sensor's data is used as the next point of comparison, and if no similarity is found there either, the sensor is declared faulty. Several experiments produced a comparison of the sensors' response speed to temperature changes. The Texas Instruments sensor (LM35) reads temperature changes faster than the Maxim Integrated sensor (DS18B20), but its accuracy is correspondingly lower. In terms of fault-detection and sensor-replacement speed, the Texas Instruments device outperforms the Maxim Integrated one. For failure detection, the DS18B20 is more sensitive to dust/parasitic interference, detecting failure in 87.8 ms, while the LM35 is faster at 77.5 ms. When switching over to the backup sensor, the LM35 and DS18B20 perform equally, with the fastest switchover time for both sensors being 14.1 ms.
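    The compare-then-fall-back logic described in this abstract can be illustrated with a short Python sketch. The tolerance value and the helper names below are assumptions for illustration only, not parameters reported by the study.

```python
# Minimal sketch of the redundancy-based fault check described above.
# The tolerance and all helper names are illustrative assumptions.

TOLERANCE_C = 2.0  # assumed similarity threshold in degrees Celsius

def is_similar(reading, reference, tol=TOLERANCE_C):
    """A reading is 'similar' if it stays within a fixed band of the stored reference."""
    return abs(reading - reference) <= tol

def check_and_replace(primary_reading, backup_reading, stored_reference):
    """Follow the compare-then-fall-back logic: primary sensor vs. stored data first,
    then the backup sensor, otherwise declare a sensor fault."""
    if is_similar(primary_reading, stored_reference):
        return "primary", primary_reading          # primary sensor considered normal
    if is_similar(backup_reading, stored_reference):
        return "backup", backup_reading            # switch to the backup sensor
    return "failed", None                          # neither matches: sensor fault

# Usage example with made-up readings (LM35 as primary, DS18B20 as backup):
status, value = check_and_replace(primary_reading=33.5,
                                  backup_reading=30.2,
                                  stored_reference=30.0)
print(status, value)   # -> "backup" 30.2 under the assumed 2.0 °C tolerance
```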

    Data-based methods for modeling, control and monitoring of chemical processes

    Get PDF
    Ph.D. (Doctor of Philosophy)

    Evolving artificial neural networks

    Full text link

    Reinforcement Learning

    Get PDF
    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data or directly proposing and performing actions, and learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that it is already widely used in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
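    As a concrete illustration of "performing actions to achieve a goal", here is a minimal tabular Q-learning sketch for a toy corridor task. The environment, reward, and hyperparameters are illustrative assumptions and are not taken from the book.

```python
import random

# Minimal tabular Q-learning sketch for a toy 1-D corridor: the agent starts at
# cell 0 and must reach cell 4 (the goal). Environment and hyperparameters are
# illustrative assumptions.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                      # step left / step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """One environment transition: reward 1 only when the goal is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):                    # training episodes
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:                        # explore
            action = random.choice(ACTIONS)
        else:                                                 # exploit current estimate
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward if done else reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# The learned greedy action in every non-goal state should be +1 (walk right).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```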

    PROPOSED METHODOLOGY FOR OPTIMIZING THE TRAINING PARAMETERS OF A MULTILAYER FEED-FORWARD ARTIFICIAL NEURAL NETWORKS USING A GENETIC ALGORITHM

    Get PDF
    An artificial neural network (ANN), or simply "neural network" (NN), is a powerful mathematical or computational model inspired by the structure and/or functional characteristics of biological neural networks. Despite the fact that ANNs have been developing rapidly for many years, there are still challenges in developing an ANN model that performs effectively for the problem at hand. ANNs can be categorized into three main types: single layer, recurrent network, and multilayer feed-forward network. In a multilayer feed-forward ANN, the actual performance depends strongly on the selection of architecture and training parameters. However, a systematic method for optimizing these parameters is still an active research area. This work focuses on multilayer feed-forward ANNs due to their generalization capability, structural simplicity, and ease of mathematical analysis. Even though several rules for the optimization of multilayer feed-forward ANN parameters are available in the literature, most networks are still calibrated via a trial-and-error procedure that depends mainly on the type of problem and the past experience and intuition of the expert. To overcome these limitations, there have been attempts to use a genetic algorithm (GA) to optimize some of these parameters. However, most, if not all, of the existing approaches address only part of the architecture and training parameters. In contrast, the GAANN approach presented here covers most aspects of multilayer feed-forward ANNs in a more comprehensive way. This research focuses on the use of a binary-encoded genetic algorithm (GA) to implement efficient search strategies for the optimal architecture and training parameters of a multilayer feed-forward ANN. In particular, the GA is used to determine the optimal number of hidden layers, number of neurons in each hidden layer, type of training algorithm, type of activation function of hidden and output neurons, initial weight, learning rate, momentum term, and epoch size of a multilayer feed-forward ANN. In this thesis, the approach has been analyzed and algorithms that simulate the new approach have been mapped out.
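    A binary-encoded GA search of this kind can be sketched as follows. The gene layout, parameter ranges, and placeholder fitness function are illustrative assumptions and do not reproduce the GAANN encoding; in the thesis, the fitness of a chromosome would come from training and validating the decoded network.

```python
import random

# Sketch of a binary-encoded GA searching over a few multilayer feed-forward
# ANN settings (hidden layers, neurons per layer, activation, learning rate).
# Gene layout, ranges, and the fitness stub are illustrative assumptions.

BITS = {"layers": 2, "neurons": 4, "activation": 1, "lr": 3}   # 10 bits total

def decode(chromosome):
    """Split the bit string into fields and map each to a concrete setting."""
    i, genes = 0, {}
    for name, width in BITS.items():
        genes[name] = int(chromosome[i:i + width], 2)
        i += width
    return {
        "hidden_layers": 1 + genes["layers"],              # 1..4
        "neurons": 4 * (1 + genes["neurons"]),             # 4..64 in steps of 4
        "activation": ["tanh", "relu"][genes["activation"]],
        "learning_rate": 10 ** -(1 + genes["lr"] * 0.5),   # 1e-1 .. ~3e-5
    }

def fitness(params):
    """Placeholder: in the thesis this would be the validation performance of an
    ANN trained with these settings; here we just score a made-up preference."""
    return -abs(params["neurons"] - 32) - abs(params["hidden_layers"] - 2)

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    return "".join(bit if random.random() > rate else str(1 - int(bit)) for bit in c)

length = sum(BITS.values())
pop = ["".join(random.choice("01") for _ in range(length)) for _ in range(20)]

for _ in range(30):                                        # generations
    pop.sort(key=lambda c: fitness(decode(c)), reverse=True)
    parents = pop[:10]                                     # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(10)]

pop.sort(key=lambda c: fitness(decode(c)), reverse=True)
print(decode(pop[0]))   # best settings found under the placeholder fitness
```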

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Get PDF
    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
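    For context, a basic MR streaming formulation of one Game-of-Life generation looks roughly like the sketch below: a mapper emits neighbour contributions per cell and a reducer applies the birth/survival rules. This is a minimal cell-per-key version, not the authors' optimized strip-partitioned algorithm, and the local driver at the bottom only imitates Hadoop Streaming's shuffle/sort step.

```python
from itertools import groupby

# One Game-of-Life generation in the Hadoop Streaming style (tab-separated
# key/value lines). Input lines are "x,y\t1" for live cells. Cell-per-key
# formulation for illustration; not the strip-partitioned algorithm.

def mapper(lines):
    """Each live cell votes for itself (alive marker) and for its 8 neighbours."""
    for line in lines:
        key, _ = line.rstrip("\n").split("\t")
        x, y = map(int, key.split(","))
        yield f"{x},{y}\tA"                        # mark the cell itself as alive
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    yield f"{x + dx},{y + dy}\tN"  # one neighbour contribution

def reducer(sorted_lines):
    """Apply the B3/S23 rules to each cell's grouped contributions."""
    records = (line.rstrip("\n").split("\t") for line in sorted_lines)
    for cell, group in groupby(records, key=lambda kv: kv[0]):
        values = [v for _, v in group]
        alive = "A" in values
        neighbours = values.count("N")
        if neighbours == 3 or (alive and neighbours == 2):
            yield f"{cell}\t1"                     # cell is live in the next generation

if __name__ == "__main__":
    # Hadoop Streaming would run mapper and reducer as separate processes with a
    # shuffle/sort in between; locally the same effect is obtained with sorted():
    live = ["1,1\t1", "1,2\t1", "1,3\t1"]          # a vertical blinker
    print(list(reducer(sorted(mapper(live)))))     # -> rotated blinker: 0,2  1,2  2,2
```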

    Nonlinear Dynamics

    Get PDF
    This volume covers a diverse collection of topics dealing with some of the fundamental concepts and applications embodied in the study of nonlinear dynamics. Each of the 15 chapters contained in this compendium generally fits into one of five topical areas: physics applications, nonlinear oscillators, electrical and mechanical systems, biological and behavioral applications, or random processes. The authors of these chapters have contributed a stimulating cross section of new results, which provide a fertile spectrum of ideas that will inspire both seasoned researchers and students.

    Computer Science & Technology Series : XIX Argentine Congress of Computer Science. Selected papers

    Get PDF
    CACIC’13 was the nineteenth Congress in the CACIC series. It was organized by the Department of Computer Systems at the CAECE University in Mar del Plata. The Congress included 13 Workshops with 165 accepted papers, 5 Conferences, 3 invited tutorials, different meetings related to Computer Science Education (Professors, PhD students, Curricula), and an International School with 5 courses. CACIC 2013 was organized following the traditional Congress format, with 13 Workshops covering a diversity of dimensions of Computer Science Research. Each topic was supervised by a committee of 3-5 chairs from different Universities. The call for papers attracted a total of 247 submissions. An average of 2.5 review reports were collected for each paper, for a grand total of 676 review reports that involved about 210 different reviewers. A total of 165 full papers, involving 489 authors and 80 Universities, were accepted, and 25 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI)