Optimization of FPGA-based CNN Accelerators Using Metaheuristics
In recent years, convolutional neural networks (CNNs) have demonstrated their
ability to solve problems in many fields and with accuracy that was not
possible before. However, this comes with extensive computational requirements,
which make general-purpose CPUs unable to deliver the desired real-time performance. At
the same time, FPGAs have seen a surge in interest for accelerating CNN
inference. This is due to their ability to create custom designs with different
levels of parallelism. Furthermore, FPGAs provide better performance per watt
compared to GPUs. The current trend in FPGA-based CNN accelerators is to
implement multiple convolutional layer processors (CLPs), each of which is
tailored for a subset of layers. However, the growing complexity of CNN
architectures makes optimizing the resources available on the target FPGA
device to deliver optimal performance more challenging. In this paper, we
present a CNN accelerator and an accompanying automated design methodology that
employs metaheuristics for partitioning available FPGA resources to design a
Multi-CLP accelerator. Specifically, the proposed design tool adopts simulated
annealing (SA) and tabu search (TS) algorithms to find the number of CLPs
required and their respective configurations to achieve optimal performance on
a given target FPGA device. Here, the focus is on the key specifications and
hardware resources, including digital signal processors, block RAMs, and
off-chip memory bandwidth. Experimental results and comparisons on four
well-known benchmark CNNs demonstrate that the proposed acceleration framework
delivers encouraging and promising performance. The SA-/TS-based
Multi-CLP achieves 1.31x - 2.37x higher throughput than the state-of-the-art
Single-/Multi-CLP approaches in accelerating AlexNet, SqueezeNet 1.1, VGGNet,
and GoogLeNet architectures on the Xilinx VC707 and VC709 FPGA boards.
Comment: 23 pages, 7 figures, 9 tables. In The Journal of Supercomputing, 202
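The simulated-annealing half of the proposed SA/TS search could be sketched as follows. Everything here is a simplified illustration, not the paper's actual design-space model: the layer workloads, DSP budget, cost proxy, and cooling schedule are all assumed values, and the real tool also searches over the number of CLPs, BRAM, and memory bandwidth.

```python
import math
import random

# Hypothetical per-layer-group workloads (MAC counts) and DSP budget;
# these numbers are illustrative assumptions only.
workloads = [120, 90, 60, 30]
DSP_BUDGET = 100

def throughput_cost(alloc):
    """Latency proxy: the slowest layer group dominates (lower is better)."""
    return max(w / max(d, 1) for w, d in zip(workloads, alloc))

def neighbor(alloc):
    """Perturb the partition: move one DSP between two random groups."""
    a = alloc[:]
    i, j = random.sample(range(len(a)), 2)
    if a[i] > 1:
        a[i] -= 1
        a[j] += 1
    return a

def simulated_annealing(steps=5000, t0=10.0, cooling=0.999):
    alloc = [DSP_BUDGET // len(workloads)] * len(workloads)
    alloc[0] += DSP_BUDGET - sum(alloc)          # absorb any remainder
    best, best_cost = alloc, throughput_cost(alloc)
    t = t0
    for _ in range(steps):
        cand = neighbor(alloc)
        delta = throughput_cost(cand) - throughput_cost(alloc)
        # Accept improvements always; accept worse moves with
        # probability exp(-delta / t) that shrinks as t cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            alloc = cand
            if throughput_cost(alloc) < best_cost:
                best, best_cost = alloc, throughput_cost(alloc)
        t *= cooling
    return best, best_cost

random.seed(0)
alloc, cost = simulated_annealing()
```

A tabu-search variant would replace the acceptance rule with a short-term memory of recently visited partitions; the cost function and neighborhood stay the same.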
An empirical approach for currency identification
Currency identification is the application of systematic methods to determine the authenticity of questioned currency. However, identification analysis is a difficult task that requires specially trained examiners; the most important challenge is automating the analysis process to reduce human labor and time. In this study, an empirical approach for automated currency identification is formulated and a prototype is developed. A two-part feature vector is defined, comprising color features and texture features. Finally, the banknote in question is classified by a feedforward neural network (FNN), and a measure of the similarity between existing samples and the suspect banknote is output.
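The two-part feature vector and similarity measurement could be sketched as below. The specific choices here are illustrative stand-ins, not the paper's exact features: an 8-bin color histogram per channel for the color part, mean gradient magnitudes for the texture part, and cosine similarity for the comparison; the FNN classifier itself is omitted.

```python
import numpy as np

def feature_vector(img):
    """Two-part descriptor: color histogram + simple texture statistics
    (illustrative stand-ins for the paper's color/texture features)."""
    # Color part: 8-bin histogram per RGB channel, normalized to sum to 1.
    color = np.concatenate([
        np.histogram(img[..., c], bins=8, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    color /= color.sum()
    # Texture part: mean absolute vertical/horizontal intensity gradients.
    gray = img.mean(axis=2)
    texture = np.array([
        np.abs(np.diff(gray, axis=0)).mean(),
        np.abs(np.diff(gray, axis=1)).mean(),
    ])
    return np.concatenate([color, texture])

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic example: a "suspect" note that is a slightly perturbed genuine one.
rng = np.random.default_rng(0)
genuine = rng.integers(0, 256, (64, 128, 3))
suspect = np.clip(genuine + rng.integers(-5, 6, genuine.shape), 0, 255)
score = similarity(feature_vector(genuine), feature_vector(suspect))
```

In the described system, this score (or the feature vector itself) would feed the FNN for the final genuine/counterfeit decision.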
Hybrid classification approach for imbalanced datasets
The research area of imbalanced datasets has attracted increasing attention from both academia and industry, because class imbalance poses a serious issue for many supervised learning problems. Since the majority class greatly outnumbers the minority class, a classifier trained on the full dataset tends to assign all data to the majority class, ignoring minority samples as noise. Thus, it is important to select an appropriate training dataset in the preprocessing stage when classifying imbalanced data. We propose a combined approach of SMOTE (Synthetic Minority Over-sampling Technique) and instance selection. The numerical results show that the proposed combination helps classifiers achieve better performance.
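The SMOTE half of the proposed combination can be sketched in a few lines: each synthetic sample is an interpolation between a minority sample and one of its k nearest minority neighbors. This is a minimal illustration with arbitrary sizes; a production version would typically use the imbalanced-learn library instead.

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: synthesize minority samples by interpolating
    between a minority sample and one of its k nearest minority neighbors."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority neighbors of X_min[i], excluding itself.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        gap = rng.random()  # random position along the connecting segment
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class: 10 points in 2-D, oversampled to 20 synthetic points.
rng = np.random.default_rng(0)
X_minority = rng.normal(0, 1, (10, 2))
X_new = smote(X_minority, n_new=20, rng=rng)
```

The instance-selection half of the combination would then prune majority-class samples (e.g. borderline or noisy ones) before training.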
Intelligent Generation of Graphical Game Assets: A Conceptual Framework and Systematic Review of the State of the Art
Procedural content generation (PCG) can be applied to a wide variety of tasks
in games, from narratives, levels and sounds, to trees and weapons. A large
amount of game content is comprised of graphical assets, such as clouds,
buildings or vegetation, that do not require gameplay function considerations.
There is also a breadth of literature examining the procedural generation of
such elements for purposes outside of games. The body of research, focused on
specific methods for generating specific assets, provides a narrow view of the
available possibilities. Hence, it is difficult to obtain a clear picture of
the approaches on offer: there is no guide to help interested parties discover
suitable methods for their needs, nor any facility to walk them through each
technique and map out the process of using it.
Therefore, a systematic literature review has been conducted, yielding 200
accepted papers. This paper explores state-of-the-art approaches to graphical
asset generation, examining research from a wide range of applications, inside
and outside of games. Informed by the literature, a conceptual framework has
been derived to address the aforementioned gaps.
Seabed classification using physics-based modeling and machine learning
In this work, model-based methods are employed along with machine learning
techniques to classify sediments in oceanic environments based on the
geoacoustic properties of a two-layer seabed. Two different scenarios are
investigated. First, a simple low-frequency case is set up, where the acoustic
field is modeled with normal modes. Four different hypotheses are made for
seafloor sediment possibilities and these are explored using both various
machine learning techniques and a simple matched-field approach. For most noise
levels, the latter has an inferior performance to the machine learning methods.
Second, the high-frequency model of the scattering from a rough, two-layer
seafloor is considered. Again, four different sediment possibilities are
classified with machine learning. For higher accuracy, 1D Convolutional Neural
Networks (CNNs) are employed. In both cases we see that the machine learning
methods, both in simple and more complex formulations, lead to effective
sediment characterization. Our results assess the robustness of the different
classifiers to noise and model misspecification.
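The matched-field baseline mentioned above can be sketched with a Bartlett processor: the measured field is correlated against a replica field per sediment hypothesis, and the best-matching hypothesis wins. The replica fields below are random stand-ins for illustration; in practice they would be computed with a normal-mode model for each candidate two-layer seabed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical replica fields: one complex pressure vector per sediment
# hypothesis (stand-ins; normally produced by a normal-mode model).
n_sensors, n_classes = 32, 4
replicas = (rng.normal(size=(n_classes, n_sensors))
            + 1j * rng.normal(size=(n_classes, n_sensors)))

def bartlett_classify(measured, replicas):
    """Matched-field (Bartlett) classification: choose the hypothesis whose
    normalized replica has the highest correlation with the measured field."""
    scores = []
    for r in replicas:
        r = r / np.linalg.norm(r)
        scores.append(np.abs(np.vdot(r, measured)) ** 2)  # Bartlett power
    return int(np.argmax(scores))

# Simulate a noisy measurement generated under hypothesis 2.
true_class = 2
noise = 0.1 * (rng.normal(size=n_sensors) + 1j * rng.normal(size=n_sensors))
measured = replicas[true_class] + noise
predicted = bartlett_classify(measured, replicas)
```

The machine-learning alternatives in the abstract (including the 1D CNNs) replace this correlation rule with a learned decision function over the same four hypotheses.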
Genetic-algorithm-optimized neural networks for gravitational wave classification
Gravitational-wave detection strategies are based on a signal analysis
technique known as matched filtering. Despite the success of matched filtering,
due to its computational cost, there has been recent interest in developing
deep convolutional neural networks (CNNs) for signal detection. Designing these
networks remains a challenge as most procedures adopt a trial and error
strategy to set the hyperparameter values. We propose a new method for
hyperparameter optimization based on genetic algorithms (GAs). We compare six
different GA variants and explore different choices for the GA-optimized
fitness score. We show that the GA can discover high-quality architectures when
the initial hyperparameter seed values are far from a good solution as well as
refining already good networks. For example, when starting from the
architecture proposed by George and Huerta, the network optimized over the
20-dimensional hyperparameter space has 78% fewer trainable parameters while
obtaining an 11% increase in accuracy for our test problem. Using genetic
algorithm optimization to refine an existing network should be especially
useful if the problem context (e.g. statistical properties of the noise, signal
model, etc) changes and one needs to rebuild a network. In all of our
experiments, we find the GA discovers significantly less complicated networks
as compared to the seed network, suggesting it can be used to prune wasteful
network structures. While we have restricted our attention to CNN classifiers,
our GA hyperparameter optimization strategy can be applied within other machine
learning settings.
Comment: 25 pages, 8 figures, and 2 tables; Version 2 includes an expanded discussion of our hyperparameter optimization mode
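The GA loop described above (selection, crossover, and mutation over hyperparameter vectors) can be sketched as follows. The two hyperparameters, the toy fitness function, and all numeric settings are illustrative assumptions, not the paper's actual 20-dimensional search or its CNN training objective.

```python
import random

# Toy stand-in for "validation loss" over two hyperparameters
# (depth, width); minimum at (4, 64). Purely illustrative.
def fitness(h):
    depth, width = h
    return (depth - 4) ** 2 + (width - 64) ** 2 / 100.0

def mutate(h, rng):
    """Perturb one hyperparameter by a small step."""
    depth, width = h
    if rng.random() < 0.5:
        depth = max(1, depth + rng.choice([-1, 1]))
    else:
        width = max(8, width + rng.choice([-8, 8]))
    return (depth, width)

def crossover(a, b, rng):
    """Exchange hyperparameters between two parents."""
    return (a[0], b[1]) if rng.random() < 0.5 else (b[0], a[1])

def genetic_search(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [(rng.randint(1, 12), rng.choice(range(8, 257, 8)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            children.append(mutate(crossover(a, b, rng), rng))
        pop = survivors + children
    return min(pop, key=fitness)

best = genetic_search()
```

In the paper's setting, evaluating `fitness` means training and validating a CNN, so each of the six GA variants trades off evaluation count against architecture quality.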
Design and construction of a decision support system for clustering post-disaster building damage levels using deep learning
A Decision Support System (DSS) is a branch of information systems that incorporates a form of intelligence. Applying a DSS to solve a problem is a line of research that many researchers pursue. A widely applied family of methods is Multi-Criteria Decision Making (MCDM); one MCDM method is the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). One weakness of MCDM is that the user must work through every step of the method. Given this weakness, we combine approaches by applying Machine Learning (ML) to the DSS, with the aim of making the DSS smarter: the user no longer needs to perform the DSS steps to solve the problem. In this study, our object is determining the damage level of sectors after a natural disaster, using Deep Learning (DL). Before applying the DL method, namely a Convolutional Neural Network (CNN), to determine post-disaster sector damage levels, the data are pre-processed. Pre-processing involves several steps, including data labeling and data augmentation. The DSS output is used to label each post-disaster damage record via Principal Component Analysis (PCA), so that labeling the post-disaster damage levels has a scientific basis. After obtaining the damage-level labels with PCA, the parameters reduced by the PCA technique serve as the reference for image augmentation, so that images are generated according to the parameters used. The augmented images are then processed with the watershed algorithm to determine the post-disaster sector damage level.
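The TOPSIS method mentioned above can be sketched as follows; the decision matrix, weights, and criteria directions are hypothetical placeholders (three buildings scored on two damage indicators), not data from the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch: score alternatives by closeness to the ideal
    solution. benefit[j] is True if criterion j is better when larger."""
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = m * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness coefficient in [0, 1]

# Hypothetical damage indicators for 3 buildings (higher = more severe).
scores = topsis(
    np.array([[0.9, 0.8],
              [0.2, 0.1],
              [0.5, 0.5]]),
    weights=np.array([0.6, 0.4]),
    benefit=np.array([True, True]),
)
ranking = np.argsort(-scores)   # alternatives ordered by closeness to ideal
```

In the proposed pipeline, scores like these would feed the PCA-based labeling step, replacing the manual walk through the MCDM stages.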