
    Parallel growing and training of neural networks using output parallelism

    In order to find an appropriate architecture for a large-scale real-world application automatically and efficiently, a natural approach is to divide the original problem into a set of sub-problems. In this paper, we propose a simple neural network task decomposition method based on output parallelism. With this method, a problem can be divided flexibly into several sub-problems, each composed of the whole input vector and a fraction of the output vector. Each module (for one sub-problem) is responsible for producing a fraction of the output vector of the original problem, so the hidden structures serving the original problem's output units are decoupled. These modules can be grown and trained in parallel on parallel processing elements. Incorporated with a constructive learning algorithm, our method requires neither excessive computation nor any prior knowledge concerning decomposition. The feasibility of output parallelism is analyzed and proved. Several benchmarks are implemented to test the validity of this method. Their results show that this method can reduce computational time, increase learning speed and improve generalization accuracy for both classification and regression problems.
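
    A minimal sketch of the output-parallelism idea described in this abstract, assuming independent single-hidden-layer modules written in plain NumPy; the module class, layer sizes and plain gradient-descent training loop are illustrative assumptions, not the paper's constructive learning algorithm:

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        class OutputModule:
            """One module: the whole input vector -> a fraction of the output vector."""
            def __init__(self, n_in, n_hidden, n_out, seed=0):
                rng = np.random.default_rng(seed)
                self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
                self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))

            def forward(self, X):
                self.H = sigmoid(X @ self.W1)
                return sigmoid(self.H @ self.W2)

            def train_step(self, X, Y, lr=0.5):
                P = self.forward(X)                       # predictions for this output slice
                dO = (P - Y) * P * (1 - P)                # output-layer delta (squared error)
                dH = (dO @ self.W2.T) * self.H * (1 - self.H)
                self.W2 -= lr * self.H.T @ dO / len(X)
                self.W1 -= lr * X.T @ dH / len(X)

        # Decompose a 3-output problem into 3 independent modules (output parallelism);
        # each module sees the whole input but only its own slice of the targets,
        # so the modules could be trained on separate processors with no coupling.
        X = np.random.rand(200, 4)
        Y = np.column_stack([X[:, 0] > 0.5, X[:, 1] > 0.5, X.sum(axis=1) > 2.0]).astype(float)

        modules = [OutputModule(n_in=4, n_hidden=8, n_out=1, seed=i) for i in range(3)]
        for epoch in range(2000):
            for i, m in enumerate(modules):
                m.train_step(X, Y[:, i:i+1])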

    Astrophysical Data Analytics based on Neural Gas Models, using the Classification of Globular Clusters as Playground

    In Astrophysics, the identification of candidate Globular Clusters in deep, wide-field, single-band HST images is a typical data analytics problem, where methods based on Machine Learning have shown high efficiency and reliability, demonstrating the capability to improve on traditional approaches. Here we experimented with some variants of the known Neural Gas model, exploring both supervised and unsupervised Machine Learning paradigms, on the classification of Globular Clusters extracted from the NGC1399 HST data. The main focus of this work was to use a well-tested playground to scientifically validate such models for further extended experiments in astrophysics, using other standard Machine Learning methods (for instance Random Forest and the Multi Layer Perceptron neural network) for a comparison of performance in terms of purity and completeness. Comment: Proceedings of the XIX International Conference "Data Analytics and Management in Data Intensive Domains" (DAMDID/RCDL 2017), Moscow, Russia, October 10-13, 2017, 8 pages, 4 figures
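
    A minimal sketch of the classical Neural Gas prototype update (the Martinetz-Schulten rank-based rule) used as an unsupervised quantizer, followed by nearest-prototype labelling for classification; the feature count, hyperparameters and majority-vote labelling step are illustrative assumptions, not the specific variants evaluated in the paper:

        import numpy as np

        def train_neural_gas(X, n_prototypes=20, n_iter=5000,
                             eps_i=0.5, eps_f=0.01, lam_i=10.0, lam_f=0.5, seed=0):
            """Classical Neural Gas: prototypes ranked by distance to each sample,
            all updated with an exponentially decaying, rank-dependent step."""
            rng = np.random.default_rng(seed)
            W = X[rng.choice(len(X), n_prototypes, replace=False)].copy()
            for t in range(n_iter):
                frac = t / n_iter
                eps = eps_i * (eps_f / eps_i) ** frac      # learning-rate schedule
                lam = lam_i * (lam_f / lam_i) ** frac      # neighbourhood-range schedule
                x = X[rng.integers(len(X))]
                ranks = np.argsort(np.argsort(np.linalg.norm(W - x, axis=1)))
                W += (eps * np.exp(-ranks / lam))[:, None] * (x - W)
            return W

        def label_prototypes(W, X, y):
            """Assign each prototype the majority class of the samples it wins."""
            nearest = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
            return np.array([np.bincount(y[nearest == k]).argmax() if np.any(nearest == k)
                             else -1 for k in range(len(W))])

        def classify(W, proto_labels, X_new):
            nearest = np.argmin(np.linalg.norm(X_new[:, None] - W[None], axis=2), axis=1)
            return proto_labels[nearest]

        # Toy usage: two photometric-like features, two classes (cluster / non-cluster).
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (300, 2))])
        y = np.array([0] * 300 + [1] * 300)
        W = train_neural_gas(X)
        preds = classify(W, label_prototypes(W, X, y), X)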

    New acceleration technique for the backpropagation algorithm

    Artificial neural networks have been studied for many years in the hope of achieving human-like performance in areas such as pattern recognition, speech synthesis and higher-level cognitive processes. In the connectionist model there are several interconnected processing elements, called neurons, that have limited processing capability. Even though the rate of information transmitted between these elements is limited, the complex interconnection and the cooperative interaction between these elements result in vastly increased computing power. Neural network models are specified by an organized network topology of interconnected neurons, and these networks have to be trained before they can be used for a specific purpose. Backpropagation is one of the popular methods of training neural networks, and there has been considerable improvement in the speed of convergence of the standard backpropagation algorithm in the recent past. Herein we present a new technique for accelerating the existing backpropagation algorithm without modifying it. We use a fourth-order interpolation method for the dominant eigenvalues and, using these, change the slope of the activation function, thereby increasing the speed of convergence of the backpropagation algorithm. Our experiments have shown significant improvement in convergence time for problems widely used in benchmarking: a three- to ten-fold decrease in convergence time is achieved, and convergence time decreases as the complexity of the problem increases. The technique adjusts the energy state of the system so as to escape from local minima.
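
    A minimal sketch of the general mechanism the abstract describes: standard backpropagation with a sigmoid whose slope (gain) can be changed between epochs. The estimate_gain helper below is a hypothetical placeholder for the paper's fourth-order interpolation of the dominant eigenvalues, which is not reproduced here:

        import numpy as np

        def sigmoid(z, beta):
            """Logistic activation with adjustable slope (gain) beta."""
            return 1.0 / (1.0 + np.exp(-beta * z))

        def estimate_gain(epoch):
            # Hypothetical stand-in for the paper's eigenvalue-based slope update;
            # here the slope is simply annealed as a placeholder.
            return 1.0 + 0.5 * np.tanh(epoch / 50.0)

        def train(X, Y, n_hidden=6, epochs=500, lr=0.5, seed=0):
            rng = np.random.default_rng(seed)
            W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden))
            W2 = rng.normal(0, 0.5, (n_hidden, Y.shape[1]))
            for epoch in range(epochs):
                beta = estimate_gain(epoch)           # slope adjusted between epochs
                H = sigmoid(X @ W1, beta)
                P = sigmoid(H @ W2, beta)
                # Backpropagation; the derivative of the slope-beta sigmoid is beta*f*(1-f).
                dP = (P - Y) * beta * P * (1 - P)
                dH = (dP @ W2.T) * beta * H * (1 - H)
                W2 -= lr * H.T @ dP / len(X)
                W1 -= lr * X.T @ dH / len(X)
            return W1, W2

        # Toy usage: the XOR problem, a benchmark often used in backpropagation studies.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        Y = np.array([[0], [1], [1], [0]], dtype=float)
        W1, W2 = train(X, Y)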

    Ianus: an Adaptive FPGA Computer

    Dedicated machines designed for specific computational algorithms can outperform conventional computers by several orders of magnitude. In this note we describe Ianus, a new-generation FPGA-based machine, and its basic features: hardware integration and wide reprogrammability. Our goal is to build a machine that can fully exploit the performance potential of new-generation FPGA devices. We also plan a software platform that simplifies its programming, in order to extend its intended range of application to a wide class of interesting and computationally demanding problems. The decision to develop a dedicated processor is a complex one, involving a careful assessment of its performance lead, during its expected lifetime, over traditional computers, taking into account their performance increase as predicted by Moore's law. We discuss this point in detail.

    Computational Contributions to the Automation of Agriculture

    The purpose of this paper is to explore ways that computational advancements have enabled the complete automation of agriculture from start to finish. With a major need for agricultural advancements because of food and water shortages, some farmers have begun creating their own solutions to these problems. Primarily explored in this paper, however, are current research topics in the automation of agriculture. Digital agriculture is surveyed, focusing on ways that data collection can be beneficial. Additionally, self-driving technology is explored with emphasis on farming applications. Machine vision technology is also detailed, with specific application to weed management and harvesting of crops. Finally, the effects of automating agriculture are briefly considered, including effects on labor, the environment, and farmers themselves.