Distributed computing methodology for training neural networks in an image-guided diagnostic application
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization overhead, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms leads to considerable speedup, especially when large network architectures and training sets are used
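The core idea above, partitioning the training set so that each machine computes a partial error and gradient that a master then sums, can be sketched as follows. This is a minimal illustration using Python's `multiprocessing` on a toy linear "network"; the paper itself used PVM across heterogeneous machines, and all names here are illustrative, not the authors' code.

```python
# Sketch of data-parallel error/gradient evaluation: each worker handles one
# partition of the training set; the only synchronization is the final sum.
from multiprocessing import Pool
import numpy as np

def partial_error_and_grad(args):
    """Worker: partial sum-of-squared-errors and gradient on one data partition."""
    w, X, y = args
    err = X @ w - y
    return float(err @ err), 2.0 * X.T @ err  # partial SSE, partial gradient

def distributed_grad(w, X, y, n_workers=4):
    """Partition the training set, evaluate partial terms in parallel, sum them."""
    parts = np.array_split(np.arange(len(y)), n_workers)
    tasks = [(w, X[p], y[p]) for p in parts]
    with Pool(n_workers) as pool:
        results = pool.map(partial_error_and_grad, tasks)
    total_err = sum(r[0] for r in results)
    total_grad = sum(r[1] for r in results)
    return total_err, total_grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true
    w = np.zeros(3)
    for _ in range(100):                     # plain full-batch gradient descent
        _, grad = distributed_grad(w, X, y)
        w -= 1e-4 * grad
    print(np.round(w, 3))                    # approaches w_true
```

The "large granularity, low synchronization" property in the abstract corresponds to the single reduction per epoch here: workers exchange nothing except their partial sums.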
Artificial Intelligence Approach to the Determination of Physical Properties of Eclipsing Binaries. I. The EBAI Project
Achieving maximum scientific results from the overwhelming volume of astronomical data to be acquired over the next few decades will demand novel, fully automatic methods of data analysis. Artificial intelligence approaches hold great promise in contributing to this goal. Here we apply neural network learning technology to the specific domain of eclipsing binary (EB) stars, of which only some hundreds have been rigorously analyzed, but whose numbers will reach millions in a decade. Well-analyzed EBs are a prime source of astrophysical information whose growth rate is at present limited by the need for human interaction with each EB data-set, principally in determining a starting solution for subsequent rigorous analysis. We describe the artificial neural network (ANN) approach which is able to surmount this human bottleneck and permit EB-based astrophysical information to keep pace with future data rates. The ANN, following training on a sample of 33,235 model light curves, outputs a set of approximate model parameters (T2/T1, (R1+R2)/a, e sin(omega), e cos(omega), and sin i) for each input light curve data-set. The whole sample is processed in just a few seconds on a single 2GHz CPU. The obtained parameters can then be readily passed to sophisticated modeling engines. We also describe a novel method, polyfit, for pre-processing observational light curves before inputting their data to the ANN, and present the results and analysis of testing the approach on synthetic data and on real data including fifty binaries from the Catalog and Atlas of Eclipsing Binaries (CALEB) database and 2580 light curves from OGLE survey data. [abridged]
Comment: 52 pages, accepted to Ap
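The two-stage pipeline described above, a polynomial pre-processing step followed by a feedforward ANN that emits five approximate parameters, can be sketched as below. The weights are random placeholders, not the trained EBAI network, and `numpy.polyfit` is a crude stand-in for the paper's piecewise polyfit method; the network and input sizes are assumptions for illustration.

```python
# Sketch: (1) smooth and resample a phased light curve onto a fixed grid,
# (2) run it through a one-hidden-layer ANN producing five outputs in (0, 1),
# playing the role of T2/T1, (R1+R2)/a, e*sin(omega), e*cos(omega), sin(i).
import numpy as np

N_IN, N_HIDDEN, N_OUT = 200, 40, 5

def preprocess(phase, flux, n_points=N_IN, degree=10):
    """Fit a polynomial to the phased curve and sample it on a uniform grid."""
    coeffs = np.polyfit(phase, flux, degree)
    return np.polyval(coeffs, np.linspace(0.0, 1.0, n_points))

def ann_forward(x, W1, b1, W2, b2):
    """One hidden layer, sigmoid activations throughout."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return sig(W2 @ sig(W1 @ x + b1) + b2)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(N_HIDDEN, N_IN)) * 0.01, np.zeros(N_HIDDEN)
W2, b2 = rng.normal(size=(N_OUT, N_HIDDEN)) * 0.01, np.zeros(N_OUT)

# Toy light curve: unit flux with a Gaussian eclipse dip at phase 0.5.
phase = np.sort(rng.uniform(0, 1, 500))
flux = 1.0 - 0.3 * np.exp(-((phase - 0.5) / 0.05) ** 2)

params = ann_forward(preprocess(phase, flux), W1, b1, W2, b2)
print(params.shape)   # one 5-vector of approximate parameters per light curve
```

The fixed-length resampling is what lets irregularly sampled survey light curves share one input layer; the cheap forward pass is why millions of curves can be triaged before any expensive modeling engine runs.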
New acceleration technique for the backpropagation algorithm
Artificial neural networks have been studied for many years in the hope of achieving human-like performance in areas such as pattern recognition, speech synthesis, and higher-level cognitive processes. In the connectionist model there are several interconnected processing elements, called neurons, that have limited processing capability. Even though the rate of information transmitted between these elements is limited, the complex interconnection and the cooperative interaction between them result in vastly increased computing power. Neural network models are specified by an organized network topology of interconnected neurons, and these networks have to be trained in order for them to be used for a specific purpose. Backpropagation is one of the most popular methods of training neural networks, and the speed of convergence of the standard backpropagation algorithm has seen much improvement in the recent past. Herein we present a new technique for accelerating the existing backpropagation algorithm without modifying it. We use a fourth-order interpolation method for the dominant eigenvalues and, based on these, change the slope of the activation function, thereby increasing the speed of convergence of the backpropagation algorithm. Our experiments have shown significant improvement in convergence time for problems widely used in benchmarking: a three- to ten-fold decrease in convergence time is achieved, and the decrease grows as the complexity of the problem increases. The technique adjusts the energy state of the system so as to escape from local minima
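The acceleration described above leaves backpropagation itself untouched and changes only the slope (gain) of the activation function. A minimal sketch of a gain-parameterized sigmoid and its derivative is given below; how the gain would actually be chosen from the fourth-order interpolation of the dominant eigenvalues is the paper's contribution and is not reproduced here.

```python
# Gain-parameterized sigmoid: sigma(z) = 1 / (1 + exp(-gain * z)).
# Raising the gain steepens the activation, which scales every
# backpropagated delta at that unit without altering the algorithm.
import numpy as np

def sigmoid(z, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * z))

def sigmoid_deriv(z, gain=1.0):
    """Derivative with respect to z: gain * s * (1 - s)."""
    s = sigmoid(z, gain)
    return gain * s * (1.0 - s)

# Near the origin a larger gain yields a larger gradient, so weight
# updates driven by these deltas take bigger effective steps.
z = 0.3
print(sigmoid_deriv(z, gain=1.0), sigmoid_deriv(z, gain=2.0))
```

Because only the activation slope changes, any existing backpropagation implementation can adopt the technique by swapping in the gain-parameterized activation and its derivative.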
Evolutionary optimization within an intelligent hybrid system for design integration
An intelligent hybrid approach has been developed to integrate the various stages of total design, including formulation of product design specifications, conceptual design, detail design, and manufacture. The integration is achieved by blending multiple artificial intelligence (AI) techniques and CAD/CAE/CAM into a single environment, and the approach has been applied to power transmission system design. In addition to knowledge-based systems and artificial neural networks, a third AI technique, genetic algorithms (GAs), is involved in the approach. The GA is used to conduct two optimization tasks: (1) searching for the best combination of design parameters to obtain an optimum gear design, and (2) optimizing the architecture of the artificial neural networks used in the hybrid system. In this paper, after a brief overview of the intelligent hybrid system, the GA applications are described in detail
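The parameter-search role the GA plays above can be illustrated with a minimal selection-crossover-mutation loop. The fitness function here is a toy stand-in; the actual objectives (gear design quality, ANN architecture score) are domain-specific and not taken from the paper.

```python
# Minimal real-coded genetic algorithm: tournament selection,
# arithmetic crossover, Gaussian mutation. Maximizes a toy fitness
# with a known optimum at x = 3 so convergence is easy to inspect.
import random

def fitness(x):
    return -(x - 3.0) ** 2   # placeholder for a design-quality objective

def evolve(pop_size=30, generations=60, mut_sigma=0.3, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: best of 3 random individuals per slot.
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            alpha = rng.random()
            # Arithmetic crossover produces two blended children,
            # each perturbed by Gaussian mutation.
            for child in (alpha * a + (1 - alpha) * b,
                          alpha * b + (1 - alpha) * a):
                pop.append(child + rng.gauss(0.0, mut_sigma))
    return max(pop, key=fitness)

best = evolve()
print(best)   # close to the optimum at 3.0
```

For the paper's second task, optimizing ANN architecture, the same loop applies with an integer encoding (e.g. hidden-layer sizes) and a fitness built from trained-network validation error.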
Causative factors of construction and demolition waste generation in Iraq Construction Industry
The construction industry harms the environment through the waste generated during construction activities, which calls for serious measures to determine the causative factors of construction waste. Studies on the factors causing construction and demolition (C&D) waste generation are limited, and those that exist focus mainly on the quantification of construction waste. This study set out to identify the causative factors of C&D waste generation, to determine the risk level of each factor, and to establish the most important minimization methods for avoiding waste generation. The study was carried out using a quantitative approach. A total of 39 factors causing construction waste generation were identified from the literature review and clustered into 4 groups. The questionnaire was refined through a pilot study with 38 construction experts (consultants, contractors, and clients). The actual survey distributed a total of 380 questionnaires, with a response rate of 83.3%. Data analysis was performed using SPSS software. Ranking analysis using the mean-score approach found the five most significant causative factors to be poor site management, poor planning, lack of experience, rework, and poor controlling. The results also indicated that the majority of the identified factors carry a high risk level, and that the most effective minimization method is environmental awareness. A structural model was developed from the 4 groups of causative factors using the Partial Least Squares Structural Equation Modelling (PLS-SEM) technique; the model showed substantial fit, with a goodness of fit of 0.658 (above the 0.36 threshold). Based on the outcomes of this study, 39 factors are relevant to the generation of construction and demolition waste in Iraq, and these groups of factors should be avoided during construction works to reduce the waste generated. The findings are helpful to authorities and stakeholders in formulating laws and regulations, and they provide opportunities for future researchers to conduct additional research on the factors that contribute to construction waste generation
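The mean-score ranking step described above (performed in SPSS in the original study) amounts to averaging each factor's Likert responses and sorting. A sketch with illustrative factor names and made-up scores, not the study's data:

```python
# Mean-score ranking of causative factors: average each factor's
# survey responses (e.g. 1-5 Likert), then sort descending.
factors = {
    "poor site management": [5, 4, 5, 4, 5],   # illustrative responses
    "poor planning":        [4, 5, 4, 4, 4],
    "lack of experience":   [4, 4, 3, 4, 4],
    "rework":               [3, 4, 4, 3, 4],
}
mean_scores = {f: sum(s) / len(s) for f, s in factors.items()}
ranked = sorted(mean_scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (factor, score) in enumerate(ranked, start=1):
    print(f"{rank}. {factor}: {score:.2f}")
```

The risk-level classification in the study would then follow by thresholding these mean scores into bands (e.g. high/medium/low), with the cut-offs chosen by the analyst.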