An artificial neural network for dimensions and cost modelling of internal micro-channels fabricated in PMMA using Nd:YVO4 laser
In micro-channel fabrication by laser micro-machining, estimation techniques are normally used to evaluate system behaviour. Design of Experiments (DoE) and Artificial Neural Networks (ANNs) are two methodologies that can be used as estimation techniques. These techniques help in finding a set of laser processing parameters that provides the required micro-channel dimensions, and in finding optimal solutions that reduce product development time, power consumption and cost. In this work, an integrated methodology is presented in which the ANN training experiments were selected using statistical DoE software to improve the developed ANN models. A 3³ factorial design of experiments (DoE) was used to obtain the experimental set. Laser power, P; pulse repetition frequency, PRF; and sample translation speed, U, were the ANN inputs. The channel width and the operating cost per metre of produced micro-channel were the measured responses. Four Artificial Neural Network (ANN) models were developed for internal micro-channels machined in PMMA using an Nd:YVO4 laser. The models varied in the selection and quantity of the training data set and were constructed using a multi-layered, feed-forward structure with the back-propagation algorithm. The responses were adequately estimated by the ANN models within the set micro-machining parameter limits. Moreover, the effect of changing the selection and quantity of training data on the approximation capability of the developed ANN models is discussed.
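A minimal sketch of the kind of model the abstract describes: a feed-forward back-propagation network relating the three laser parameters (P, PRF, U) to channel width. The data, value ranges and response function below are synthetic stand-ins, not the paper's measurements, and scikit-learn's `MLPRegressor` is assumed in place of the authors' own implementation.

```python
# Hypothetical sketch: feed-forward back-propagation ANN mapping laser
# parameters to micro-channel width. All data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Inputs: laser power P (W), pulse repetition frequency PRF (kHz),
# translation speed U (mm/s) -- assumed ranges, 27 points as in a 3^3 DoE.
X = rng.uniform([0.5, 10.0, 1.0], [2.0, 50.0, 10.0], size=(27, 3))
# Synthetic response: channel width (um) as an assumed smooth function.
y = 50 + 30 * X[:, 0] + 0.5 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(0, 1, 27)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

pred = model.predict(scaler.transform(X))
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Scaling the inputs before training matters here because the three parameters live on very different numeric ranges.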
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
Rapid design of tool-wear condition monitoring systems for turning processes using novelty detection
Condition monitoring systems for manufacturing processes have been recognised in recent years as one of the key technologies that provide a competitive advantage in many manufacturing environments. They provide an essential means to reduce cost, increase productivity, improve quality and prevent damage to the machine or workpiece. Turning is one of the most common manufacturing processes in industry, used to manufacture round objects such as shafts, spindles and pins. Despite recent developments and intensive engineering research, the development of tool-wear monitoring systems for turning is still an ongoing challenge. In this paper, force signals are used for monitoring tool-wear in a feature fusion model. A novel approach to the design of condition monitoring systems for turning operations using a novelty detection algorithm is presented. The results show that the developed system can be used for the rapid design of condition monitoring systems for turning operations to predict tool-wear.
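The core idea of novelty detection for tool-wear, as the abstract outlines it, can be sketched as follows: a one-class model is trained only on force-signal features from a healthy tool, and wear is flagged when new signatures fall outside that learned region. The feature choices and values below are assumptions for illustration, and scikit-learn's `OneClassSVM` stands in for the paper's algorithm.

```python
# Hedged sketch of novelty detection for tool-wear monitoring.
# Feature values are synthetic; the paper's exact features are not reproduced.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Assumed features per sample: mean cutting force and its std (N).
healthy = rng.normal([200.0, 5.0], [10.0, 1.0], size=(200, 2))
worn = rng.normal([260.0, 12.0], [10.0, 2.0], size=(20, 2))

# Train only on healthy-tool data: this is what makes it *novelty* detection
# rather than two-class classification (no worn-tool examples are needed).
scaler = StandardScaler().fit(healthy)
detector = OneClassSVM(nu=0.05, gamma="scale").fit(scaler.transform(healthy))

# +1 = normal, -1 = novelty (suspected tool wear)
flags = detector.predict(scaler.transform(worn))
print("worn samples flagged as novel:", int(np.sum(flags == -1)), "/", len(worn))
```

Training on healthy data alone is what enables "rapid design": no worn-tool examples need to be collected before deployment.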
ANNz: estimating photometric redshifts using artificial neural networks
We introduce ANNz, a freely available software package for photometric
redshift estimation using Artificial Neural Networks. ANNz learns the relation
between photometry and redshift from an appropriate training set of galaxies
for which the redshift is already known. Where a large and representative
training set is available ANNz is a highly competitive tool when compared with
traditional template-fitting methods.
The ANNz package is demonstrated on the Sloan Digital Sky Survey Data Release
1, and for this particular data set the r.m.s. redshift error in the range 0 <
z < 0.7 is 0.023. Non-ideal conditions (spectroscopic sets which are small, or
which are brighter than the photometric set for which redshifts are required)
are simulated and the impact on the photometric redshift accuracy assessed.
Comment: 6 pages, 6 figures. Replaced to match the version accepted by PASP (minor changes to the original submission). The ANNz package may be obtained from http://www.ast.cam.ac.uk/~aa
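An illustrative sketch of the approach ANNz takes: a neural network learns the photometry-to-redshift mapping from a training set with known redshifts, and the r.m.s. redshift error is measured. The magnitudes and redshifts below are synthetic, and scikit-learn's `MLPRegressor` is assumed in place of ANNz's own network and committee averaging.

```python
# Illustrative ANNz-style photometric redshift sketch on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic "galaxies": 5-band magnitudes whose colours track redshift
# (an assumed linear relation, purely for illustration).
z_true = rng.uniform(0.0, 0.7, size=500)
mags = 20.0 + np.outer(z_true, [1.0, 1.5, 2.0, 2.5, 3.0]) \
       + rng.normal(0, 0.05, size=(500, 5))
colours = mags[:, 1:] - mags[:, :-1]   # adjacent-band colours

# Learn the colour-to-redshift relation from the "spectroscopic" training set.
net = MLPRegressor(hidden_layer_sizes=(10, 10), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(colours, z_true)

z_phot = net.predict(colours)
rms = np.sqrt(np.mean((z_phot - z_true) ** 2))
print(f"r.m.s. redshift error: {rms:.3f}")
```

As the abstract stresses, the quality of such an estimator depends directly on how large and representative the training set is.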
The Challenge of Machine Learning in Space Weather Nowcasting and Forecasting
The numerous recent breakthroughs in machine learning (ML) make it imperative to
carefully ponder how the scientific community can benefit from a technology
that, although not necessarily new, is today living its golden age. This Grand
Challenge review paper is focused on the present and future role of machine
learning in space weather. The purpose is twofold. On one hand, we will discuss
previous works that use ML for space weather forecasting, focusing in
particular on the few areas that have seen most activity: the forecasting of
geomagnetic indices, of relativistic electrons at geosynchronous orbits, of
solar flares occurrence, of coronal mass ejection propagation time, and of
solar wind speed. On the other hand, this paper serves as a gentle introduction
to the field of machine learning tailored to the space weather community and as
a pointer to a number of open challenges that we believe the community should
undertake in the next decade. The recurring themes throughout the review are
the need to shift our forecasting paradigm to a probabilistic approach focused
on the reliable assessment of uncertainties, and the combination of
physics-based and machine learning approaches, known as gray-box.
Comment: under review
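The probabilistic-forecasting paradigm this review advocates can be made concrete with a small example: scoring a binary event forecast (say, "geomagnetic storm tomorrow") with the Brier score and a simple reliability check. The forecasts and outcomes below are synthetic assumptions, not results from the paper.

```python
# Sketch of probabilistic forecast verification: Brier score + reliability.
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
p_forecast = rng.uniform(0.0, 1.0, size=n)   # issued event probabilities
# Simulate a perfectly calibrated forecaster: the event occurs with
# exactly the forecast probability.
outcome = (rng.uniform(size=n) < p_forecast).astype(float)

# Brier score: mean squared difference between probability and outcome
# (lower is better; 1/6 is the expectation for calibrated uniform forecasts).
brier = np.mean((p_forecast - outcome) ** 2)

# Reliability: within each probability bin, the observed event frequency
# should match the forecast probability for a well-calibrated model.
bins = np.linspace(0, 1, 11)
idx = np.digitize(p_forecast, bins) - 1
observed = np.array([outcome[idx == k].mean() for k in range(10)])
print("Brier score:", round(brier, 3))
print("observed frequency per bin:", np.round(observed, 2))
```

Scores of this kind assess the reliability of the uncertainties themselves, which is the shift away from deterministic point forecasts that the review calls for.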
Evaluation of the effect of Nd:YVO4 laser parameters on internal micro-channel fabrication in polycarbonate
This paper presents the development of Artificial Neural Network (ANN) models for the prediction of laser
machined internal micro-channels’ dimensions and production costs. In this work, a pulsed Nd:YVO4 laser
was used for machining micro-channels in polycarbonate material. Six ANN multi-layered, feed-forward,
back-propagation models are presented which were developed on three different training data sets. The
analysed data were obtained from a 3³ factorial design of experiments (DoE). The controlled parameters
were laser power, P; pulse repetition frequency, PRF; and sample translation speed, U. The measured responses
were the micro-channel width and the micro-machining operating cost per metre of produced micro-channel.
The responses were sufficiently predicted within the set micro-machining parameter limits. Three
carefully selected statistical criteria were used for comparing the performance of the ANN predictive
models. The comparison showed that the model with the largest amount of training data provided the
highest degree of predictability. However, in cases where only a limited amount of ANN training data was
available, training data taken from a Face Centred Cubic (FCC) model design provided the highest
level of predictability compared with the other examined training data sets.
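Comparing predictive models with a few statistical criteria, as the abstract describes, can be sketched as below. The specific criteria (RMSE, MAE, R²) are assumed here, and the three "models" are stand-in predictions with different error levels on synthetic data, not the paper's ANNs.

```python
# Hedged sketch: comparing model predictions with three statistical criteria.
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(4)

y_true = rng.uniform(50, 150, size=27)   # e.g. measured channel widths (um)
# Stand-in predictions from three hypothetical models of decreasing quality.
models = {
    "full training set": y_true + rng.normal(0, 2, 27),
    "FCC subset":        y_true + rng.normal(0, 4, 27),
    "random subset":     y_true + rng.normal(0, 8, 27),
}

for name, y_pred in models.items():
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    mae = mean_absolute_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    print(f"{name:18s} RMSE={rmse:5.2f}  MAE={mae:5.2f}  R^2={r2:.3f}")
```

Using several complementary criteria guards against ranking models on a single metric that happens to favour one error distribution.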
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services as well as scenarios of future wireless
networks.
Comment: 46 pages, 22 figures