143 research outputs found
Alloy Design for a Fusion Power Plant
Fusion power is generated when hot deuterium and tritium nuclei react, producing alpha particles and 14 MeV neutrons. These neutrons escape the reaction plasma and are absorbed by the surrounding material structure of the plant, transferring the heat of the reaction to an external cooling circuit. In such high-energy neutron irradiation environments, extensive atomic displacement damage and transmutation production of helium affect the mechanical properties of materials.
Among these effects are irradiation hardening, embrittlement, and macroscopic swelling due to the formation of voids within the material. To aid understanding of these effects, Bayesian neural networks were used to model the irradiation hardening and embrittlement of a set of candidate alloys, reduced-activation ferritic-martensitic steels. The models were compared to other methods, and it is demonstrated that a neural network approach to modelling the properties of irradiated steels provides a useful tool for the future engineering of fusion materials. For the first time, predictions of irradiated property changes are made from the full range of available experimental parameters rather than from a simplified model. In addition, the models are used to calculate optimised compositions for potential fusion alloys, and recommendations are made on the most fruitful ways of designing future experiments.
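The Bayesian models themselves are not reproduced in the abstract; as an illustrative sketch of the underlying idea — a regression model that returns a predictive mean together with an uncertainty — here is a closed-form Bayesian linear fit on a single hypothetical dose variable, not the thesis's full neural network:

```python
import math

def bayesian_linear_fit(xs, ys, alpha=1e-3, beta=100.0):
    """Closed-form Bayesian linear regression y ~ w0 + w1*x.

    alpha is the prior precision on the weights, beta the noise precision.
    Posterior precision A = alpha*I + beta * Phi^T Phi (2x2), posterior
    mean m = beta * A^{-1} Phi^T y. Returns (m, A_inv).
    """
    n = float(len(xs))
    s01 = sum(xs)
    s11 = sum(x * x for x in xs)
    a = alpha + beta * n
    b = beta * s01
    d = alpha + beta * s11
    det = a * d - b * b
    inv = ((d / det, -b / det), (-b / det, a / det))  # symmetric 2x2 inverse
    t0 = beta * sum(ys)
    t1 = beta * sum(x * y for x, y in zip(xs, ys))
    m = (inv[0][0] * t0 + inv[0][1] * t1,
         inv[1][0] * t0 + inv[1][1] * t1)
    return m, inv

def predict(m, inv, x, beta=100.0):
    """Predictive mean and standard deviation at a new input x."""
    mean = m[0] + m[1] * x
    var = 1.0 / beta + inv[0][0] + 2.0 * x * inv[0][1] + x * x * inv[1][1]
    return mean, math.sqrt(var)
```

The useful feature for experiment design is that the predicted uncertainty grows away from the training data, flagging regions of parameter space where new measurements would be most informative.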
In addition, a classical nucleation theory approach was taken to modelling the incubation and nucleation of irradiation-induced voids in these steels, with a view to minimising this undesirable phenomenon in candidate materials.
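The thesis's specific void-nucleation model is not reproduced in the abstract; the standard classical-nucleation-theory expressions it builds on are:

```latex
% Free-energy cost of a spherical void of radius r
% (|\Delta g_v|: bulk driving force per unit volume, \gamma: surface energy):
\Delta G(r) = -\tfrac{4}{3}\pi r^{3}\,|\Delta g_v| + 4\pi r^{2}\gamma
% Critical radius and nucleation barrier:
r^{*} = \frac{2\gamma}{|\Delta g_v|}, \qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta g_v^{2}}
% Steady-state nucleation rate (J_0: kinetic prefactor):
J = J_{0}\,\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right)
```

Under irradiation, the driving force is set by the vacancy supersaturation, so compositions that reduce the supersaturation or raise the effective barrier suppress void incubation.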
Using these models, recommendations are made with regard to the engineering of future reduced-activation steels for fusion applications, and further research opportunities presented by the work are reviewed.
Mechanical properties of materials for fusion power plants
Fusion power is the production of electricity from a hot plasma of deuterium and tritium, reacting to produce alpha particles and 14 MeV neutrons, which are collected by a cooling system. Their kinetic energy is transformed into heat and electricity via steam turbines. The constant flux of neutrons on the first wall of the reactor produces atomic displacement damage through collisions with nuclei, and gas bubbles as a result of transmutation reactions. This eventually leads to hardening and embrittlement. Designing a material able to withstand such intensity of damage is one of the main aims of research in the field of controlled fusion.
In the past decades, many experiments have been carried out to understand the formation of radiation-induced damage and quantify the changes in mechanical properties of irradiated steels, but the lack of facilities prevents us from testing candidate materials in a fusion-like environment. Modelling techniques are utilised here to extract information and principles which can help estimate changes in steels due to damage.
The elongation and yield strength of various low-activation ferritic/martensitic steels were modelled by neural networks and Gaussian processes. These models were used to make predictions which were compared to experimental values. Combined with other techniques and thermodynamic tools, it was possible to understand the evolution of the mechanical properties of irradiated steel, with a particular focus on the role of chromium and the roles of irradiation temperature and irradiation dose. They were also used to extrapolate data related to fission and attempt to make predictions in fusion conditions.
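The Gaussian-process models themselves are not given in the abstract; a minimal sketch of GP regression with a squared-exponential kernel, assuming a one-dimensional input, zero prior mean, and illustrative hyperparameters rather than the thesis's:

```python
import math

def rbf_kernel(x1, x2, length=1.0, sigma=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return sigma ** 2 * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=1e-6):
    """Posterior mean of a zero-mean GP at x_star given training data."""
    K = [[rbf_kernel(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(xs)] for i, xi in enumerate(xs)]
    alpha = solve(K, list(ys))  # alpha = K^{-1} y
    return sum(rbf_kernel(x_star, xi) * ai for xi, ai in zip(xs, alpha))
```

With a small noise term the posterior interpolates the training points and relaxes back to the prior mean far from the data, which is why extrapolating from fission data into fusion conditions must be treated with caution.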
A set of general recommendations concerning the database used to train the neural networks was made, and the usage of such a modelling technique in materials science is discussed.
An attempt to optimise the performance of neural networks by suppressing some random aspects of the training is presented. Models of the elongation, yield strength and ductile-to-brittle transition temperature trained following this procedure were created and compared to classical models.
A Machine Learning Framework for Optimising File Distribution Across Multiple Cloud Storage Services
Storing data using a single cloud storage service may lead to several potential problems
for the data owner. Such issues include service continuity, availability, performance,
security, and the risk of vendor lock-in. A promising solution is to distribute the data across multiple cloud storage services, similarly to the manner in which data are distributed across multiple physical disk drives to achieve fault tolerance and to improve performance. However, the distinguishing characteristics of different cloud providers, in terms of pricing schemes and service performance, make optimising the cost and performance across many cloud storage services at once a challenge. This research proposes
a framework for automatically tuning the data distribution policies across multiple cloud
storage services from the client side, based on file access patterns. The aim of this work
is to explore the optimisation of both the average cost per gigabyte and the average service performance (mainly latency) on multiple cloud storage services. To achieve
these aims, two machine learning algorithms were used:
1. supervised learning to predict file access patterns.
2. reinforcement learning to learn the ideal file distribution parameters.
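The framework's actual algorithms are not specified in the abstract; a toy sketch of the reinforcement-learning half, assuming a bandit-style Q-learning update where the state is a predicted access class (from the supervised model), the action is a provider choice, and the reward trades off hypothetical per-provider cost and latency figures:

```python
import random

# Illustrative providers: (cost per GB, performance score). The names and
# numbers are hypothetical, not taken from the thesis.
PROVIDERS = {"A": (0.020, 0.9), "B": (0.010, 0.4), "C": (0.015, 0.6)}

def reward(state, action):
    """Hot files weight latency heavily; cold files weight storage cost."""
    cost, perf = PROVIDERS[action]
    w_perf = 0.9 if state == "hot" else 0.1
    return w_perf * perf - (1.0 - w_perf) * cost * 10.0

def train(episodes=5000, eps=0.1, lr=0.5, seed=0):
    """Epsilon-greedy Q-learning over (access class, provider) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in ("hot", "cold") for a in PROVIDERS}
    for _ in range(episodes):
        s = rng.choice(("hot", "cold"))
        a = (rng.choice(list(PROVIDERS)) if rng.random() < eps
             else max(PROVIDERS, key=lambda p: q[(s, p)]))
        q[(s, a)] += lr * (reward(s, a) - q[(s, a)])  # one-step bandit update
    return q

def best_provider(q, state):
    return max(PROVIDERS, key=lambda p: q[(state, p)])
```

Under these toy numbers the learned policy routes frequently accessed (hot) files to the fast, expensive provider and rarely accessed (cold) files to the cheap one, which is the behaviour the framework aims to automate.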
The framework was tested in a
cloud storage services emulator, which emulated a real multiple-cloud storage services
setting (such as Google Cloud Storage, Amazon S3, Microsoft Azure Storage, and RackSpace file cloud) in terms of service performance and cost. In addition, the framework
was tested in various settings of several cloud storage services. The results of testing
the framework showed that the multiple cloud approach achieved an improvement of
about 42% for cost and 76% for performance. These findings indicate that storing data
in multiple clouds is a superior approach, compared with the commonly used uniform file distribution and with a heuristic distribution method.
Synchronous Relaying Of Sensor Data
In this paper we put forth a novel methodology to relay data obtained by the inbuilt sensors of smartphones in real time to a remote database, followed by fetching of this data. Smartphones are becoming very common, and they are equipped with a number of sensors whose data can not only be used in native applications but can also be sent to external nodes for use by third parties in application and service development.
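The paper's relay format is not detailed; one plausible minimal shape, assuming sensor readings are batched into JSON documents before being pushed to the remote database (all field names here are illustrative):

```python
import json
import time

def make_payload(device_id, readings):
    """Package raw sensor readings as a JSON document for a remote store.

    `readings` is a list of (sensor_name, value) tuples. The schema is a
    sketch, not the paper's actual wire format.
    """
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),          # capture time, Unix seconds
        "samples": [{"sensor": s, "value": v} for s, v in readings],
    })
```

A relaying client would POST such payloads to the database endpoint, and third-party consumers would fetch and decode them with the inverse `json.loads` step.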
Truck model recognition for an automatic overload detection system based on the improved MMAL-Net
Efficient and reliable transportation of goods by truck is crucial for road logistics. However, the overloading of trucks poses serious challenges to road infrastructure and traffic safety. Detecting and preventing truck overloading is of utmost importance for maintaining road conditions and ensuring the safety of both road users and the goods transported. This paper introduces a novel method for detecting truck overloading. The method utilizes the improved MMAL-Net for truck model recognition. Vehicle identification uses frontal and side truck images, while APPM is applied for local segmentation of the side image to recognize individual parts. The proposed method analyzes the captured images to precisely identify the models of trucks passing through automatic weighing stations on the highway. The improved MMAL-Net achieved an accuracy of 95.03% on the competitive benchmark dataset Stanford Cars, demonstrating its superiority over other established methods. Furthermore, our method also demonstrated outstanding performance on a small-scale dataset: in our experimental evaluation it achieved a recognition accuracy of 85% when the training set consisted of 20 sets of photos, reaching 100% as the training set grew to 50 sets of samples. Through the integration of this recognition system with weight data obtained from weighing stations and license plate information, the method enables real-time assessment of truck overloading. The implementation of the proposed method is of vital importance for multiple aspects of road traffic safety.
Non-Verbal Communication Analysis in Victim-Offender Mediations
In this paper we present a non-invasive ambient intelligence framework for
the semi-automatic analysis of non-verbal communication applied to the
restorative justice field. In particular, we propose the use of computer vision
and social signal processing technologies in real scenarios of Victim-Offender
Mediations, applying feature extraction techniques to multi-modal
audio-RGB-depth data. We compute a set of behavioral indicators that define
communicative cues from the fields of psychology and observational methodology.
We test our methodology on data captured in real world Victim-Offender
Mediation sessions in Catalonia in collaboration with the regional government.
We define the ground truth based on expert opinions when annotating the
observed social responses. Using different state-of-the-art binary
classification approaches, our system achieves recognition accuracies of 86%
when predicting satisfaction, and 79% when predicting both agreement and
receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1-5] for the computed social signals.
Comment: please find the supplementary video material at http://sunai.uoc.edu/~vponcel/video/VOMSessionSample.mp
Can recurrent neural networks learn process model structure?
Various machine and deep learning methods have been proposed to tackle different tasks in predictive process monitoring: forecasting, for an ongoing case, e.g. the most likely next event or suffix, its remaining time, or an outcome-related variable. Recurrent neural networks (RNNs), and more
specifically long short-term memory nets (LSTMs), stand out in terms of
popularity. In this work, we investigate the capabilities of such an LSTM to
actually learn the underlying process model structure of an event log. We
introduce an evaluation framework that combines variant-based resampling and
custom metrics for fitness, precision and generalization. We evaluate 4
hypotheses concerning the learning capabilities of LSTMs, the effect of
overfitting countermeasures, the level of incompleteness in the training set
and the level of parallelism in the underlying process model. We confirm that
LSTMs can struggle to learn process model structure, even with simplistic
process data and in a very lenient setup. Taking the correct anti-overfitting
measures can alleviate the problem. However, these measures did not prove optimal when hyperparameters were selected purely on prediction accuracy. We also found that decreasing the amount of information seen by the LSTM during training causes a sharp drop in generalization and precision scores. In our experiments, we could not identify a relationship between the extent of parallelism in the model and the generalization capability, but they do indicate that the process' complexity might have an impact.
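The paper's evaluation framework combines variant-based resampling with custom metrics; the resampling idea can be sketched as follows, assuming an event log represented as sequences of activity labels (the holdout rule here is illustrative):

```python
from collections import defaultdict

def variant_split(log, holdout_every=3):
    """Split an event log by trace variant rather than by trace.

    `log` is a list of traces (tuples of activity labels). All traces of a
    held-out variant go to the test set, so a model scores well only if it
    generalizes to control-flow variants it never saw during training.
    """
    by_variant = defaultdict(list)
    for trace in log:
        by_variant[tuple(trace)].append(trace)
    train, test = [], []
    for i, variant in enumerate(sorted(by_variant)):
        (test if i % holdout_every == 0 else train).extend(by_variant[variant])
    return train, test
```

Splitting at the variant level is what lets the framework measure generalization to unseen behaviour, which a plain trace-level split cannot do, since frequent variants would appear on both sides.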
Heart Diseases Diagnosis Using Artificial Neural Networks
Information technology has virtually altered every aspect of human life in the present era. The application of informatics in the health sector is rapidly gaining prominence, and the benefits of this innovative paradigm are being realized across the globe. This evolution has produced a large volume of patient data that can be processed by computer technologies and machine learning techniques and turned into useful information and knowledge. Such data can be used to develop expert systems that help diagnose life-threatening diseases such as heart disease, with lower cost, shorter processing time and improved diagnostic accuracy. Even though modern medicine generates a huge amount of data every day, little has been done to use the available data to solve the challenges faced in the successful diagnosis of heart diseases. This highlights the need for more research into the usage of robust data mining techniques to help health care professionals in the diagnosis of heart diseases and other debilitating disease conditions.
Based on the foregoing, this thesis aims to develop a health informatics system for the classification of heart diseases using data mining techniques, focusing on radial basis functions and emerging neural network approaches. The presented research involves three development stages: firstly, the development of a preliminary classification system for Coronary Artery Disease (CAD) using Radial Basis Function (RBF) neural networks. The research then deploys a deep learning approach to detect three different types of heart disease, i.e. sleep apnea, arrhythmias and CAD, by designing two novel classification systems: the first adopts a novel deep neural network method (with rectified linear unit activation) as the second approach in this thesis, and the other implements a novel multilayer kernel machine to mimic the behaviour of deep learning as the third approach. Additionally, this thesis uses a dataset obtained from patients and employs normalization and feature extraction to explore it in a unique way that facilitates its usage for training and validating different classification methods. This unique dataset is useful to researchers and practitioners working in heart disease treatment and diagnosis.
The findings from the study reveal that the proposed models have high classification performance that is comparable to, or in some cases perhaps exceeds, existing automated and manual methods of heart disease diagnosis. Besides, the proposed deep learning models provide better performance when applied to large data sets (e.g., in the case of sleep apnea), with reasonable performance on smaller data sets.
The proposed system for the clinical diagnosis of heart diseases contributes to the accurate detection of such diseases and could serve as an important tool in the area of clinical decision support. The outcome of this study, in the form of an implementation tool, can be used by cardiologists to help them make more consistent diagnoses of heart diseases.
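The thesis's RBF classifier is not reproduced in the abstract; a minimal sketch of the technique it names, an RBF network with fixed centers and a least-mean-squares read-out, trained on toy data (all data and hyperparameters here are illustrative):

```python
import math

def rbf_features(x, centers, gamma=1.0):
    """Map an input vector to Gaussian activations around fixed centers."""
    return [math.exp(-gamma * sum((a - c) ** 2 for a, c in zip(x, ctr)))
            for ctr in centers]

def predict(x, centers, weights, bias=0.0, gamma=1.0):
    """Linear read-out over the RBF activations; returns a class score."""
    phi = rbf_features(x, centers, gamma)
    return sum(w * p for w, p in zip(weights, phi)) + bias

def train_lms(data, centers, gamma=1.0, lr=0.3, epochs=500):
    """Fit the read-out weights by least-mean-squares (toy data only)."""
    w = [0.0] * len(centers)
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(x, centers, w, b, gamma)
            phi = rbf_features(x, centers, gamma)
            w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
            b += lr * err
    return w, b
```

Because the hidden layer is fixed and only the linear read-out is trained, RBF networks are fast to fit, which is one reason they are attractive for preliminary clinical classification systems.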