3,420 research outputs found
Towards an automatic system for monitoring of CN2 and wind speed profiles with GeMS
Wide Field Adaptive Optics (WFAO) systems represent the most sophisticated AO
systems available today at large telescopes. A critical requirement for these
WFAO systems to deliver optimised performance is knowledge of the vertical
spatiotemporal distribution of the CN2 and the wind speed.
Previous studies (Cortes et al., 2012) already proved the ability of GeMS (the
Gemini Multi-Conjugate AO system) to retrieve the CN2 and wind vertical
stratification using telemetry data. To assess the reliability of the GeMS wind
speed estimates, a preliminary study (Neichel et al., 2014) compared wind
speeds retrieved from GeMS with those obtained with the atmospheric model
Meso-Nh on a small sample of nights, with promising results. The latter
technique is very reliable for the wind speed vertical stratification: the
model outputs showed excellent agreement with a large sample of radiosoundings
(~50), both in statistical terms and on individual flights (Masciadri et al.,
2013). Such a tool can therefore be used as a valuable reference in this
exercise of cross calibrating GeMS on-sky wind estimates with model
predictions. In this contribution we achieve two results: (1) we extend the
analysis to a much richer statistical sample (~43 nights), confirming the
preliminary results and finding an even better correlation between GeMS
observations and the atmospheric model, with essentially no cases of
non-negligible discrepancies; (2) we evaluate the possibility of using the
Meso-Nh estimates of the wind speed stratification as an input for GeMS in an
operational configuration. Under this configuration, these estimates can be
provided many hours in advance of the observations and with a very high
temporal frequency (of order 2 minutes or less).
Comment: 12 pages, 7 figures, Proc. SPIE 9909 "Adaptive Optics Systems V",
99093B, 201
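As an illustration of the kind of cross-calibration statistics involved, the sketch below compares a Meso-Nh wind speed profile with a GeMS telemetry-based estimate at matching altitude bins via bias and RMSE; all profile values here are invented for illustration, not data from the paper.

```python
# Hypothetical wind speed profiles (m/s) at matching altitude bins:
# a Meso-Nh forecast vs a GeMS telemetry-based estimate (values invented).
altitudes = [0, 4000, 8000, 12000, 16000]   # metres above ground
meso_nh   = [6.0, 12.0, 21.0, 34.0, 25.0]
gems      = [5.5, 12.8, 19.9, 35.2, 23.8]

n = len(gems)
# Bias and RMSE quantify the systematic offset and scatter between the
# on-sky estimates and the model predictions.
bias = sum(g - m for g, m in zip(gems, meso_nh)) / n
rmse = (sum((g - m) ** 2 for g, m in zip(gems, meso_nh)) / n) ** 0.5
```

A small bias and an RMSE comparable to the measurement uncertainty would indicate that the model profiles can safely stand in for on-sky estimates.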
CONDITION-BASED RISK ASSESSMENT STRATEGY AND ITS HEALTH INDICATOR WITH APPLICATION TO PUMPS AND COMPRESSORS
Large rotating machines, such as centrifugal gas compressors and pumps, are widely used as crucial components in the petrochemical industry. To enable timely and effective maintenance of these machines, the concept of a health indicator is attracting great interest.
A suitable health indicator reflects the overall health of the machinery and is closely related to maintenance strategies and decision-making. It can be obtained either from near-miss and incident data, or from real-time measured data. However, existing health indicators have limitations. On the one hand, near-miss and incident data may have been obtained from similar systems, reflecting population characteristics but not fully accounting for the individual features of the target system. On the other hand, existing health indicators that use condition-monitoring data mainly focus on detecting incipient faults and usually do not include financial cost factors in their calculation. There is therefore a need for a single system "Health Indicator" that shows the health condition of a system in real time, as well as the likely financial loss incurred when a fault is detected, to assist operators in maintenance decision-making. This project has developed such an integrated health indicator for rotating machinery.
The integrated health indicator described in this thesis is extracted from a novel condition-based risk assessment strategy, which can be regarded as an integration of risk-based maintenance with improved conventional condition-based maintenance, with financial factors taken into account. The value of the health indicator is that it directly illustrates the risk to the system (or equipment), including likely financial loss, which makes it easier for operators to select the optimal time for maintenance or set alarm thresholds given the specific conditions in their companies or plants.
This thesis provides a guide to setting up an integrated maintenance model for large rotating machinery and a useful reference for researchers working on condition-based fault detection and dynamic risk-based maintenance.
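The idea of an indicator that combines condition data with financial consequences can be sketched as an expected-loss calculation; the logistic mapping, cost figures, and alarm threshold below are hypothetical placeholders for illustration, not the thesis's actual model.

```python
import math

def fault_probability(vibration_rms, baseline=2.0, scale=1.5):
    # Hypothetical logistic mapping from a condition-monitoring feature
    # (vibration RMS, mm/s) to an estimated probability of incipient failure.
    return 1.0 / (1.0 + math.exp(-(vibration_rms - baseline) / scale))

def health_indicator(vibration_rms, downtime_cost=50_000.0, repair_cost=8_000.0):
    # Integrated indicator expressed as expected financial loss (risk):
    # probability of failure times the financial consequence of that failure.
    return fault_probability(vibration_rms) * (downtime_cost + repair_cost)

# Operators can then set an alarm threshold in monetary terms, e.g.:
alarm_threshold = 20_000.0
needs_maintenance = health_indicator(3.5) > alarm_threshold
```

Expressing the indicator in monetary units is what lets a plant choose maintenance timing and alarm thresholds from its own economics rather than from a generic vibration limit.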
MRK 1216 & NGC 1277 - An orbit-based dynamical analysis of compact, high velocity dispersion galaxies
We present a dynamical analysis to infer the structural parameters and
properties of the two nearby, compact, high velocity dispersion galaxies
MRK1216 & NGC1277. Combining deep HST imaging, wide-field IFU stellar
kinematics, and complementary long-slit spectroscopic data out to 3 R_e, we
construct orbit-based models to constrain their black hole masses, dark matter
content and stellar mass-to-light ratios. We obtain a black hole mass of
log(Mbh/Msun) = 10.1(+0.1/-0.2) for NGC1277 and an upper limit of log(Mbh/Msun)
= 10.0 for MRK1216, within 99.7 per cent confidence. The stellar mass-to-light
ratios span a range of Upsilon_V = 6.5(+1.5/-1.5) in NGC1277 and Upsilon_H =
1.8(+0.5/-0.8) in MRK1216 and are in good agreement with SSP models of a single
power-law Salpeter IMF. Even though our models do not place strong constraints
on the dark halo parameters, they suggest that dark matter is a necessary
ingredient in MRK1216, with a dark matter contribution of 22(+30/-20) per cent
to the total mass budget within 1 R_e. NGC1277, on the other hand, can be
reproduced without the need for a dark halo, with a maximal dark matter
fraction of 13 per cent within the same radial extent. In addition, we
investigate the
orbital structures of both galaxies, which are rotationally supported and
consistent with photometric multi-Sérsic decompositions, indicating that
these compact objects do not host classical, non-rotating bulges formed during
recent (z <= 2) dissipative events or through violent relaxation. Finally, both
MRK 1216 and NGC 1277 are anisotropic, with a global anisotropy parameter delta
of 0.33 and 0.58, respectively. While MRK 1216 follows the trend of
fast-rotating, oblate galaxies with a flattened velocity dispersion tensor in
the meridional plane of the order of beta_z = delta, NGC 1277 is highly
tangentially anisotropic and seems to belong kinematically to a distinct class
of objects.
Comment: 27 pages, 15 figures and 4 tables. Accepted for publication in MNRA
Lasso Monte Carlo, a Novel Method for High Dimensional Uncertainty Quantification
Uncertainty quantification (UQ) is an active area of research, and an
essential technique used in all fields of science and engineering. The most
common methods for UQ are Monte Carlo and surrogate-modelling. The former
method is dimensionality independent but has slow convergence, while the latter
method has been shown to yield large computational speedups with respect to
Monte Carlo. However, surrogate models suffer from the so-called curse of
dimensionality, and become costly to train for high-dimensional problems, where
UQ might become computationally prohibitive. In this paper we present a new
technique, Lasso Monte Carlo (LMC), which combines surrogate models and the
multilevel Monte Carlo technique, in order to perform UQ in high-dimensional
settings, at a reduced computational cost. We provide mathematical guarantees
for the unbiasedness of the method, and show that LMC can converge faster than
simple Monte Carlo. The theory is numerically tested with benchmarks on toy
problems, as well as on a real example of UQ from the field of nuclear
engineering. In all presented examples LMC converges faster than simple Monte
Carlo, and computational costs are reduced by more than a factor of 5 in some
cases.
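The two-level structure of such an estimator can be illustrated with a toy sketch (not the paper's implementation): a cheap surrogate evaluated on many samples supplies the bulk of the estimate, and a small number of expensive model runs estimate the correction term, so the combination stays unbiased since E[s] + E[f - s] = E[f]. The model, surrogate, and sample sizes below are invented stand-ins.

```python
import random
import statistics

random.seed(0)

d = 50          # nominal input dimension
n_cheap = 5000  # many cheap surrogate evaluations
n_corr = 100    # few expensive model evaluations for the correction term

def f(x):
    # Toy stand-in for an expensive high-dimensional model: only the first
    # three inputs matter much, mimicking effective low dimensionality.
    return sum(xi**2 for xi in x[:3]) + 0.01 * sum(x[3:])

def surrogate(x):
    # Cheap surrogate (e.g. built on lasso-selected inputs) that captures
    # the dominant part of f but ignores the weak tail dimensions.
    return sum(xi**2 for xi in x[:3])

def draw(n):
    return [[random.uniform(0.0, 1.0) for _ in range(d)] for _ in range(n)]

# Level 0: mean of the surrogate over many cheap samples.
level0 = statistics.mean(surrogate(x) for x in draw(n_cheap))
# Level 1: correction term E[f - surrogate], estimated on few expensive runs.
level1 = statistics.mean(f(x) - surrogate(x) for x in draw(n_corr))

# Unbiased by construction: E[s] + E[f - s] = E[f].
lmc_estimate = level0 + level1
```

Because f - surrogate has much smaller variance than f itself, the correction converges with far fewer expensive evaluations than a plain Monte Carlo estimate of E[f] would need.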
DIRA: Dynamic Domain Incremental Regularised Adaptation
Autonomous systems (AS) often use Deep Neural Network (DNN) classifiers to
allow them to operate in complex, high-dimensional, non-linear, and dynamically
changing environments. Due to the complexity of these environments, DNN
classifiers may output misclassifications during operation when they face
domains not identified during development. Removing a system from operation for
retraining becomes impractical as the number of such AS increases. To increase
AS reliability and overcome this limitation, DNN classifiers need to have the
ability to adapt during operation when faced with different operational domains
using a few samples (e.g. 100 samples). However, retraining DNNs on a few
samples is known to cause catastrophic forgetting. In this paper, we introduce
Dynamic Incremental Regularised Adaptation (DIRA), a framework for operational
domain adaptation of DNN classifiers using regularisation techniques to overcome
catastrophic forgetting and achieve adaptation when retraining using a few
samples of the target domain. Our approach shows improvements on different
image classification benchmarks aimed at evaluating robustness to distribution
shifts (e.g. CIFAR-10C/100C, ImageNet-C), and produces state-of-the-art
performance in comparison with other frameworks from the literature.
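The core mechanism — retraining on a few target-domain samples while a regulariser anchors the weights to their source-domain values — can be sketched as follows. The tiny logistic model, the invented target samples, and the plain L2-to-source penalty are illustrative stand-ins, not DIRA's actual networks or regulariser.

```python
import math

# Source-domain weights, assumed learned at development time (hypothetical).
w_src = [1.0, -0.5]

def predict(w, x):
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# A handful of labelled samples from the shifted target domain (invented).
target = [([1.2, 0.1], 1), ([0.9, 0.3], 1), ([0.2, 0.8], 1),
          ([-1.1, 0.2], 0), ([-0.8, -0.4], 0)]

def adapt(w_src, data, lam=0.5, lr=0.1, steps=300):
    # Gradient descent on cross-entropy + lam * ||w - w_src||^2.
    # The quadratic anchor penalises drift from the source weights,
    # mitigating catastrophic forgetting when target data is scarce.
    w = list(w_src)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, y in data:
            err = predict(w, x) - y
            for j, xj in enumerate(x):
                grad[j] += err * xj / len(data)
        for j in range(len(w)):
            grad[j] += 2.0 * lam * (w[j] - w_src[j])
            w[j] -= lr * grad[j]
    return w

w_adapted = adapt(w_src, target)
```

The strength `lam` trades off plasticity against retention: a large value keeps the classifier close to its development-time behaviour, a small value lets the few target samples dominate.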
Can data reliability of low-cost sensor devices for indoor air particulate matter monitoring be improved? An approach using machine learning
Poor indoor air quality has adverse health impacts. Children are considered a risk group, and they spend a significant amount of time indoors at home and in schools. Air quality monitoring has traditionally been limited by the cost and size of monitoring stations. Recent advancements in low-cost sensor technology allow for economical, scalable and real-time monitoring, which is especially helpful in indoor environments, as they are prone to sudden peaks in pollutant concentrations. However, data reliability is still a considerable challenge to overcome in low-cost sensor technology. Thus, following a monitoring campaign in a nursery and primary school in the Porto urban area, the present study analyzed the performance of three commercially available low-cost IoT devices for indoor air quality monitoring in real-world conditions against a research-grade device used as a reference, and developed regression models to improve their reliability. This paper also presents the on-field calibration models developed via machine learning techniques using multiple linear regression, support vector regression, and gradient boosting regression algorithms, and focuses on particulate matter (PM1, PM2.5, PM10) data collected by the devices. The performance evaluation results showed poor detection of particulates in classrooms by the low-cost devices compared to the reference. The on-field calibration algorithms showed a considerable improvement in all three devices' accuracy (reaching up to R2 > 0.9) for the light-scattering-based particulate matter sensors. The results also showed different performance of the low-cost devices in the lunchroom compared to the classrooms of the same school building, indicating the need for calibration in different microenvironments.
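A minimal version of such an on-field calibration — ordinary least squares mapping raw low-cost readings to co-located reference values, the simplest of the regression families mentioned — can be sketched as follows; the paired readings are invented for illustration.

```python
# Hypothetical paired PM2.5 readings (ug/m3): raw low-cost sensor vs
# research-grade reference; values are invented for illustration.
sensor    = [8.0, 12.0, 15.0, 20.0, 30.0, 42.0]
reference = [10.0, 16.0, 20.0, 27.0, 41.0, 58.0]

n = len(sensor)
mx = sum(sensor) / n
my = sum(reference) / n

# Ordinary least-squares fit: reference ~= slope * sensor + intercept.
slope = (sum((x - mx) * (y - my) for x, y in zip(sensor, reference))
         / sum((x - mx) ** 2 for x in sensor))
intercept = my - slope * mx

def calibrate(raw):
    # Corrected estimate for a new raw low-cost reading.
    return slope * raw + intercept
```

In practice the fit would be repeated per device and per microenvironment, and the R2 on held-out data is what the study reports as the improvement metric.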
Supervised machine learning algorithms for ground motion time series classification from InSAR data
The increasing availability of Synthetic Aperture Radar (SAR) images facilitates the generation of rich Differential Interferometric SAR (DInSAR) data. Temporal analysis of DInSAR products, and in particular deformation Time Series (TS), enables advanced investigations for ground deformation identification. Machine Learning algorithms offer efficient tools for classifying large volumes of data. In this study, we train supervised Machine Learning models using 5000 reference samples of three datasets to classify DInSAR TS into five deformation trends: Stable, Linear, Quadratic, Bilinear, and Phase Unwrapping Error. General statistics and advanced features are also computed from the TS to assess the classification performance. The proposed methods reported accuracy values greater than 0.90, and the customized features significantly increased the performance. In addition, the importance of the customized features was analysed in order to identify the most effective features in TS classification. The proposed models were also tested on 15000 unlabelled samples and compared to a model-based method to validate their reliability. Random Forest and Extreme Gradient Boosting could accurately classify reference samples and positively assign correct labels to random samples. This study indicates the efficiency of Machine Learning models in the classification and management of DInSAR TS, along with the shortcomings of the proposed models in the classification of non-moving targets (i.e., false alarm rate) and a decreasing accuracy for shorter TS.
This work is part of the Spanish Grant SARAI, PID2020-116540RB-C21, funded by MCIN/AEI/10.13039/501100011033. Additionally, it has been supported by the European Regional Development Fund (ERDF) through the project "RISKCOAST" (SOE3/P4/E0868) of the Interreg SUDOE Programme, and co-funded by the European Union Civil Protection through the H2020 project RASTOOL (UCPM-2021-PP-101048474).
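The feature-computation step can be illustrated with a small sketch: the trend features below (mean velocity, curvature, residual RMS) are plausible examples of hand-crafted TS features, not the study's actual feature set, and the two series are synthetic.

```python
def ts_features(ts):
    # General statistics plus simple hand-crafted trend features for one
    # deformation time series (regular sampling assumed).
    n = len(ts)
    mean = sum(ts) / n
    tbar = (n - 1) / 2
    # Least-squares linear trend: mean deformation velocity per epoch.
    slope = (sum((t - tbar) * (y - mean) for t, y in enumerate(ts))
             / sum((t - tbar) ** 2 for t in range(n)))
    # Mean second difference: a crude curvature/acceleration feature.
    curv = sum(ts[i + 1] - 2 * ts[i] + ts[i - 1] for i in range(1, n - 1)) / (n - 2)
    # RMS residual around the linear trend; large values can flag
    # phase-unwrapping errors or strongly non-linear motion.
    resid = (sum((y - (mean + slope * (t - tbar))) ** 2
                 for t, y in enumerate(ts)) / n) ** 0.5
    return {"mean": mean, "velocity": slope, "curvature": curv, "rms_resid": resid}

# Synthetic examples of a stable and a linearly deforming target (mm).
stable = [0.1, -0.2, 0.0, 0.15, -0.1, 0.05, 0.0, -0.05]
linear = [0.0, -2.0, -4.1, -5.9, -8.0, -10.1, -12.0, -13.9]
```

Feature vectors of this kind, rather than the raw TS, are what a Random Forest or gradient-boosting classifier would be trained on.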
Machine learning for the prediction of psychosocial outcomes in acquired brain injury
Acquired brain injury (ABI) can be a life-changing condition, affecting housing, independence, and employment. Machine learning (ML) is increasingly used as a method to predict ABI outcomes; however, improper model evaluation can bias initially promising findings (Chapter One). This study aimed to evaluate, with transparent reporting, three common ML classification methods. Regularised logistic regression with elastic net, random forest and linear-kernel support vector machine were compared with unregularised logistic regression to predict good psychosocial outcomes after discharge from ABI inpatient neurorehabilitation, using routine cognitive, psychometric and clinical admission assessments. Outcomes were selected on the basis of decision-making for care packages: accommodation status, functional participation, supervision needs, occupation and quality of life. The primary outcome was accommodation (n = 164), with models internally validated using repeated nested cross-validation. Random forest was statistically superior to logistic regression for every outcome, with areas under the receiver operating characteristic curve (AUC) ranging from 0.81 (95% confidence interval 0.77-0.85) for the primary outcome of accommodation down to 0.72 (0.69-0.76) for predicting occupation status. The worst-performing ML algorithm was support vector machine, which was statistically superior to logistic regression for only one outcome, supervision needs, with an AUC of 0.75 (0.71-0.80). Unregularised logistic regression models were poorly calibrated compared to the ML models, indicating severe overfitting, and are unlikely to perform well in new samples. Overall, ML can predict psychosocial outcomes from routine psychosocial admission data better than other statistical methods typically used by psychologists.
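Nested cross-validation, used here for internal validation, separates hyperparameter selection (inner folds) from performance estimation (outer folds) so the reported score is not inflated by tuning. A minimal single-repeat sketch, with a toy threshold classifier and synthetic data standing in for the study's models and assessments:

```python
import random

random.seed(1)

# Synthetic 1-D dataset: a score drawn around the class label (0 or 1),
# standing in for an admission assessment predicting a binary outcome.
data = [(random.gauss(y, 0.8), y) for y in [0] * 50 + [1] * 50]
random.shuffle(data)

def accuracy(samples, t):
    return sum((x > t) == bool(y) for x, y in samples) / len(samples)

def folds(samples, k):
    return [samples[i::k] for i in range(k)]

def nested_cv(samples, thresholds, k_outer=5, k_inner=4):
    outer = folds(samples, k_outer)
    scores = []
    for i, test in enumerate(outer):
        train = [s for j, f in enumerate(outer) if j != i for s in f]
        inner = folds(train, k_inner)
        # Inner CV: pick the hyperparameter (here, a decision threshold)
        # using the training portion only.
        best_t = max(thresholds,
                     key=lambda t: sum(accuracy(f, t) for f in inner))
        # Outer fold: estimate performance with the chosen hyperparameter
        # on data never seen during selection.
        scores.append(accuracy(test, best_t))
    return sum(scores) / len(scores)

est = nested_cv(data, thresholds=[-0.5, 0.0, 0.5, 1.0, 1.5])
```

Repeating the whole procedure with different shuffles, as the study does, reduces the variance of the outer estimate.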