Security and Privacy Problems in Voice Assistant Applications: A Survey
Voice assistant applications have become ubiquitous nowadays. Two kinds of
models provide the core functions of real-life applications such as Google
Home, Amazon Alexa, and Siri: Automatic Speech Recognition (ASR) models and
Speaker Identification (SI) models. According to recent studies,
security and privacy threats have also emerged with the rapid development of
the Internet of Things (IoT). The security issues researched include attack
techniques against machine learning models and the hardware components widely
used in voice assistant applications. The privacy issues include technical-wise
information stealing and policy-wise privacy breaches. Voice assistant
applications take a steadily growing market share every year, yet their
privacy and security issues continue to cause substantial economic losses and
to endanger users' sensitive personal information. Thus, it is important to have a
comprehensive survey to outline the categorization of the current research
regarding the security and privacy problems of voice assistant applications.
This paper summarizes and assesses five kinds of security attacks and three
types of privacy threats reported in papers published at top-tier conferences
in the cyber security and voice domains.
Comment: 5 figures
Machine Learning Applications in Studying Mental Health Among Immigrants and Racial and Ethnic Minorities: A Systematic Review
Background: The use of machine learning (ML) in mental health (MH) research
is increasing, especially as new, more complex data types become available to
analyze. By systematically examining the published literature, this review aims
to uncover potential gaps in the current use of ML to study MH in vulnerable
populations of immigrants, refugees, migrants, and racial and ethnic
minorities.
Methods: In this systematic review, we queried Google Scholar for ML-related
terms, MH-related terms, and a population-of-focus search term strung together
with Boolean operators. Backward reference searching was also
conducted. Included peer-reviewed studies reported using a method or
application of ML in an MH context and focused on the populations of interest.
We did not have date cutoffs. Publications were excluded if they were narrative
or did not exclusively focus on a minority population from the respective
country. Data including study context, the focus of mental healthcare, sample,
data type, type of ML algorithm used, and algorithm performance were extracted
from each study.
Results: Our search strategies resulted in 67,410 listed articles from Google
Scholar. Ultimately, 12 were included. All the articles were published within
the last 6 years, and half of them studied populations within the US. Most
reviewed studies used supervised learning to explain or predict MH outcomes.
Some publications used up to 16 models to determine the best predictive power.
Almost half of the included publications did not discuss their cross-validation
method.
Conclusions: The included studies, few as they are, provide proof of concept
for the potential use of ML algorithms to address MH concerns in these special
populations. Our systematic review finds that the clinical application of
these models for classifying and predicting MH disorders is still under
development.
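Since almost half of the included publications did not report their cross-validation method, it is worth making concrete what such reporting implies. The fold-splitting scheme and the majority-class baseline below are illustrative assumptions for exposition, not drawn from any reviewed study:

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and deal them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(fit_predict, X, y, k=5):
    """Hold out each fold in turn, train on the rest, return mean accuracy."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for test_idx in folds:
        held_out = set(test_idx)
        train_idx = [j for j in range(len(X)) if j not in held_out]
        preds = fit_predict([X[j] for j in train_idx],
                            [y[j] for j in train_idx],
                            [X[j] for j in test_idx])
        scores.append(sum(p == y[j] for p, j in zip(preds, test_idx)) / len(test_idx))
    return sum(scores) / len(scores)

# Toy baseline classifier: always predict the training fold's majority class.
def majority_baseline(X_train, y_train, X_test):
    majority = max(set(y_train), key=y_train.count)
    return [majority] * len(X_test)
```

Reporting `k`, the shuffling seed, and whether folds were stratified by class is what allows a reader to reproduce the claimed performance.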
Wav2code: Restore Clean Speech Representations via Codebook Lookup for Noise-Robust ASR
Automatic speech recognition (ASR) has achieved remarkable success thanks to
recent advances in deep learning, but it usually degrades significantly under
real-world noisy conditions. Recent works introduce speech enhancement (SE) as
a front-end to improve speech quality, which is proven effective but may not
be optimal for downstream ASR due to the speech distortion problem. Building
on this, the latest works combine SE with currently popular self-supervised
learning (SSL) to alleviate distortion and improve noise robustness. Despite
their effectiveness, the speech distortion caused by conventional SE still
cannot be completely eliminated. In this paper, we propose a self-supervised
framework named
Wav2code to implement a generalized SE without distortions for noise-robust
ASR. First, in the pre-training stage, the clean speech representations from
the SSL model are used to look up a discrete codebook via nearest-neighbor
feature matching; the resulting code sequence is then exploited to reconstruct
the original clean representations, storing them in the codebook as a prior.
Second, during fine-tuning, we propose a Transformer-based code predictor to
accurately predict clean codes by modeling the global dependency of the input
noisy representations, which enables discovery and restoration of high-quality
clean representations without distortions. Furthermore, we propose an
interactive feature fusion network to combine the original noisy and the
restored clean representations to consider both fidelity and quality,
resulting in even more
informative features for downstream ASR. Finally, experiments on both
synthetic and real noisy datasets demonstrate that Wav2code alleviates speech
distortion and improves ASR performance under various noisy conditions,
resulting in stronger robustness.
Comment: 12 pages, 7 figures, submitted to IEEE/ACM TASL
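The nearest-neighbor codebook lookup at the heart of the pre-training stage can be sketched in a few lines. The array shapes and function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to its nearest codebook entry.

    features: (T, D) frame-level representations (e.g., from an SSL model).
    codebook: (K, D) learned discrete prior over clean-speech representations.
    Returns the discrete code sequence (T,) and the quantized features (T, D).
    """
    # Pairwise squared Euclidean distances between frames and code entries.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = d2.argmin(axis=1)
    return codes, codebook[codes]
```

During fine-tuning, a code predictor estimates `codes` directly from noisy input, so the clean entries stored in the codebook replace the distorted representations.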
In-situ crack and keyhole pore detection in laser directed energy deposition through acoustic signal and deep learning
Cracks and keyhole pores are detrimental defects in alloys produced by laser
directed energy deposition (LDED). Laser-material interaction sound may hold
information about underlying complex physical events such as crack propagation
and pore formation. However, due to the noisy environment and intricate signal
content, acoustic-based monitoring in LDED has received little attention. This
paper proposes a novel acoustic-based in-situ defect detection strategy in
LDED. The key contribution of this study is to develop an in-situ acoustic
signal denoising, feature extraction, and sound classification pipeline that
incorporates convolutional neural networks (CNN) for online defect prediction.
Microscope images are used to identify locations of the cracks and keyhole
pores within a part. The defect locations are spatiotemporally registered with
the acoustic signal. Various acoustic features corresponding to defect-free
regions, cracks, and keyhole pores are extracted and analysed in time-domain,
frequency-domain, and time-frequency representations. The CNN model is trained
to predict defect occurrences using the Mel-Frequency Cepstral Coefficients
(MFCCs) of the laser-material interaction sound. The CNN model is compared to
various classic machine learning models trained on the denoised acoustic
dataset and raw acoustic dataset. The validation results show that the CNN
model trained on the denoised dataset outperforms others with the highest
overall accuracy (89%), keyhole pore prediction accuracy (93%), and AUC-ROC
score (98%). Furthermore, the trained CNN model can be deployed into an
in-house developed software platform for online quality monitoring. The
proposed strategy is the first study to use acoustic signals with deep
learning for in-situ defect detection in the LDED process.
Comment: 36 pages, 16 figures, accepted at the journal Additive Manufacturing
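As a rough illustration of the MFCC features fed to such a CNN, the numpy-only sketch below computes MFCC-style coefficients for a single frame: power spectrum, triangular mel filterbank, log compression, then a DCT-II. All sizes are conventional defaults chosen here for exposition; a production extractor (e.g., librosa) additionally applies windowing, pre-emphasis, and framing:

```python
import numpy as np

def mfcc_like(signal, sr=16000, n_fft=512, n_mels=26, n_coeff=13):
    """Toy MFCC-style features from one frame; a teaching sketch only."""
    # Power spectrum of one frame.
    spec = np.abs(np.fft.rfft(signal[:n_fft], n_fft)) ** 2
    # Mel-spaced filter edges between 0 Hz and Nyquist.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_pts = np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    # Triangular mel filterbank.
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    log_energy = np.log(fbank @ spec + 1e-10)
    # DCT-II decorrelates the log filterbank energies into cepstral coefficients.
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeff), 2 * n + 1) / (2 * n_mels))
    return basis @ log_energy
```

A classifier would stack such coefficient vectors over consecutive frames into a 2-D "image" for the CNN input.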
Bayesian networks for disease diagnosis: What are they, who has used them and how?
A Bayesian network (BN) is a probabilistic graph based on Bayes' theorem,
used to show dependencies or cause-and-effect relationships between variables.
BNs are widely applied in diagnostic processes since they allow the
incorporation of medical knowledge into the model while expressing uncertainty in
terms of probability. This systematic review presents the state of the art in
the applications of BNs in medicine in general and in the diagnosis and
prognosis of diseases in particular. Indexed articles from the last 40 years
were included. The studies generally used the typical measures of diagnostic
and prognostic accuracy: sensitivity, specificity, accuracy, precision, and the
area under the ROC curve. Overall, we found that disease diagnosis and
prognosis based on BNs can be successfully used to model complex medical
problems that require reasoning under conditions of uncertainty.
Comment: 22 pages, 5 figures, 1 table, student PhD first paper
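The core inference step of such a diagnostic BN is Bayes' theorem applied at a single evidence node. A minimal sketch, with a hypothetical function name and test characteristics:

```python
def posterior_disease(prior, sensitivity, specificity, test_positive=True):
    """P(disease | test result) via Bayes' theorem.

    prior:       P(disease) before testing (prevalence).
    sensitivity: P(test positive | disease).
    specificity: P(test negative | no disease).
    """
    if test_positive:
        p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
        return sensitivity * prior / p_pos
    p_neg = (1 - sensitivity) * prior + specificity * (1 - prior)
    return (1 - sensitivity) * prior / p_neg
```

For a disease with 1% prevalence and a test with 90% sensitivity and 95% specificity, a single positive result raises the posterior only to about 15%, which is precisely why diagnostic BNs chain several evidence nodes together.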
Semantic Segmentation Enhanced Transformer Model for Human Attention Prediction
Saliency Prediction aims to predict the attention distribution of human eyes
given an RGB image. Most of the recent state-of-the-art methods are based on
deep image feature representations from traditional CNNs. However, traditional
convolution cannot capture the global features of an image well due to its
small kernel size. Moreover, high-level factors that closely correlate with
human visual perception, e.g., objects, color, and light, are not considered.
Inspired by these observations, we propose a Transformer-based method with
semantic segmentation as an auxiliary learning objective. More global cues of
the image can be captured by the Transformer. In addition, simultaneously
learning object segmentation simulates human visual perception, which we
verify through our investigation of human gaze control in cognitive science. We
build an extra decoder for the subtask and the multiple tasks share the same
Transformer encoder, forcing it to learn from multiple feature spaces. We find
in practice that simply adding the subtask might confuse the main task's
learning, so a Multi-task Attention Module is proposed to handle the feature
interaction between the multiple learning targets. Our method achieves
competitive performance compared to other state-of-the-art methods.
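One simple way to realize feature interaction between task heads is a learned per-channel gate on the shared encoder output. The sigmoid-gated sketch below is a generic stand-in for illustration, not the paper's Multi-task Attention Module:

```python
import numpy as np

def gated_task_features(shared, w_gate):
    """Reweight shared encoder features for one task head.

    shared: (T, C) token features from the shared Transformer encoder.
    w_gate: (C, C) learned projection producing a per-channel gate.
    The sigmoid gate in (0, 1) suppresses channels that would otherwise
    let one task's objective interfere with the other's.
    """
    gate = 1.0 / (1.0 + np.exp(-(shared @ w_gate)))
    return shared * gate
```

Each decoder (saliency, segmentation) would receive its own gated view of the same encoder output, which is one way "feature interaction" between targets can be controlled.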
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already outpaced
the best human-designed architectures on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
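The simplest NAS algorithm, and a standard baseline in the literature, is random search over the search space. The cell choices and scoring function below are hypothetical placeholders; in practice `evaluate` would train or estimate the accuracy of the sampled architecture:

```python
import random

def random_search(search_space, evaluate, n_trials=20, seed=0):
    """Random-search NAS baseline: sample architectures uniformly from the
    search space and keep the best-scoring one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: rng.choice(v) for k, v in search_space.items()}
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Hypothetical search space and a stand-in proxy score (not a real trainer).
space = {"depth": [2, 4, 8],
         "width": [16, 32, 64],
         "op": ["conv3x3", "sep_conv", "skip"]}
proxy = lambda a: a["depth"] * 0.1 + a["width"] * 0.01 - (a["op"] == "skip")
```

More sophisticated algorithms (evolutionary search, weight-sharing one-shot methods, Bayesian optimization) mainly differ in how the next architecture to evaluate is chosen and how cheaply `evaluate` can be approximated.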
Path integrals and stochastic calculus
Path integrals are a ubiquitous tool in theoretical physics. However, their
use is sometimes hindered by the lack of control on various manipulations --
such as performing a change of the integration path -- one would like to carry
out in the light-hearted fashion that physicists enjoy. Similar issues arise in
the field of stochastic calculus, which we review to prepare the ground for a
proper construction of path integrals. At the level of path integration, and in
arbitrary space dimension, we not only report on existing Riemannian
geometry-based approaches that render path integrals amenable to the standard
rules of calculus, but also bring forth new routes, based on a fully
time-discretized approach, that achieve the same goal. We illustrate these
various definitions of path integration on simple examples such as the
diffusion of a particle on a sphere.
Comment: 96 pages, 4 figures. New title, expanded introduction and additional
references. Version accepted in Advances in Physics
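The fully time-discretized viewpoint can be made concrete with the simplest Ito scheme for a stochastic differential equation dX = a(X) dt + b(X) dW. The Ornstein-Uhlenbeck usage example is illustrative and not taken from the paper:

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, T=1.0, n=1000, seed=0):
    """Euler-Maruyama discretization of dX = drift(X) dt + sigma(X) dW.

    This Ito-convention scheme is the elementary time-discretized object
    on which path-integral constructions are built; changing where the
    coefficients are evaluated within a step changes the convention.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment, var = dt
        x[i + 1] = x[i] + drift(x[i]) * dt + sigma(x[i]) * dW
    return x

# Ornstein-Uhlenbeck process: linear mean-reverting drift, constant noise.
path = euler_maruyama(lambda x: -x, lambda x: 1.0, x0=5.0, T=5.0, n=5000)
```

Evaluating the drift at the midpoint of each step instead (Stratonovich-like discretization) is exactly the kind of choice whose consequences the review makes precise.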
A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting. This could benefit complex sectors which only have scarce data to predict business viability.
To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks that form a risk taxonomy. Labour was the most commonly reported top challenge. Therefore, research was conducted to explore lean principles to improve productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving economic estimation accuracy. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve the economic viability of both the UK and Japan cases.
The environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when considering the land-use change.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
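The idea of propagating uncertain inputs instead of point estimates can be illustrated with a plain Monte Carlo sketch, a simple stand-in for the imprecise-data techniques the thesis proposes. Every range and cost category below is a hypothetical placeholder, not data from the thesis:

```python
import numpy as np

def profit_distribution(n=10000, seed=0):
    """Monte Carlo sketch of a risk-aware viability estimate for a vertical
    farm: uncertain per-m^2 inputs are drawn from ranges (all figures
    hypothetical) and propagated to an annual profit distribution."""
    rng = np.random.default_rng(seed)
    yield_kg = rng.uniform(80, 120, n)      # annual crop yield per m^2
    price = rng.uniform(8, 12, n)           # selling price per kg
    energy_cost = rng.uniform(300, 500, n)  # electricity + HVAC per m^2
    labour_cost = rng.uniform(250, 450, n)  # labour per m^2 (top reported risk)
    return yield_kg * price - energy_cost - labour_cost

profits = profit_distribution()
p_loss = float((profits < 0).mean())  # estimated probability of losing money
```

Reporting a loss probability and profit quantiles, rather than a single point forecast, is what makes the business plan "risk-empowered" in the sense the framework intends.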
Assessing performance of artificial neural networks and re-sampling techniques for healthcare datasets.
Re-sampling methods to solve class imbalance problems have been shown to improve classification accuracy by mitigating the bias introduced by differences in class size. However, a model which uses a specific re-sampling technique prior to artificial neural network (ANN) training may not be suitable for classifying varied datasets from the healthcare industry. Five healthcare-related datasets were used across three re-sampling conditions: under-sampling, over-sampling and combi-sampling. Within each condition, different algorithmic approaches were applied to the dataset and the results were statistically analysed for a significant difference in ANN performance. The combi-sampling condition showed that four out of the five datasets did not exhibit significant consistency in the optimal re-sampling technique between the f1-score and Area Under the Receiver Operating Characteristic Curve performance evaluation methods. In contrast, the over-sampling and under-sampling conditions showed that all five datasets put forward the same optimal algorithmic approach across performance evaluation methods. Furthermore, the optimal combi-sampling technique (under-sampling, over-sampling and convergence point) was found to be consistent across evaluation measures in only two of the five datasets. This study exemplifies how discrete ANN performances on datasets from the same industry can occur in two ways: the same re-sampling technique can generate varying ANN performance on different datasets, and different re-sampling techniques can generate varying ANN performance on the same dataset.
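Random over- and under-sampling, two of the conditions studied, can be sketched directly; the function below is a generic illustration (dedicated libraries such as imbalanced-learn offer these and the combined schemes):

```python
import numpy as np

def resample(X, y, mode="over", seed=0):
    """Random over-/under-sampling for an imbalanced labelled dataset.

    mode="over":  duplicate minority-class rows (sampling with replacement)
                  until every class matches the largest class size.
    mode="under": drop majority-class rows (sampling without replacement)
                  until every class matches the smallest class size.
    """
    rng = np.random.default_rng(seed)
    labels, counts = np.unique(y, return_counts=True)
    target = counts.max() if mode == "over" else counts.min()
    idx = []
    for lab, cnt in zip(labels, counts):
        cls = np.flatnonzero(y == lab)
        idx.append(rng.choice(cls, size=target, replace=(cnt < target)))
    idx = np.concatenate(idx)
    return X[idx], y[idx]
```

Because the choice of technique changes the training distribution, any reported ANN performance only makes sense alongside the re-sampling condition it was measured under, which is the study's central point.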