Security and Privacy Problems in Voice Assistant Applications: A Survey
Voice assistant applications have become ubiquitous. The two models that
provide the most important functions in real-life products (e.g., Google Home,
Amazon Alexa, and Siri) are Automatic Speech Recognition (ASR) models and
Speaker Identification (SI) models. Recent studies show that security and
privacy threats have emerged alongside the rapid development of the Internet of
Things (IoT). The security issues studied include attacks on machine learning
models and on hardware components widely used in voice assistant applications.
The privacy issues include technical information stealing and policy-level
privacy breaches. Voice assistant applications take a steadily growing market
share every year, yet their privacy and security flaws continue to cause
substantial economic losses and endanger users' sensitive personal information.
A comprehensive survey is therefore needed to categorize current research on
the security and privacy problems of voice assistant applications. This paper
summarizes and assesses five kinds of security attacks and three types of
privacy threats reported in papers published at top-tier venues in cyber
security and the voice domain.
Evaluation of image quality and reconstruction parameters in recent PET-CT and PET-MR systems
In this PhD dissertation, we evaluate the impact of using different PET isotopes on
the National Electrical Manufacturers Association (NEMA) performance evaluation of the
GE Signa integrated PET/MR. The methods were divided into three closely related categories:
NEMA performance measurements, system modelling, and evaluation of the image quality of
state-of-the-art clinical PET scanners. NEMA performance measurements for
characterizing spatial resolution, sensitivity, image quality, the accuracy of attenuation and
scatter corrections, and noise equivalent count rate (NECR) were performed using clinically
relevant and commercially available radioisotopes. Then we modelled the GE Signa integrated
PET/MR system using a realistic GATE Monte Carlo simulation and validated it with the result of
the NEMA measurements (sensitivity and NECR). Next, the effect of the 3T MR field on the
positron range was evaluated for F-18, C-11, O-15, N-13, Ga-68 and Rb-82. Finally, to evaluate the image
quality of the state-of-the-art clinical PET scanners, a noise reduction study was performed
using a Bayesian Penalized-Likelihood reconstruction algorithm on a time-of-flight PET/CT
scanner to investigate whether and to what extent noise can be reduced. The outcome of this
thesis will allow clinicians to reduce the PET dose which is especially relevant for young
patients. In addition, the Monte Carlo simulation platform for PET/MR developed for this thesis will
allow physicists and engineers to better understand and design integrated PET/MR systems.
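For reference, the noise equivalent count rate (NECR) used in these measurements has a standard definition in the NEMA NU 2 protocol, NECR = T² / (T + S + kR), where T, S, and R are the true, scatter, and random coincidence rates, and k = 2 when randoms are estimated from a delayed coincidence window. A minimal sketch (the count rates below are illustrative, not measured values from this work):

```python
def necr(trues, scatters, randoms, randoms_factor=1.0):
    """Noise equivalent count rate, NECR = T^2 / (T + S + k*R).

    trues, scatters, randoms are coincidence rates (counts/s);
    randoms_factor k is 2.0 when randoms are estimated from a
    delayed coincidence window (which doubles the randoms variance).
    """
    total = trues + scatters + randoms_factor * randoms
    return trues ** 2 / total

# Illustrative rates, not measured values:
print(necr(100_000.0, 40_000.0, 60_000.0))        # k = 1 -> 50000.0
print(necr(100_000.0, 40_000.0, 60_000.0, 2.0))   # k = 2, lower NECR
```

Because NECR scales with T² over total counts, it peaks at some activity concentration and then falls as randoms dominate, which is why it is reported as a curve in NEMA tests.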
Forested buffers in agricultural landscapes : mitigation effects on stream–riparian meta-ecosystems
Stream–riparian meta-ecosystems are strongly connected through exchanges of
energy, material and organisms. Land use can disrupt ecological connectivity by
affecting community composition directly and/or indirectly by altering the instream
and riparian habitats that support biological structure and function. Although
forested riparian buffers are increasingly used as a management intervention, our
understanding of their effects on the functioning of stream–riparian meta-ecosystems
is limited. This study assessed patterns in the longitudinal and lateral
profiles of streams in modified landscapes across Europe and Sweden using a paired-reach
approach, with upstream unbuffered reaches lacking woody riparian
vegetation and with downstream reaches having well-developed forested buffers.
The presence of buffers was positively associated with stream ecological status as
well as important attributes, which included instream shading and the provision of
suitable habitats for instream and riparian communities, thus supporting more
aquatic insects (especially EPT taxa: Ephemeroptera, Plecoptera and Trichoptera). Emergence of aquatic insects is particularly
important because they mediate reciprocal flows of subsidies into terrestrial systems.
Results of fatty acid analysis and prey DNA from spiders further supported the
importance of buffers in providing more aquatic-derived quality food (i.e. essential
fatty acids) for riparian spiders. Findings presented in this thesis show that buffers
contribute to the strengthening of cross-ecosystem connectivity and have the
potential to affect a wide range of consumers in modified landscapes.
Machine Learning for Gravitational-Wave Astronomy: Methods and Applications for High-Dimensional Laser Interferometry Data
Gravitational-wave astronomy is an emerging field in observational astrophysics concerned with the study of gravitational signals proposed to exist nearly a century ago by Albert Einstein but only recently confirmed to exist. Such signals were theorized to result from astronomical events such as the collisions of black holes, but they were long thought to be too faint to measure on Earth. In recent years, the construction of extremely sensitive detectors—including the Laser Interferometer Gravitational-Wave Observatory (LIGO) project—has enabled the first direct detections of these gravitational waves, corroborating the theory of general relativity and heralding a new era of astrophysics research.
As a result of their extraordinary sensitivity, the instruments used to study gravitational waves are also subject to noise that can significantly limit their ability to detect the signals of interest with sufficient confidence. The detectors continuously record more than 200,000 time series of auxiliary data describing the state of a vast array of internal components and sensors, the environmental state in and around the detector, and so on. This data offers significant value for understanding the nearly innumerable potential sources of noise and ultimately reducing or eliminating them, but it is clearly impossible to monitor, let alone understand, so much information manually. The field of machine learning offers a variety of techniques well-suited to problems of this nature.
In this thesis, we develop and present several machine learning–based approaches to automate the process of extracting insights from the vast, complex collection of data recorded by LIGO detectors. We introduce a novel problem formulation for transient noise detection and show for the first time how an efficient and interpretable machine learning method can accurately identify detector noise using all of these auxiliary data channels but without observing the noise itself. We present further work employing more sophisticated neural network–based models, demonstrating how they can reduce error rates by over 60% while also providing LIGO scientists with interpretable insights into the detector’s behavior. We also illustrate the methods’ utility by demonstrating their application to a specific, recurring type of transient noise; we show how we can achieve a classification accuracy of over 97% while also independently corroborating the results of previous manual investigations into the origins of this type of noise.
The methods and results presented in the following chapters are applicable not only to the specific gravitational-wave data considered but also to a broader family of machine learning problems involving prediction from similarly complex, high-dimensional data containing only a few relevant components in a sea of irrelevant information. We hope this work proves useful to astrophysicists and other machine learning practitioners seeking to better understand gravitational waves, extremely complex and precise engineered systems, or any of the innumerable extraordinary phenomena of our civilization and universe.
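The transient-noise detection task described above can be caricatured as follows: given many auxiliary channels, most of them irrelevant, find the few that are informative about a glitch label. This toy sketch is not the thesis's method (which uses interpretable classifiers and neural networks over hundreds of thousands of channels); the channel names and data below are invented, and a simple absolute correlation stands in for the learned model:

```python
import math

def channel_scores(channels, labels):
    """Return {name: |correlation with labels|}, a crude measure of
    how informative each auxiliary channel is about the noise label."""
    n = len(labels)
    mean_y = sum(labels) / n
    var_y = sum((y - mean_y) ** 2 for y in labels)
    scores = {}
    for name, xs in channels.items():
        mean_x = sum(xs) / n
        var_x = sum((x - mean_x) ** 2 for x in xs)
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, labels))
        denom = math.sqrt(var_x * var_y)
        scores[name] = abs(cov / denom) if denom else 0.0
    return scores

labels = [0, 0, 1, 1, 0, 1, 0, 1]  # was a glitch present in this segment?
channels = {
    # Invented examples: one channel co-varies with the glitches,
    # one is unrelated environmental monitoring.
    "seismometer_x": [0.1, 0.2, 0.9, 1.1, 0.2, 1.0, 0.1, 0.9],
    "hvac_temp":     [21.0, 21.1, 21.1, 21.0, 21.0, 21.1, 21.1, 21.0],
}
scores = channel_scores(channels, labels)
best = max(scores, key=scores.get)
print(best)  # seismometer_x
```

Ranking channels by such scores is only a stand-in for the interpretable models in the thesis, but it conveys why auxiliary data can localize a noise source without ever observing the strain channel itself.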
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
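The per-mode statistics idea mentioned above can be sketched as follows. This is not the thesis's actual module; the values and mode names are invented. The point is simply that each sample is normalized with the mean and standard deviation of its own mode rather than pooled statistics over the mixed batch:

```python
import math

def normalize_per_mode(values, modes, eps=1e-5):
    """Normalize each value with the mean/std of its own mode,
    instead of pooled statistics over the whole (mixed) batch."""
    stats = {}
    for m in set(modes):
        xs = [v for v, mm in zip(values, modes) if mm == m]
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        stats[m] = (mean, math.sqrt(var + eps))
    return [(v - stats[m][0]) / stats[m][1] for v, m in zip(values, modes)]

# Invented intensities from two hypothetical visual modes: dark
# "cartoon" samples and bright "photo" samples.
values = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
modes = ["cartoon"] * 3 + ["photo"] * 3
out = normalize_per_mode(values, modes)
# Each mode is now centred on zero, so the offset between modes
# no longer dominates the normalized values, unlike pooled
# batch-norm statistics applied to the mixed batch.
```

With pooled statistics, the cartoon/photo offset would survive normalization and every downstream layer would have to model it; per-mode statistics remove that shift at the normalization step.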
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent ideas have focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
Annals [...].
Pedometrics: innovation in the tropics; Legacy data: how to make it useful?; Advances in soil sensing; Pedometric guidelines for systematic soil surveys. Online event. Coordinated by: Waldir de Carvalho Junior, Helena Saraiva Koenow Pinheiro, Ricardo Simão Diniz Dalmolin.
Optimizing transcriptomics to study the evolutionary effect of FOXP2
The field of genomics was established with the sequencing of the human genome, a pivotal achievement that has allowed us to address various questions in biology from a unique perspective. One question in particular, that of the evolution of human speech, has gripped philosophers, evolutionary biologists, and now genomicists. However, little is known of the genetic basis that allowed humans to evolve the ability to speak. Of the few genes implicated in human speech, one of the most studied is FOXP2, which encodes for the transcription factor Forkhead box protein P2 (FOXP2). FOXP2 is essential for proper speech development and two mutations in the human lineage are believed to have contributed to the evolution of human speech. To address the effect of FOXP2 and investigate its evolutionary contribution to human speech, one can utilize the power of genomics, more specifically gene expression analysis via ribonucleic acid sequencing (RNA-seq).
To this end, I first contributed to developing mcSCRB-seq, a highly sensitive, powerful, and efficient single-cell RNA-seq (scRNA-seq) protocol. scRNA-seq had emerged as a central method for studying cellular heterogeneity and identifying cellular processes, but it lacked the sensitivity and cost-efficiency of more established protocols. By systematically evaluating each step of the process, I helped find that the addition of polyethylene glycol increased sensitivity by enhancing the cDNA synthesis reaction. This, along with other optimizations, resulted in a sensitive and flexible protocol that is cost-efficient and ideal in many research settings.
A primary motivation driving the extensive optimizations surrounding single-cell transcriptomics has been the generation of cellular atlases, which aim to identify and characterize all of the cells in an organism. As such efforts are carried out by a variety of research groups using a number of different RNA-seq protocols, I contributed to an effort to benchmark and standardize scRNA-seq methods. This not only identified methods that may be ideal for the purpose of cell atlas creation, but also highlighted optimizations that could be integrated into existing protocols.
Using mcSCRB-seq as a foundation, together with the findings from the scRNA-seq benchmarking, I helped develop prime-seq, a sensitive, robust, and, most importantly, affordable bulk RNA-seq protocol. Bulk RNA-seq was frequently overlooked during the efforts to optimize and establish single-cell techniques, even though the method is still extensively used for analyzing gene expression. Introducing early barcoding and reducing library generation costs kept prime-seq cost-efficient, while basing it on single-cell methods ensured that it would be a sensitive and powerful technique. I helped verify this by benchmarking it against TruSeq-generated data and then helped test its robustness by generating prime-seq libraries from over seventeen species. These optimizations resulted in a final protocol that is well suited for investigating gene expression in comprehensive and high-throughput studies.
Finally, I utilized prime-seq in order to develop a comprehensive gene expression atlas to study the function of FOXP2 and its role in speech evolution. I used previously generated mouse models: a knockout model containing one non-functional Foxp2 allele and a humanized model, which has a variant Foxp2 allele with two human-specific mutations. To study the effect globally across the mouse, I helped harvest eighteen tissues which were previously identified to express FOXP2. By then comparing the mouse models to wild-type mice, I helped highlight the importance of FOXP2 within lung development and the importance of the human variant allele in the brain.
Both mcSCRB-seq and prime-seq have already been used and published in numerous studies addressing a variety of biological and biomedical questions. Additionally, my work on FOXP2 not only provides a thorough expression atlas, but also a detailed and cost-efficient plan for undertaking a similar study on other genes of interest. Lastly, the studies on FOXP2 done within this work lay the foundation for future studies investigating the role of FOXP2 in modulating learning behavior, and thereby affecting human speech.
Behavior prediction of traffic actors for intelligent vehicle using artificial intelligence techniques: A review
Intelligent vehicle technology has made tremendous progress thanks to Artificial Intelligence (AI) techniques. Accurate behavior prediction of surrounding traffic actors is essential for the safe and secure navigation of an intelligent vehicle: even minor misbehavior on busy roads may lead to an accident, which motivates current research on vehicle behavior. This article reviews behavior prediction techniques that allow intelligent vehicles to perceive, infer, and anticipate other vehicles' intentions and future actions. It identifies the key AI strategies and methods, emerging trends, datasets, and ongoing research issues in these fields. To the authors' knowledge, this is the first systematic literature review dedicated to vehicle behavior prediction that examines academic literature published in peer-reviewed venues between 2011 and 2021. A systematic review was undertaken to examine these papers, and five primary research questions were addressed. The findings show that AI-based solutions for behavior prediction of traffic actors, particularly those using sophisticated input representations that include traffic rules and road geometry, have shown promising success, especially in complex driving scenarios. Finally, the paper summarizes the most widely used approaches, which the authors believe serve as a foundation for future research on predicting the behavior of surrounding traffic actors for secure and accurate intelligent vehicle navigation.
Reliable Decision-Making with Imprecise Models
The rapid growth in the deployment of autonomous systems across various sectors has generated considerable interest in how these systems can operate reliably in large, stochastic, and unstructured environments. Despite recent advances in artificial intelligence and machine learning, it is challenging to assure that autonomous systems will operate reliably in the open world. One of the causes of unreliable behavior is the impreciseness of the model used for decision-making. Due to the practical challenges in data collection and precise model specification, autonomous systems often operate based on models that do not represent all the details in the environment. Even if the system has access to a comprehensive decision-making model that accounts for all the details in the environment and all possible scenarios the agent may encounter, it may be intractable to solve this complex model optimally. Consequently, this complex, high-fidelity model may be simplified to accelerate planning, introducing imprecision. Reasoning with such imprecise models affects the reliability of autonomous systems. A system's actions may sometimes produce unexpected, undesirable consequences, which are often identified after deployment. How can we design autonomous systems that can operate reliably in the presence of uncertainty and model imprecision?
This dissertation presents solutions to address three classes of model imprecision in a Markov decision process, along with an analysis of the conditions under which bounded performance can be guaranteed. First, an adaptive outcome selection approach is introduced to devise risk-aware reduced models of the environment that efficiently balance the trade-off between model simplicity and fidelity, to accelerate planning in resource-constrained settings. Second, a framework that extends the stochastic shortest-path framework to problems with imperfect information about the goal state during planning is introduced, along with two solution approaches to solve this problem. Finally, two complementary solution approaches are presented to minimize the negative side effects of agent actions. The techniques presented in this dissertation enable an autonomous system to detect and mitigate undesirable behavior without redesigning the model entirely.
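The decision model underlying this line of work is the Markov decision process (MDP), typically solved by value iteration; reduced-model techniques like those above accelerate exactly this kind of computation. A minimal sketch with an invented two-state example (none of the numbers come from the dissertation):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Solve a small MDP by value iteration.

    P[s][a] is a list of (next_state, probability) pairs,
    R[s][a] is the immediate reward; returns (values, policy).
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman backup: best expected one-step return plus
            # discounted value of the successor state.
            q = {a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions}
            best = max(q.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions, key=lambda a: R[s][a] +
                     gamma * sum(p * V[s2] for s2, p in P[s][a]))
              for s in states}
    return V, policy

# Tiny invented example: a "fast" action pays more but risks
# slipping into a bad "risky" state.
states, actions = ["ok", "risky"], ["safe", "fast"]
P = {"ok":    {"safe": [("ok", 1.0)], "fast": [("ok", 0.8), ("risky", 0.2)]},
     "risky": {"safe": [("ok", 1.0)], "fast": [("risky", 1.0)]}}
R = {"ok":    {"safe": 1.0, "fast": 2.0},
     "risky": {"safe": 0.0, "fast": -1.0}}
V, policy = value_iteration(states, actions, P, R)
print(policy)  # {'ok': 'fast', 'risky': 'safe'}
```

Model imprecision of the kind the dissertation studies enters here through `P` and `R`: if the specified transition probabilities or rewards deviate from reality, the computed policy can be confidently wrong, which is what motivates the risk-aware reduced models and side-effect analyses above.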