Unveiling the frontiers of deep learning: innovations shaping diverse domains
Deep learning (DL) enables the development of computer models that are
capable of learning from, visualizing, optimizing, refining, and making predictions about data. In
recent years, DL has been applied in a range of fields, including audio-visual
data processing, agriculture, transportation prediction, natural language,
biomedicine, disaster management, bioinformatics, drug design, genomics, face
recognition, and ecology. To explore the current state of deep learning, it is
necessary to investigate the latest developments and applications of deep
learning in these disciplines. However, the literature is lacking in exploring
the applications of deep learning in all potential sectors. This paper thus
extensively investigates the potential applications of deep learning across all
major fields of study as well as the associated benefits and challenges. As
evidenced in the literature, DL exhibits high accuracy in prediction and
analysis, making it a powerful computational tool, and it can learn and
optimize its own representations, making it effective at processing data
without hand-crafted features. This independence from manual feature
engineering comes at a price, however: deep learning requires massive
amounts of data for effective analysis and processing. To handle the
challenge of compiling huge amounts of medical,
scientific, healthcare, and environmental data for use in deep learning, gated
architectures like LSTMs and GRUs can be utilized. For multimodal learning,
shared neurons in the neural network for all activities and specialized neurons
for particular tasks are necessary.
Comment: 64 pages, 3 figures, 3 tables
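The gated architectures named in the abstract can be illustrated with a minimal GRU cell. The sketch below uses scalar states and illustrative toy weights (none of these values come from the paper); real implementations operate on vectors with learned weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h_prev, x, w):
    """One GRU step for a scalar input and state.
    w holds six illustrative weights: (w_z, u_z, w_r, u_r, w_h, u_h)."""
    w_z, u_z, w_r, u_r, w_h, u_h = w
    z = sigmoid(w_z * x + u_z * h_prev)               # update gate
    r = sigmoid(w_r * x + u_r * h_prev)               # reset gate
    h_cand = math.tanh(w_h * x + u_h * (r * h_prev))  # candidate state
    return (1.0 - z) * h_prev + z * h_cand            # gated interpolation

# Carry state across a toy sequence; the state stays bounded in (-1, 1)
h = 0.0
for x in [0.5, -1.0, 2.0]:
    h = gru_step(h, x, (0.8, 0.4, 0.6, 0.3, 1.0, 0.5))
```

The gates let the cell decide, per step, how much of the previous state to keep, which is what makes these architectures suitable for long sequential records.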
Integration of Assistive Technologies into 3D Simulations: Exploratory Studies
Virtual worlds and environments have many purposes, ranging from games to scientific research. However, universal accessibility features in such virtual environments are limited. As the impairment prevalence rate increases yearly, so does research interest in the field of assistive technologies. This work introduces research in assistive technologies and presents three software developments that explore the integration of assistive technologies within virtual environments, with a strong focus on Brain-Computer Interfaces. An accessible gaming system, a hands-free navigation software system, and a Brain-Computer Interaction plugin have been developed to study the capabilities of accessibility features within virtual 3D environments. Details of the specification, design, and implementation of these software applications are presented in the thesis. Observations and preliminary results as well as directions of future work are also included.
Text Analytics: the convergence of Big Data and Artificial Intelligence
The analysis of the text content in emails, blogs,
tweets, forums and other forms of textual communication
constitutes what we call text analytics. Text analytics is applicable
to most industries: it can help analyze millions of emails, mine
customers’ comments and questions in forums, and perform sentiment
analysis by measuring positive or negative perceptions of a company,
brand, or product.
Text Analytics, also called text mining, is a subfield of
Natural Language Processing (NLP), one of the founding branches of
Artificial Intelligence, dating back to the 1950s, when interest in
understanding text originally developed. Currently, Text Analytics is
often considered the next step in Big Data analysis. It has a number of
subdivisions: Information Extraction, Named Entity Recognition,
annotated Semantic Web domain representations, and many more. Several
techniques are currently in use, and some, such as Machine Learning for
semi-supervised enhancement of systems, have gained a lot of attention;
however, they also present a number of limitations that make them not
always the only or the best choice. We conclude with current and
near-future applications of Text Analytics.
Crowd-supervised training of spoken language systems
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 155-166).
Spoken language systems are often deployed with static speech recognizers. Only rarely are parameters in the underlying language, lexical, or acoustic models updated on-the-fly. In the few instances where parameters are learned in an online fashion, developers traditionally resort to unsupervised training techniques, which are known to be inferior to their supervised counterparts. These realities make the development of spoken language interfaces a difficult and somewhat ad-hoc engineering task, since models for each new domain must be built from scratch or adapted from a previous domain. This thesis explores an alternative approach that makes use of human computation to provide crowd-supervised training for spoken language systems. We explore human-in-the-loop algorithms that leverage the collective intelligence of crowds of non-expert individuals to provide valuable training data at a very low cost for actively deployed spoken language systems. We also show that in some domains the crowd can be incentivized to provide training data for free, as a byproduct of interacting with the system itself. Through the automation of crowdsourcing tasks, we construct and demonstrate organic spoken language systems that grow and improve without the aid of an expert. Techniques that rely on collecting data remotely from non-expert users, however, are subject to the problem of noise. This noise can sometimes be heard in audio collected from poor microphones or muddled acoustic environments. Alternatively, noise can take the form of corrupt data from a worker trying to game the system: for example, a paid worker tasked with transcribing audio may leave transcripts blank in hopes of receiving a speedy payment.
We develop strategies to mitigate the effects of noise in crowd-collected data and analyze their efficacy. This research spans a number of different application domains of widely-deployed spoken language interfaces, but maintains the common thread of improving the speech recognizer's underlying models with crowd-supervised training algorithms. We experiment with three central components of a speech recognizer: the language model, the lexicon, and the acoustic model. For each component, we demonstrate the utility of a crowd-supervised training framework. For the language model and lexicon, we explicitly show that this framework can be used hands-free, in two organic spoken language systems.
by Ian C. McGraw. Ph.D.
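One simple mitigation for the blank-transcript gaming described above is to drop empty answers and majority-vote over the remaining crowd transcripts. The function below is our illustration of that idea, not the thesis's actual algorithm:

```python
from collections import Counter

def aggregate_transcripts(answers):
    """Majority-vote over crowd transcripts, dropping blank or
    whitespace-only answers (e.g., from workers gaming the task)."""
    cleaned = [a.strip().lower() for a in answers if a and a.strip()]
    if not cleaned:
        return None  # no usable answers survived filtering
    winner, _count = Counter(cleaned).most_common(1)[0]
    return winner

aggregate_transcripts(["turn left", "", "turn left", "turn lift"])  # 'turn left'
```

Majority voting assumes honest workers outnumber noisy ones per utterance; more robust schemes weight workers by their historical agreement.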