Semantically Enhanced Software Traceability Using Deep Learning Techniques
In most safety-critical domains, the need for traceability is prescribed by
certifying bodies. Trace links are generally created among requirements,
design, source code, test cases, and other artifacts; however, creating such
links manually is time-consuming and error-prone. Automated solutions use
information retrieval and machine learning techniques to generate trace links;
however, current techniques fail to understand the semantics of the software
artifacts or to integrate domain knowledge into the tracing process, and
therefore tend to deliver imprecise and inaccurate results. In this paper, we
present a solution that uses deep learning to incorporate requirements artifact
semantics and domain knowledge into the tracing solution. We propose a tracing
network architecture that utilizes Word Embedding and Recurrent Neural Network
(RNN) models to generate trace links. The word embeddings learn word vectors that
represent knowledge of the domain corpus, and the RNN uses these word vectors to
learn the sentence semantics of requirements artifacts. We trained 360
different configurations of the tracing network using existing trace links in
the Positive Train Control domain and identified the Bidirectional Gated
Recurrent Unit (BI-GRU) as the best model for the tracing task. BI-GRU
significantly outperformed state-of-the-art tracing methods, including the
Vector Space Model and Latent Semantic Indexing.
Comment: 2017 IEEE/ACM 39th International Conference on Software Engineering (ICSE 2017).
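The architecture the abstract describes can be sketched in a few lines. This is a toy illustration, not the authors' network: the embedding table, dimensions, and the cosine-similarity scoring are all assumptions made for the sketch (the paper trains the comparison end-to-end on existing trace links). It shows only the shape of the idea: domain word vectors feed a bidirectional GRU whose concatenated final states encode an artifact, and pairs of encodings are scored as candidate trace links.

```python
# Minimal sketch of word-embedding + BI-GRU artifact encoding for tracing.
# All vocabularies, dimensions, and random "embeddings" are illustrative.
import numpy as np

rng = np.random.default_rng(0)
EMB, HID = 8, 6  # toy embedding / hidden sizes

# Hypothetical word-embedding table standing in for one trained on a domain corpus.
vocab = {w: rng.normal(size=EMB) for w in
         ["train", "brake", "shall", "system", "apply", "speed", "limit"]}

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class GRUCell:
    """Standard GRU update: update (z) and reset (r) gates plus candidate state."""
    def __init__(self, emb, hid):
        s = 1.0 / np.sqrt(hid)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, (hid, emb + hid))
                                     for _ in range(3))
    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                       # update gate
        r = sigmoid(self.Wr @ xh)                       # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def encode(words, fwd, bwd):
    """Bidirectional encoding: run the sequence both ways, concat final states."""
    xs = [vocab[w] for w in words]
    hf = np.zeros(HID)
    hb = np.zeros(HID)
    for x in xs:
        hf = fwd.step(x, hf)
    for x in reversed(xs):
        hb = bwd.step(x, hb)
    return np.concatenate([hf, hb])

def trace_score(a, b):
    """Cosine similarity between artifact encodings as a trace-link score."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

fwd, bwd = GRUCell(EMB, HID), GRUCell(EMB, HID)
req = encode(["train", "shall", "apply", "brake"], fwd, bwd)  # requirement text
src = encode(["apply", "brake", "system"], fwd, bwd)          # code/design text
print(round(trace_score(req, src), 3))
```

In the paper the ranking function is learned rather than a fixed cosine similarity; the sketch substitutes cosine only to keep the example self-contained.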
Modeling Financial Time Series with Artificial Neural Networks
Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis.
Funding: CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
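The kind of nonlinear advantage the review discusses can be illustrated with a toy experiment. Nothing here comes from the paper: the chaotic logistic-map series (standing in for nonlinear market dynamics), the network sizes, and the learning rate are all assumptions for the sketch. A small one-hidden-layer network is trained by plain gradient descent to predict the series one step ahead.

```python
# Toy one-step-ahead forecasting with a tiny neural network.
# Series, architecture, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Chaotic series from the logistic map, a stand-in for a nonlinear signal.
x = np.empty(200)
x[0] = 0.3
for t in range(199):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, y = x[:-1].reshape(-1, 1), x[1:]          # predict x[t+1] from x[t]

# One-hidden-layer tanh network trained by batch gradient descent.
H = 8
W1, b1 = rng.normal(0, 0.5, (H, 1)), np.zeros(H)
w2, b2 = rng.normal(0, 0.5, H), 0.0

def forward(X):
    h = np.tanh(X @ W1.T + b1)               # hidden activations, shape (N, H)
    return h, h @ w2 + b2                    # predictions, shape (N,)

_, pred0 = forward(X)
loss_init = float(np.mean((pred0 - y) ** 2))

lr, N = 0.05, len(y)
for _ in range(3000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / N                 # dMSE/dpred
    gw2, gb2 = h.T @ g, g.sum()              # output-layer gradients
    dh = g[:, None] * w2[None, :] * (1.0 - h ** 2)
    gW1, gb1 = dh.T @ X, dh.sum(0)           # hidden-layer gradients
    W1 -= lr * gW1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

_, pred1 = forward(X)
loss_final = float(np.mean((pred1 - y) ** 2))
print(round(loss_init, 4), round(loss_final, 4))
```

A linear autoregressive model cannot represent the quadratic map generating this series, which is the gap the review's neural-network methods are meant to close.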
Electronic Cooling via Interlayer Coulomb Coupling in Multilayer Epitaxial Graphene
In van der Waals bonded or rotationally disordered multilayer stacks of
two-dimensional (2D) materials, the electronic states remain tightly confined
within individual 2D layers. As a result, electron-phonon interactions occur
primarily within layers and interlayer electrical conductivities are low. In
addition, strong covalent in-plane intralayer bonding combined with weak van
der Waals interlayer bonding results in weak phonon-mediated thermal coupling
between the layers. We demonstrate here, however, that Coulomb interactions
between electrons in different layers of multilayer epitaxial graphene provide
an important mechanism for interlayer thermal transport even though all
electronic states are strongly confined within individual 2D layers. This
effect is manifested in the relaxation dynamics of hot carriers in ultrafast
time-resolved terahertz spectroscopy. We develop a theory of interlayer Coulomb
coupling containing no free parameters that accounts for the experimentally
observed trends in hot-carrier dynamics as the temperature and the number of layers
are varied.
Comment: 54 pages, 15 figures, uses documentclass{achemso}; M.T.M. and J.R.T.
contributed equally to this work.
Intelligent component selection
Component-based software engineering (CBSE) provides solutions to the development of complex and evolving systems. As these systems are created and maintained, the task of selecting components is repeated. The context-driven component evaluation (CdCE) project is developing strategies and techniques for automating a repeatable process for assessing software components. This paper describes our work using artificial intelligence (AI) techniques to classify components based on an ideal component specification. Using AI, we are able to represent dependencies between attributes, overcoming some of the limitations of existing aggregation-based approaches to component selection.
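As a concrete and purely illustrative sketch of that idea, consider classifying candidates against an ideal specification with a rule that couples two attributes; such an interaction cannot be expressed by a weighted sum of independent attribute scores. The attributes, thresholds, and rule below are assumptions for the sketch, not the CdCE classifier itself.

```python
# Illustrative component classification against a hypothetical ideal spec.
# The dependent rule (latency tolerated only with a cache) is exactly the
# kind of attribute interaction a weighted-sum aggregation cannot capture.
IDEAL = {"license": "open", "max_latency_ms": 50}

def classify(component):
    """Return 'accept' or 'reject' for a candidate component record."""
    if component["license"] != IDEAL["license"]:
        return "reject"
    # Dependency between attributes: higher latency is acceptable
    # only when the component ships its own cache.
    latency_ok = (component["latency_ms"] <= IDEAL["max_latency_ms"]
                  or component.get("has_cache", False))
    return "accept" if latency_ok else "reject"

candidates = [
    {"name": "libA", "license": "open", "latency_ms": 30},
    {"name": "libB", "license": "open", "latency_ms": 80, "has_cache": True},
    {"name": "libC", "license": "proprietary", "latency_ms": 10},
]
for c in candidates:
    print(c["name"], classify(c))  # libA accept, libB accept, libC reject
```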
Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening
The last hundred years have seen a monumental rise in the power and capability of machines to
perform intelligent tasks in place of human operators. This rise is not expected
to slow down any time soon and what this means for society and humanity as a whole remains
to be seen. The overwhelming notion is that with the right goals in mind, the growing influence
of machines on our everyday tasks will enable humanity to give more attention to the truly
groundbreaking challenges that we all face together. This will usher in a new age of human
machine collaboration in which humans and machines may work side by side to achieve greater
heights for all of humanity. Intelligent systems are useful in isolation, but the true benefits of
intelligent systems come to the fore in complex systems where the interaction between humans
and machines can be made seamless, and it is this goal of symbiosis between human and machine
that may democratise complex knowledge, which motivates this thesis. In the recent past, data-driven
methods have come to the fore and now represent the state of the art in many different
fields. Alongside the shift from rule-based towards data-driven methods we have also seen a
shift in how humans interact with these technologies. Human computer interaction is changing
in response to data-driven methods and new techniques must be developed to enable the same
symbiosis between man and machine for data-driven methods as for previous formula-driven
technology.
We address five key challenges which need to be overcome for data-driven human-in-the-loop
computing to reach maturity. These are: (1) the 'Categorisation Challenge', where we examine
existing work and form a taxonomy of the different methods being utilised for data-driven
human-in-the-loop computing; (2) the 'Confidence Challenge', where data-driven methods must
communicate interpretable beliefs about how confident their predictions are; (3) the 'Complexity
Challenge', where reasoned communication becomes increasingly important as the
complexity of both the tasks and the methods that solve them increases; (4) the 'Classification Challenge', in
which we look at how complex methods can be separated in order to provide greater reasoning
in complex classification tasks; and finally (5) the 'Curation Challenge', where we challenge the
assumptions around bottleneck creation for the development of supervised learning methods.
The Security Blanket of the Chat World: An Analytic Evaluation and a User Study of Telegram
The computer security community has advocated
widespread adoption of secure communication tools to protect
personal privacy. Several popular communication tools have
adopted end-to-end encryption (e.g., WhatsApp, iMessage), or
promoted security features as selling points (e.g., Telegram,
Signal). However, previous studies have shown that users may
not understand the security features of the tools they are using,
and may not be using them correctly. In this paper, we present a
study of Telegram using two complementary methods: (1) a lab-based
user study (11 novices and 11 Telegram users), and (2) a
hybrid analytical approach combining cognitive walk-through
and heuristic evaluation to analyse Telegram’s user interface.
Participants who use Telegram feel secure because they believe
they are using a secure tool, but in reality Telegram offers
limited security benefits to most of its users. Most participants
develop a habit of using the less secure default chat mode at all
times. We also uncover several user interface design issues that
impact security, including technical jargon, inconsistent use of
terminology, and making some security features prominent while
obscuring others. For instance, the end-to-end-encrypted Secret Chat
mode requires both the sender and the recipient to be online at the same
time, and Secret Chat does not support group conversations.
The Design Process and Usability Assessment of an Exergame System to Facilitate Strength for Task Training for Lower Limb Stroke Rehabilitation
Successful stroke rehabilitation relies on early, long-term, repetitive and intensive treatment, which is rarely adhered to by patients. Exergames can increase patients’ engagement with their therapy. Marketed exergaming systems for lower limb rehabilitation are hard to find, and none yet facilitates Strength for Task Training (STT), a novel physiotherapeutic method for stroke rehabilitation. STT involves performing brief but intensive strength training (priming) prior to task-specific training to promote neural plasticity and maximize the gains in locomotor ability. This research investigates how the design of an exergame system (game and game controller) for lower limb stroke rehabilitation can facilitate unsupervised STT and therefore allow stroke patients to care for their own health. The findings suggest that specific elements of STT can be incorporated into an exergame system. Barriers to use can be reduced by considering the diverse physiological and cognitive abilities of patients, and aesthetic considerations can help create a meaningful system that promotes its use in the home. The semantics of form and movement play an essential role in enabling stroke patients to carry out their exercises.
An Overview about Emerging Technologies of Autonomous Driving
Since DARPA started Grand Challenges in 2004 and Urban Challenges in 2007,
autonomous driving has been the most active field of AI applications. This
paper gives an overview about technical aspects of autonomous driving
technologies and open problems. We investigate the major fields of self-driving
systems, such as perception, mapping and localization, prediction, planning and
control, simulation, V2X, and safety. In particular, we elaborate on all these
issues within a data closed-loop framework, a popular platform for solving the
long-tailed autonomous driving problems.
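The data closed loop the overview refers to can be sketched abstractly: deployed models flag rare, low-confidence cases, which are labelled and folded back into the training set for the next iteration. Everything below (the confidence stand-in, the threshold, the labelling step) is an assumption made to keep the sketch self-contained, not a description of any production system.

```python
# Illustrative data closed loop for mining long-tail driving cases.
# model_confidence is a stand-in for a deployed perception/planning model.
def model_confidence(sample):
    """Hypothetical confidence of the current model on one driving scene."""
    return 0.2 if sample["rare"] else 0.95

def closed_loop_iteration(fleet_logs, training_set, threshold=0.5):
    """One loop turn: mine uncertain scenes, label them, grow the training set."""
    long_tail = [s for s in fleet_logs if model_confidence(s) < threshold]
    labelled = [{**s, "label": "needs_review"} for s in long_tail]  # stand-in for human labelling
    return training_set + labelled

logs = [{"scene": "highway", "rare": False},
        {"scene": "overturned_truck", "rare": True}]
new_training = closed_loop_iteration(logs, training_set=[])
print([s["scene"] for s in new_training])  # only the long-tail scene is kept
```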
Development of an integrated genome informatics, data management and workflow infrastructure: a toolbox for the study of complex disease genetics.
The genetic dissection of complex disease remains a significant challenge. Sample-tracking and the recording, processing and storage of high-throughput laboratory data with public domain data require integration of databases, genome informatics and genetic analyses in an easily updated and scalable format. To find genes involved in multifactorial diseases such as type 1 diabetes (T1D), chromosome regions are defined based on functional candidate gene content, linkage information from humans and animal model mapping information. For each region, genomic information is extracted from Ensembl, converted and loaded into ACeDB for manual gene annotation. Homology information is examined using ACeDB tools and the gene structure verified. Manually curated genes are extracted from ACeDB and read into the feature database, which holds relevant local genomic feature data and an audit trail of laboratory investigations. Public domain information, manually curated genes, polymorphisms, primers, linkage and association analyses, with links to our genotyping database, are shown in Gbrowse. This system scales to include genetic, statistical, quality control (QC) and biological data such as expression analyses of RNA or protein, all linked from a genomics integrative display. Our system is applicable to any genetic study of complex disease, of either large or small scale.
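The feature database with an audit trail of laboratory investigations can be sketched roughly as follows. This is illustrative only: the actual system is built on ACeDB, Ensembl and Gbrowse, none of which are modelled here, and the record fields are invented for the example.

```python
# Toy feature store: curated gene records plus an audit trail of lab steps.
# Field names and actions are hypothetical, not the system's schema.
import datetime

class FeatureDB:
    def __init__(self):
        self.genes, self.audit = {}, []

    def add_gene(self, gene_id, record):
        """Store a manually curated gene record and log the curation step."""
        self.genes[gene_id] = record
        self._log("add_gene", gene_id)

    def record_investigation(self, gene_id, step):
        """Append a laboratory investigation (e.g. genotyping) to the trail."""
        self._log(step, gene_id)

    def _log(self, action, gene_id):
        stamp = datetime.datetime.now(datetime.timezone.utc)
        self.audit.append((stamp, action, gene_id))

db = FeatureDB()
db.add_gene("INS", {"chromosome": "11", "curated": True})
db.record_investigation("INS", "genotyping")
print(len(db.audit), db.audit[-1][1])  # two audited actions; last is genotyping
```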