Detection of sparse targets with structurally perturbed echo
In this paper, a novel algorithm is proposed to achieve robust high-resolution detection in sparse multipath channels. Currently used sparse reconstruction techniques are not immediately applicable to multipath channel modeling. The performance of standard compressed sensing formulations based on discretization of the multipath channel parameter space degrades significantly when the actual channel parameters deviate from the assumed discrete set of values. To alleviate this off-grid problem, we use particle swarm optimization (PSO) to perturb each grid point residing in each multipath component cluster. Orthogonal matching pursuit (OMP) is used to reconstruct sparse multipath components in a greedy fashion. Extensive simulation results quantify the performance gain and robustness obtained by the proposed algorithm against the off-grid problem faced in sparse multipath channels. © 2013 Elsevier Inc.
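As a rough sketch of the greedy reconstruction step, the following is a minimal OMP implementation over a fixed discretized dictionary; the PSO-based grid perturbation that the paper adds on top is omitted, and all names and sizes are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse x with y ~= A @ x.
    A: dictionary whose columns are unit-norm atoms, one per grid point."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy example: a random dictionary and a 2-sparse signal (noiseless).
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(128)
x_true[[5, 40]] = [1.0, -0.5]
x_hat = omp(A, A @ x_true, k=2)
print(sorted(np.flatnonzero(x_hat)))  # recovered support: [5, 40]
```

The off-grid degradation the paper addresses arises when the true parameters fall between the columns of `A`; perturbing the selected grid points (here, via PSO) refines the atoms beyond this fixed dictionary.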
Neural Networks: Training and Application to Nonlinear System Identification and Control
This dissertation investigates training neural networks for system identification and classification. The research contains two main contributions, as follows:
1. Reducing the number of hidden layer nodes using a feedforward component. This research reduces the number of hidden layer nodes and the training time of neural networks, making them better suited to online identification and control applications, by adding a parallel feedforward component. Implementing the feedforward component with a wavelet neural network and an echo state network provides good models for nonlinear systems. The wavelet neural network with the feedforward component, together with a model predictive controller, can reliably identify and control a seismically isolated structure during an earthquake. The network model provides the predictions for model predictive control. Simulations of a 5-story seismically isolated structure with conventional lead-rubber bearings showed significant reductions of all response amplitudes for both near-field (pulse) and far-field ground motions, including reduced deformations along with a corresponding reduction in acceleration response. The controller effectively regulated the apparent stiffness at the isolation level. The approach is also applied to the online identification and control of an unmanned vehicle. Lyapunov theory is used to prove the stability of the wavelet neural network and the model predictive controller.
2. Training neural networks using trajectory-based optimization approaches. Training a neural network is a nonlinear, non-convex optimization problem that determines the weights of the network. Traditional training algorithms can be inefficient and can get trapped in local minima. Two global optimization approaches are adapted to train neural networks and avoid the local minima problem. Lyapunov theory is used to prove the stability of the proposed methodology and its convergence in the presence of measurement errors.
The first approach transforms the constraint satisfaction problem into an unconstrained optimization. The constraints define a quotient gradient system (QGS) whose stable equilibrium points are local minima of the unconstrained optimization. The QGS is integrated to determine local minima, and the local minimum with the best generalization performance is chosen as the optimal solution. The second approach uses the QGS together with a projected gradient system (PGS). The PGS is a nonlinear dynamical system, defined based on the optimization problem, that searches the components of the feasible region for solutions. Lyapunov theory is used to prove the stability of the PGS and QGS and their stability in the presence of measurement noise.
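To illustrate the projected-gradient idea behind the PGS (the actual QGS/PGS construction and Lyapunov analysis are more involved), here is a minimal sketch that follows the negative gradient while projecting each step back onto the feasible region; the objective and box constraints are invented for illustration:

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, step=0.1, iters=500):
    """Euler integration of the projected gradient system x' = -grad f(x),
    with each step projected back onto the feasible region."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Minimize f(x) = ||x - c||^2 subject to the box 0 <= x <= 1.
c = np.array([1.5, -0.3, 0.4])
grad = lambda x: 2 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
x_star = projected_gradient_descent(grad, project, x0=np.zeros(3))
print(x_star)  # converges to the box projection of c: [1.0, 0.0, 0.4]
```

In the dissertation's setting, the "feasible region" would be defined by the training constraints and the trajectory of the dynamical system is integrated rather than discretized this crudely.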
Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data
The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained based on observations using our primary senses alone. Often this is because their originating cause is either too small, too far away, or otherwise obstructed; in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research must be conducted to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing, where a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We are able to extract the latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
On the Design, Implementation and Application of Novel Multi-disciplinary Techniques for explaining Artificial Intelligence Models
Artificial Intelligence is a non-stopping field of research that has experienced incredible growth in recent decades. Among the reasons for this apparently exponential growth are improvements in computational power, sensing capabilities, and data storage, which result in a huge increase in data availability. However, this growth has mostly been led by a performance-based mindset that has pushed models towards a black-box nature. The performance prowess of these methods, along with the rising demand for their implementation, has triggered the birth of a new research field: Explainable Artificial Intelligence (XAI). As any new field, XAI falls short in cohesiveness. Added to the consequences of dealing with concepts that do not come from the natural sciences (explanations), the tumultuous scene is palpable. This thesis contributes to the field from two different perspectives: a theoretical one and a practical one. The former is based on a profound literature review that resulted in two main contributions: 1) the proposition of a new definition for Explainable Artificial Intelligence and 2) the creation of a new taxonomy for the field. The latter is composed of two XAI frameworks that address some of the glaring gaps found in the field, namely: 1) an XAI framework for Echo State Networks and 2) an XAI framework for the generation of counterfactuals. The first addresses the gap concerning randomized neural networks, since they have never been considered within the field of XAI. Unfortunately, choosing the right parameters to initialize these reservoirs depends more on luck and the past experience of the scientist than on sound reasoning. The current approach for assessing whether a reservoir is suited for a particular task is to observe whether it yields accurate results, either by handcrafting the values of the reservoir parameters or by automating their configuration via an external optimizer.
All in all, this poses tough questions to address when developing an ESN for a certain application, since knowing whether the created structure is optimal for the problem at hand is not possible without actually training it. Moreover, some of the main concerns holding back their application relate to the mistrust generated by their black-box nature. The second framework presents a new paradigm for counterfactual generation. Among the alternatives for reaching a universal understanding of model explanations, counterfactual examples are arguably the ones that best conform to human understanding principles when faced with unknown phenomena. Indeed, discerning what would happen should the initial conditions differ in a plausible fashion is a mechanism often adopted by humans when attempting to understand any unknown. The search for counterfactuals proposed in this thesis is governed by three different objectives. As opposed to the classical approach, in which counterfactuals are just generated following a minimum-distance approach of some type, this framework allows for an in-depth analysis of a target model by means of counterfactuals responding to: adversarial power, plausibility, and change intensity.
Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation
Peer reviewed
Learning natural coding conventions
Coding conventions are ubiquitous in software engineering practice. Maintaining a uniform
coding style allows software development teams to communicate through code by
making the code clear and, thus, readable and maintainable—two important properties
of good code since developers spend the majority of their time maintaining software
systems. This dissertation introduces a set of probabilistic machine learning models
of source code that learn coding conventions directly from source code written in a
mostly conventional style. This alleviates the coding convention enforcement problem,
where conventions need to first be formulated clearly into unambiguous rules and then
be coded in order to be enforced; a tedious and costly process.
First, we introduce the problem of inferring a variable’s name given its usage context
and address this problem by creating Naturalize — a machine learning framework
that learns to suggest conventional variable names. Two machine learning models, a
simple n-gram language model and a specialized neural log-bilinear context model are
trained to understand the role and function of each variable and suggest new stylistically
consistent variable names. The neural log-bilinear model can even suggest previously
unseen names by composing them from subtokens (i.e. sub-components of code identifiers).
The suggestions of the models achieve 90% accuracy when suggesting variable
names at the top 20% most confident locations, rendering the suggestion system usable
in practice.
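To illustrate the n-gram idea behind such suggestions, here is a toy sketch (not the Naturalize implementation): a bigram model scores candidate names by how well they fit between the tokens to their left and right. The corpus, smoothing scheme, and names below are illustrative assumptions:

```python
from collections import Counter

def bigram_scores(corpus_tokens, context, candidates):
    """Score candidate names with a bigram model: how likely is each candidate
    between the tokens to its left and right? (add-one smoothing)"""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens)
    vocab = len(set(corpus_tokens)) + 1
    def p(a, b):  # P(b | a) with add-one smoothing
        return (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
    left, right = context
    return {c: p(left, c) * p(c, right) for c in candidates}

# Toy corpus of lexed code; suggest a name to fill "for <?> in items:".
corpus = "for item in items : use ( item ) for item in items : emit ( item )".split()
scores = bigram_scores(corpus, context=("for", "in"), candidates=["item", "x", "tmp"])
best = max(scores, key=scores.get)
print(best)  # "item" scores highest in this toy corpus
```

A real system scores whole renamings over all usage sites of the variable and only suggests when its confidence clears a threshold, which is what makes the top-20%-confidence accuracy figure meaningful.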
We then turn our attention to the significantly harder method naming problem.
Learning to name methods, by looking only at the code tokens within their body, requires
a good understanding of the semantics of the code contained in a single method.
To achieve this, we introduce a novel neural convolutional attention network that learns
to generate the name of a method by sequentially predicting its subtokens. This is
achieved by focusing on different parts of the code and potentially directly using body
(sub)tokens even when they have never been seen before. This model achieves an F1
score of 51% on the top five suggestions when naming methods of real-world open-source
projects.
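Both the variable-naming and method-naming models operate over subtokens rather than whole identifiers. As a minimal illustration of that decomposition (not the thesis's implementation, and with hypothetical identifiers), camelCase and underscore names can be split with a regular expression:

```python
import re

def subtokens(identifier):
    """Split a code identifier into lowercase subtokens at camelCase,
    digit, and underscore boundaries."""
    parts = re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+|\d+", identifier)
    return [p.lower() for p in parts]

print(subtokens("getHTTPResponse"))  # ['get', 'http', 'response']
print(subtokens("parse_json_file"))  # ['parse', 'json', 'file']
```

Predicting names one subtoken at a time is what lets the model emit identifiers it has never seen, by recombining subtokens copied from the method body.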
Learning naming conventions uses the syntactic structure of the code
to infer names that implicitly relate to code semantics. However, syntactic similarities
and differences obscure code semantics. Therefore, to capture features of semantic
operations with machine learning, we need methods that learn semantic continuous
logical representations. To achieve this ambitious goal, we focus our investigation on
logic and algebraic symbolic expressions and design a neural equivalence network architecture
that learns semantic vector representations of expressions in a syntax-driven
way, while solely retaining semantics. We show that equivalence networks learn significantly
better semantic vector representations compared to other, existing, neural
network architectures.
Finally, we present an unsupervised machine learning model for mining syntactic
and semantic code idioms. Code idioms are conventional “mental chunks” of code that
serve a single semantic purpose and are commonly used by practitioners. To achieve
this, we employ Bayesian nonparametric inference on tree substitution grammars. We
present a wide range of evidence that the resulting syntactic idioms are meaningful,
demonstrating that they do indeed recur across software projects and that they occur
more frequently in illustrative code examples collected from a Q&A site. These syntactic
idioms can be used as a form of automatic documentation of coding practices
of a programming language or an API. We also mine semantic loop idioms, i.e. highly
abstracted but semantic-preserving idioms of loop operations. We show that semantic
idioms provide data-driven guidance during the creation of software engineering tools
by mining common semantic patterns, such as candidate refactoring locations. This
gives tool, API, and language designers data-based evidence about general, domain-specific,
and project-specific coding patterns; instead of relying solely on their intuition, they
can use semantic idioms to achieve greater coverage of their tool, new API, or language
feature. We demonstrate this by creating a tool that suggests loop refactorings into
functional constructs in LINQ. Semantic loop idioms also provide data-driven evidence
for introducing new APIs or programming language features.
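The thesis targets C# loops and LINQ; as a language-neutral illustration of the kind of rewrite such a tool suggests, here is an analogous filter-and-map loop idiom and its functional equivalent, sketched in Python with invented data:

```python
# A loop idiom: accumulate a filtered, transformed sequence imperatively.
def totals_loop(orders):
    result = []
    for o in orders:
        if o["paid"]:
            result.append(o["amount"] * 1.2)
    return result

# The functional equivalent (in LINQ terms: Where(...).Select(...).ToList()).
def totals_functional(orders):
    return [o["amount"] * 1.2 for o in orders if o["paid"]]

orders = [{"paid": True, "amount": 10}, {"paid": False, "amount": 5}]
assert totals_loop(orders) == totals_functional(orders)
```

A mined semantic loop idiom captures exactly this filter-then-map shape abstractly, so the tool can match concrete loops against it and propose the functional form.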