Large-Scale Multi-Agent Transport: Theory, Algorithms and Analysis
The problem of transport of multi-agent systems has received much attention in a wide range of engineering and biological contexts, such as spatial coverage optimization, collective migration, and estimation and mapping of unknown environments. In particular, the emphasis has been on the search for scalable decentralized algorithms that are applicable to large-scale multi-agent systems.
For large multi-agent collectives, it is appropriate to describe the configuration of the collective and its evolution using macroscopic quantities, while actuation rests at the microscopic scale at the level of individual agents. Moreover, the control problem faces a multitude of information constraints imposed by the multi-agent setting, such as limitations in sensing, communication and localization. Viewed in this way, the problem naturally extends across scales, and this motivates a search for algorithms that respect information constraints at the microscopic level while guaranteeing performance at the macroscopic level.
We address the above concerns in this dissertation on three fronts: theory, algorithms and analysis. We begin with the development of a multiscale theory of gradient descent-based multi-agent transport that bridges the microscopic and macroscopic perspectives and sets out a general framework for the design and analysis of decentralized algorithms for transport. We then consider the problem of optimal transport of multi-agent systems, wherein the objective is the minimization of the net cost of transport under constraints of distributed computation. This is followed by a treatment of multi-agent transport under constraints on sensing and communication, in the absence of location information, where we study the problem of self-organization in swarms of agents. Motivated by the problem of multi-agent navigation and tracking of moving targets, we then present a study of moving-horizon estimation of nonlinear systems viewed as a transport of probability measures.
Finally, we investigate the robustness of multi-agent networks to agent failure via the problem of identifying critical nodes in large-scale networks.
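The gradient-descent view of transport described in this abstract can be illustrated with a toy sketch: each agent independently descends a potential, and the collective converges without any central coordinator. The quadratic potential and function names below are illustrative assumptions, not the dissertation's actual algorithm.

```python
# Toy sketch of decentralized gradient-descent transport: every agent
# descends V(x) = 0.5 * ||x - target||^2 using only local information.
# (Hypothetical illustration; the potential and step size are assumed.)

def gradient_descent_transport(agents, target, step=0.1, iters=100):
    """Each agent moves by -step * grad V = -step * (x - target)."""
    for _ in range(iters):
        agents = [tuple(x - step * (x - t) for x, t in zip(a, target))
                  for a in agents]
    return agents

agents = [(0.0, 0.0), (4.0, 2.0), (-3.0, 5.0)]
final = gradient_descent_transport(agents, target=(1.0, 1.0))
# each agent's offset shrinks by a factor (1 - step) per iteration
```

Since the offset contracts geometrically, after 100 steps every agent is within a small tolerance of the target; the macroscopic distribution concentrates while actuation stayed purely microscopic.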
Developments and experimental evaluation of partitioning algorithms for adaptive computing systems
Multi-FPGA systems offer the potential to deliver higher-performance solutions than traditional computers for some low-level computing tasks. This requires a flexible hardware substrate and an automated mapping system. CHAMPION, an automated mapping system under development at the University of Tennessee, maps image processing applications written in the Khoros Cantata graphical programming environment to multi-FPGA hardware. The work described in this dissertation automates the CHAMPION backend design flow, which includes partitioning, netlist-to-structural-VHDL conversion, synthesis, placement and routing, and host code generation. The primary goal is the development and evaluation of three k-way partitioning approaches. The first two approaches develop and implement existing algorithms: a hierarchical partitioning method based on topological ordering (HP), and a recursive algorithm based on the Fiduccia-Mattheyses bipartitioning heuristic (RP). We extend these algorithms to handle the multiple constraints imposed by adaptive computing systems. We also introduce a new recursive partitioning method based on topological ordering and levelization (RPL). In addition to handling the partitioning constraints, the new approach efficiently minimizes the number of FPGAs used and the amount of computation, thereby overcoming some weaknesses of the HP and RP algorithms.
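The Fiduccia-Mattheyses heuristic underlying the RP approach can be sketched in a few lines: moves are ranked by cut-reduction gain, each moved node is locked, and the best partition seen during the pass is kept. The sketch below is a simplified single pass on unweighted graphs with a crude balance rule; the dissertation's RP algorithm additionally handles multiple adaptive-computing constraints.

```python
# Simplified Fiduccia-Mattheyses-style pass (illustrative sketch).

def cut_size(edges, side):
    return sum(1 for u, v in edges if side[u] != side[v])

def fm_pass(nodes, edges, side, min_side=1):
    """Repeatedly move the unlocked node with the highest gain
    (cut edges minus uncut edges at that node), lock it, and
    remember the best partition observed during the pass."""
    locked = set()
    best, best_cut = dict(side), cut_size(edges, side)
    while len(locked) < len(nodes):
        gains = {}
        for n in nodes:
            if n in locked:
                continue
            # balance rule: a move must not shrink a side below min_side
            if sum(1 for m in nodes if side[m] == side[n]) <= min_side:
                continue
            ext = sum(1 for u, v in edges
                      if n in (u, v) and side[u] != side[v])
            inte = sum(1 for u, v in edges
                       if n in (u, v) and side[u] == side[v])
            gains[n] = ext - inte
        if not gains:
            break
        n = max(gains, key=gains.get)
        side[n] = 1 - side[n]
        locked.add(n)
        cut = cut_size(edges, side)
        if cut < best_cut:
            best, best_cut = dict(side), cut
    return best, best_cut

# chain 0-1-2-3 with an alternating (worst-case) initial partition
best, best_cut = fm_pass([0, 1, 2, 3],
                         [(0, 1), (1, 2), (2, 3)],
                         {0: 0, 1: 1, 2: 0, 3: 1})
```

Allowing locked, temporarily cut-worsening moves within a pass is what lets FM escape local minima that pure greedy improvement cannot.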
Spatial Aggregation: Theory and Applications
Visual thinking plays an important role in scientific reasoning. Based on the
research in automating diverse reasoning tasks about dynamical systems,
nonlinear controllers, kinematic mechanisms, and fluid motion, we have
identified a style of visual thinking, imagistic reasoning. Imagistic reasoning
organizes computations around image-like, analogue representations so that
perceptual and symbolic operations can be brought to bear to infer structure
and behavior. Programs incorporating imagistic reasoning have been shown to
perform at an expert level in domains that defy current analytic or numerical
methods. We have developed a computational paradigm, spatial aggregation, to
unify the description of a class of imagistic problem solvers. A program
written in this paradigm has the following properties. It takes a continuous
field and optional objective functions as input, and produces high-level
descriptions of structure, behavior, or control actions. It computes a
multi-layer hierarchy of intermediate representations, called spatial aggregates, by
forming equivalence classes and adjacency relations. It employs a small set of
generic operators such as aggregation, classification, and localization to
perform bidirectional mapping between the information-rich field and
successively more abstract spatial aggregates. It uses a data structure, the
neighborhood graph, as a common interface to modularize computations. To
illustrate our theory, we describe the computational structure of three
implemented problem solvers -- KAM, MAPS, and HIPAIR -- in terms of the
spatial aggregation generic operators by mixing and matching a library of
commonly used routines.
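The aggregation-classification idiom described above can be made concrete in a minimal form: a neighborhood relation over field samples, plus an equivalence test, yields spatial aggregates. This is a hypothetical one-dimensional instance, not code from KAM, MAPS, or HIPAIR.

```python
# Minimal spatial-aggregation sketch: adjacency is "consecutive
# sample" and classification merges adjacent, similar samples into
# equivalence classes (spatial aggregates).

def aggregate(samples, similar):
    """samples: list of (position, value) sorted by position.
    Returns a list of aggregates, each a run of adjacent samples
    whose consecutive values satisfy the `similar` predicate."""
    classes = [[samples[0]]]
    for prev, cur in zip(samples, samples[1:]):
        if similar(prev[1], cur[1]):
            classes[-1].append(cur)
        else:
            classes.append([cur])
    return classes

# a 1-D field aggregated into same-sign regions
field = [(0, 1.0), (1, 1.1), (2, -0.9), (3, -1.0), (4, 0.8)]
regions = aggregate(field, similar=lambda a, b: a * b > 0)
# yields three aggregates: positive, negative, positive
```

Higher layers would re-apply the same operators to these aggregates, which is the "successively more abstract spatial aggregates" structure the abstract describes.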
REFORM: Removing False Correlation in Multi-level Interaction for CTR Prediction
Click-through rate (CTR) prediction is a critical task in online advertising
and recommendation systems, as accurate predictions are essential for user
targeting and personalized recommendations. Most recent cutting-edge methods
primarily focus on investigating complex implicit and explicit feature
interactions. However, these methods neglect the issue of false correlations
caused by confounding factors or selection bias. This problem is further
magnified by the complexity and redundancy of these interactions. We propose a
CTR prediction framework that removes false correlation in multi-level feature
interaction, termed REFORM. The proposed REFORM framework exploits a wide range
of multi-level high-order feature representations via a two-stream stacked
recurrent structure while eliminating false correlations. The framework has two
key components: I. The multi-level stacked recurrent (MSR) structure enables
the model to efficiently capture diverse nonlinear interactions from feature
spaces of different levels, and the richer representations lead to enhanced CTR
prediction accuracy. II. The false correlation elimination (FCE) module further
leverages Laplacian kernel mapping and sample reweighting methods to eliminate
false correlations concealed within the multi-level features, allowing the
model to focus on the true causal effects. Extensive experiments based on four
challenging CTR datasets and our production dataset demonstrate that the
proposed REFORM model achieves state-of-the-art performance. Codes, models and
our dataset will be released at https://github.com/yansuoyuli/REFORM.
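The FCE idea of combining a Laplacian kernel with sample reweighting can be sketched conceptually: reweight samples so that two feature columns become (approximately) uncorrelated. The gradient scheme, learning rate, and function names below are illustrative assumptions, not the REFORM implementation.

```python
# Conceptual sketch of kernel mapping plus decorrelating sample
# reweighting (hypothetical; not the paper's FCE module).

import math

def laplacian_kernel(x, y, sigma=1.0):
    """Laplacian kernel k(x, y) = exp(-||x - y||_1 / sigma)."""
    return math.exp(-sum(abs(a - b) for a, b in zip(x, y)) / sigma)

def decorrelating_weights(f1, f2, steps=100, lr=0.05):
    """Gradient steps on sample weights w to drive the weighted
    covariance of columns f1, f2 toward zero; w stays positive
    and normalized to sum to one."""
    n = len(f1)
    w = [1.0 / n] * n
    for _ in range(steps):
        m1 = sum(wi * a for wi, a in zip(w, f1))
        m2 = sum(wi * b for wi, b in zip(w, f2))
        cov = sum(wi * (a - m1) * (b - m2)
                  for wi, a, b in zip(w, f1, f2))
        # d(cov^2)/dw_i ~ 2 * cov * (f1_i - m1) * (f2_i - m2)
        w = [max(1e-6, wi - lr * 2 * cov * (a - m1) * (b - m2))
             for wi, a, b in zip(w, f1, f2)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w

k = laplacian_kernel((1.0, 2.0), (1.5, 2.5))   # exp(-1.0)
f1 = [1.0, 2.0, 3.0, 4.0]
f2 = [1.0, 2.0, 3.0, 4.0]
w = decorrelating_weights(f1, f2)  # downweights correlated extremes
```

With perfectly correlated columns the uniform-weight covariance is 1.25; the reweighting concentrates mass on the middle samples and substantially shrinks it, which is the "focus on true causal effects" intuition in miniature.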
Improving programmability and performance for scientific applications
As hardware and software technology scales toward new limits, modern compute machines can tackle ever more challenging problems. While the size and complexity of both the problems and the solutions increase, the programming methodologies must remain at a level that can be understood by programmers and scientists alike. In our work, this problem is encountered when developing an optimized framework to best exploit the semantic properties of a finite-element solver. To address it, we explore programming and runtime models which decouple algorithmic complexity, parallelism concerns, and hardware mapping. We build upon these frameworks to exploit domain-specific semantics using high-level transformations and modifications, obtaining performance through algorithmic and runtime optimizations.
We first discuss optimizations performed on a computational mechanics solver using a novel coupling technique for multi-time-scale methods on discrete finite-element domains. We exploit domain semantics using a high-level dynamic runtime scheme to reorder and balance workloads, greatly improving runtime performance. The framework presented automatically chooses a near-optimal coupling solution and runs a work-stealing parallel executor to perform effectively on multi-core systems.
In the latter part of this work, we focus on the parallel programming model Concurrent Collections (CnC) to bridge the gap between performance and programmability. Because challenging problems in various domains, not limited to computational mechanics, require both domain expertise and programming prowess, there is a need to separate those concerns. This thesis describes methods and techniques to obtain scalable performance using CnC programming while limiting the burden of programming. These high-level techniques are presented for two high-performance applications: hydrodynamics and multi-grid solvers.
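The CnC separation of concerns mentioned above can be sketched in miniature: computation steps are functions keyed by tags that read and write item collections, so the algorithm says nothing about scheduling. The executor below is a serial toy stand-in; a real CnC runtime launches step instances in parallel as their inputs become available.

```python
# Toy sketch of the Concurrent Collections (CnC) model: steps, tags,
# and item collections. (Illustrative only; names are assumptions.)

def run_cnc(steps, tags, items):
    """steps: {name: fn(tag, items) -> dict of produced items};
    tags:  {name: list of tags prescribing the step instances}.
    Runs every prescribed step instance and merges its outputs
    into the item collection. Serial executor for illustration."""
    for name, fn in steps.items():
        for tag in tags.get(name, []):
            items.update(fn(tag, items))
    return items

# hypothetical two-step pipeline: square each input item, then sum
steps = {
    "square": lambda t, it: {("sq", t): it[("x", t)] ** 2},
    "reduce": lambda t, it: {"total": sum(v for k, v in it.items()
                                          if k[0] == "sq")},
}
items = run_cnc(steps,
                {"square": [0, 1, 2], "reduce": [None]},
                {("x", 0): 1, ("x", 1): 2, ("x", 2): 3})
```

Because each step is a pure function of its tag and inputs, the same program text admits serial, work-stealing, or distributed execution, which is exactly the programmability/performance decoupling the abstract argues for.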
Optimization of bit interleaved coded modulation using genetic algorithms
Modern wireless communication systems must be optimized with respect to both bandwidth efficiency and energy efficiency. A common approach to achieve these goals is to use multi-level modulation such as quadrature-amplitude modulation (QAM) for bandwidth efficiency and an error-control code for energy efficiency. In benign additive white Gaussian noise (AWGN) channels, Ungerboeck proposed trellis-coded modulation (TCM), which combines modulation and coding into a joint operation. However, in fading channels, it is important to maximize diversity. As shown by Zehavi, diversity is maximized by performing coding and modulation separately and interleaving bits that are passed from the encoder to the modulator. Such systems are termed BICM, for bit-interleaved coded modulation. Later, Li and Ritcey proposed a method for improving the performance of BICM systems by iteratively passing information between the demodulator and decoder. Such systems are termed BICM-ID, for BICM with Iterative Decoding. The bit error rate (BER) curve of a typical BICM-ID system is characterized by a steeply sloping waterfall region followed by an error floor with a gradual slope.
This thesis is focused on optimizing BICM-ID systems in the error floor region. The problem of minimizing the error bound is formulated as an instance of the Quadratic Assignment Problem (QAP) and solved using a genetic algorithm. First, an optimization is performed by fixing the modulation and varying the bit-to-symbol mapping. This approach provides the lowest possible error floor for a BICM-ID system using standard QAM and phase-shift keying (PSK) modulations. Next, the optimization is performed by varying not only the bit-to-symbol mapping, but also the location of the signal points within the two-dimensional constellation. This provides an error floor that is lower than that achieved with the best QAM and PSK systems, although at the cost of a delayed waterfall region.
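The optimization loop described above can be sketched as a small mutation-only genetic algorithm over permutations (bit-to-symbol mappings) with a QAP-style cost. The cost matrices here are arbitrary stand-ins, not the BICM-ID error-floor bound from the thesis.

```python
# Sketch: genetic algorithm for a Quadratic Assignment Problem over
# permutations. (Illustrative; the thesis's actual cost function is
# the BICM-ID error bound.)

import random

def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def mutate(perm):
    """Swap two positions: the natural move in permutation space."""
    p = list(perm)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def genetic_qap(flow, dist, pop_size=30, gens=200, seed=0):
    random.seed(seed)
    n = len(flow)
    # seed the population with the identity mapping plus random perms
    pop = [list(range(n))] + [random.sample(range(n), n)
                              for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda p: qap_cost(p, flow, dist))
        elite = pop[:pop_size // 2]            # truncation selection
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda p: qap_cost(p, flow, dist))

flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
best = genetic_qap(flow, dist)
```

Elitist truncation selection guarantees the best mapping found never regresses, so the result is at least as good as the identity mapping it was seeded with; the thesis's second stage would additionally let the constellation points themselves move.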
Criteria for consciousness in artificial intelligent agents
Proceeding of: Adaptive Learning Agents and Multi-Agent Systems, ALAMAS+ALAg 2008 - Workshop at AAMAS 2008, Estoril, Portugal, May 12, 2008.
Accurately testing for consciousness is still an unsolved problem when applied to humans and other mammals. The inherent subjective nature of conscious experience makes it virtually unreachable by classic empirical approaches. Therefore, alternative strategies based on behavior analysis and neurobiological studies are being developed in order to determine the level of consciousness of biological organisms. However, these methods cannot be directly applied to artificial systems. In this paper we propose both a taxonomy and functional criteria that can be used to assess the level of consciousness of an artificial intelligent agent. Furthermore, a list of measurable levels of artificial consciousness, ConsScale, is defined as a tool to determine the potential level of consciousness of an agent. Both the mapping of consciousness to AI and the role of consciousness in cognition are controversial and unsolved questions; in this paper we approach these issues through the notions of I-Consciousness and embodied intelligence.
This research has been supported by the Spanish Ministry of Education and Science under project TRA2007-67374-C02-02.
SegMap: 3D Segment Mapping using Data-Driven Descriptors
When performing localization and mapping, working at the level of structure
can be advantageous in terms of robustness to environmental changes and
differences in illumination. This paper presents SegMap: a map representation
solution to the localization and mapping problem based on the extraction of
segments in 3D point clouds. In addition to facilitating the computationally
intensive task of processing 3D point clouds, working at the level of segments
addresses the data compression requirements of real-time single- and
multi-robot systems. While current methods extract descriptors for the single
task of localization, SegMap leverages a data-driven descriptor in order to
extract meaningful features that can also be used for reconstructing a dense 3D
map of the environment and for extracting semantic information. This is
particularly interesting for navigation tasks and for providing visual feedback
to end-users such as robot operators, for example in search and rescue
scenarios. These capabilities are demonstrated in multiple urban driving and
search and rescue experiments. Our method leads to an increase of area under
the ROC curve of 28.3% over current state of the art using eigenvalue based
features. We also obtain very similar reconstruction capabilities to a model
specifically trained for this task. The SegMap implementation will be made
available open-source along with easy to run demonstrations at
www.github.com/ethz-asl/segmap. A video demonstration is available at
https://youtu.be/CMk4w4eRobg
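The ROC-AUC figure reported above can be computed directly from raw descriptor-match scores via its rank interpretation: the probability that a true match outscores a false one. A minimal sketch (the scores below are made-up examples):

```python
# Rank-based ROC AUC: P(positive-pair score > negative-pair score),
# counting ties as 1/2. Scores here are hypothetical similarities.

def roc_auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

auc = roc_auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.2])  # 8 of 9 pairs won
```

This pairwise formulation is equivalent to integrating the ROC curve and makes the 28.3% improvement claim concrete: it measures how much more often the learned descriptor ranks true segment matches above false ones.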