Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots
We present Habitat 3.0: a simulation platform for studying collaborative
human-robot tasks in home environments. Habitat 3.0 offers contributions across
three dimensions: (1) Accurate humanoid simulation: addressing challenges in
modeling complex deformable bodies and diversity in appearance and motion, all
while ensuring high simulation speed. (2) Human-in-the-loop infrastructure:
enabling real human interaction with simulated robots via mouse/keyboard or a
VR interface, facilitating evaluation of robot policies with human input. (3)
Collaborative tasks: studying two collaborative tasks, Social Navigation and
Social Rearrangement. Social Navigation investigates a robot's ability to
locate and follow humanoid avatars in unseen environments, whereas Social
Rearrangement addresses collaboration between a humanoid and robot while
rearranging a scene. These contributions allow us to study end-to-end learned
and heuristic baselines for human-robot collaboration in-depth, as well as
evaluate them with humans in the loop. Our experiments demonstrate that learned
robot policies lead to efficient task completion when collaborating with unseen
humanoid agents and human partners that might exhibit behaviors that the robot
has not seen before. Additionally, we observe emergent behaviors during
collaborative task execution, such as the robot yielding space when obstructing
a humanoid agent, thereby allowing the effective completion of the task by the
humanoid agent. Furthermore, our experiments using the human-in-the-loop tool
demonstrate that our automated evaluation with humanoids can provide an
indication of the relative ordering of different policies when evaluated with
real human collaborators. Habitat 3.0 unlocks interesting new features in
simulators for Embodied AI, and we hope it paves the way for a new frontier of
embodied human-AI interaction capabilities.
Comment: Project page: http://aihabitat.org/habitat
Mobile graphics: SIGGRAPH Asia 2017 course
Peer Reviewed. Postprint (published version).
A hybrid algorithm for Bayesian network structure learning with application to multi-label learning
We present a novel hybrid algorithm for Bayesian network structure learning,
called H2PC. It first reconstructs the skeleton of a Bayesian network and then
performs a Bayesian-scoring greedy hill-climbing search to orient the edges.
The algorithm is based on divide-and-conquer constraint-based subroutines to
learn the local structure around a target variable. We conduct two series of
experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is
currently the most powerful state-of-the-art algorithm for Bayesian network
structure learning. First, we use eight well-known Bayesian network benchmarks
with various data sizes to assess the quality of the learned structure returned
by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in
terms of goodness of fit to new data and quality of the network structure with
respect to the true dependence structure of the data. Second, we investigate
H2PC's ability to solve the multi-label learning problem. We provide
theoretical results to characterize and identify graphically the so-called
minimal label powersets that appear as irreducible factors in the joint
distribution under the faithfulness condition. The multi-label learning problem
is then decomposed into a series of multi-class classification problems, where
each multi-class variable encodes a label powerset. H2PC is shown to compare
favorably to MMHC in terms of global classification accuracy over ten
multi-label data sets covering different application domains. Overall, our
experiments support the conclusions that local structural learning with H2PC in
the form of local neighborhood induction is a theoretically well-motivated and
empirically effective learning framework that is well suited to multi-label
learning. The source code (in R) of H2PC as well as all data sets used for the
empirical tests are publicly available.
Comment: arXiv admin note: text overlap with arXiv:1101.5184 by other authors.
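The label-powerset decomposition described above can be sketched directly: every distinct combination of labels observed in the data becomes one class of a multi-class problem. This is an illustrative sketch only; H2PC's released R code and its identification of minimal label powersets under faithfulness are not reproduced here.

```python
# Sketch of the label-powerset transformation that turns a multi-label
# problem into a single multi-class problem (illustrative; the paper's
# method first identifies minimal label powersets, which is not shown).

def labels_to_powerset(Y):
    """Map each row of a binary label matrix to one multi-class target.

    Y: list of tuples, each a 0/1 assignment over the labels.
    Returns (targets, decode) where targets[i] is an integer class id
    and decode[c] recovers the original label combination for class c.
    """
    classes = {}  # label combination -> class id
    targets = []
    for row in Y:
        key = tuple(row)
        if key not in classes:
            classes[key] = len(classes)
        targets.append(classes[key])
    # invert the mapping so predictions can be decoded back to label sets
    decode = {cid: key for key, cid in classes.items()}
    return targets, decode

# Example: 4 instances over 3 labels yield 3 distinct powerset classes
Y = [(1, 0, 1), (0, 1, 0), (1, 0, 1), (1, 1, 0)]
targets, decode = labels_to_powerset(Y)
```

Each multi-class target can then be handed to any multi-class classifier, and predictions are mapped back to label sets through `decode`.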
Virtual Reality Applied to Welder Training
Welding is a challenging, risky, and time-consuming profession. Recently, there
has been a documented shortage of trained welders, and as a result, the market
is pushing for an increase in the rate at which new professionals are trained. To
address this growing demand, training institutions are exploring alternative methods to train future professionals with the goals of improving learner retention of
information, shortening training periods, and lowering associated expenses. The
emergence of virtual reality technologies has led to initiatives to explore their potential for welding training. Multiple studies have suggested that virtual reality
training delivers comparable, or even superior, results when compared to more conventional approaches, with shorter training times and reduced costs in consumables.
Additionally, virtual reality allows trainees to try out different approaches to their
work. The primary goal of this dissertation is to develop a virtual reality welding
simulator. To achieve this objective effectively, the creation of a classification system capable of identifying the simulator’s key characteristics becomes imperative.
Therefore, the secondary objective of this thesis is to develop a classification system
for the accurate evaluation and comparison of virtual reality welding simulators.
Regarding the virtual reality welding simulation, the HTC VIVE Pro 2 virtual
reality equipment was employed to transfer the user's actions from the physical
to the virtual world. Within this virtual environment, a suite of welding tools
was introduced and a Smoothed Particle Hydrodynamics simulator was integrated
to mimic weld creation. After comprehensive testing revealed certain
limitations in weld quality and simulator performance, the project opted to
incorporate a Computational Fluid Dynamics (CFD) simulator. The development of
the CFD simulator proved to be a formidable challenge, and regrettably, its
complete implementation was unattainable. Nevertheless, the project delved into
three distinct grid architectures, of which the dynamic grid was ultimately
implemented. It also integrated two crucial solvers for the Navier-Stokes
equations. These functions were implemented on the Graphics Processing Unit
(GPU) to improve their efficiency. A comparison of GPU and Central Processing
Unit (CPU) performance highlighted the substantial computational advantages of
GPUs and the benefits they bring to fluid simulations.
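Smoothed Particle Hydrodynamics, as used above to mimic weld creation, estimates fluid quantities as kernel-weighted sums over nearby particles. Below is a minimal density-estimation sketch with the standard 3-D poly6 kernel; it is illustrative only and not the dissertation's GPU implementation.

```python
import math

# Illustrative SPH density estimation (not the dissertation's code):
# each particle's density is a kernel-weighted sum of the masses of
# particles within the smoothing radius h, using the 3-D poly6 kernel.

def poly6(r2, h):
    # W(r, h) = 315 / (64 pi h^9) * (h^2 - r^2)^3 for r <= h, else 0
    if r2 > h * h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r2) ** 3

def densities(positions, masses, h):
    rho = []
    for xi in positions:
        s = 0.0
        for xj, mj in zip(positions, masses):
            r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
            s += mj * poly6(r2, h)  # includes the particle's own term
        rho.append(s)
    return rho
```

The brute-force all-pairs loop is O(n^2); practical (and GPU) implementations replace it with a spatial grid so each particle only visits neighbors within h.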
A Neural Network Approach for Real-Time High-Dimensional Optimal Control
We propose a neural network approach for solving high-dimensional optimal
control problems arising in real-time applications. Our approach yields
controls in a feedback form, where the policy function is given by a neural
network (NN). Specifically, we fuse the Hamilton-Jacobi-Bellman (HJB) and
Pontryagin Maximum Principle (PMP) approaches by parameterizing the value
function with an NN. We can therefore synthesize controls in real-time without
having to solve an optimization problem. Once the policy function is trained,
generating a control at a given space-time location takes milliseconds; in
contrast, efficient nonlinear programming methods typically perform the same
task in seconds. We train the NN offline using the objective function of the
control problem and penalty terms that enforce the HJB equations. Therefore,
our training algorithm does not involve data generated by another algorithm. By
training on a distribution of initial states, we ensure the controls'
optimality on a large portion of the state-space. Our grid-free approach scales
efficiently to dimensions where grids become impractical or infeasible. We
demonstrate the effectiveness of our approach on several multi-agent
collision-avoidance problems in up to 150 dimensions. Furthermore, we
empirically observe that the number of parameters in our approach scales
linearly with the dimension of the control problem, thereby mitigating the
curse of dimensionality.
Comment: 16 pages, 12 figures. This work has been submitted for possible
publication. Copyright may be transferred without notice, after which this
version may no longer be available.
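The penalty-based training idea can be illustrated on a one-dimensional linear-quadratic toy problem. This is an illustrative reduction, not the paper's method: the actual approach parameterizes the value function with a neural network and couples the HJB and PMP formulations in high dimensions. Here the "network" is a single weight w with V(x) = w*x**2, dynamics dx/dt = u, and running cost x**2 + u**2, for which the true value function is V(x) = x**2 (so w should converge to 1).

```python
import numpy as np

# Toy sketch of HJB-penalty training: instead of fitting data from
# another solver, minimize the squared HJB residual on sampled states.

def hjb_residual(w, x):
    # Stationary HJB: 0 = min_u [x^2 + u^2 + V'(x) u], minimized at
    # u* = -V'(x)/2, giving residual r(x) = x^2 - V'(x)^2 / 4.
    Vx = 2.0 * w * x
    return x**2 - Vx**2 / 4.0

def loss(w, xs):
    # Mean squared HJB residual over a distribution of initial states.
    return np.mean(hjb_residual(w, xs) ** 2)

rng = np.random.default_rng(0)
xs = rng.uniform(-2.0, 2.0, size=256)

# Crude gradient descent via central finite differences (a real
# implementation would use automatic differentiation on a network).
w, lr, eps = 0.2, 0.05, 1e-6
for _ in range(500):
    g = (loss(w + eps, xs) - loss(w - eps, xs)) / (2 * eps)
    w -= lr * g

def feedback_control(x):
    # Feedback policy recovered from the value function: u*(x) = -V'(x)/2
    return -(2.0 * w * x) / 2.0
```

Once trained, evaluating `feedback_control` at any state is a single cheap expression, which mirrors the paper's point that the feedback form avoids solving an optimization problem online.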
Choose Your Weapon: Survival Strategies for Depressed AI Academics
Are you an AI researcher at an academic institution? Are you anxious you are
not coping with the current pace of AI advancements? Do you feel you have no
(or very limited) access to the computational and human resources required for
an AI research breakthrough? You are not alone; we feel the same way. A growing
number of AI academics can no longer find the means and resources to compete at
a global scale. This is a somewhat recent phenomenon, but an accelerating one,
with private actors investing enormous compute resources into cutting edge AI
research. Here, we discuss what you can do to stay competitive while remaining
an academic. We also briefly discuss what universities and the private sector
could do to improve the situation, if they are so inclined. This is not an
exhaustive list of strategies, and you may not agree with all of them, but it
serves to start a discussion.
Creating Discontinuous Innovation: The case of Nintendo's Wii
Master's thesis (Master of Engineering).
Interactive Sound Propagation for Massive Multi-user and Dynamic Virtual Environments
Hearing is an important sense and it is known that rendering sound effects can enhance the level of immersion in virtual environments. Modeling sound waves is a complex problem, requiring vast computing resources to solve accurately. Prior methods are restricted to static scenes or limited acoustic effects. In this thesis, we present methods to improve the quality and performance of interactive geometric sound propagation in dynamic scenes and precomputation algorithms for acoustic propagation in enormous multi-user virtual environments. We present a method for finding edge diffraction propagation paths on arbitrary 3D scenes for dynamic sources and receivers. Using this algorithm, we present a unified framework for interactive simulation of specular reflections, diffuse reflections, diffraction scattering, and reverberation effects. We also define a guidance algorithm for ray tracing that responds to dynamic environments and reorders queries to minimize simulation time. Our approach works well on modern GPUs and can achieve more than an order of magnitude performance improvement over prior methods. Modern multi-user virtual environments support many types of client devices, and current phones and mobile devices may lack the resources to run acoustic simulations. To provide such devices with the benefits of sound simulation, we have developed a precomputation algorithm that efficiently computes and stores acoustic data on a server in the cloud. Using novel algorithms, the server can render enhanced spatial audio in scenes spanning several square kilometers for hundreds of clients in real time. Our method provides the benefits of immersive audio to collaborative telephony, video games, and multi-user virtual environments.
Doctor of Philosophy
Performance Evaluation of Priority Queues for Fine-Grained Parallel Tasks on GPUs
Graphics processing units (GPUs) are increasingly applied to accelerate tasks such as graph problems and discrete-event simulation that are characterized by irregularity, i.e., a strong dependence of the control flow and memory accesses on the input. The core data structures in many of these irregular tasks are priority queues, which guide the progress of the computations and can easily become the bottleneck of an application. To our knowledge, no systematic comparison of priority queue implementations on GPUs currently exists in the literature. We close this gap with a performance evaluation of GPU-based priority queue implementations for two applications: discrete-event simulation and parallel A* path searches on grids. We focus on scenarios requiring large numbers of priority queues holding up to a few thousand items each. We present performance measurements covering linear queue designs, implicit binary heaps, splay trees, and a GPU-specific proposal from the literature. The measurement results show that up to about 500 items per queue, circular buffers frequently outperform tree-based queues for the considered applications, particularly under a simple parallelization of individual item enqueue operations. We analyze profiling metrics to explore classical queue designs in light of the importance of high hardware utilization as well as homogeneous computations and memory accesses across GPU threads.
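One of the compared designs, the implicit binary heap, stores the tree in a flat array with children of index i at 2i+1 and 2i+2. The sketch below is illustrative CPU-side Python; the measured implementations run on the GPU with many independent queues.

```python
# Minimal implicit binary min-heap in a flat array (illustrative of one
# of the evaluated queue designs; not the GPU code from the evaluation).

class BinaryHeap:
    def __init__(self):
        self.a = []  # a[0] is the minimum; children of i are 2i+1, 2i+2

    def push(self, priority, item):
        self.a.append((priority, item))
        i = len(self.a) - 1
        while i > 0 and self.a[(i - 1) // 2] > self.a[i]:  # sift up
            p = (i - 1) // 2
            self.a[i], self.a[p] = self.a[p], self.a[i]
            i = p

    def pop(self):
        # Remove and return the minimum entry (assumes a non-empty heap).
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        top = a.pop()
        i, n = 0, len(a)
        while True:  # sift down
            l, r, s = 2 * i + 1, 2 * i + 2, i
            if l < n and a[l] < a[s]:
                s = l
            if r < n and a[r] < a[s]:
                s = r
            if s == i:
                return top
            a[i], a[s] = a[s], a[i]
            i = s
```

In a discrete-event-simulation setting, events are pushed with their timestamp as the priority and popped in timestamp order; both operations are O(log n), whereas the circular buffers favored for small queues trade asymptotic complexity for regular, coalescing-friendly memory access.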