Positive Difference Distribution for Image Outlier Detection using Normalizing Flows and Contrastive Data
Detecting test data deviating from training data is a central problem for
safe and robust machine learning. Likelihoods learned by a generative model,
e.g., a normalizing flow via standard log-likelihood training, perform poorly
as outlier scores. We propose to use an unlabelled auxiliary dataset and a
probabilistic outlier score for outlier detection. We use a self-supervised
feature extractor trained on the auxiliary dataset and train a normalizing flow
on the extracted features by maximizing the likelihood on in-distribution data
and minimizing the likelihood on the contrastive dataset. We show that this is
equivalent to learning the normalized positive difference between the
in-distribution and the contrastive feature density. We conduct experiments on
benchmark datasets and compare against the raw likelihood, the likelihood
ratio, and state-of-the-art anomaly detection methods.
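The training objective is simple to state: maximize the flow's log-likelihood on in-distribution features while minimizing it on the contrastive features. A minimal sketch of that loss, assuming a generic `flow.log_prob` interface (as exposed by common normalizing-flow libraries); the weighting term `lam` is a hypothetical hyperparameter, not taken from the paper:

```python
import torch

def contrastive_flow_loss(flow, x_in, x_contrastive, lam=1.0):
    """Maximize log-likelihood on in-distribution features,
    minimize it on contrastive (auxiliary) features.

    flow: a normalizing flow exposing log_prob(x) -> (batch,) tensor
    x_in, x_contrastive: feature batches from the self-supervised extractor
    lam: hypothetical weight on the contrastive term (not from the paper)
    """
    ll_in = flow.log_prob(x_in).mean()                 # push density up on in-dist data
    ll_contrast = flow.log_prob(x_contrastive).mean()  # push density down on contrastive data
    return -(ll_in - lam * ll_contrast)                # minimizing this maximizes the gap
```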
CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society
The rapid advancement of conversational and chat-based language models has
led to remarkable progress in complex task-solving. However, their success
heavily relies on human input to guide the conversation, which can be
challenging and time-consuming. This paper explores the potential of building
scalable techniques to facilitate autonomous cooperation among communicative
agents and provide insight into their "cognitive" processes. To address the
challenges of achieving autonomous cooperation, we propose a novel
communicative agent framework named role-playing. Our approach involves using
inception prompting to guide chat agents toward task completion while
maintaining consistency with human intentions. We showcase how role-playing can
be used to generate conversational data for studying the behaviors and
capabilities of chat agents, providing a valuable resource for investigating
conversational language models. Our contributions include introducing a novel
communicative agent framework, offering a scalable approach for studying the
cooperative behaviors and capabilities of multi-agent systems, and
open-sourcing our library to support research on communicative agents and
beyond. The GitHub repository of this project is publicly available at:
https://github.com/lightaime/camel
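To make the role-playing idea concrete, here is a minimal sketch of two chat agents cooperating on a task via inception prompts; the prompts, the `chat` helper, and the termination token are illustrative stand-ins, not the CAMEL library's actual API:

```python
def chat(system_prompt: str, history: list[str]) -> str:
    """Stand-in for an LLM call (e.g., a chat-completion API request)."""
    raise NotImplementedError

TASK = "Develop a trading bot for the stock market."

# Inception prompts: fix each agent's role and the task up front,
# so the conversation can proceed without further human input.
ASSISTANT_SYS = f"You are a Python Programmer. Complete the task: {TASK}. Never flip roles."
USER_SYS = f"You are a Stock Trader. Instruct the programmer step by step to complete: {TASK}."

history: list[str] = []
for _ in range(10):                          # bounded number of turns
    instruction = chat(USER_SYS, history)    # "user" agent issues an instruction
    history.append(f"Instruction: {instruction}")
    solution = chat(ASSISTANT_SYS, history)  # "assistant" agent responds
    history.append(f"Solution: {solution}")
    if "TASK_DONE" in instruction:           # illustrative termination token
        break
```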
INR-V: A Continuous Representation Space for Video-based Generative Tasks
Generating videos is a complex task that is accomplished by generating a set
of temporally coherent images frame-by-frame. This limits the expressivity of
videos to image-based operations on the individual video frames and requires
network designs that enforce temporally coherent trajectories in the underlying
image space. We propose INR-V, a video representation network that learns a
continuous space for video-based generative tasks. INR-V parameterizes videos
using implicit neural representations (INRs), a multi-layered perceptron that
predicts an RGB value for each input pixel location of the video. The INR is
predicted using a meta-network, a hypernetwork trained on neural
representations of multiple video instances. Later, the meta-network can be
sampled to generate diverse novel videos enabling many downstream video-based
generative tasks. Interestingly, we find that conditional regularization and
progressive weight initialization play a crucial role in obtaining INR-V. The
representation space learned by INR-V is more expressive than an image space,
showcasing many interesting properties not possible with existing works.
For instance, INR-V can smoothly interpolate intermediate videos between known
video instances (such as intermediate identities, expressions, and poses in
face videos). It can also in-paint missing portions in videos to recover
temporally coherent full videos. In this work, we evaluate the space learned by
INR-V on diverse generative tasks such as video interpolation, novel video
generation, video inversion, and video inpainting against the existing
baselines. INR-V significantly outperforms the baselines on several of these
demonstrated tasks, clearly showcasing the potential of the proposed
representation space.
Comment: Published in Transactions on Machine Learning Research (10/2022);
https://openreview.net/forum?id=aIoEkwc2o
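A minimal sketch of a video INR in PyTorch: an MLP mapping a pixel location and time stamp to an RGB value. The layer sizes are illustrative assumptions, and the meta-network (hypernetwork) that predicts these weights in INR-V is omitted here:

```python
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    """Minimal video INR: maps a (x, y, t) coordinate to an RGB value.
    In INR-V, the weights of such an MLP are *predicted* by a meta-network
    (hypernetwork); here the weights are ordinary learnable parameters."""

    def __init__(self, hidden: int = 256, layers: int = 5):
        super().__init__()
        dims = [3] + [hidden] * layers + [3]  # input: (x, y, t); output: (r, g, b)
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.mlp = nn.Sequential(*blocks)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) normalized pixel locations and time stamps in [-1, 1]
        return torch.sigmoid(self.mlp(coords))  # RGB in [0, 1]
```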
ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT
Large language models (LLMs) such as ChatGPT have recently demonstrated
significant potential in mathematical abilities, providing a valuable reasoning
paradigm consistent with human natural language. However, LLMs currently have
difficulty in bridging perception, language understanding and reasoning
capabilities due to incompatibility of the underlying information flow among
them, making it challenging to accomplish tasks autonomously. On the other
hand, abductive learning (ABL) frameworks for integrating the two abilities of
perception and reasoning have seen significant success in inverse decipherment
of incomplete facts, but it is limited by the lack of semantic understanding of
logical reasoning rules and the dependence on complicated domain knowledge
representation. This paper presents a novel method (ChatABL) for integrating
LLMs into the ABL framework, aiming at unifying the three abilities in a more
user-friendly and understandable manner. The proposed method uses the strengths
of LLMs' understanding and logical reasoning to correct incomplete logical
facts and optimize the performance of the perceptual module, by summarizing and
reorganizing the reasoning rules represented in natural language. In turn, the
perceptual module provides the necessary reasoning examples for the LLM in natural
language format. The variable-length handwritten equation deciphering task, an
abstract expression of the Mayan calendar decoding, is used as a testbed to
demonstrate that ChatABL has reasoning ability beyond most existing
state-of-the-art methods, which has been well supported by comparative studies.
To the best of our knowledge, ChatABL is the first attempt to explore a
new pattern for approaching human-level cognitive ability via natural
language interaction with ChatGPT.
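A schematic sketch of the abductive loop described above: the perceptual module reads symbols from raw inputs, the LLM abduces corrections using rules stated in natural language, and the corrected facts supervise the perceptual module. All function and method names here are hypothetical placeholders, not the authors' implementation:

```python
def abductive_loop(perception_model, llm, images, rules_text, rounds=5):
    """Schematic ChatABL-style loop (names are hypothetical placeholders):
    1. the perceptual module predicts symbolic facts from raw inputs,
    2. the LLM abduces corrections to logically inconsistent facts
       using reasoning rules stated in natural language,
    3. the corrected facts are used to retrain the perceptual module."""
    for _ in range(rounds):
        facts = perception_model.predict(images)  # possibly incomplete/incorrect symbols
        prompt = (f"Rules: {rules_text}\n"
                  f"Observed facts: {facts}\n"
                  "Revise the facts so they satisfy the rules.")
        revised = llm(prompt)                     # LLM abduces consistent facts
        perception_model.train_on(images, revised)  # retrain on corrected pseudo-labels
    return perception_model
```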
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth review, oriented to both academia and industry, is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing users,
scholars, and entrepreneurs to gain an in-depth understanding of the Metaverse
ecosystem and to find their opportunities and potential for contribution.
Semantic Segmentation Enhanced Transformer Model for Human Attention Prediction
Saliency Prediction aims to predict the attention distribution of human eyes
given an RGB image. Most of the recent state-of-the-art methods are based on
deep image feature representations from traditional CNNs. However, traditional
convolutions cannot capture the global features of an image well due to their
small kernel sizes. Moreover, high-level factors that closely correlate with
human visual perception, e.g., objects, color, and light, are not considered.
Motivated by these observations, we propose a Transformer-based method with
semantic segmentation as an auxiliary learning objective. The Transformer can
capture more global cues of the image. In addition, simultaneously learning
object segmentation simulates human visual perception, which we verify through
an investigation of human gaze control in cognitive science. We
build an extra decoder for the subtask and the multiple tasks share the same
Transformer encoder, forcing it to learn from multiple feature spaces. We find
in practice that simply adding the subtask can confuse the main-task learning,
so a Multi-task Attention Module is proposed to handle the feature
interaction between the multiple learning targets. Our method achieves
competitive performance compared to other state-of-the-art methods.
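A minimal sketch of the shared-encoder, two-decoder layout described above; the layer sizes, the plain Transformer encoder, and the simple linear heads are illustrative assumptions, and the paper's Multi-task Attention Module is not reproduced here:

```python
import torch
import torch.nn as nn

class MultiTaskSaliencyNet(nn.Module):
    """Shared Transformer encoder with two task heads:
    saliency prediction (main task) and semantic segmentation (subtask).
    Layer sizes are illustrative, not the paper's configuration."""

    def __init__(self, dim: int = 256, num_classes: int = 21):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # shared across tasks
        self.saliency_head = nn.Linear(dim, 1)           # per-token saliency logit
        self.segment_head = nn.Linear(dim, num_classes)  # per-token class logits

    def forward(self, patch_tokens: torch.Tensor):
        # patch_tokens: (batch, tokens, dim) image patch embeddings
        feats = self.encoder(patch_tokens)
        saliency = self.saliency_head(feats)      # (batch, tokens, 1)
        segmentation = self.segment_head(feats)   # (batch, tokens, num_classes)
        return saliency, segmentation

# Joint training: both tasks backpropagate through the shared encoder,
# e.g., loss = bce(saliency, gaze_map) + ce(segmentation, seg_labels).
```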
Neural Architecture Search: Insights from 1000 Papers
In the past decade, advances in deep learning have resulted in breakthroughs
in a variety of areas, including computer vision, natural language
understanding, speech recognition, and reinforcement learning. Specialized,
high-performing neural architectures are crucial to the success of deep
learning in these areas. Neural architecture search (NAS), the process of
automating the design of neural architectures for a given task, is an
inevitable next step in automating machine learning and has already outperformed
the best human-designed architectures on many tasks. In the past few years,
research in NAS has been progressing rapidly, with over 1000 papers released
since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized
and comprehensive guide to neural architecture search. We give a taxonomy of
search spaces, algorithms, and speedup techniques, and we discuss resources
such as benchmarks, best practices, other surveys, and open-source libraries.
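As a toy illustration of the search problem NAS automates, a random-search sketch over a tiny architecture space; the search space and the `train_and_evaluate` function are hypothetical stand-ins, and real NAS methods use far richer search spaces and more sophisticated algorithms:

```python
import random

# A toy search space: each architecture is a choice of depth, width, and op type.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [64, 128, 256],
    "op": ["conv3x3", "conv5x5", "depthwise"],
}

def train_and_evaluate(arch: dict) -> float:
    """Hypothetical stand-in: train `arch` on the target task and return
    validation accuracy. This expensive inner loop is what speedup
    techniques (weight sharing, performance prediction) try to avoid."""
    raise NotImplementedError

def random_search(n_trials: int = 20) -> dict:
    best_arch, best_score = None, float("-inf")
    for _ in range(n_trials):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = train_and_evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch
```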
On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective
ChatGPT is a recent chatbot service released by OpenAI and has been receiving
increasing attention over the past few months. While various
aspects of ChatGPT have been evaluated, its robustness, i.e., its performance on
unexpected inputs, is still unclear to the public. Robustness is of particular
concern in responsible AI, especially for safety-critical applications. In this
paper, we conduct a thorough evaluation of the robustness of ChatGPT from the
adversarial and out-of-distribution (OOD) perspective. To do so, we employ the
AdvGLUE and ANLI benchmarks to assess adversarial robustness and the Flipkart
review and DDXPlus medical diagnosis datasets for OOD evaluation. We select
several popular foundation models as baselines. Results show that ChatGPT has
consistent advantages on most adversarial and OOD classification and
translation tasks. However, its absolute performance is far from perfect,
which suggests that adversarial and OOD robustness remains a significant threat
to foundation models. Moreover, ChatGPT shows astounding performance in
understanding dialogue-related texts and we find that it tends to provide
informal suggestions for medical tasks instead of definitive answers. Finally,
we present in-depth discussions of possible research directions.
Comment: Technical report; code is at:
https://github.com/microsoft/robustlear
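A minimal sketch of the kind of robustness evaluation described here, using the AdvGLUE benchmark as published on the Hugging Face Hub; the `classify` wrapper around the chatbot under test is a hypothetical placeholder:

```python
from datasets import load_dataset

def classify(text: str) -> int:
    """Hypothetical wrapper: prompt the chatbot under test to label
    `text` as negative (0) or positive (1) sentiment."""
    raise NotImplementedError

# AdvGLUE's adversarial SST-2 split (sentiment classification).
data = load_dataset("adv_glue", "adv_sst2", split="validation")

correct = 0
for example in data:
    if classify(example["sentence"]) == example["label"]:
        correct += 1
print(f"Adversarial accuracy: {correct / len(data):.3f}")
```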
Hi4D: 4D Instance Segmentation of Close Human Interaction
We propose Hi4D, a method and dataset for the automatic analysis of
physically close human-human interaction under prolonged contact. Robustly
disentangling several in-contact subjects is a challenging task due to
occlusions and complex shapes. Hence, existing multi-view systems typically
fuse 3D surfaces of close subjects into a single, connected mesh. To address
this issue, we leverage i) individually fitted neural implicit avatars; ii) an
alternating optimization scheme that refines pose and surface through periods
of close proximity; and thereby iii) segment the fused raw scans into individual
instances. From these instances we compile the Hi4D dataset of 4D textured scans of
20 subject pairs, 100 sequences, and a total of more than 11K frames. Hi4D
contains rich interaction-centric annotations in 2D and 3D alongside accurately
registered parametric body models. We define varied human pose and shape
estimation tasks on this dataset and provide results from state-of-the-art
methods on these benchmarks.
Comment: Project page: https://yifeiyin04.github.io/Hi4D
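A schematic sketch of the alternating pose/surface refinement described above; every object and method here is a hypothetical placeholder, not the authors' implementation:

```python
def alternating_refinement(avatars, scans, rounds=3):
    """Schematic alternating optimization (placeholders, not the authors' code):
    alternate between refining each subject's pose against the fused scans
    with surfaces frozen, and refining surfaces with poses frozen. The fitted
    avatars then assign each scan point to a subject instance."""
    for _ in range(rounds):
        for avatar in avatars:
            avatar.optimize_pose(scans, freeze_surface=True)
        for avatar in avatars:
            avatar.optimize_surface(scans, freeze_pose=True)
    # Segment: label each scan point by the avatar whose implicit surface
    # explains it best (e.g., highest occupancy / smallest SDF value).
    return [scan.assign_points(avatars) for scan in scans]
```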
Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes
Humans have long been recorded in a variety of forms since antiquity. For
example, sculptures and paintings were the primary media for depicting human
beings before the invention of cameras. However, most current human-centric
computer vision tasks like human pose estimation and human image generation
focus exclusively on natural images in the real world. Artificial humans, such
as those in sculptures, paintings, and cartoons, are commonly neglected, making
existing models fail in these scenarios. As an abstraction of life, art
incorporates humans in both natural and artificial scenes. We take advantage of
this and introduce the Human-Art dataset to bridge related tasks in natural and
artificial scenarios. Specifically, Human-Art contains 50k high-quality images
with over 123k person instances from 5 natural and 15 artificial scenarios,
which are annotated with bounding boxes, keypoints, self-contact points, and
text information for humans represented in both 2D and 3D. It is, therefore,
comprehensive and versatile for various downstream tasks. We also provide a
rich set of baseline results and detailed analyses for related tasks, including
human detection, 2D and 3D human pose estimation, image generation, and motion
transfer. As a challenging dataset, we hope Human-Art can provide insights for
relevant research and open up new research questions.
Comment: CVPR2023