Queer In AI: A Case Study in Community-Led Participatory AI
We present Queer in AI as a case study for community-led participatory design
in AI. We examine how participatory design and intersectional tenets started
and shaped this community's programs over the years. We discuss different
challenges that emerged in the process, look at ways this organization has
fallen short of operationalizing participatory and intersectional principles,
and then assess the organization's impact. Queer in AI provides important
lessons and insights for practitioners and theorists of participatory methods
broadly through its rejection of hierarchy in favor of decentralization,
success at building aid and programs by and for the queer community, and effort
to change actors and institutions outside of the queer community. Finally, we
theorize how communities like Queer in AI contribute to the participatory
design in AI more broadly by fostering cultures of participation in AI,
welcoming and empowering marginalized participants, critiquing poor or
exploitative participatory practices, and bringing participation to
institutions outside of individual research projects. Queer in AI's work serves
as a case study of grassroots activism and participatory methods within AI,
demonstrating the potential of community-led participatory methods and
intersectional praxis, while also providing challenges, case studies, and
nuanced insights to researchers developing and using participatory methods.
Comment: To appear at FAccT 202
Structured Object-Aware Physics Prediction for Video Modeling and Planning
When humans observe a physical system, they can easily locate objects,
understand their interactions, and anticipate future behavior, even in settings
with complicated and previously unseen interactions. For computers, however,
learning such models from videos in an unsupervised fashion is an unsolved
research problem. In this paper, we present STOVE, a novel state-space model
for videos, which explicitly reasons about objects and their positions,
velocities, and interactions. It is constructed by combining an image model and
a dynamics model in a compositional manner, and it improves on previous work by
reusing the dynamics model for inference, accelerating and regularizing
training. STOVE predicts videos with convincing physical behavior over hundreds
of timesteps, outperforms previous unsupervised models, and even approaches the
performance of supervised baselines. We further demonstrate the strength of our
model as a simulator for sample efficient model-based control in a task with
heavily interacting objects.
Comment: Published as a conference paper at the 2020 International Conference on Learning Representations
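The abstract describes a state-space model whose latent state holds each object's position and velocity, with a dynamics model that is unrolled over time (and reused for inference). A minimal sketch of such an object-centric rollout is below; the function names, the elastic-bounce physics, and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dynamics_step(states, dt=0.1, box=10.0):
    """Advance (N, 4) object states [x, y, vx, vy] by one timestep,
    bouncing objects elastically off the walls of a square box."""
    states = states.copy()
    states[:, :2] += dt * states[:, 2:]              # integrate positions
    # reflect velocities wherever a position leaves the box
    out = (states[:, :2] < 0.0) | (states[:, :2] > box)
    states[:, 2:][out] *= -1.0
    states[:, :2] = np.clip(states[:, :2], 0.0, box)
    return states

def rollout(states, steps):
    """Unroll the dynamics model for `steps` timesteps."""
    trajectory = [states]
    for _ in range(steps):
        trajectory.append(dynamics_step(trajectory[-1]))
    return np.stack(trajectory)                      # (steps + 1, N, 4)

# two objects with initial positions and velocities
traj = rollout(np.array([[1.0, 1.0, 2.0, 0.5],
                         [8.0, 5.0, -1.0, 0.0]]), steps=100)
print(traj.shape)  # (101, 2, 4)
```

In the full model, an image model would render (and infer) these object states from video frames, while the dynamics model predicts their evolution; this sketch covers only the prediction rollout.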