Fine-Grained Car Detection for Visual Census Estimation
Targeted socioeconomic policies require an accurate understanding of a
country's demographic makeup. To that end, the United States spends more than 1
billion dollars a year gathering census data such as race, gender, education,
occupation, and unemployment rates. Compared with the traditional method of
collecting surveys over many years, which is costly and labor-intensive,
data-driven machine learning approaches are cheaper and faster, with the
potential to detect trends in close to real time. In this work, we
leverage the ubiquity of Google Street View images and develop a computer
vision pipeline to predict income, per capita carbon emission, crime rates and
other city attributes from a single source of publicly available visual data.
We first detect cars in 50 million images across 200 of the largest US cities
and train a model to predict demographic attributes using the detected cars. To
facilitate our work, we collected the largest and most challenging
fine-grained dataset reported to date, consisting of over 2,600 classes of
cars. The dataset comprises images from Google Street View and other web
sources, classified by car experts to capture even the most subtle visual
differences. We
use this data to construct the largest scale fine-grained detection system
reported to date. Our predictions correlate well with ground-truth income
data (r = 0.82), Massachusetts vehicle registration records, and sources
investigating crime rates, income segregation, per capita carbon emissions,
and other market research. Finally, we learn interesting relationships
between cars and neighborhoods, allowing us to perform the first large-scale
sociological analysis of cities using computer vision techniques.

Comment: AAAI 201
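The validation step described above, correlating car-derived predictions with ground-truth census income, can be sketched as a plain Pearson correlation. This is a minimal illustration with made-up per-city numbers; `pearson_r` and the sample values are our own, not from the paper:

```python
import numpy as np

def pearson_r(predicted, actual):
    """Pearson correlation coefficient between two equal-length sequences."""
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(actual, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal entry.
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical per-city median incomes: model output vs. census ground truth.
predicted_income = [42_000, 55_000, 61_000, 38_000, 70_000]
actual_income    = [45_000, 52_000, 65_000, 36_000, 68_000]

r = pearson_r(predicted_income, actual_income)
```

A value of `r` near 1 indicates the car-based predictions track the census figures closely; the paper reports r = 0.82 against real income data.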
Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US
The United States spends more than $1B each year on initiatives such as the
American Community Survey (ACS), a labor-intensive door-to-door study that
measures statistics relating to race, gender, education, occupation,
unemployment, and other demographic factors. Although a comprehensive source of
data, the lag between demographic changes and their appearance in the ACS can
exceed half a decade. As digital imagery becomes ubiquitous and machine vision
techniques improve, automated data analysis may provide a cheaper and faster
alternative. Here, we present a method that determines socioeconomic trends
from 50 million images of street scenes, gathered in 200 American cities by
Google Street View cars. Using deep learning-based computer vision techniques,
we determined the make, model, and year of all motor vehicles encountered in
particular neighborhoods. Data from this census of motor vehicles, which
enumerated 22M automobiles in total (8% of all automobiles in the US), was used
to accurately estimate income, race, education, and voting patterns, with
single-precinct resolution. (The average US precinct contains approximately
1000 people.) The resulting associations are surprisingly simple and powerful.
For instance, if the number of sedans encountered during a 15-minute drive
through a city is higher than the number of pickup trucks, the city is likely
to vote for a Democrat during the next Presidential election (88% chance);
otherwise, it is likely to vote Republican (82%). Our results suggest that
automated systems for monitoring demographic trends may effectively complement
labor-intensive approaches, with the potential to detect trends with fine
spatial resolution, in close to real time.

Comment: 41 pages including supplementary material. Under review at PNA
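The sedan-versus-pickup association can be restated as a one-line decision rule. This is a toy paraphrase of the paper's headline numbers, not its actual model; the probabilities below are the reported accuracies of the rule, not fitted parameters:

```python
def predict_vote(sedans_seen, pickups_seen):
    """Toy decision rule from the reported association: if more sedans than
    pickup trucks are encountered during a drive through a city, predict a
    Democratic vote (reported 88% accurate); otherwise predict Republican
    (reported 82% accurate). Returns (predicted_party, reported_accuracy)."""
    if sedans_seen > pickups_seen:
        return ("Democrat", 0.88)
    return ("Republican", 0.82)

# Example: a 15-minute drive passing 30 sedans and 10 pickup trucks.
party, accuracy = predict_vote(30, 10)
```

The point of the rule is its simplicity: a single vehicle-count comparison carries substantial predictive signal at the city level.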
Model Cards for Model Reporting
Trained machine learning models are increasingly used to perform high-impact
tasks in areas such as law enforcement, medicine, education, and employment. In
order to clarify the intended use cases of machine learning models and minimize
their usage in contexts for which they are not well suited, we recommend that
released models be accompanied by documentation detailing their performance
characteristics. In this paper, we propose a framework that we call model
cards, to encourage such transparent model reporting. Model cards are short
documents accompanying trained machine learning models that provide benchmarked
evaluation in a variety of conditions, such as across different cultural,
demographic, or phenotypic groups (e.g., race, geographic location, sex,
Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex
and Fitzpatrick skin type) that are relevant to the intended application
domains. Model cards also disclose the context in which models are intended to
be used, details of the performance evaluation procedures, and other relevant
information. While we focus primarily on human-centered machine learning models
in the application fields of computer vision and natural language processing,
this framework can be used to document any trained machine learning model. To
solidify the concept, we provide cards for two supervised models: One trained
to detect smiling faces in images, and one trained to detect toxic comments in
text. We propose model cards as a step towards the responsible democratization
of machine learning and related AI technology, increasing transparency into how
well AI technology works. We hope this work encourages those releasing trained
machine learning models to accompany model releases with similar detailed
evaluation numbers and other relevant documentation.
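The structure of a model card lends itself to a simple schema. The sketch below is illustrative only: the field names loosely follow the sections the paper proposes (intended use, evaluation procedure, disaggregated metrics), and the model name and numbers are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal sketch of a model card; field names are illustrative,
    loosely based on the sections proposed in the paper."""
    model_name: str
    intended_use: str
    out_of_scope_uses: list
    evaluation_procedure: str
    # Disaggregated results, e.g. accuracy per demographic or phenotypic group.
    metrics_by_group: dict = field(default_factory=dict)

# Hypothetical card for a smiling-face detector, with made-up numbers.
card = ModelCard(
    model_name="smiling-face-detector-v1",
    intended_use="Detect smiling faces in consumer photo collections",
    out_of_scope_uses=["emotion inference", "surveillance"],
    evaluation_procedure="Held-out test set, labels agreed by three annotators",
    metrics_by_group={"Fitzpatrick I-II": 0.93, "Fitzpatrick V-VI": 0.88},
)
```

Reporting `metrics_by_group` alongside a single aggregate number is what surfaces performance gaps across the cultural, demographic, and intersectional groups the abstract describes.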
Whose Side are Ethics Codes On? Power, Responsibility and the Social Good
The moral authority of ethics codes stems from an assumption that they serve
a unified society, yet this ignores the political aspects of any shared
resource. The sociologist Howard S. Becker challenged researchers to clarify
their power and responsibility in his classic essay "Whose Side Are We On?".
Building on Becker's hierarchy of credibility, we report on a critical
discourse analysis of data ethics codes and emerging conceptualizations of
beneficence, or the "social good", of data technology. The analysis revealed
that ethics codes from corporations and professional associations conflated
consumers with society and were largely silent on agency. Interviews with
community organizers about social change in the digital era supplement the
analysis, surfacing the limits of technical solutions to concerns of
marginalized communities. Given evidence that highlights the gulf between the
documents and lived experiences, we argue that ethics codes that elevate
consumers may simultaneously subordinate the needs of vulnerable populations.
Understanding contested digital resources is central to the emerging field of
public interest technology. We introduce the concept of digital differential
vulnerability to explain disproportionate exposures to harm within data
technology and suggest recommendations for future ethics codes.

Comment: Conference on Fairness, Accountability, and Transparency (FAT* '20),
January 27-30, 2020, Barcelona, Spain. Correcte