"...when you’re a Stranger": Evaluating Safety Perceptions of (un)familiar Urban Places
What makes us feel safe when walking around our cities? Previous
research has shown that our perception of safety strongly depends
on characteristics of the built environment; separately, research has
also shown that safety perceptions depend on the people we encounter
on the streets. However, it is not clear how the two relate
to one another. In this paper, we propose a quantitative method
to investigate this relationship. Using an online crowdsourcing
approach, we collected 5452 safety ratings from over 500 users
about images showing various combinations of the built environment
and the people inhabiting it. We applied analysis of covariance (ANCOVA)
to the collected data and found that familiarity with the scene
is the single most important predictor of our sense of safety. Controlling
for familiarity, we then identified which features of the urban
environment increase or decrease our safety perception.
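The ANCOVA described in the abstract can be sketched as a linear model with a familiarity factor and one built-environment covariate. Everything below is illustrative: the data are synthetic and the variable names (`familiar`, `greenery`) are assumptions, not the study's actual variables.

```python
import numpy as np

# Hypothetical data: safety ratings, a 0/1 familiarity indicator, and one
# continuous built-environment covariate (e.g. a greenery score).
rng = np.random.default_rng(0)
n = 200
familiar = rng.integers(0, 2, n)           # 1 = scene is familiar
greenery = rng.uniform(0, 1, n)            # built-environment covariate
rating = 2.0 + 1.5 * familiar + 0.8 * greenery + rng.normal(0, 0.5, n)

# ANCOVA as a linear model: rating ~ intercept + familiarity + covariate.
X = np.column_stack([np.ones(n), familiar, greenery])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)

# Compare residual sums of squares with and without the familiarity term
# to gauge how much variance familiarity explains after the covariate.
rss_full = np.sum((rating - X @ beta) ** 2)
X0 = X[:, [0, 2]]                          # drop the familiarity column
beta0, *_ = np.linalg.lstsq(X0, rating, rcond=None)
rss_reduced = np.sum((rating - X0 @ beta0) ** 2)
f_stat = (rss_reduced - rss_full) / (rss_full / (n - 3))
```

A large `f_stat` here plays the role of the ANCOVA result in the abstract: familiarity remains a strong predictor of the rating even with the environmental covariate controlled for.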
Gender Representation on Journal Editorial Boards in the Mathematical Sciences
We study gender representation on the editorial boards of 435 journals in the
mathematical sciences. Women are known to comprise approximately 15% of
tenure-stream faculty positions in doctoral-granting mathematical sciences
departments in the United States. Compared to this pool, the likely source of
journal editorships, we find that 8.9% of the 13067 editorships in our study
are held by women. We describe group variations within the editorships by
identifying specific journals, subfields, publishers, and countries that
significantly exceed or fall short of this average. To enable our study, we
develop a semi-automated method for inferring gender that has an estimated
accuracy of 97.5%. Our findings provide the first measure of gender
distribution on editorial boards in the mathematical sciences, offer insights
that suggest future studies in the mathematical sciences, and introduce new
methods that enable large-scale studies of gender distribution in other fields.
Comment: 21 pages, 10 figures
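The abstract does not detail the semi-automated inference method, but a common shape for such pipelines is a first-name lookup with a manual-review fallback for ambiguous or unseen names. The table and names below are purely illustrative assumptions, not the study's data or procedure.

```python
# Illustrative first-name lookup table; a real pipeline would use a large
# curated name corpus, not this toy dictionary.
NAME_TABLE = {
    "maria": "female", "susan": "female",
    "john": "male", "andrew": "male",
    "wei": "unknown",  # ambiguous across genders in many corpora
}

def infer_gender(full_name: str) -> str:
    """Return 'female', 'male', or 'needs-review' for an editor's name."""
    first = full_name.strip().split()[0].lower()
    label = NAME_TABLE.get(first, "unknown")
    # The "semi-automated" part: ambiguous or unseen names are routed to
    # a human reviewer rather than guessed, which is how such methods can
    # reach high estimated accuracy.
    return label if label in ("female", "male") else "needs-review"
```

For example, `infer_gender("Maria Gomez")` resolves automatically, while `infer_gender("Wei Chen")` is flagged for the manual step.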
A Labeling Task Design for Supporting Algorithmic Needs: Facilitating Worker Diversity and Reducing AI Bias
Studies on supervised machine learning (ML) recommend involving workers from
various backgrounds in training dataset labeling to reduce algorithmic bias.
Moreover, sophisticated tasks for categorizing objects in images are necessary
to improve ML performance, further complicating micro-tasks. This study aims to
develop a task design that supports the fair participation of people,
regardless of their specific backgrounds or the task's difficulty. Collaborating
with 75 labelers from diverse backgrounds over 3 months, we analyzed workers'
log data and related narratives to identify the task's hurdles and helpers.
The findings revealed that workers' decision-making tendencies varied with
their backgrounds. We found that a community that actively supports workers,
together with machine feedback as perceived by the workers, made it easier
for people to engage in the work; as a result, ML bias can be expected to be
mitigated. Based on these findings, we suggest an extended human-in-the-loop
approach that connects labelers, machines, and communities rather than
isolating individual workers.
Comment: 45 pages, 4 figures
Tools and methods in participatory modeling: Selecting the right tool for the job
Various tools and methods are used in participatory modeling, at different stages of the process and for different purposes. The diversity of tools and methods can create challenges for stakeholders and modelers when selecting the ones most appropriate for their projects. We offer a systematic overview, assessment, and categorization of methods to assist modelers and stakeholders with their choices. Most of the available literature provides little justification for the use of particular methods or tools in a given study; in most cases, the prior experience and skills of the modelers appear to have had a dominant effect on the selection of methods. While we have found no real evidence that this approach is wrong, we think that putting more thought into method selection and choosing the most appropriate method for the project can produce better results. Based on expert opinion and a survey of modelers engaged in participatory processes, we offer practical guidelines to improve decisions about method selection at different stages of the participatory modeling process.
A Quantitative Approach to Evaluate and Develop Theories on (Fear of) Crime in Urban Environments
Well-established work in criminological, architectural and urban studies suggests that there is a strong correlation between crime, perceived safety, the fear of crime, and the presence of different demographics, the people dynamics, in an urban environment. These studies have been conducted primarily using qualitative evaluation methods, and are typically limited in terms of the geographical area they cover, the number of respondents they reach out to, and the temporal frequency with which they can be repeated. As cities are rapidly growing and evolving complex entities, complementary approaches that afford social and urban scientists the ability to evaluate urban crime and fear of crime theories at scale are required. In this thesis, I propose a combination of methodologies following a data mining and crowdsourcing approach to quantitatively validate these theories at scale, and to support the exploration of new ones. To relate people dynamics to crime quantitatively, I first analyse footfall counts as recorded by telecommunication data, and extract metrics that act as proxies of urban crime theories. Using correlation and regression analysis between such proxies and crime activity derived from open crime data records, the method can help to understand to what extent different theories of urban crime hold, and where. To relate people dynamics to fear of crime quantitatively, I then built two image-based online crowdsourcing platforms to investigate to what extent online crowdsourcing can be used to gather safety perceptions about urban places, defined by the combination of the built environment and the people inhabiting it. As existing theories suggest that knowing who the respondents are is crucial for understanding safety perceptions, I also gathered their demographic background information to discuss their perceptions accordingly. I applied analysis of variance (ANOVA) and covariance (ANCOVA) to these data.
The method can help to understand what visual properties based on people
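The correlation step between footfall-derived proxies and open crime records described above might look like this minimal sketch; the areas, counts, and the choice of plain Pearson correlation are illustrative assumptions, not the thesis's actual data or full regression analysis.

```python
import math

# Hypothetical per-area aggregates: footfall counts from telecom data and
# crime counts from open crime records (all numbers made up).
footfall = [1200, 3400, 560, 4100, 2300, 800]
crimes   = [  30,   75,  12,   90,   51,  20]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A high r for a given proxy would count as quantitative support for the
# urban crime theory that proxy operationalizes, in that area and period.
r = pearson(footfall, crimes)
```

In practice the thesis pairs this with regression analysis per theory and per area; the sketch only shows the core proxy-versus-crime comparison.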
Constructing and restraining the societies of surveillance: Accountability, from the rise of intelligence services to the expansion of personal data networks in Spain and Brazil (1975-2020)
541 p.
The objective of this study is to examine the development of socio-technical accountability mechanisms in order to: a) preserve and increase the autonomy of individuals subjected to surveillance, and b) redress the asymmetry of power between those who watch and those who are watched. To do so, we address two surveillance realms: intelligence services and personal data networks. The cases studied are Spain and Brazil, from the beginning of the political transitions in the 1970s (in the realm of intelligence) and from the expansion of Internet digital networks in the 1990s (in the realm of personal data) to the present. The examination of accountability thus comprises the holistic evolution of institutions, regulations, and market strategies, as well as resistance tactics. The conclusion summarizes the accountability mechanisms and proposes universal principles to improve the legitimacy of authority in surveillance and in politics in a broad sense.
Envisioning Identity: The Social Production of Computer Vision
Computer vision technologies have been increasingly scrutinized in recent years for their propensity to cause harm. Computer vision systems designed to interpret visual data about humans for various tasks are perceived as particularly high risk. Broadly, the harms of computer vision focus on demographic biases (favoring one group over another) and categorical injustices (through erasure, stereotyping, or problematic labels). Prior work has focused on both uncovering these harms and mitigating them, through, for example, better dataset collection practices and guidelines for more contextual data labeling. This research has largely focused on understanding discrete computer vision artifacts, such as datasets or model outputs, and their implications for specific identity groups or for privacy. There is opportunity to further understand how human identity is embedded into computer vision not only across these artifacts, but also across the network of human workers who shape computer vision systems.
This dissertation focuses on understanding how human identity is conceptualized across two different “layers” of computer vision: (1) at the artifact layer, where the classification ontology is deployed, in the form of datasets and model inputs and outputs; and (2) at the development layer, where social decisions are made about how to implement models and annotations by traditional tech workers. Specifically, I examine how identity is represented in artifacts and how those representations are derived from human workers. I demonstrate how human workers rely on their own subjective positionalities—the worldviews they hold as a result of their own identities and experiences.
I present six studies that identify the subjectivity of computer vision. Three studies focus on artifacts, both model outputs and datasets, to discuss how identity is currently implemented and how that implementation is embedded with specific disciplinary values that often clash with more sociocultural lenses on identity. The fourth and fifth studies focus on how human workers shape these artifacts. Through interviews with both traditional tech workers (like engineers and data scientists) and contingent data workers (who apply requirements given to them by traditional tech workers), I uncover how the positionality of human actors shapes identity in computer vision. Finally, in the sixth study, I examine how power operates between these two types of workers, traditional tech workers and data workers. Identity, as a concept, is treated as an infrastructure on which to build products. Workers attempt to uncover some underlying truth about identity and capture it in technical systems. However, in reality, workers reference the nebulous and intangible concept of identity to implement their own positional perspectives. I demonstrate that traditional tech workers have a positional power in the development of identity in computer vision; traditional worker positionalities are viewed as expert perspectives to be solidified into artifacts. Meanwhile, data worker positionalities are viewed as risks to the quality and trustworthiness of those artifacts. Thus, traditional tech workers attempt to control data worker positionalities, instilling in data workers their own positional perspectives.
By synthesizing insights from these six studies, this dissertation contributes a theory on identity in developing technical artifacts. I argue that identity concepts in the process of computer vision development move from open (filled with nuance, complexity, history, and opportunity) to closed (narrowly defined and embedded into artifacts that are deployed to reify a specific worldview of identity). I describe how workers pull from the intangible meta-concept of "Identity" to shape, through the process of development, specific Attributes to embed into technologies. I show how workers transform these Attributes through the development process into narrower and narrower definitions. These definitions of identity thus become Technical Attributes, highly specific implementations of identity which are no longer malleable to different perspectives.