Annual Report 2019-2020
LETTER FROM THE DEAN
As I write this letter wrapping up the 2019-20 academic year, we remain in a global pandemic that has profoundly altered our lives. While many things have changed, some stayed the same: our CDM community worked hard, showed up for one another, and continued to advance their respective fields.

A year that began like many others changed swiftly on March 11th when the University announced that spring classes would run remotely. By March 28th, the first day of spring quarter, we had moved 500 CDM courses online thanks to the diligent work of our faculty, staff, and instructional designers. But CDM’s work went beyond the (virtual) classroom. We mobilized our makerspaces to assist in the production of personal protective equipment for Illinois healthcare workers, participated in COVID-19 research initiatives, and were inspired by the innovative ways our student groups learned to network. You can read more about our response to the COVID-19 pandemic on pgs. 17-19.

Throughout the year, our students were nationally recognized for their skills and creative work while our faculty were published dozens of times and screened their films at prestigious film festivals. We added a new undergraduate Industrial Design program, opened a second makerspace on the Lincoln Park Campus, and created new opportunities for Chicago youth. I am pleased to share with you the College of Computing and Digital Media’s (CDM) 2019-20 annual report, highlighting our collective accomplishments.
David Miller
Dean
Sensitive Research, Practice, and Design in HCI
New research areas in HCI examine complex and sensitive topics, such as crisis, life transitions, and mental health. Further, research on complex topics such as harassment and graphic content can leave researchers vulnerable to emotional and physical harm. There is a need to bring researchers together to discuss challenges across sensitive research spaces and environments. We propose a workshop to explore the methodological, ethical, and emotional challenges of sensitive research in HCI. We will actively recruit from diverse research environments (industry, academia, government, etc.) and methods areas (qualitative, quantitative, design practices, etc.), identify commonalities among these areas, and encourage relationship-building between them. This one-day workshop will be led by academic and industry researchers with diverse methodological, topical, and employment experiences.
AI auditing: The Broken Bus on the Road to AI Accountability
One of the most concrete measures to take towards meaningful AI accountability is to consequentially assess and report a system's performance and impact. However, the practical nature of the "AI audit" ecosystem is muddled and imprecise, making it difficult to work through the various concepts and map out the stakeholders involved in the practice. First, we taxonomize current AI audit practices as completed by regulators, law firms, civil society, journalism, academia, and consulting agencies. Next, we assess the impact of audits done by stakeholders within each domain. We find that only a subset of AI audit studies translate to desired accountability outcomes. We thus assess and isolate the practices necessary for effective AI audit results, articulating the observed connections between AI audit design, methodology, and institutional context and their influence on its effectiveness as a meaningful mechanism for accountability.

Comment: To appear in the proceedings of the 2nd IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) 202
SHELDON: Smart habitat for the elderly
An insightful document concerning active and assisted living from different perspectives: furniture and habitat, ICT solutions, and healthcare.
"Alexa, Can I Program You?": Student Perceptions of Conversational Artificial Intelligence Before and After Programming Alexa
Growing up in an artificial intelligence-filled world, with Siri and Amazon Alexa often within arm's - or speech's - reach, could have a significant impact on children. Conversational agents could influence how students anthropomorphize computer systems or develop a theory of mind. Previous research has explored how conversational agents are used and perceived by children within and outside of learning contexts. This study investigates how middle and high school students' perceptions of Alexa change through programming their own conversational agents in week-long AI education workshops. Specifically, we investigate the workshops' influence on student perceptions of Alexa's intelligence, friendliness, aliveness, safeness, trustworthiness, human-likeness, and feelings of closeness. We found that students felt Alexa was more intelligent and felt closer to Alexa after the workshops. We also found strong correlations between students' perceptions of Alexa's friendliness and trustworthiness, and between safeness and trustworthiness. Finally, we explored how students tended to use computer science-related diction and ideas more frequently after the workshops. Based on our findings, we recommend designers carefully consider personification, transparency, playfulness, and utility when designing CAs for learning contexts.

Comment: 16 pages, 6 figures