
    Reports of the AAAI 2019 spring symposium series

    Applications of machine learning combined with AI algorithms have driven unprecedented economic disruption across diverse fields, including industry, the military, medicine, and finance. The present economic impact of machine learning is estimated in the trillions of dollars, and even larger impacts are forecast. But as autonomous machines become ubiquitous, problems have surfaced. Early on, and again in 2018, Judea Pearl warned AI scientists that they must "build machines that make sense of what goes on in their environment," a warning that remains unheeded and may impede future development. For example, self-driving vehicles often rely on sparse data; self-driving cars have already been involved in fatalities, including that of a pedestrian; and yet machine learning is unable to explain the contexts within which it operates.

    The Faculty Notebook, September 2019

    The Faculty Notebook is published periodically by the Office of the Provost at Gettysburg College to bring to the attention of the campus community accomplishments and activities of academic interest. Faculty are encouraged to submit materials for consideration for publication to the Associate Provost for Faculty Development. Copies of this publication are available at the Office of the Provost

    Improving fairness in machine learning systems: What do industry practitioners need?

    The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. A surge of recent work has focused on the development of algorithmic tools to assess and mitigate such unfairness. If these tools are to have a positive impact on industry practice, however, it is crucial that their design be informed by an understanding of real-world needs. Through 35 semi-structured interviews and an anonymous survey of 267 ML practitioners, we conduct the first systematic investigation of commercial product teams' challenges and needs for support in developing fairer ML systems. We identify areas of alignment and disconnect between the challenges faced by industry practitioners and solutions proposed in the fair ML research literature. Based on these findings, we highlight directions for future ML and HCI research that will better address industry practitioners' needs.

    Comment: To appear in the 2019 ACM CHI Conference on Human Factors in Computing Systems (CHI 2019)

    Annual Report 2019-2020

    LETTER FROM THE DEAN

    As I write this letter wrapping up the 2019-20 academic year, we remain in a global pandemic that has profoundly altered our lives. While many things have changed, some stayed the same: our CDM community worked hard, showed up for one another, and continued to advance their respective fields.

    A year that began like many others changed swiftly on March 11th, when the University announced that spring classes would run remotely. By March 28th, the first day of spring quarter, we had moved 500 CDM courses online thanks to the diligent work of our faculty, staff, and instructional designers. But CDM's work went beyond the (virtual) classroom. We mobilized our makerspaces to assist in the production of personal protective equipment for Illinois healthcare workers, participated in COVID-19 research initiatives, and were inspired by the innovative ways our student groups learned to network. You can read more about our response to the COVID-19 pandemic on pgs. 17-19.

    Throughout the year, our students were nationally recognized for their skills and creative work, while our faculty were published dozens of times and screened their films at prestigious film festivals. We added a new undergraduate Industrial Design program, opened a second makerspace on the Lincoln Park Campus, and created new opportunities for Chicago youth.

    I am pleased to share with you the College of Computing and Digital Media's (CDM) 2019-20 annual report, highlighting our collective accomplishments.

    David Miller, Dean

    Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection

    The arms race between spambots and spambot detectors is made of several cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of the adversarial learning approach, a new practice is emerging: deliberately manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain, and study, their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm allows the creation of synthetic, evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots do evade current detection techniques. However, they provide all the elements needed to improve such techniques, making a proactive approach to the design of social bot detection systems possible.

    Comment: This is the pre-final version of a paper accepted @ 11th ACM Conference on Web Science, June 30-July 3, 2019, Boston, U
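    The evolve-and-evade loop the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' algorithm: the behavioural-feature representation, the toy threshold "detector", the fitness function, and every parameter value here are invented for the example.

    ```python
    import random

    random.seed(0)  # deterministic for illustration

    THRESHOLD = 0.5

    # Toy stand-in for a spambot classifier: flags an account as a bot
    # when the mean of its behavioural features exceeds a threshold.
    def detector_score(features):
        return sum(features) / len(features)

    def is_detected(features):
        return detector_score(features) > THRESHOLD

    # Fitness rewards sitting below the detection threshold while
    # penalising drift away from the original bot-like behaviour
    # (so the evolved account still "works" as a bot).
    def fitness(features, original):
        evasion = THRESHOLD - detector_score(features)
        drift = sum(abs(a - b) for a, b in zip(features, original)) / len(features)
        return evasion - 0.5 * drift

    def mutate(features, rate=0.2):
        # Perturb each feature, clamped to the [0, 1] range.
        return [min(1.0, max(0.0, f + random.uniform(-rate, rate))) for f in features]

    def crossover(a, b):
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def evolve(original, generations=50, pop_size=30):
        population = [mutate(original) for _ in range(pop_size)]
        for _ in range(generations):
            # Select the fitter half, refill with mutated offspring.
            population.sort(key=lambda ind: fitness(ind, original), reverse=True)
            survivors = population[: pop_size // 2]
            children = [
                mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(pop_size - len(survivors))
            ]
            population = survivors + children
        return max(population, key=lambda ind: fitness(ind, original))

    # A "current-generation" bot whose features trip the detector.
    bot = [0.9, 0.8, 0.7, 0.95]
    evolved = evolve(bot)
    print(is_detected(bot), is_detected(evolved))  # the evolved variant should evade
    ```

    The point of the sketch is the proactive cycle: each evolved, undetected variant becomes a new training sample, so the detector can be hardened against the next bot generation before it appears in the wild.
    
    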