Applied deep learning and data science with a human-centric and data-centric approach
The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal, though, and problems may arise especially in fields far removed from the tasks of the tech companies that spearheaded DL.
The perspective of practitioners seems to be changing, however. Human-Centric AI emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is to design systems that integrate gracefully into established workflows, since in many real-world scenarios AI may not be good enough to completely replace humans. Often such replacement may even be unneeded or undesirable.
Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from "better models" towards better, and smaller, data. He calls this approach Data-Centric AI.
Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, greater raw performance may not translate into greater utility for the end user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality.
This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. Human expertise proved crucial in improving both datasets and models. The last chapter is a slight deviation, presenting studies on the pandemic, while still preserving the human- and data-centric perspective.
Enhancing Free-text Interactions in a Communication Skills Learning Environment
Learning environments frequently use gamification to enhance user interactions. Virtual characters with whom players engage in simulated conversations often employ pre-scripted dialogues; however, free user inputs enable deeper immersion and higher-order cognition. In our learning environment, experts developed a scripted scenario as a sequence of potential actions, and we explore possibilities for enhancing interactions by enabling users to type free inputs that are matched to the pre-scripted statements using Natural Language Processing techniques. In this paper, we introduce a clustering mechanism that provides recommendations for fine-tuning the pre-scripted answers in order to better match user inputs.
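The abstract does not specify the matching technique, but the core idea — mapping a free-text user input to the closest pre-scripted statement — can be sketched with a simple lexical baseline. The sketch below uses TF-IDF vectors and cosine similarity; the function name, the example statements, and the similarity threshold are all hypothetical illustrations, not the paper's actual implementation.

```python
# Illustrative sketch: match a free-text user input to the closest
# pre-scripted statement via TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def match_input(user_input, scripted_statements, threshold=0.2):
    """Return the best-matching scripted statement, or None if no
    statement is similar enough (hypothetical threshold)."""
    vectorizer = TfidfVectorizer()
    # Fit on the scripted statements plus the user input so that
    # all texts share one vocabulary.
    matrix = vectorizer.fit_transform(scripted_statements + [user_input])
    scripted_vecs, input_vec = matrix[:-1], matrix[-1]
    scores = cosine_similarity(input_vec, scripted_vecs)[0]
    best = scores.argmax()
    return scripted_statements[best] if scores[best] >= threshold else None


# Hypothetical scenario statements, e.g. for a communication-skills exercise.
statements = [
    "I understand how you feel about the diagnosis.",
    "Could you tell me more about your symptoms?",
    "Let's schedule a follow-up appointment.",
]
print(match_input("can you say more about what symptoms you have", statements))
```

A production system would likely prefer sentence embeddings over bag-of-words TF-IDF, since paraphrases with little lexical overlap would otherwise fall below any reasonable threshold; the clustering mechanism described in the abstract could then group unmatched inputs to suggest new pre-scripted answers.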
Engines of Order
Over the last decades, and in particular since the widespread adoption of the Internet, encounters with algorithmic procedures for "information retrieval" — the activity of getting some piece of information out of a collection or repository of some kind — have become everyday experiences for most people in large parts of the world.
Improving Application Quality using Mobile Analytics
The purpose of this research is to investigate and report on how mobile analytics can help real-world developers improve the quality of their apps efficiently and effectively. The research also considers the effects of mobile analytics on the artefacts developed and maintained by the development team, and examines key characteristics of a range of mobile analytics tools and services.
Research Design: the research takes a developer-oriented perspective, using three complementary sources of data: 1) platform-level analytics, using Android Vitals as the primary analytics tool, 2) in-app analytics, with a focus on runtime failures caused by crashes and freezes (known as Application Not Responding (ANR) errors in Android), and 3) interviews with developers. Action research techniques included roles of embedded developer, guide, and observer across different mobile app projects I was involved in. Hackathons were used to experiment with the speed and ability to find and address issues reported by the analytics tools used by the app developers. Their apps have a combined active user base of over 3,000,000 users. Many of these apps use a mainstream crash analytics library, which was used to complement and contrast the results provided by the primary analytics tool. The research is intended to facilitate ease of future research and reproducibility, e.g. by using open-source projects, as the code, bug reports, etc. are all published and available. This research was complemented by a) collaborating with professional developers who provided additional examples and results, and b) investigating grey material, including grey literature and grey data.
The findings of this research highlight that using mobile analytics helped to reduce failure rates markedly, quickly, and effectively by applying the techniques described here. Various limitations and flaws were found in the analytics tools; these provide cause for concern, as they may affect an app's placement in the app store and its revenues. These limitations and flaws also make some issues in the apps harder to identify, prioritise, and fix. We identified ways to compensate for many of these and developed open-source software to facilitate additional analysis. Flaws and bugs were reported to the Android Vitals team at Google, who acknowledged they would fix several of them. Several bugs were hard to reproduce, partly because Google deliberately hid pertinent details from the data they gather. Nonetheless, app developers were able to ameliorate or fix some issues even when they were not able to reproduce them.
Android Vitals shows the potential of how the combination of an app store and platform could be used to improve the quality of apps without users needing to actively participate. Some crashes were hard to reproduce and may be impractical to find before the app is released to end users. Developers can determine comparative improvements in their releases, such as whether they fixed a bug, by using Android Vitals and similar analytics tools; i.e. mobile analytics may help teams to determine whether they have improved the quality of their app, even given the flaws and limitations in the mobile analytics themselves.
Data and the city — accessibility and openness. A Cybersalon paper on open data
This paper showcases examples of bottom-up open data and smart city applications and identifies lessons for future such efforts. Examples include Changify, a neighbourhood-based platform for residents, businesses, and companies; Open Sensors, which provides APIs to help businesses, startups, and individuals develop applications for the Internet of Things; and Cybersalon's Hackney Treasures, a location-based mobile app that uses Wikipedia entries geolocated in Hackney borough to map notable local residents. Other experiments with sensors and open data by Cybersalon members include Ilze Black and Nanda Khaorapapong's The Breather, a "breathing" balloon that uses high-end, sophisticated sensors to make air quality visible; and James Moulding's AirPublic, which measures pollution levels. Based on Cybersalon's experience to date, getting data to the people is difficult, circuitous, and slow, requiring an intricate process of leadership, public relations, and perseverance. Although there are myriad tools and initiatives, there is no one solution for the actual transfer of that data.