A Framework for Exploring and Evaluating Mechanics in Human Computation Games
Human computation games (HCGs) are a crowdsourcing approach to solving
computationally-intractable tasks using games. In this paper, we describe the
need for generalizable HCG design knowledge that accommodates the needs of both
players and tasks. We propose a formal representation of the mechanics in HCGs,
providing a structural breakdown to visualize, compare, and explore the space
of HCG mechanics. We present a methodology based on small-scale design
experiments using fixed tasks while varying game elements to observe effects on
both the player experience and completion of the human computation task. Finally, we discuss applications of our framework through comparisons of prior HCGs and recent design experiments. Ultimately, we wish to enable easier exploration and development of HCGs, helping these games provide meaningful player experiences while solving difficult problems.
Comment: 11 pages, 5 figures
Early Learning Innovation Fund Evaluation Final Report
This is a formative evaluation of the Hewlett Foundation's Early Learning Innovation Fund, which began in 2011 as part of the Quality Education in Developing Countries (QEDC) initiative. The Fund has four overarching objectives: promote promising approaches to improve children's learning; strengthen the capacity of organizations implementing those approaches; strengthen those organizations' networks and ownership; and grow 20 percent of implementing organizations into significant players in the education sector. The Fund's original design was to create a "pipeline" of innovative approaches to improve learning outcomes, with the assumption that donors and partners would adopt the most successful ones.

A defining feature of the Fund was that it delivered assistance through two intermediary support organizations (ISOs) rather than providing funds directly to implementing organizations. Through an open solicitation process, the Hewlett Foundation selected Firelight Foundation and TrustAfrica to manage the Fund. Firelight Foundation, based in California, was founded in 1999 with a mission to channel resources to community-based organizations (CBOs) working to improve the lives of vulnerable children and families in Africa; for the Fund, it supports 12 implementing organizations in Tanzania. TrustAfrica, based in Dakar, Senegal, is a convener that seeks to strengthen African-led initiatives addressing some of the continent's most difficult challenges. The Fund was its first experience working specifically with early learning and childhood development organizations; under the Fund, it supported 16 such organizations: one in Mali and five each in Senegal, Uganda and Kenya.

At the end of 2014, the Hewlett Foundation commissioned Management Systems International (MSI) to conduct a mid-term evaluation assessing the implementation of the Fund and exploring the extent to which it achieved intended outcomes, as well as any factors that had limited or enabled its achievements.
The evaluation analyzed the support that the ISOs provided to their implementing organizations, with a specific focus on monitoring and evaluation (M&E). It included an audit of the implementing organizations' M&E systems and a review of the feasibility of compiling the collected data to support an impact evaluation. Finally, the Foundation and the ISOs hoped that this evaluation would reveal the most promising innovations and inform planning for Phase II of the Fund. The evaluation findings sought to inform the Hewlett Foundation and other donors interested in supporting intermediary grant-makers, early learning innovations and the expansion of those innovations. TrustAfrica and Firelight Foundation provided input to the evaluation's scope of work. Mid-term evaluation reports for each ISO provided findings about their management of the Fund's Phase I and recommendations for Phase II.

This final evaluation report will inform donors, ISOs and other implementing organizations about the best approaches to supporting promising early learning innovations and their expansion. The full report outlines findings common across both ISOs' experience and includes recommendations in four key areas: adequate time; appropriate capacity building; advocacy and scaling up; and evaluating and documenting innovations. Overall, both Firelight Foundation and TrustAfrica supported a number of effective innovations, working through committed and largely competent implementing organizations. The program's open-ended design avoided being prescriptive, but based on the lessons learned in this evaluation and the broader literature, the Hewlett Foundation and other donors could have offered more guidance to the ISOs to avoid the need to continually relearn some lessons.
For example, over the evaluation period it became increasingly evident that the current context demands more focused advance planning to measure impact on beneficiaries and other stakeholders, and a more concrete approach to promoting and resourcing potential scale-up. The main findings and recommendations from the evaluation are summarized here.
System Learning of User Interactions
The case presented in this paper describes an early prototype and next steps for developing a user-adaptive recommender system that uses semantic analysis and matching of user profiles and content. Machine learning methods optimize the semantic analysis and matching based on implicit and explicit user feedback. The constant interaction with users provides a valuable data source that is used to improve human-computer interaction and to adapt to specific user preferences. This can lead to, among other benefits, higher accuracy and relevance in content matching, more intuitive graphical user interfaces, improved system performance, and better prioritization of tasks.
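The match-then-learn loop this abstract describes can be sketched in a few lines. The following is a hypothetical toy, not the paper's actual system: content and user profiles are assumed to live in the same semantic vector space, items are ranked by cosine similarity, and explicit feedback nudges the profile toward (or away from) an item's vector. All names, vectors, and the update rule are illustrative assumptions.

```python
# Toy sketch of profile-content matching with a feedback-driven update.
# The vectors and the update rule are assumptions for illustration only.
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(profile, items):
    """Return item names ordered by semantic match to the user profile."""
    return sorted(items, key=lambda name: cosine(profile, items[name]), reverse=True)

def update_profile(profile, item_vec, rating, lr=0.1):
    """Move the profile toward a liked item (rating=+1) or away from a disliked one (rating=-1)."""
    return [p + lr * rating * (v - p) for p, v in zip(profile, item_vec)]

items = {"news": [0.9, 0.1, 0.0], "sports": [0.1, 0.9, 0.0], "music": [0.0, 0.2, 0.9]}
profile = [0.5, 0.4, 0.1]
print(rank(profile, items))  # → ['news', 'sports', 'music']
profile = update_profile(profile, items["music"], rating=+1)
# "music"'s similarity score rises, though a single update may not change the order
print(cosine(profile, items["music"]))
```

In a real system the item vectors would come from semantic analysis of the content, and implicit feedback (clicks, dwell time) would feed the same update path as explicit ratings.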
Target Apps Selection: Towards a Unified Search Framework for Mobile Devices
With the recent growth of conversational systems and intelligent assistants such as Apple Siri and Google Assistant, mobile devices are becoming even more pervasive in our lives. As a consequence, users increasingly engage with mobile apps and frequently search within them to satisfy an information need. However, users cannot search within their apps through their intelligent assistants. This calls for a unified mobile search framework that identifies the target app(s) for the user's query, submits the query to the app(s), and presents the results to the user. In this paper, we take a first step towards developing unified mobile search. In particular, we introduce and study the task of target apps selection, which has various potential real-world applications. To this end, we analyze attributes of search queries as well as user behaviors while searching with different mobile apps. The analyses are based on thousands of queries that we collected through crowdsourcing. Finally, we study the performance of state-of-the-art retrieval models on this task and propose two simple yet effective neural models that significantly outperform the baselines. Our neural approaches are based on learning high-dimensional representations for mobile apps. Our analyses and experiments suggest specific future directions in this research area.
Comment: To appear at SIGIR 201
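The target apps selection task can be illustrated with a deliberately simple stand-in for the paper's neural models: each app is represented by a vector aggregated from the queries users previously issued to it, and a new query is routed to the apps whose representation it matches best. Bag-of-words counts stand in here for learned high-dimensional representations; the query log, app names, and scoring are invented for illustration.

```python
# Toy target-app selection: route a query to the best-matching apps.
# Term counts stand in for learned app representations (an assumption).
from collections import Counter
import math

def similarity(c1, c2):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(c1[t] * c2[t] for t in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def build_app_profiles(query_log):
    """Aggregate each app's past queries into one term-count vector."""
    profiles = {}
    for app, query in query_log:
        profiles.setdefault(app, Counter()).update(query.lower().split())
    return profiles

def select_target_apps(query, profiles, k=2):
    """Return the top-k apps ranked by match between the query and each app profile."""
    terms = Counter(query.lower().split())
    ranked = sorted(profiles, key=lambda app: similarity(terms, profiles[app]), reverse=True)
    return ranked[:k]

log = [("Maps", "directions to airport"), ("Maps", "nearest coffee shop"),
       ("Email", "unread messages from alice"), ("Web", "weather forecast today")]
profiles = build_app_profiles(log)
print(select_target_apps("directions to the coffee shop", profiles))  # "Maps" ranked first
```

A unified search framework would then submit the query to the selected app(s) and merge the results; the paper's neural models replace these count vectors with learned embeddings.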
Attendee-Sourcing: Exploring The Design Space of Community-Informed Conference Scheduling
Constructing a good conference schedule for a large multi-track conference
needs to take into account the preferences and constraints of organizers,
authors, and attendees. Creating a schedule that has fewer conflicts for authors and attendees, and that groups papers into thematically coherent sessions, is a challenging task.
Cobi introduced an alternative approach to conference scheduling by engaging
the community to play an active role in the planning process. The current Cobi
pipeline consists of committee-sourcing and author-sourcing to plan a
conference schedule. We further explore the design space of community-sourcing by introducing attendee-sourcing -- a process that collects input from conference attendees and encodes it as preferences and constraints for creating sessions and a schedule. For CHI 2014, a large multi-track conference in human-computer interaction with more than 3,000 attendees and 1,000 authors, we collected attendees' preferences by making all the papers accepted at the conference available on Confer, a paper recommendation tool we built, for a period of 45 days before the conference program (sessions and schedule) was announced. We compare the preferences marked on Confer with the preferences collected through Cobi's author-sourcing approach, and show that attendee-sourcing can provide insights beyond what author-sourcing can discover. For CHI 2014, the results show the value of the method and of attendees' participation: it produces data that offers more alternatives in scheduling and complements data collected from other methods for creating coherent sessions and reducing conflicts.
Comment: HCOMP 201
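The kind of signal attendee preferences add to scheduling can be made concrete with a small sketch (hypothetical, not Cobi's or Confer's actual pipeline): given which papers each attendee marked, count how often a candidate schedule places an attendee's preferred papers in sessions that run in parallel. All names and data below are invented for illustration.

```python
# Toy attendee-conflict metric for a candidate conference schedule.
# The papers, sessions, and timeslots are illustrative assumptions.
from itertools import combinations

def attendee_conflicts(preferences, session_of, slot_of):
    """Count pairs of preferred papers scheduled in different sessions of the same timeslot."""
    conflicts = 0
    for papers in preferences.values():
        for a, b in combinations(sorted(papers), 2):
            sa, sb = session_of[a], session_of[b]
            if sa != sb and slot_of[sa] == slot_of[sb]:
                conflicts += 1
    return conflicts

session_of = {"p1": "S1", "p2": "S2", "p3": "S3"}
slot_of = {"S1": "Mon-AM", "S2": "Mon-AM", "S3": "Mon-PM"}
preferences = {"alice": {"p1", "p2"}, "bob": {"p1", "p3"}}
# → 1: alice marked p1 and p2, which are scheduled in parallel sessions
print(attendee_conflicts(preferences, session_of, slot_of))
```

A scheduler could use such a count as one objective to minimize, alongside author conflicts and session coherence.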