Search Process as Transitions Between Neural States
Search is one of the most performed activities on the World Wide
Web. Various conceptual models postulate that the search process
can be broken down into distinct emotional and cognitive states
of searchers while they engage in a search process. These models
significantly contribute to our understanding of the search process.
However, they are typically based on self-report measures, such as
surveys and questionnaires, and therefore only indirectly monitor
the brain activity that supports such a process. With this work,
we take one step further and directly measure the brain activity
involved in a search process. To do so, we break down a search
process into five time periods: a realisation of Information Need,
Query Formulation, Query Submission, Relevance Judgment and
Satisfaction Judgment. We then investigate the brain activity between
these time periods. Using functional Magnetic Resonance
Imaging (fMRI), we monitored the brain activity of twenty-four participants
during a search process that involved answering questions
carefully selected from the TREC-8 and TREC 2001 Q/A Tracks.
This novel analysis, which focuses on transitions rather than states,
reveals the contrasting brain activity between time periods and
enables the identification of the distinct parts of the search process
as the user moves through them. This work, therefore, provides an
important first step in representing the search process based on the
transitions between neural states. Discovering more precisely how
brain activity relates to different parts of the search process will
enable the development of brain-computer interactions that better
support search and search interactions, a goal that we believe our
study and its conclusions advance.
Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges
Today's mobile phones are far from the mere communication devices they were ten
years ago. Equipped with sophisticated sensors and advanced computing hardware,
phones can be used to infer users' location, activity, social setting and more.
As devices become increasingly intelligent, their capabilities evolve beyond
inferring context to predicting it, and then reasoning and acting upon the
predicted context. This article provides an overview of the current state of
the art in mobile sensing and context prediction, paving the way for
full-fledged anticipatory mobile computing. We present a survey of phenomena
that mobile phones can infer and predict, and offer a description of machine
learning techniques used for such predictions. We then discuss proactive
decision making and decision delivery via the user-device feedback loop.
Finally, we discuss the challenges and opportunities of anticipatory mobile
computing.
Comment: 29 pages, 5 figures
Context-aware QoS provisioning for an M-health service platform
Inevitably, healthcare goes mobile. Recently developed mobile healthcare (i.e., m-health) services allow healthcare professionals to monitor a mobile patient's vital signs and provide feedback to the patient anywhere, at any time. Due to the nature of current supporting mobile service platforms, m-health services are delivered on a best-effort basis, i.e., there are no guarantees on the delivered Quality of Service (QoS). In this paper, we argue that the use of context information in an m-health service platform improves the delivered QoS. We make a first attempt to merge context information with a QoS-aware mobile service platform in the m-health services domain. We illustrate this with an epilepsy tele-monitoring scenario.
Affective games: a multimodal classification system
Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player’s psychology are reflected in their behaviour and physiology; hence recognition of such variation is a core element in affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherent game-related challenges in terms of data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the current life-like level of adaptation provided by graphics and animation.
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures adopted in which such tasks are
organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.
Spoken language processing: piecing together the puzzle
Attempting to understand the fundamental mechanisms underlying spoken language processing, whether it is viewed as behaviour exhibited by human beings or as a faculty simulated by machines, is one of the greatest scientific challenges of our age. Despite tremendous achievements over the past 50 or so years, there is still a long way to go before we reach a comprehensive explanation of human spoken language behaviour and can create a technology with performance approaching or exceeding that of a human being. It is argued that progress is hampered by the fragmentation of the field across many different disciplines, coupled with a failure to create an integrated view of the fundamental mechanisms that underpin one organism's ability to communicate with another. This paper weaves together accounts from a wide variety of different disciplines concerned with the behaviour of living systems - many of them outside the normal realms of spoken language - and compiles them into a new model: PRESENCE (PREdictive SENsorimotor Control and Emulation). It is hoped that the results of this research will provide a sufficient glimpse into the future to breathe life into a new generation of research into spoken language processing by mind or machine.