Towards a user-centric and multidisciplinary framework for designing context-aware applications
Research into context-aware computing has not sufficiently addressed the human and social aspects of design. Existing design frameworks are predominantly software oriented, make little use of cross-disciplinary work, and do not provide an easily transferable structure for applying design principles across applications. To address these problems, this paper proposes a multidisciplinary and user-centred design framework, together with two models of context, which derive from conceptualisations within Psychology, Linguistics, and Computer Science. In a study, our framework was found to significantly improve the performance of postgraduate students at identifying the context of the user and application, and the usability issues that arise.
Intelligibility and user control of context-aware application behaviours
Context-aware applications adapt their behaviours according to changes in user context and user requirements. Research and experience have shown that such applications do not always behave the way users expect, which may erode users' trust in and acceptance of these systems. Hence, context-aware applications should (1) be intelligible (e.g., able to explain to users why they decided to behave in a certain way), and (2) allow users to exploit the revealed information and apply appropriate feedback to control the application behaviours according to their individual preferences, achieving a more desirable outcome. Without appropriate mechanisms for explaining and controlling application adaptations, the usability of such applications is limited. This paper describes our ongoing research and development of a conceptual framework that supports intelligibility of model-based context-aware applications and user control of their adaptive behaviours. The goal is to improve the usability of context-aware applications.
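The combination of intelligibility ("why did the system do that?") and user control described in this abstract can be sketched as a rule-based adapter that records which rule fired on which context. This is a minimal illustrative sketch, not the authors' framework; the rule names, context keys, and actions are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    condition: callable   # maps a context dict to True/False
    action: str           # behaviour triggered when the condition holds

@dataclass
class ExplainableAdapter:
    rules: list
    trace: list = field(default_factory=list)

    def adapt(self, context):
        """Pick the first matching rule, recording why it was chosen."""
        for rule in self.rules:
            if rule.condition(context):
                # Record the rule and the context snapshot so the system
                # can later explain its behaviour to the user.
                self.trace.append((rule.name, dict(context), rule.action))
                return rule.action
        return "default"

    def explain_last(self):
        """Answer 'why did the system behave that way?' for the last adaptation."""
        if not self.trace:
            return "No adaptation has occurred yet."
        name, ctx, action = self.trace[-1]
        return f"Chose '{action}' because rule '{name}' matched context {ctx}"

# Hypothetical rules for a phone that adapts to the user's situation.
rules = [
    Rule("meeting-mute", lambda c: c.get("calendar") == "in_meeting", "mute_phone"),
    Rule("driving-voice", lambda c: c.get("activity") == "driving", "voice_ui"),
]
app = ExplainableAdapter(rules)
behaviour = app.adapt({"calendar": "in_meeting", "location": "office"})
print(behaviour)           # mute_phone
print(app.explain_last())
```

User control then amounts to letting the user edit or reorder the rule list, since the trace makes explicit which rule produced each behaviour.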
Context Aware Computing for The Internet of Things: A Survey
As we move towards the Internet of Things (IoT), the number of sensors deployed around the world is growing at a rapid pace. Market research has shown significant growth in sensor deployments over the past decade and predicts that the growth rate will rise further in the future. These sensors continuously generate enormous amounts of data. However, in order to add value to raw sensor data we need to understand it. Collection, modelling, reasoning, and distribution of context in relation to sensor data play a critical role in this challenge. Context-aware computing has proven successful in understanding sensor data. In this paper, we survey context awareness from an IoT perspective. We present the necessary background by introducing the IoT paradigm and context-aware fundamentals. We then provide an in-depth analysis of the context life cycle and evaluate a subset of 50 projects, representing the majority of research and commercial solutions proposed in the field of context-aware computing over the last decade (2001-2011), based on our own taxonomy. Finally, based on our evaluation, we highlight the lessons to be learnt from the past and possible directions for future research. The survey addresses a broad range of techniques, methods, models, functionalities, systems, applications, and middleware solutions related to context awareness and the IoT. Our goal is not only to analyse, compare, and consolidate past research but also to appreciate its findings and discuss its applicability to the IoT.
Comment: IEEE Communications Surveys & Tutorials Journal, 201
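The context life cycle the survey analyses (collection/acquisition, modelling, reasoning, distribution) can be illustrated as a tiny pipeline. This is a toy sketch under stated assumptions: the sensor names, readings, and the 30 °C threshold are invented for illustration:

```python
def acquire():
    """Acquisition: collect raw sensor data (hard-coded stand-in readings)."""
    return [{"sensor": "temp-1", "value": 31.5},
            {"sensor": "temp-2", "value": 29.0}]

def model(readings):
    """Modelling: give raw values structure and units."""
    return {r["sensor"]: {"celsius": r["value"]} for r in readings}

def reason(context):
    """Reasoning: derive higher-level context from modelled data."""
    avg = sum(v["celsius"] for v in context.values()) / len(context)
    return {"avg_celsius": avg, "state": "hot" if avg > 30 else "normal"}

def disseminate(situation, subscribers):
    """Distribution: deliver derived context to interested consumers."""
    for callback in subscribers:
        callback(situation)

received = []
disseminate(reason(model(acquire())), [received.append])
print(received[0])   # {'avg_celsius': 30.25, 'state': 'hot'}
```

Real middleware differs mainly in scale: acquisition is event-driven, modelling uses shared ontologies, and dissemination is publish/subscribe, but the four phases compose in the same order.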
I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance
We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database, featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech; the database is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification based both on brute-forced low-level acoustic features and on higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. Early fusion of the intelligibility-related features with the brute-forced acoustic feature set improves performance on read speech, reaching a 66.4% average recall on the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a determination coefficient of up to 56.2%.
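The leave-one-speaker-out protocol and average-recall metric used above can be sketched in a few lines. This is a toy illustration, not the paper's setup: the data are invented, a single scalar feature stands in for the acoustic feature vectors, and a nearest-class-mean classifier stands in for the SVM; only the evaluation scheme (speaker-disjoint folds, unweighted average recall) matches the abstract:

```python
import statistics
from collections import defaultdict

# Toy corpus: (speaker_id, feature, label). Each speaker contributes
# one "eating" and one "not_eating" utterance.
data = [
    ("spk1", 0.9, "eating"), ("spk1", 0.2, "not_eating"),
    ("spk2", 0.8, "eating"), ("spk2", 0.1, "not_eating"),
    ("spk3", 0.7, "eating"), ("spk3", 0.3, "not_eating"),
]

def centroid_classifier(train):
    """Nearest-class-mean classifier, standing in for the paper's SVM."""
    by_label = defaultdict(list)
    for _, x, y in train:
        by_label[y].append(x)
    centroids = {y: statistics.mean(xs) for y, xs in by_label.items()}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

# Leave-one-speaker-out: each fold holds out ALL utterances of one speaker,
# so the classifier is never tested on a speaker it was trained on.
hits, totals = defaultdict(int), defaultdict(int)
for held_out in sorted({s for s, _, _ in data}):
    train = [d for d in data if d[0] != held_out]
    test = [d for d in data if d[0] == held_out]
    predict = centroid_classifier(train)
    for _, x, y in test:
        totals[y] += 1
        hits[y] += predict(x) == y

# Unweighted average recall (UAR): the mean of per-class recalls,
# the usual "average recall" in computational paralinguistics.
uar = statistics.mean(hits[y] / totals[y] for y in totals)
print(f"UAR: {uar:.2f}")
```

UAR is preferred over plain accuracy here because the seven-way task (six foods plus not eating) has imbalanced classes, and UAR weights every class equally.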
End user programming of awareness systems: addressing cognitive and social challenges for interaction with aware environments
The thesis is put forward that social intelligence in awareness systems emerges from end-users themselves, through the mechanisms that support them in the development and maintenance of such systems. For this intelligence to emerge, three challenges have to be addressed: the challenge of appropriate awareness abstractions, the challenge of supportive interactive tools, and the challenge of infrastructure. The thesis argues that in order to advance towards socially intelligent awareness systems, we should be able to interpret and predict the success or failure of such systems in relation to their communicational objectives and their implications for the social interactions they support. The FN-AAR (Focus-Nimbus Aspects Attributes Resources) model is introduced as a formal model which, by capturing the general characteristics of the awareness-systems domain, allows predictions about socially salient patterns pertaining to human communication and brings clarity to the discussion around relevant concepts such as social translucency, symmetry, and deception. The thesis recognizes that harnessing the benefits of context awareness can be problematic for end-users and other affected individuals, who may not always be able to anticipate, understand, or appreciate system function, and who may thus feel their own sense of autonomy and privacy threatened. It introduces a set of tools and mechanisms that support end-user control, system intelligibility, and accountability. This is achieved by minimizing the cognitive effort needed to handle the increased complexity of such systems and by enhancing the ability of people to configure and maintain intelligent environments.
We show how these tools and mechanisms empower end-users to answer questions such as "how does the system behave", "why is something happening", "how would the system behave in response to a change in context", and "how can the system's behaviour be altered", so as to achieve intelligibility, accountability, and end-user control. Finally, the thesis argues that awareness applications cannot be examined as static configurations of services and functions, and that they should instead be seen as the result of both implicit and explicit interaction with the user. Amelie is introduced as a supportive framework for the development of context-aware applications that encourages the design of the interactive mechanisms through which end-users can control, direct, and advance such systems dynamically throughout their deployment. Following the recombinant computing approach, Amelie addresses the implications of infrastructure design decisions for user experience, while, by adopting the premises of the FN-AAR model, it supports the direct implementation of systems that allow end-users to meet social needs and to practice extant social skills.
ExSS 2018: Workshop on explainable smart systems
Smart systems that apply complex reasoning to make decisions and plan behaviour are often difficult for users to understand. While research into making systems more explainable, and therefore more intelligible and transparent, is gaining pace, numerous issues and problems regarding these systems demand further attention. The goal of this workshop is to bring academia and industry together to address these issues. The workshop includes a keynote, poster panels, and group activities, working towards concrete approaches to the challenges of designing, developing, and evaluating explainable smart systems.
End-user interactions with intelligent and autonomous systems
Systems that learn from or personalize themselves to users are quickly becoming mainstream, yet interaction with these systems is limited and often uninformative for the end user. This workshop focuses on approaches to, and challenges in, making these systems transparent, controllable, and ultimately trustworthy to end users. The workshop aims to establish connections among researchers and industrial practitioners, using real-world problems as catalysts to facilitate the exchange of approaches, solutions, and ideas about how to better support end users.