A Mixed Reality Approach to 3D Interactive Prototyping for Participatory Design of Ambient Intelligence
Ambient Intelligence (AmI) is a multi-disciplinary approach aimed at enriching physical environments with a network of distributed devices in order to support humans in achieving their everyday goals. However, in current research and development, AmI is still largely treated as an engineering concern, with an underdeveloped relationship to architecture. Yet architectural design substantially aims to address the same requirements: supporting people in carrying out their everyday activities, tasks and practices through spatial strategies. Since these objectives are common to AmI, we consider it both possible and necessary to investigate design approaches that make AmI accessible to an architectural context. For end users, AmI is a new type of service. Designing and evaluating the AmI experience before resources are spent on the processes and technology needed to eventually run the service can save large amounts of time and money. It is therefore essential to create an environment in which designers can involve real people in trying out service design proposals as early as possible in the design process. Existing cases of stakeholder-engaged AmI design have primarily focused on engineering implementation and generally present only the final outcome to stakeholders for evaluation.
Researchers have been able to build AmI prototypes for design communication. However, most of these prototypes are built without the involvement of stakeholders and architects in the conceptual design stage. Concepts designed solely by engineers may not be user-centric and may even carry safety risks. The key research question of this thesis is: "How can Ambient Intelligence be designed through a participatory process that involves stakeholders and prospective users?" The thesis consists of the following components:
1) Identification of a novel participatory design process for modelling AmI scenarios;
2) Identification of the requirements to support prototyping of AmI design, resulting in a conceptual framework that both "lowers the floor" (i.e. making it easier for designers to build AmI prototypes) and "raises the ceiling" (i.e. enabling stakeholders and end users to participate more deeply in the design process);
3) Prototyping an experimental Mixed Reality Modelling (MRM) platform to facilitate the participatory design of AmI, supporting the requirements, design process and scenario prototyping;
4) A case study applying the MRM platform to the participatory design of a Smart Laser Cutting Workshop (LCW), used to evaluate the proposed MRM-based AmI design approach. The results show that the MRM-based participatory design approach can effectively support the design of AmI.
A knowledge-based approach towards human activity recognition in smart environments
For many years it has been known that the population of older persons is on the rise. A recent report estimates that, globally, the share of the population aged 65 years or over is expected to increase from 9.3 percent in 2020 to around 16.0 percent in 2050 [1]. This has been one of the main sources of motivation for active research in the domain of human activity recognition (HAR) in smart homes. The ability to perform activities of daily living (ADL) without assistance from other people can be considered a reference for estimating an older person's level of independent living. Conventionally, this has been assessed by health-care domain experts via a qualitative evaluation of the ADL. Since this evaluation is qualitative, it can vary based on the person being monitored and the caregiver's experience. A significant amount of research work is implicitly or explicitly aimed at augmenting the health-care domain expert's qualitative evaluation with quantitative data or knowledge obtained from HAR. From a medical perspective, there is a lack of evidence about the technology readiness level of smart-home architectures supporting older persons by recognizing ADL [2]. We hypothesize that this may be due to a lack of effective collaboration between smart-home researchers/developers and health-care domain experts, especially when considering HAR. We foresee an increase in HAR systems being developed in close collaboration with caregivers and geriatricians to support their qualitative evaluation of ADL with explainable quantitative outcomes of the HAR systems. This has been a motivation for the work in this thesis. The recognition of human activities, in particular ADL, need not be limited to supporting the health and well-being of older people; it can be relevant to home users in general. For instance, HAR could enable digital assistants or companion robots to provide contextually relevant and proactive support to home users, whether young adults or old. This has also been a motivation for the work in this thesis.
Given our motivations, namely (i) facilitating iterative development and easing collaboration between HAR system researchers/developers and health-care domain experts in ADL, and (ii) robust HAR that can support digital assistants or companion robots, there is a need for a HAR framework that is modular and flexible at its core, in order to facilitate an iterative development process [3], which is an integral part of collaborative work involving develop-test-improve phases. At the same time, the framework should be intelligible, for the sake of enriched collaboration with health-care domain experts. Furthermore, it should be scalable, online and accurate, so that robust HAR can enable many smart-home applications. The goal of this thesis is to design and evaluate such a framework.
This thesis contributes to the domain of HAR in smart homes. In particular, the contribution can be divided into three parts. The first contribution is Arianna+, a framework to develop networks of ontologies, for knowledge representation and reasoning, that enable smart homes to perform human activity recognition online. The second contribution is OWLOOP, an API that supports the development of HAR system architectures based on Arianna+. It enables the use of the Web Ontology Language (OWL) by means of Object-Oriented Programming (OOP). The third contribution is the evaluation and exploitation of Arianna+ using the OWLOOP API, which has resulted in four HAR system implementations. The evaluations and results of these HAR systems emphasize the novelty of Arianna+.
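OWLOOP itself is a Java API and its actual interface is not reproduced here; the following toy Python sketch only illustrates the general idea it embodies, namely manipulating OWL-style ontology entities through object-oriented code. All class names, property names and the miniature "reasoner" below are invented for illustration.

```python
class Individual:
    """Toy stand-in for an OWL named individual with object properties."""
    def __init__(self, name):
        self.name = name
        self.types = set()        # asserted class memberships
        self.properties = {}      # property name -> set of related Individuals

    def add_property(self, prop, value):
        self.properties.setdefault(prop, set()).add(value)


def classify(individual, definitions):
    """Minimal 'reasoner': assert every class whose necessary-and-sufficient
    condition (here simply a set of required properties) is satisfied."""
    for cls, required_props in definitions.items():
        if required_props <= set(individual.properties):
            individual.types.add(cls)
    return individual.types


# Hypothetical ADL definition: an agent that uses both a kettle and a cup
# is inferred to be performing the 'MakingTea' activity.
defs = {"MakingTea": {"usesKettle", "usesCup"}}
alice = Individual("alice")
alice.add_property("usesKettle", Individual("kettle1"))
alice.add_property("usesCup", Individual("cup1"))
print(classify(alice, defs))  # → {'MakingTea'}
```

In a real OWL-based system the class definitions and the inference would of course be handled by an ontology reasoner rather than a set comparison; the sketch only conveys why an OOP view of ontology entities simplifies application code.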
Multimodal Shared-Control Interaction for Mobile Robots in AAL Environments
This dissertation investigates the design, development and implementation of cognitively adequate, safe and robust, spatially-related, multimodal interaction between human operators and mobile robots in Ambient Assisted Living environments, from both the theoretical and practical perspectives. By focusing on different aspects of the concept of interaction, the essential contribution of this dissertation is divided into three main research packages, namely Formal Interaction, Spatial Interaction and Multimodal Interaction in AAL. As the principal package, Formal Interaction dedicates research effort to developing a formal-language-based interaction modelling and management solution, together with a unified dialogue modelling approach. This package aims to enable robust, flexible and context-sensitive, yet formally controllable and tractable interaction. This type of interaction can support the interaction management of any complex interactive system, including the ones covered in the other two research packages. In the second research package, Spatial Interaction, a general multi-level conceptual model based on qualitative spatial knowledge is developed and proposed. The goal is to support spatially-related interaction in human-robot collaborative navigation. Through a model-based computational framework, the proposed conceptual model has been implemented and integrated into a practical interactive system, which has been evaluated in empirical studies. It has been tested in particular with respect to a set of high-level, model-based conceptual strategies for resolving frequent spatially-related communication problems in human-robot interaction. Last but not least, in Multimodal Interaction in AAL, attention is drawn to the design, development and implementation of multimodal interaction for elderly persons. In this elderly-friendly scenario, ageing-related characteristics are carefully considered to achieve effective and efficient interaction.
Moreover, a standard-model-based empirical framework for evaluating multimodal interaction is provided. This framework was applied in particular to evaluate a carefully developed and systematically improved elderly-friendly multimodal interactive system through a series of empirical studies with groups of elderly persons.
Design and decision making to improve healthcare infrastructure
This report presents a summary and key findings of research projects undertaken within the Health and Care Infrastructure Research and Innovation Centre (HaCIRIC) by Loughborough University. These projects develop new knowledge and theory on how the built environment adds value to the healthcare delivery process and relate mainly to 'Theme 3, Innovative Design and Construction', undertaken during HaCIRIC Phase 1; they provide an excellent foundation for the work to be undertaken within the Optimising Healthcare Infrastructure Value (OHIV) project during HaCIRIC Phase 2.
Collaborative networks in ambient assisted living
Doctoral thesis in Informatics. Collaborative work plays an important role in today's organizations, especially in areas where decisions must be made. However, any decision that involves a collective or group of decision makers is, by itself, complex, yet such decisions have become increasingly common in recent years. In this work we present the VirtualECare project, an intelligent multi-agent system able to monitor, interact with and serve its customers who are in need of care services. In recent years there has been a substantial increase in the number of people in need of intensive care, especially among the elderly, a phenomenon related to population ageing. However, this is no longer exclusive to the elderly, as conditions such as obesity, diabetes and high blood pressure have been increasing among young adults. This is a new reality that needs to be dealt with by the health sector, particularly the public one. Given this scenario, finding new and cost-effective ways of delivering health care is of particular importance, especially when it is believed that those in need of care should not be removed from their natural "habitat". Following this line of thinking, the VirtualECare project is presented, together with similar projects that preceded it. In addition, there is growing interest in combining the advances of the information society (computing, telecommunications and presentation) in order to create Group Decision Support Systems (GDSSs). Indeed, the new economy, along with increased competition in today's complex business environments, drives companies to seek complementarities in order to increase competitiveness and reduce risks. Under these settings, planning takes a major role in a company's life. However, effective planning depends on the generation and analysis of ideas (innovative or not), and as a result the idea generation and management processes are crucial. In particular, it is believed that the use of GDSSs in the healthcare arena will allow professionals to achieve better results in the analysis of a patient's Electronic Clinical Profile (ECP). This achievement is vital, given the explosion
of knowledge and skills, together with the need to use limited resources and get the expected outcomes.
State of the art of audio- and video based solutions for AAL
Working Group 3. Audio- and Video-based AAL Applications
Europe is facing increasingly pressing challenges in health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL refers to the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, owing to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive than the wearable sensors that may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements and overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings in which they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After reviewing the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users.
The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.
The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential arising from the silver economy is reviewed.
Designing Activities for Collaboration at Classroom Scale Using Shared Technology
Although researchers, teachers and policy makers broadly agree on the benefits of collaborative learning, there appears to be less clarity regarding how effective collaboration can be realised at classroom scale.
Research in Computer-Supported Collaborative Learning (CSCL), Human-Computer Interaction (HCI), simulation-based learning and related fields has produced a considerable range of applications that aim to support collaboration in classrooms. Grounded in well-established theories of how humans learn, many such applications have shown promising results within the context of small research studies. However, most of those research-driven applications never matured beyond the prototype stage and few are available today as products that schools can easily use and adopt. Many systems lack flexibility or require too much time, hardware, technical skills or other resources to be effectively implemented. Furthermore, teachers can be overwhelmed by managing large groups of students engaged in complex, computer-supported tasks.
This thesis investigates how forms of whole-classroom activity can be supported by combining shareable technologies with simulation, team play and orchestration. New designs are explored to help large groups engage and discuss at multiple scales (from pairs and small groups to the entire classroom) in ways that effectively include each student and use the teacher's limited resources efficiently. Moreover, this research aims to devise and validate a conceptual framework that can guide future design, orchestration and evaluation of such activities. Three in-situ studies were conducted to address these goals.
The first study involved the design of a climate change simulation to support a professional training course. Iterative design and video analysis resulted in the formulation of the Collaborative Learning Orchestration for Verbal Engagement and Reflection (CLOVER) framework. This framework comprises a suite of conceptual tools and recommendations that aim to help designers and teachers create, orchestrate and evaluate decision-based simulations for whole-classroom use.
Two follow-up studies were conducted to validate the usability and usefulness of CLOVER. One of them aimed to replicate the previous findings in a similar context and resulted in the design of a sustainable, whole-classroom simulation for students to discuss finance decisions. The other used CLOVER to expand an existing desktop application (a language comprehension task for children) to classroom scale.
In sum, the three studies provide substantial empirical evidence, suggesting that CLOVER-based applications can effectively reconcile learning needs (collaboration) and technological affordances (shareable devices) with the inherent benefits and constraints of teacher-driven, co-located environments. Furthermore, the findings contribute to a better understanding of what it means to design for sustainability in this context
A Distributed Multi-Model Platform to Cosimulate Multi-Energy Systems in Smart Buildings
Nowadays, buildings are responsible for a large share of the energy consumed in our cities. Moreover, a building can be seen as the smallest entity of an urban energy system. On these premises, in this paper we present a flexible and distributed co-simulation platform that exploits a multi-modelling approach to simulate and evaluate energy performance in smart buildings. The developed platform exploits the Mosaik co-simulation framework and implements the Functional Mock-up Interface (FMI) standard in order to couple and synchronise heterogeneous simulators and models. The platform combines in a shared simulation environment: i) the thermal performance of the building, simulated with EnergyPlus; ii) a heat pump integrated with a PID control strategy, modelled in Modelica to satisfy the heating demand of the building; iii) an electrical energy storage system, modelled in MATLAB Simulink; and iv) several Python models used to simulate household occupancy, electrical loads, photovoltaic production and smart meters. The platform guarantees plug-and-play integration of models and simulators, in which one or more models can easily be replaced without affecting the whole simulation engine. Finally, we present a demonstration example to test the functionality, capability and usability of the developed platform and discuss future developments of our framework.
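The platform described above couples real simulators (EnergyPlus, Modelica, Simulink) through Mosaik and FMI; none of that is reproduced here. As a toy illustration of the underlying mechanism, a master algorithm stepping heterogeneous models on a shared clock and exchanging coupling variables at each synchronisation point, consider the following pure-Python sketch, whose two models and all parameters are invented:

```python
def thermal_model(temp, heat_power, dt, loss=0.1, outside=10.0):
    """Toy building thermal model: first-order heat loss plus heater input."""
    return temp + dt * (heat_power - loss * (temp - outside))


def controller(temp, setpoint=20.0, gain=0.5):
    """Toy proportional controller standing in for the modelled heat pump."""
    return max(0.0, gain * (setpoint - temp))


def cosimulate(steps=100, dt=0.1):
    """Fixed-step co-simulation master: at each synchronisation point the
    coupling variables are exchanged (temperature -> controller,
    heat power -> thermal model), then both models advance one step."""
    temp, power = 15.0, 0.0
    for _ in range(steps):
        power = controller(temp)               # model B reads A's output
        temp = thermal_model(temp, power, dt)  # model A reads B's output
    return temp


# The coupled system settles near the analytic steady state of ~18.33
print(round(cosimulate(), 2))
```

A real master (Mosaik, or an FMI master algorithm) generalises exactly this loop: it discovers each simulator's inputs and outputs, schedules the exchanges, and advances every model to the next synchronisation time.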
Inferring Complex Activities for Context-aware Systems within Smart Environments
The rising ageing population worldwide and the prevalence of age-related conditions such as physical frailty, mental impairments and chronic diseases have significantly impacted quality of life and caused a shortage of health and care services. The resulting over-stretching of healthcare providers is driving a paradigm shift in public healthcare provisioning. Thus, Ambient Assisted Living (AAL) using Smart Home (SH) technologies has been rigorously investigated to help address the aforementioned problems.
Human Activity Recognition (HAR) is a critical component of AAL systems, enabling applications such as just-in-time assistance, behaviour analysis, anomaly detection and emergency notifications. This thesis investigates the challenges faced in accurately recognising Activities of Daily Living (ADLs) performed by single or multiple inhabitants within smart environments. Specifically, it explores five complementary research challenges in HAR. The first study contributes to knowledge by developing a semantic-enabled data segmentation approach with user preferences. The second study takes the segmented sensor data and investigates the recognition of human ADLs at two levels of action granularity: coarse-grained and fine-grained. At the coarse-grained level, semantic relationships between sensors, objects and ADLs are deduced, whereas at the fine-grained level, object usage above a satisfaction threshold, with evidence fused from multimodal sensor data, is leveraged to verify the intended actions. Moreover, to address the imprecise/vague interpretations of multimodal sensors and the challenges of data fusion, fuzzy set theory and the fuzzy Web Ontology Language (fuzzy-OWL) are leveraged. The third study focuses on incorporating the uncertainties that arise in HAR from factors such as technological failure, object malfunction and human error. Uncertainty theories and approaches from existing studies are analysed and, based on the findings, a probabilistic-ontology (PR-OWL) based HAR approach is proposed. The fourth study extends the first three to distinguish activities conducted by more than one inhabitant in a shared smart environment, using discriminative sensor-based techniques and time-series pattern analysis. The final study investigates a suitable system architecture for a real-time smart environment tailored to AAL, and proposes a microservices architecture with off-the-shelf and bespoke sensing methods.
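The thesis's fuzzy-OWL machinery is not reproduced here, but the basic mechanism it relies on, mapping imprecise sensor readings to graded memberships and fusing multimodal evidence against a satisfaction threshold, can be sketched in a few lines of Python. The sensor names, membership bounds and threshold below are all invented for illustration:

```python
def membership(value, low, high):
    """Piecewise-linear fuzzy membership: 0 below low, 1 above high."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)


def verify_action(readings, threshold=0.7):
    """Fuse multimodal evidence with a fuzzy AND (minimum t-norm) and
    accept the intended action if the fused degree clears the threshold."""
    degree = min(readings.values())
    return degree, degree >= threshold


# Hypothetical 'drinking water' check: pressure on a cup sensor and
# proximity of the cup to the user's mouth, both as graded memberships.
evidence = {
    "cup_pressure": membership(0.9, 0.2, 1.0),    # ≈ 0.875
    "cup_near_mouth": membership(0.8, 0.1, 0.9),  # ≈ 0.875
}
degree, verified = verify_action(evidence)
print(degree, verified)
```

The minimum t-norm is only one of several fusion choices; a product or weighted average would trade robustness to a single weak sensor against sensitivity, which is part of the scalability challenge the thesis reports for fusion-rule creation.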
The initial semantic-enabled data segmentation study achieved 100% and 97.8% accuracy in segmenting sensor events under single- and mixed-activity scenarios, respectively. However, the average classification time taken to segment each sensor event was high: 3971 ms and 62183 ms for the single- and mixed-activity scenarios, respectively. The second study, detecting user actions at the fine-grained level, was evaluated with 30 and 153 fuzzy rules to detect two fine-grained movements using a dataset pre-collected from the real-time smart environment. The results of the second study indicate good average accuracies of 83.33% and 100%, but with high average durations of 24648 ms and 105318 ms, posing further challenges for the scalability of fusion-rule creation. The third study was evaluated by integrating a PR-OWL ontology with ADL ontologies and the Semantic Sensor Network (SSN) ontology to define four types of uncertainty present in a kitchen-based activity. The fourth study illustrated a case study extending single-user activity recognition to multi-user activity recognition by combining discriminative sensors (RFID tags and fingerprint sensors) to identify users and associate their actions with the aid of time-series analysis. The last study responds to the computation and performance requirements of the four studies by analysing and proposing a microservices-based system architecture for AAL systems. A future research direction, adopting fog/edge computing paradigms alongside cloud computing, is discussed for higher availability, reduced network traffic/energy, lower cost, and a decentralised system.
As a result of the five studies, this thesis develops a knowledge-driven framework to estimate and recognise multi-user activities at the level of fine-grained user actions. This framework integrates three complementary ontologies to conceptualise facts, fuzziness and uncertainties in the environment/ADLs, time-series analysis and the discriminative sensing environment. Moreover, a distributed software architecture, multimodal sensor-based hardware prototypes, and other supporting utility tools, such as a simulator and a synthetic ADL data generator, were developed to support the evaluation of the proposed approaches. The distributed system is platform-independent and is currently accessible through an Android mobile application and web-browser-based client interfaces for retrieving information such as live sensor events and HAR results.