    Understanding Entrainment in Human Groups: Optimising Human-Robot Collaboration from Lessons Learned during Human-Human Collaboration

    Successful entrainment during collaboration positively affects trust, willingness to collaborate, and likeability towards collaborators. In this paper, we present a mixed-method study to investigate characteristics of successful entrainment leading to pair- and group-based synchronisation. Drawing inspiration from industrial settings, we designed a fast-paced, short-cycle repetitive task. Using motion tracking, we investigated entrainment in both dyadic and triadic task completion. Furthermore, we utilise audio-video recordings and semi-structured interviews to contextualise participants' experiences. This paper contributes to the Human-Computer/Robot Interaction (HCI/HRI) literature using a human-centred approach to identify characteristics of entrainment during pair- and group-based collaboration. We present five characteristics related to successful entrainment, concerning the occurrence of entrainment, leader-follower patterns, interpersonal communication, the importance of the point-of-assembly, and the value of acoustic feedback. Finally, we present three design considerations for future research and design on collaboration with robots.
    Comment: Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), May 11-16, 2024, Honolulu, HI, USA
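
    The study above rests on quantifying movement synchronisation from motion-tracking data. As a purely illustrative aid (not the authors' analysis pipeline), the sketch below shows one common way to measure pairwise entrainment: the peak windowed cross-correlation between two motion signals and the lag at which it occurs. The signals, sampling rate, and lag window are hypothetical.

        import numpy as np

        def max_lagged_correlation(sig_a, sig_b, max_lag):
            """Peak normalised cross-correlation between two motion signals
            within +/- max_lag samples, and the lag at which it occurs
            (a negative lag means sig_a leads sig_b)."""
            a = (sig_a - sig_a.mean()) / sig_a.std()
            b = (sig_b - sig_b.mean()) / sig_b.std()
            lags = list(range(-max_lag, max_lag + 1))
            corrs = []
            for lag in lags:
                if lag < 0:
                    c = np.corrcoef(a[:lag], b[-lag:])[0, 1]
                elif lag > 0:
                    c = np.corrcoef(a[lag:], b[:-lag])[0, 1]
                else:
                    c = np.corrcoef(a, b)[0, 1]
                corrs.append(c)
            best = int(np.argmax(corrs))
            return corrs[best], lags[best]

        # Hypothetical wrist-speed traces of two collaborators sampled at 100 Hz:
        rng = np.random.default_rng(0)
        p1 = rng.standard_normal(1000)
        p2 = np.roll(p1, 12) + 0.3 * rng.standard_normal(1000)  # partner trails by ~120 ms
        print(max_lagged_correlation(p1, p2, max_lag=50))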

    Older Generation: Self-Powered IoTs, Home-Life and “Ageing Well”

    Internet of Things (IoT) technology is found in many homes. These systems enable tasks to be done more effectively or efficiently – e.g., securing property, monitoring and adjusting resources, tracking behaviours for well-being, and so on. The system presented here was designed with older adults; the vast majority of home IoT systems marketed to this age group are not growth-oriented but rather decline-focused, monitoring and signalling well-being issues. In contrast to both “mainstream” and “older adult” IoT frameworks, then, we present a toolkit designed only to platform reflections, conversations and insights by occupants and visitors in regards to diverse user-defined meaningful home activities: hobbies, socialisation, fun, relaxation, and so on. Furthermore, mindful of the climate crisis and the battery recharge or replacement requirements in conventional IoT systems, the toolkit is predominantly self-powered. We detail the design process and home deployments, highlighting the value of alternative data presentations, from the simplest to LLM-enabled.

    AndroZoo: A Retrospective with a Glimpse into the Future

    In 2016, we released AndroZoo, a continuously expanding dataset of Android applications that aggregates apps from various sources, including the official Google Play app market. As of today, AndroZoo contains approximately 24 million APK files, making it, to the best of our knowledge, the most extensive dataset of Android APKs accessible to the research community. Currently, an average of 500,000 APKs are downloaded every day, with our initial MSR paper counting more than 880 citations on Google Scholar. Over time, we have consistently expanded AndroZoo, adapting to app markets’ evolution and refining our collection process. Additionally, we have started collecting supplementary data that could be valuable for various Android-related research projects and that we are providing to users, such as app descriptions, upload dates, ratings, lists of permissions, and many other kinds of data. This paper begins with a retrospective analysis of AndroZoo, offering statistics on APK files, apps, users, and downloads. Then, we introduce the new data we are releasing to users, offering insights and examples of their usage.
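
    Access to AndroZoo is by APK hash through its download API. As a minimal sketch (assuming the public download endpoint and a registered API key; the key and hash below are placeholders), fetching a single APK looks roughly like this:

        import requests

        ANDROZOO_API = "https://androzoo.uni.lu/api/download"
        API_KEY = "YOUR_API_KEY"   # issued on registration (placeholder)

        def download_apk(sha256: str, out_path: str) -> None:
            # AndroZoo serves APKs keyed by their SHA-256 hash; stream the
            # response to disk to avoid holding large APKs in memory.
            resp = requests.get(
                ANDROZOO_API,
                params={"apikey": API_KEY, "sha256": sha256},
                stream=True,
                timeout=120,
            )
            resp.raise_for_status()
            with open(out_path, "wb") as fh:
                for chunk in resp.iter_content(chunk_size=1 << 20):
                    fh.write(chunk)

        # Placeholder hash of an APK listed in the AndroZoo index:
        sha = "0" * 64
        download_apk(sha, f"{sha}.apk")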

    Enumeration and Identification of Active Users for Grant-Free NOMA Using Deep Neural Networks

    In next-generation mobile radio systems, multiple access schemes will support a massive number of uncoordinated devices exhibiting sporadic traffic, transmitting short packets to a base station. Grant-free non-orthogonal multiple access (NOMA) has been introduced to provide services to a large number of devices and to reduce the communication overhead in massive machine-type communication (mMTC) scenarios. In grant-free communication, there is no coordination between the device and base station (BS) before the data transmission; therefore, the challenging task of active user detection (AUD) must be conducted at the BS. For NOMA with sparse spreading, we propose a deep neural network (DNN)-based approach for AUD called active users enumeration and identification (AUEI). It consists of two phases: in the first phase, a DNN estimates the number of active users; in the second phase, another DNN identifies them. To speed up the training process of the DNNs, we propose a multi-stage transfer learning technique. Our numerical results show a remarkable performance improvement of AUEI in comparison to previously proposed approaches.
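
    To make the two-phase structure concrete, the sketch below shows one way such a pipeline can be wired up (enumerate first, then keep the top-scoring users). It is an illustrative toy, not the authors' AUEI architecture; the input dimensions, layer sizes, and selection rule are all assumptions.

        import torch
        import torch.nn as nn

        NUM_USERS = 16   # potential devices in the cell (assumption)
        OBS_DIM = 64     # received-signal feature dimension (assumption)

        # Phase 1: enumeration -- classify how many users are active (0..NUM_USERS).
        enumerator = nn.Sequential(
            nn.Linear(OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_USERS + 1),
        )

        # Phase 2: identification -- per-user activity scores (multi-label).
        identifier = nn.Sequential(
            nn.Linear(OBS_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_USERS),
        )

        def detect_active_users(y: torch.Tensor) -> torch.Tensor:
            """y: batch of received-signal features, shape (B, OBS_DIM).
            Returns a (B, NUM_USERS) boolean mask of detected active users."""
            k_hat = enumerator(y).argmax(dim=-1)    # estimated number of active users
            scores = identifier(y)                  # per-user activity scores
            order = scores.argsort(dim=-1, descending=True)
            mask = torch.zeros_like(scores, dtype=torch.bool)
            for b in range(y.shape[0]):
                # Keep the k_hat[b] users with the highest scores in sample b.
                mask[b, order[b, : k_hat[b]]] = True
            return mask

        print(detect_active_users(torch.randn(4, OBS_DIM)))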

    Chronicles of CI/CD: A Deep Dive into its Usage Over Time

    DevOps is a combination of methodologies and tools that improves the software development, build, deployment, and monitoring processes by shortening the development lifecycle and improving software quality. Part of this process is CI/CD, which covers mostly the earlier stages, up to deployment. Despite the many benefits of DevOps and CI/CD, the field still presents many challenges, driven by the tremendous proliferation of different tools, languages, and syntaxes, which makes it hard to learn and keep up to date. Software repositories contain data regarding various software practices, tools, and uses. This data can yield multiple insights that inform technical and academic decision-making. GitHub is currently the most popular software hosting platform and provides a search API that lets users query its repositories. Our goal with this paper is to gain insights into the technologies developers use for CI/CD by analyzing GitHub repositories. Using a list of the state-of-the-art CI/CD technologies, we use the GitHub search API to find repositories using each of these technologies, and we use the API to extract various insights regarding those repositories, which we then organize and analyze. From our analysis, we provide an overview of the use of CI/CD technologies today, as well as how that use has evolved over the last 12 years. We also show that developers use several technologies simultaneously in the same project and that switching between technologies is quite common. From these insights, we identify several research paths, from supporting the use of multiple technologies (both in terms of techniques and of human-computer interaction) to aiding developers in evolving their CI/CD pipelines, again considering the various dimensions of the problem.
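
    As a rough illustration of this kind of repository mining (not the authors' exact pipeline), the sketch below uses the public GitHub REST API to pull a few popular repositories and probe each one for marker files of common CI/CD technologies. The marker paths and the search query are assumptions, and an access token would be needed for any serious crawl.

        import requests

        API = "https://api.github.com"
        HEADERS = {"Accept": "application/vnd.github+json"}
        # HEADERS["Authorization"] = "Bearer <personal-access-token>"  # raises rate limits

        # Paths whose presence suggests use of a given CI/CD technology (illustrative subset).
        CI_MARKERS = {
            "GitHub Actions": ".github/workflows",
            "Travis CI": ".travis.yml",
            "GitLab CI": ".gitlab-ci.yml",
            "Jenkins": "Jenkinsfile",
        }

        def ci_technologies(full_name: str) -> list[str]:
            """Return the CI/CD technologies whose marker path exists in the repository."""
            found = []
            for tech, path in CI_MARKERS.items():
                r = requests.get(f"{API}/repos/{full_name}/contents/{path}", headers=HEADERS)
                if r.status_code == 200:
                    found.append(tech)
            return found

        # Look at a handful of very popular repositories (query is illustrative).
        search = requests.get(
            f"{API}/search/repositories",
            params={"q": "stars:>50000", "sort": "stars", "order": "desc", "per_page": 5},
            headers=HEADERS,
        )
        for item in search.json().get("items", []):
            print(item["full_name"], ci_technologies(item["full_name"]))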

    I Don't Need an Expert! Making URL Phishing Features Human Comprehensible

    Using a Multi-Level Process Comparison for Process Change Analysis in Cancer Pathways

    The area of process change over time is a particular concern in healthcare, where patterns of care emerge and evolve in response to individual patient needs. We propose a structured approach to analysing process change over time that is suitable for the complex domain of healthcare. Our approach applies a qualitative process comparison at three levels of abstraction: a holistic perspective (process model), a middle-level perspective (trace), and a fine-grained perspective (activity). Our aim was to detect change points, localise and characterise the change, and understand the process evolution. We illustrate the approach using a case study of cancer pathways in Leeds, where we found evidence of change points identified at multiple levels. In this paper, we extend our study by analysing the miners used in process discovery and providing a deeper analysis of the investigation activity at the trace and activity levels. In the experiment, we show that this qualitative approach provides a useful understanding of process change over time. Examining change at three levels provides confirmatory evidence of process change where perspectives agree, while contradictory evidence can lead to focused discussions with domain experts. This approach should be of interest to others dealing with processes that undergo complex change over time.
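
    The model-level step of such a comparison can be prototyped with off-the-shelf process mining tooling. The sketch below is a minimal illustration using the open-source pm4py library, not the authors' exact method: it splits an event log around a candidate change point, discovers a model per period, and cross-replays each period's traces against the other period's model. The log path, dates, and choice of the inductive miner are placeholders.

        import pm4py

        # Hypothetical event log of a care pathway (placeholder path).
        log = pm4py.read_xes("cancer_pathway.xes")

        # Split around a candidate change point (placeholder dates).
        before = pm4py.filter_time_range(log, "2017-01-01 00:00:00", "2018-06-30 23:59:59",
                                         mode="traces_contained")
        after = pm4py.filter_time_range(log, "2018-07-01 00:00:00", "2019-12-31 23:59:59",
                                        mode="traces_contained")

        # Discover one model per period (the choice of miner is itself analysed in the paper).
        net_b, im_b, fm_b = pm4py.discover_petri_net_inductive(before)
        net_a, im_a, fm_a = pm4py.discover_petri_net_inductive(after)

        # Cross-replay: poor fitness of one period's traces on the other period's
        # model is quantitative support for a change at the model level.
        fit_ab = pm4py.fitness_token_based_replay(after, net_b, im_b, fm_b)
        fit_ba = pm4py.fitness_token_based_replay(before, net_a, im_a, fm_a)
        print("after-on-before fitness:", fit_ab["log_fitness"])
        print("before-on-after fitness:", fit_ba["log_fitness"])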

    Task Graph offloading via Deep Reinforcement Learning in Mobile Edge Computing

    Various mobile applications that comprise dependent tasks are gaining widespread popularity and are increasingly complex. These applications often have low-latency requirements, resulting in a significant surge in demand for computing resources. With the emergence of mobile edge computing (MEC), offloading application tasks onto small-scale devices deployed at the edge of the mobile network has become a key issue for delivering a high-quality user experience. However, since the MEC environment is dynamic, most existing works on task graph offloading, which rely heavily on expert knowledge or accurate analytical models, fail to fully adapt to such environmental changes, degrading the user experience. This paper investigates task graph offloading in MEC, considering the time-varying computation capabilities of edge computing devices. To adapt to environmental changes, we model the task graph scheduling for computation offloading as a Markov Decision Process (MDP). We then design a deep reinforcement learning algorithm (SATA-DRL) that learns the task scheduling strategy from interaction with the environment in order to improve user experience. Extensive simulations validate that SATA-DRL is superior to existing strategies in terms of reducing average makespan and deadline violation.
    Comment: 13 figures
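
    To illustrate the MDP framing in miniature (this toy is not SATA-DRL and simplifies the task graph to a chain), the sketch below uses tabular Q-learning: the state is the next task plus the current "mode" of the time-varying device speeds, the action is which edge device executes the task, and the reward is the negative added delay, so the learned policy minimises makespan.

        import random

        random.seed(0)
        N_TASKS, N_DEVICES, WORK = 6, 2, 4.0   # toy sizes (assumptions)
        SPEEDS = [(2.0, 1.0), (1.0, 2.0)]      # per-device speed in each environment mode

        def step(mode, action):
            """Run the next task on device `action`; return (delay, next_mode)."""
            delay = WORK / SPEEDS[mode][action]
            next_mode = mode if random.random() < 0.8 else 1 - mode  # time-varying capability
            return delay, next_mode

        # Tabular Q-learning over states (task index, mode).
        Q = {(t, m): [0.0] * N_DEVICES for t in range(N_TASKS) for m in range(2)}
        alpha, gamma, eps = 0.1, 0.95, 0.1

        for episode in range(5000):
            mode = random.randrange(2)
            for task in range(N_TASKS):
                s = (task, mode)
                if random.random() < eps:
                    a = random.randrange(N_DEVICES)
                else:
                    a = max(range(N_DEVICES), key=lambda i: Q[s][i])
                delay, mode = step(mode, a)
                reward = -delay                                     # shorter makespan is better
                future = 0.0 if task == N_TASKS - 1 else max(Q[(task + 1, mode)])
                Q[s][a] += alpha * (reward + gamma * future - Q[s][a])

        # Greedy policy per mode: it should pick whichever device is currently faster.
        for m in range(2):
            print("mode", m, "->",
                  [max(range(N_DEVICES), key=lambda i: Q[(t, m)][i]) for t in range(N_TASKS)])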

    ReverseORC: Reverse Engineering of Resizable User Interface Layouts with OR-Constraints

    Reverse engineering (RE) of user interfaces (UIs) plays an important role in software evolution. However, the large diversity of UI technologies and the need for UIs to be resizable make this challenging. We propose ReverseORC, a novel RE approach able to discover diverse layout types and their dynamic resizing behaviours independently of their implementation, and to specify them by using OR constraints. Unlike previous RE approaches, ReverseORC infers flexible layout constraint specifications by sampling UIs at different sizes and analyzing the differences between them. It can create specifications that replicate even some non-standard layout managers with complex dynamic layout behaviours. We demonstrate that ReverseORC works across different platforms with very different layout approaches, e.g., for GUIs as well as for the Web. Furthermore, it can be used to detect and fix problems in legacy UIs, extend UIs with enhanced layout behaviours, and support the creation of flexible UI layouts.
    Comment: CHI 2021 Full Paper
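
    The core sampling idea (measure the same UI at several sizes and infer resize behaviour from the differences) can be shown in miniature. The sketch below is a deliberately simple illustration with hypothetical widget geometry, classifying only two behaviours (fixed width vs. stretching with the window); ReverseORC's actual OR-constraint inference is far richer.

        # Widget bounding boxes (x, y, w, h) sampled at three window widths (hypothetical).
        samples = {
            800:  {"sidebar": (0, 0, 200, 600), "editor": (200, 0, 600, 600)},
            1000: {"sidebar": (0, 0, 200, 600), "editor": (200, 0, 800, 600)},
            1200: {"sidebar": (0, 0, 200, 600), "editor": (200, 0, 1000, 600)},
        }

        def classify_widths(samples):
            """Decide, per widget, whether its width is fixed or grows
            one-for-one with the window width."""
            widths = sorted(samples)
            behaviour = {}
            for name in samples[widths[0]]:
                ws = [samples[w][name][2] for w in widths]   # widget width at each sample
                if all(w == ws[0] for w in ws):
                    behaviour[name] = "fixed width"
                elif all(ws[i + 1] - ws[i] == widths[i + 1] - widths[i]
                         for i in range(len(ws) - 1)):
                    behaviour[name] = "stretches with window"
                else:
                    behaviour[name] = "other (needs richer constraints)"
            return behaviour

        print(classify_widths(samples))
        # {'sidebar': 'fixed width', 'editor': 'stretches with window'}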

    Edge intelligence in smart grids: a survey on architectures, offloading models, cyber security measures, and challenges

    The rapid development of new information and communication technologies (ICTs) and the deployment of advanced Internet of Things (IoT)-based devices have led to the study and implementation of edge computing technologies in smart grid (SG) systems. In addition, substantial work has been carried out in the literature to incorporate artificial intelligence (AI) techniques into edge computing, resulting in the promising concept of edge intelligence (EI). Consequently, in this article, we provide an overview of the current state of the art in EI-based SG adoption from a range of angles, including architectures, computation offloading, and cybersecurity concerns. The basic objectives of this article are fourfold. First, we discuss EI and SGs separately: we highlight contemporary concepts closely related to edge computing, its fundamental characteristics, and its essential enabling technologies from an EI perspective; we discuss how the use of AI has aided in optimizing the performance of edge computing; and we emphasize the important enabling technologies and applications of SGs from the perspective of EI-based SGs. Second, we explore both general edge computing architectures and EI-based architectures from the perspective of SGs. Third, we address two basic questions about computation offloading: what is computation offloading and why do we need it? We also divide the primary articles into two categories based on the number of users included in the model: single-user or multi-user. Finally, we review the cybersecurity threats that come with edge computing and the methods used to mitigate them in SGs. This survey concludes that most viable architectures for EI in smart grids consist of three layers: device, edge, and cloud. In addition, computation offloading techniques must be framed as optimization problems and addressed effectively in order to increase system performance. This article is intended to serve as a primer for emerging and interested scholars concerned with the study of EI in SGs.
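
    Since the survey stresses that computation offloading should be framed as an optimization problem, a generic latency-oriented formulation of the binary offloading decision is sketched below, purely as an illustration: it is not taken from the survey, and all symbols are placeholders.

        \begin{aligned}
        \min_{\mathbf{x}}\quad
          & \sum_{i=1}^{N}\Big[(1-x_i)\,\frac{c_i}{f_i^{\mathrm{loc}}}
            \;+\; x_i\Big(\frac{d_i}{r_i}+\frac{c_i}{f_i^{\mathrm{edge}}}\Big)\Big] \\
        \text{s.t.}\quad
          & x_i\in\{0,1\},\qquad i=1,\dots,N, \\
          & \sum_{i=1}^{N} x_i\,c_i \le C^{\mathrm{edge}},
        \end{aligned}

    where x_i decides whether task i runs locally or at the edge, c_i is its CPU demand, f_i^loc and f_i^edge are the local and edge CPU frequencies, d_i is its input size, r_i the uplink rate, and C^edge the edge server's total computing budget.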