19 research outputs found

    A Newcomer's Guide to EICS, the Engineering Interactive Computing Systems Community

    Full text link
    [EN] Welcome to EICS, the Engineering Interactive Computing Systems community, the PACMHCI/EICS journal, and the annual conference! In this short article, we introduce newcomers to the field and to our community with an overview of what EICS is and how it positions itself with respect to other venues in Human-Computer Interaction, such as CHI, UIST, and IUI, highlighting its legacy and paying homage to the past scientific events from which EICS emerged. We also take this opportunity to enumerate and exemplify scientific contributions to the field of Engineering Interactive Computing Systems, which we hope will guide researchers and practitioners towards making their future PACMHCI/EICS submissions successful and impactful in the EICS community.

    We acknowledge the support of MetaDev2 as the main sponsor of EICS 2019. We would like to thank the Chairs of all the tracks of the EICS 2019 conference, the members of the local organization team, and the webmaster of the EICS 2019 web site. EICS 2019 could not have been possible without the commitment of the Programme Committee members and external reviewers. This work was partially supported by the Spanish Ministry of Economy, Industry and Competitiveness, State Research Agency / European Regional Development Fund under the Vi-SMARt project (TIN2016-79100-R), the Junta de Comunidades de Castilla-La Mancha / European Regional Development Fund under the NeUX project (SBPLY/17/180501/000192), the Generalitat Valenciana through project GISPRO (PROMETEO/2018/176), and the Spanish Ministry of Science and Innovation through project DataME (TIN2016-80811-P).

    López-Jaquero, V. M.; Vatavu, R.-D.; Panach, J. I.; Pastor López, O.; Vanderdonckt, J. (2019). A Newcomer's Guide to EICS, the Engineering Interactive Computing Systems Community. Proceedings of the ACM on Human-Computer Interaction, 3:1-9. https://doi.org/10.1145/3300960

    Trends on engineering interactive systems: an overview of works presented in workshops at EICS 2019

    Get PDF
    Workshops are a great opportunity for identifying innovative research topics that require discussion and maturation. This paper summarizes the outcomes of the workshops track of the 11th Engineering Interactive Computing Systems conference (EICS 2019), held in Valencia, Spain, on 18-21 June 2019. The track featured three workshops, one half-day, one full-day, and one two-day, each focused on specific topics of ongoing research in engineering usable and effective interactive computing systems. In particular, the topics discussed include novel forms of interaction and emerging themes in HCI related to new application domains, more efficient and enjoyable interaction possibilities associated with smart objects and smart environments, and challenges faced in designing, developing, and using interactive systems involving multiple stakeholders.

    A System for Real-Time Interactive Analysis of Deep Learning Training

    Full text link
    Performing diagnosis or exploratory analysis during the training of deep learning models is challenging but often necessary for making a sequence of decisions guided by incremental observations. Currently available systems for this purpose are limited to monitoring only the logged data that must be specified before the training process starts. Each time new information is desired, a stop-change-restart cycle is required in the training process. These limitations make interactive exploration and diagnosis tasks difficult, imposing long, tedious iterations during model development. We present a new system that enables users to perform interactive queries on live processes, generating real-time information that can be rendered in multiple formats on multiple surfaces as several desired visualizations simultaneously. To achieve this, we model various exploratory inspection and diagnostic tasks for deep learning training processes as specifications for streams, using a map-reduce paradigm with which many data scientists are already familiar. Our design achieves generality and extensibility by defining composable primitives, which is a fundamentally different approach from that used by currently available systems. The open-source implementation of our system is available as the TensorWatch project at https://github.com/microsoft/tensorwatch. Comment: Accepted at the ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS 2019).
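The map-reduce-style stream queries described above can be sketched in a few lines of plain Python. This is an illustrative sketch of the idea, not TensorWatch's actual API: a live training loop publishes events, and observers attach queries (a map lambda plus a reduce lambda) at any time, without a stop-change-restart cycle. The event field names (`step`, `loss`) are hypothetical.

```python
# Minimal sketch of the map-reduce stream-query idea (not the actual
# TensorWatch API): queries attach to a live event stream and their
# results update incrementally as events arrive.

class EventStream:
    def __init__(self):
        self._queries = []

    def subscribe(self, map_fn, reduce_fn, init):
        # A query is (map, reduce, accumulator); it can be added while
        # the producing process is already running.
        q = {"map": map_fn, "reduce": reduce_fn, "acc": init}
        self._queries.append(q)
        return q

    def publish(self, event):
        # Each published event flows through every registered query.
        for q in self._queries:
            q["acc"] = q["reduce"](q["acc"], q["map"](event))

stream = EventStream()
# Query added "live": a running mean of the loss (hypothetical fields).
mean_loss = stream.subscribe(
    map_fn=lambda e: e["loss"],
    reduce_fn=lambda acc, x: (acc[0] + x, acc[1] + 1),
    init=(0.0, 0),
)
for step, loss in enumerate([0.9, 0.6, 0.3]):
    stream.publish({"step": step, "loss": loss})
total, n = mean_loss["acc"]
print(round(total / n, 2))  # → 0.6
```

Because a query is just data plus two lambdas, new visualizations can subscribe mid-training, which is the property the paper's stop-change-restart critique targets.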

    Polyphony: Programming Interfaces and Interactions with the Entity-Component-System Model

    Get PDF
    This paper introduces a new Graphical User Interface (GUI) and interaction framework based on the Entity-Component-System (ECS) model. In this model, interactive elements (Entities) are characterized only by their data (Components). Behaviors are managed by continuously running processes (Systems), which select entities by the Components they possess. This model facilitates the handling of behaviors and promotes their reuse. It provides developers with a simple yet powerful composition pattern for building new interactive elements with Components. It materializes interaction devices as Entities and interaction techniques as sequences of Systems operating on them. We present Polyphony, an experimental toolkit implementing this approach, and discuss our interpretation of the ECS model in the context of GUI programming.
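The Entity-Component-System pattern the abstract describes can be sketched generically. This is a minimal illustration of ECS itself, not Polyphony's actual API; the component types (`Position`, `Clickable`) and the selection rule are invented for the example.

```python
# Minimal Entity-Component-System sketch (illustrative; not Polyphony's
# API): entities are ids, components are plain data attached to them,
# and systems are processes that select entities by the components
# they possess.

from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Clickable:
    handler: object  # callback invoked with the entity id on click

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # entity id -> {component type: component}

    def create_entity(self, *components):
        eid = self.next_id
        self.next_id += 1
        self.components[eid] = {type(c): c for c in components}
        return eid

    def select(self, *types):
        # Systems query entities by the set of components they possess.
        for eid, comps in self.components.items():
            if all(t in comps for t in types):
                yield eid, [comps[t] for t in types]

def click_system(world, x, y):
    # A "system": runs over every entity that is both positioned and
    # clickable, dispatching the click to those near the pointer.
    for eid, (pos, click) in world.select(Position, Clickable):
        if abs(pos.x - x) < 10 and abs(pos.y - y) < 10:
            click.handler(eid)

world = World()
clicked = []
world.create_entity(Position(5, 5), Clickable(clicked.append))
world.create_entity(Position(100, 100), Clickable(clicked.append))
click_system(world, 0, 0)
print(clicked)  # → [0], only the entity near the pointer
```

The composition pattern the paper highlights is visible here: a new kind of interactive element is obtained by attaching a different set of components, with no new classes of "widget" required.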

    Proceedings of the 7th Workshop on Interacting with Smart Objects

    Get PDF

    IVY 2-A model-based analysis tool

    Get PDF
    The IVY workbench is a model-based tool that supports the formal verification of interactive computing systems. It adopts a plugin-based architecture to support a flexible development model. Over the years, the chosen architectural solution revealed a number of limitations, resulting both from the technological deprecation of some of the adopted solutions and from a better understanding of the verification process to be supported. This paper presents the redesign and reimplementation of the original plugin infrastructure, giving rise to a new version of the tool: IVY 2. It describes the limitations of the original solutions and the new architecture, which resorts to the Java module system to solve them. This work is financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology), within project UID/EEA/50014/2019.

    Proficiency-aware systems

    Get PDF
    In an increasingly digital world, technological developments such as data-driven algorithms and context-aware applications create opportunities for novel human-computer interaction (HCI). We argue that these systems have the latent potential to stimulate users and encourage personal growth. However, users increasingly rely on the intelligence of interactive systems. Thus, it remains a challenge to design for proficiency awareness, which essentially demands increased user attention whilst preserving user engagement. Designing and implementing systems that allow users to become aware of their own proficiency and encourage them to recognize learning benefits is the primary goal of this research. In this thesis, we introduce the concept of proficiency-aware systems as one solution. In our definition, proficiency-aware systems use estimates of the user's proficiency to tailor the interaction in a domain and facilitate a reflective understanding of this proficiency. We envision that proficiency-aware systems leverage collected data for the user's learning benefit. Here, we see self-reflection as a key for users to become aware of the efforts necessary to advance their proficiency. A key challenge for proficiency-aware systems is the fact that users' self-perception of their proficiency often differs from the system's estimate. The benefits of personal growth and of advancing one's repertoire might not be apparent to users, alienating them and possibly leading them to abandon the system. To tackle this challenge, this work does not rely on learning strategies but rather focuses on the capabilities of interactive systems to provide users with the necessary means to reflect on their proficiency, such as showing calculated text difficulty to a newspaper editor or visualizing muscle activity to a passionate sportsperson. We first elaborate on how proficiency can be detected and quantified in the context of interactive systems using physiological sensing technologies.
    Through developing interaction scenarios, we demonstrate the feasibility of gaze- and electromyography-based proficiency-aware systems by utilizing machine learning algorithms that can estimate users' proficiency levels for stationary vision-dominant tasks (reading, information intake) and dynamic manual tasks (playing instruments, fitness exercises). Secondly, we show how to facilitate proficiency awareness for users, including design challenges on when and how to communicate proficiency. We complement this second part by highlighting the necessity of toolkits for sensing modalities to enable the implementation of proficiency-aware systems for a wide audience. In this thesis, we contribute a definition of proficiency-aware systems, which we illustrate by designing and implementing interactive systems. We derive technical requirements for real-time, objective proficiency assessment and identify design qualities of communicating proficiency through user reflection. We summarize our findings in a set of design and engineering guidelines for proficiency awareness in interactive systems, highlighting that proficiency feedback makes performance interpretable for the user.
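The core loop of a proficiency-aware system — turn a physiological signal into a proficiency estimate, then surface it to the user — can be sketched with a deliberately simple stand-in. This is not the thesis's actual model: the feature (amplitude variability of an EMG-like trace), the threshold, and the two-level output are all invented for illustration, replacing the learned classifiers the work actually uses.

```python
# Illustrative stand-in for a learned proficiency estimator: classify a
# coarse proficiency level from the variability of an EMG-like amplitude
# trace, on the (assumed) heuristic that experts produce more consistent
# signals than novices.

from statistics import pstdev

def proficiency_level(samples, threshold=0.15):
    # Hypothetical rule: low amplitude variability -> "expert".
    return "expert" if pstdev(samples) < threshold else "novice"

smooth = [0.50, 0.52, 0.49, 0.51, 0.50]   # consistent trace
erratic = [0.20, 0.80, 0.35, 0.95, 0.10]  # inconsistent trace
print(proficiency_level(smooth), proficiency_level(erratic))
# → expert novice
```

In a real system this estimate would feed the reflection step — e.g. visualizing the variability trace back to the sportsperson — rather than being shown as a bare label.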

    A UX model for the evaluation of learners' experience on lms platforms over time

    Get PDF
    Although user experience (UX) is dynamic and evolves over time, prior research reported that the learners' experience models developed so far support only the static evaluation of learners' experiences. So far, no model has been developed for the dynamic summative evaluation of the UX of LMS platforms over time. The objective of this study is to build a UX model that can be used to evaluate learners' experience on an LMS over time. The study reviewed relevant literature with the goal of conceptualizing a theoretical model. The Stimulus-Organism-Response (SOR) framework was deployed to model the experience engineering process. To verify the model, 6 UX experts were involved. The model was also validated using a quasi-experimental design involving 900 students. The evaluation was conducted at four time points, once a week for four weeks. From the review, a conceptual UX model was developed for the evaluation of learners' experience with LMS design over time. The outcome of the model verification shows that the experts agreed that the model is adequate for the evaluation of learners' experience on an LMS. The results of the model validation indicate that the model was highly statistically significant over time (Week 1: χ²(276) = 27319.339; Week 2: χ²(276) = 23419.626; Week 3: χ²(276) = 18941.900; Week 4: χ²(276) = 27580.397; p = 0.000 < 0.01). Each design quality had strong positive effects on the learners' cognitive, sensorimotor, and affective states, respectively. Furthermore, each of the three organismic states (cognitive, sensorimotor, and affective) had a strong positive influence on learners' overall learning experience. These results imply that the experience engineering process was successful. The study fills a significant gap in knowledge by contributing a novel UX model for the evaluation of learners' experience on LMS platforms over time. UX quality assurance practitioners can also utilize the model in the verification and validation of learner experience over time.

    An Evidence-based Roadmap for IoT Software Systems Engineering

    Full text link
    Context: The Internet of Things (IoT) has brought expectations for software inclusion in everyday objects. However, it presents challenges and requires multidisciplinary technical knowledge involving different areas that should be combined to enable IoT software systems engineering. Goal: To present an evidence-based roadmap for IoT development to support developers in specifying, designing, and implementing IoT systems. Method: An iterative approach based on experimental studies to acquire evidence to define the IoT Roadmap. Next, the Systems Engineering Body of Knowledge life cycle was used to organize the roadmap and set temporal dimensions for IoT software systems engineering. Results: The studies revealed seven IoT Facets influencing IoT development. The IoT Roadmap comprises 117 items organized into 29 categories representing different concerns for each Facet. In addition, an experimental study was conducted observing a real case of a healthcare IoT project, indicating the roadmap's applicability. Conclusions: The IoT Roadmap can be a feasible instrument to assist IoT software systems engineering because it can (a) support researchers and practitioners in understanding and characterizing the IoT and (b) provide a checklist to identify the applicable recommendations for engineering IoT software systems.