10 research outputs found

    Vasco: An interactive tool for early data exploration

    We describe Vasco, a data visualization tool for inexperienced users. Vasco is designed to enable and promote early exploration of data, targeting users with no experience in visualization design or data analysis. Vasco structures the interface with panels and cards so that users can easily select data and create visualizations. It automatically generates suitable charts according to the selected variables and the morphology of the data. In addition, Vasco lets users control and organize their design process through multiple workspaces. Finally, a controlled study comparing the usability of Vasco with that of Voyager 2 shows that the constant presentation of the data taxonomy and the iterative nature of the exploration help users understand the data visualization process, feel more confident and perform better.
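
    The automatic chart recommendation mentioned above can be pictured with a minimal sketch. The function below is purely illustrative and is not Vasco's actual algorithm: it simply maps the types of the selected variables (a crude stand-in for "data morphology") to a chart type.

        # Hypothetical rule-based chart recommendation; not Vasco's actual logic.
        from typing import List

        def recommend_chart(selected: List[dict]) -> str:
            """Pick a chart type from the kinds of the selected variables.

            Each variable is described as {"name": ..., "kind": "quantitative" | "categorical" | "temporal"}.
            """
            kinds = sorted(v["kind"] for v in selected)
            if kinds == ["quantitative"]:
                return "histogram"
            if kinds == ["categorical"]:
                return "bar chart (counts)"
            if kinds == ["quantitative", "quantitative"]:
                return "scatter plot"
            if kinds == ["categorical", "quantitative"]:
                return "bar chart (aggregated)"
            if kinds == ["quantitative", "temporal"]:
                return "line chart"
            return "table"  # fallback when no rule matches

        print(recommend_chart([{"name": "price", "kind": "quantitative"},
                               {"name": "date", "kind": "temporal"}]))  # -> line chart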

    BED‐online: Acceptance and efficacy of an internet‐based treatment for binge‐eating disorder: A randomized clinical trial including waitlist conditions

    Objective: Internet-based guided self-help (GSH) programs increase the accessibility and utilization of evidence-based treatments for binge-eating disorder (BED). We evaluated the acceptance and short- and long-term efficacy of our 8-session internet-based GSH program in a randomized clinical trial with an immediate treatment group and two waitlist control groups, which differed with respect to whether patients received positive expectation induction while waiting. Method: Sixty-three patients (87% female, mean age 37.2 years) followed the eight-session guided cognitive-behavioural internet-based program and three booster sessions in a randomized clinical trial design including an immediate treatment condition and two waitlist control conditions. Outcomes were treatment acceptance, number of weekly binge-eating episodes, eating disorder pathology, depressiveness, and level of psychosocial functioning. Results: Treatment satisfaction was high, even though 27% of all patients dropped out during active treatment and 9.5% during the 6-month follow-up period. The treatment, in contrast to the waiting conditions, led to a significant reduction of weekly binge-eating episodes from 3.4 to 1.7, with no apparent rebound effect during follow-up. All other outcomes also improved during active treatment. Email-based positive expectation induction during the waiting period prior to treatment did not have an additional beneficial effect on the temporal course, and thus the treatment success, of binge-eating episodes in this study. Conclusion: This short internet-based program was well accepted and highly effective with regard to the core features of BED. Dropout rates were higher during active treatment and lower during follow-up. Positive expectations did not have an impact on treatment effects.

    A descriptive attribute-based framework for annotations in data visualization

    Annotations are observations made during the exploration of a specific data visualization, which can be recorded as text or as visual data selections. This article introduces a classification framework that allows a systematic description of annotations. To create the framework, a real dataset of 302 annotations authored by 16 analysts was collected. Three coders then independently described the annotations by eliciting categories that emerged from the data. This process was repeated over several iterative phases until a high inter-coder agreement was reached. The final descriptive attribute-based framework comprises the following dimensions: insight on data, multiple observations, data units, level of interpretation, co-references and detected patterns. This framework has the potential to provide a common ground for assessing the expressiveness of different types of visualization over the same data. This potential is further illustrated in a concrete use case.
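
    The six dimensions of the framework map naturally onto a simple record structure. The sketch below is an assumption-laden illustration, not the authors' schema: the field types and example values are invented, and only the dimension names come from the abstract.

        # Illustrative record for the framework's six dimensions; types and values are assumed.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class AnnotationDescription:
            insight_on_data: bool                   # does the annotation convey an insight about the data?
            multiple_observations: bool             # does it bundle more than one observation?
            data_units: List[str]                   # data units referred to (e.g. rows, categories)
            level_of_interpretation: str            # e.g. "descriptive" vs. "interpretive" (assumed labels)
            co_references: List[str] = field(default_factory=list)      # links to other annotations or views
            detected_patterns: List[str] = field(default_factory=list)  # e.g. "trend", "outlier" (assumed)

        example = AnnotationDescription(
            insight_on_data=True,
            multiple_observations=False,
            data_units=["country", "year"],
            level_of_interpretation="descriptive",
            detected_patterns=["trend"],
        )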

    Colvis—A Structured Annotation Acquisition System for Data Visualization

    Annotations produced by analysts during the exploration of a data visualization are a precious source of knowledge. Harnessing this knowledge requires a thorough structure for annotations, but also a means to acquire them without harming user engagement. The main contribution of this article is a method, taking the form of an interface, that offers a comprehensive “subject-verb-complement” set of steps for analysts to record annotations and seamlessly translate them into a prior classification framework. Technical considerations are also an integral part of this study: through a concrete web implementation, we demonstrate the feasibility of our method, but also highlight some of the unresolved challenges that remain to be addressed. After explaining all concepts related to our work, from a literature review to JSON specifications, we present two use cases that illustrate how the interface can work in concrete situations. We conclude with a substantial discussion of the limitations, the current state of the method and the upcoming steps for this annotation interface.
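
    Since the abstract mentions JSON specifications for the acquired annotations, a record for a “subject-verb-complement” annotation might look like the sketch below. The field names are hypothetical and are not Colvis's actual schema.

        # Hypothetical "subject-verb-complement" annotation record; field names are assumed,
        # not the JSON specification actually used by Colvis.
        import json

        annotation = {
            "subject": {"type": "data_selection", "items": ["France", "Germany"]},
            "verb": "increases",
            "complement": {"measure": "GDP per capita", "period": "2000-2020"},
            "framework": {                      # mapping onto a prior classification framework
                "insight_on_data": True,
                "detected_patterns": ["trend"],
            },
        }

        print(json.dumps(annotation, indent=2))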

    Designing a Classification for User-authored Annotations in Data Visualization


    4D-XCT monitoring of void formation in thick methacrylic composites produced by infusion

    Void formation in fibre-reinforced polymer composites processed by liquid moulding is a persistent issue in composite research and development. Not only may several void formation mechanisms come into play, but these defects can also accumulate and grow over the course of the manufacturing process. Mitigation methods can be applied once the underlying causes of voiding are identified, which is impossible through post-mortem characterisation. Here, void formation is dynamically monitored in miniaturised glass fibre-reinforced thermoplastic polymer composite samples manufactured by vacuum infusion and in-situ polymerisation, using laboratory-based X-ray computed tomography (XCT). The method allows the characterisation of the evolution of void patterns as the resin polymerises and cools down within the fibre preform. With a time resolution of 2 min and a voxel size close to 20 μm, this first-of-a-kind XCT experiment provides insights into the evolution of the void volume fraction, void size and location. The root causes of void formation in the system of interest were successfully identified as a combination of flow-related air entrapment during preform filling and, mostly, chemical shrinkage of the matrix upon polymerisation. Additionally, thermal shrinkage during cooling of the preform results in a slight decrease in the final void volume fraction.
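
    As a worked illustration of the main quantity tracked in the experiment, the void volume fraction of a segmented XCT volume is simply the proportion of voxels labelled as void. The sketch below assumes a binary labelled 3D array (1 = void, 0 = material) and is not the authors' processing pipeline.

        # Minimal sketch: void volume fraction from a labelled XCT volume.
        # Assumes voids are already segmented (1 = void, 0 = material); not the authors' pipeline.
        import numpy as np

        def void_volume_fraction(labels: np.ndarray) -> float:
            """Fraction of voxels labelled as void in a segmented 3D volume."""
            return float(np.count_nonzero(labels == 1)) / labels.size

        # Toy example: a 100^3-voxel volume containing a single spherical void of radius 10 voxels.
        z, y, x = np.mgrid[:100, :100, :100]
        labels = ((x - 50) ** 2 + (y - 50) ** 2 + (z - 50) ** 2 < 10 ** 2).astype(np.uint8)
        print(f"void volume fraction: {void_volume_fraction(labels):.4f}")  # ~0.004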