9 research outputs found

    gafro: Geometric Algebra for Robotics

    Geometry is a fundamental part of robotics, and various representation frameworks have been proposed over the years. Recently, geometric algebra has gained attention for its ability to unify many of those previous ideas into one algebra. While efficient open-source implementations of geometric algebra are already available, none of them is targeted at robotics applications. We address this shortcoming with our library gafro. This article presents an overview of the implementation details as well as a tutorial of gafro, an efficient C++ library targeting robotics applications using geometric algebra. The library focuses on conformal geometric algebra; hence, various geometric primitives as well as rigid body transformations are available for computation. The modeling of robotic systems is also an important aspect of the library: it implements various algorithms for calculating the kinematics and dynamics of such systems, as well as objectives for optimisation problems. The software stack is completed by Python bindings in pygafro and a ROS interface in gafro_ros.
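
    As a minimal sketch of the mathematics the library builds on (not the gafro API itself), the following self-contained C++ snippet embeds Euclidean points into the conformal model as P = x + (1/2)|x|^2 e_inf + e_0 and recovers a squared distance from a single inner product, using the standard CGA identity P . Q = -(1/2)|p - q|^2.

        // Conformal point embedding, illustrating the model gafro builds on.
        // This is a self-contained sketch, NOT the gafro API.
        #include <array>
        #include <iostream>

        // A conformal point P = x + 0.5*|x|^2 * e_inf + e_0, stored as its
        // five coefficients on the basis {e1, e2, e3, e_inf, e_0}.
        struct ConformalPoint {
            std::array<double, 3> x;  // Euclidean part
            double einf;              // coefficient of e_inf
            double e0;                // coefficient of e_0
        };

        // Embed a Euclidean point into the conformal model.
        ConformalPoint embed(const std::array<double, 3>& p) {
            double sq = p[0] * p[0] + p[1] * p[1] + p[2] * p[2];
            return {p, 0.5 * sq, 1.0};
        }

        // CGA inner product of two conformal points. The metric satisfies
        // e_inf . e_inf = e_0 . e_0 = 0 and e_inf . e_0 = -1, so the null
        // basis vectors only contribute through the cross terms.
        double inner(const ConformalPoint& a, const ConformalPoint& b) {
            double euclidean = a.x[0] * b.x[0] + a.x[1] * b.x[1] + a.x[2] * b.x[2];
            return euclidean - a.einf * b.e0 - a.e0 * b.einf;
        }

        int main() {
            ConformalPoint p = embed({1.0, 2.0, 3.0});
            ConformalPoint q = embed({4.0, 6.0, 3.0});
            // P . Q = -0.5 * |p - q|^2, so the squared Euclidean distance
            // falls out of one bilinear operation: 3^2 + 4^2 + 0^2 = 25.
            std::cout << "squared distance: " << -2.0 * inner(p, q) << "\n";
            return 0;
        }

    This null embedding is what turns distance queries and primitive intersections into linear-algebraic operations, which is the main computational appeal of conformal geometric algebra for robotics.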

    Google Portrait

    This paper presents a system to retrieve and browse images from the Internet containing one particular object of interest: the human face. This system, called Google Portrait, uses the Google Image search engine to retrieve images matching a text query and filters the images containing faces using a face detector. Results are ranked by portraits, and a tagging module is provided to manually change the labels attached to faces.
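
    As an illustration of the filtering step described above, the sketch below keeps only downloaded images in which at least one face is detected. The paper does not name its face detector; the OpenCV Haar cascade and the file names used here are illustrative stand-ins only.

        // Keep only images that contain at least one detected face.
        // The detector choice (OpenCV Haar cascade) is an assumption made
        // for illustration; the paper's own detector is not specified here.
        #include <opencv2/imgcodecs.hpp>
        #include <opencv2/imgproc.hpp>
        #include <opencv2/objdetect.hpp>
        #include <iostream>
        #include <string>
        #include <vector>

        // Returns true if at least one face is detected in the image file.
        bool containsFace(const std::string& path, cv::CascadeClassifier& detector) {
            cv::Mat img = cv::imread(path, cv::IMREAD_GRAYSCALE);
            if (img.empty()) return false;   // skip unreadable downloads
            cv::equalizeHist(img, img);      // normalize illumination
            std::vector<cv::Rect> faces;
            detector.detectMultiScale(img, faces);
            return !faces.empty();
        }

        int main() {
            cv::CascadeClassifier detector;
            if (!detector.load("haarcascade_frontalface_default.xml")) return 1;
            // Hypothetical list of files fetched for one text query.
            std::vector<std::string> downloads = {"0.jpg", "1.jpg", "2.jpg"};
            for (const auto& path : downloads) {
                if (containsFace(path, detector))
                    std::cout << path << ": face found, image kept\n";
            }
            return 0;
        }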

    Response Burden and Dropout in a Probability-Based Online Panel Study – A Comparison between an App and Browser-Based Design

    Survey respondents can complete web surveys using different Internet-enabled devices (PCs versus mobile phones and tablets) and using different software (web browser versus a mobile software application, “app”). Previous research has found that completing questionnaires via a browser on mobile devices can lead to higher breakoff rates and reduced measurement quality compared to using PCs, especially where questionnaires have not been adapted for mobile administration. A key explanation is that using a mobile browser is more burdensome and less enjoyable for respondents. There are reasons to assume apps should perform better than browsers, but so far, there have been few attempts to assess this empirically. In this study, we investigate variation in experienced burden across device and software in wave 1 of a three-wave panel study, comparing an app with a browser-based survey in which sample members were encouraged to use a mobile device. We also assess device/software effects on participation at wave 2. We find that, compared to mobile browser respondents, app respondents were less likely to drop out of the study after the first wave, and that the effect of the device used was mediated by the subjective burden experienced during wave 1.

    Data Privacy Concerns as a Source of Resistance to Complete Mobile Data Collection Tasks Via a Smartphone App

    Smartphones present many interesting opportunities for survey research, particularly through the use of mobile data collection applications (apps). There is still much to learn, however, about how to integrate apps into general population surveys. Recent studies investigating hypothetical willingness to complete mobile data collection tasks via an app suggest there may be substantial resistance, in particular due to concerns around data privacy. There is little evidence, however, about how privacy concerns influence actual decisions to participate in app-based surveys. Theoretical approaches to understanding privacy concerns and survey participation decisions suggest that the influence of the former over the latter is likely to vary situationally. In this paper, we present results from a methodological experiment, conducted in the context of a three-wave probability-based online panel survey of the general population as part of the 2019 Swiss Election Study ("Selects"), testing different ways of recruiting participants to an app. Questions included at wave 1 about online data privacy concerns and comfort sharing different types of data with academic researchers allow us to assess their impact both on hypothetical willingness to complete mobile data collection tasks (downloading a survey app for completing questionnaires, taking and sharing photos, and sharing the smartphone's GPS location) and on actual completion of these tasks. Our findings confirm that general concerns about online data privacy do influence hypothetical willingness to complete mobile data collection tasks, but may be overridden by how comfortable people feel about sharing specific types of data with researchers. When it comes to actual compliance with task requests, however, neither privacy concerns nor comfort sharing data seem to matter. We conclude with recommendations for exploring these relationships further in future app-based studies.

    The MASH Project

    It has been demonstrated repeatedly that combining multiple types of image features improves the performance of learning-based classification and regression. However, no tools exist to facilitate the creation of large pools of feature extractors by extended teams of contributors. The MASH project aims at creating such tools. It is organized around the development of a collaborative web platform where participants can contribute feature extractors, browse a repository of existing ones, run image classification and goal-planning experiments, and participate in public large-scale experiments and contests. The tools provided on the platform facilitate the analysis of experimental results. In particular, they rank the feature extractors according to their efficiency, and help to identify the failure mode of the prediction system.
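
    To make the idea of contributed feature extractors concrete, the sketch below shows a hypothetical plug-in interface of the kind such a platform could run, compare, and rank uniformly. It is not the actual MASH API; every name here is invented for illustration.

        // Hypothetical plug-in interface for contributed feature extractors.
        // The real MASH interface is not reproduced here.
        #include <cstdint>
        #include <vector>

        // Grayscale image patch handed to an extractor by the platform.
        struct Patch {
            int width = 0;
            int height = 0;
            std::vector<std::uint8_t> pixels;  // row-major, width * height bytes
        };

        // Every contribution implements this interface, so the platform can
        // evaluate and rank heterogeneous extractors with the same harness.
        class FeatureExtractor {
        public:
            virtual ~FeatureExtractor() = default;
            virtual int dimension() const = 0;                      // feature length
            virtual std::vector<double> extract(const Patch&) = 0;  // one vector per patch
        };

        // Example contribution: mean intensity of the patch (a 1-D feature).
        class MeanIntensity : public FeatureExtractor {
        public:
            int dimension() const override { return 1; }
            std::vector<double> extract(const Patch& p) override {
                double sum = 0.0;
                for (std::uint8_t v : p.pixels) sum += v;
                double n = p.pixels.empty() ? 1.0 : static_cast<double>(p.pixels.size());
                return {sum / n};
            }
        };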

    Metal Nanoparticle Synthesis
