
    Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools

    Many people struggle to control their use of digital devices. However, our understanding of the design mechanisms that support user self-control remains limited. In this paper, we make two contributions to HCI research in this space: first, we analyse 367 apps and browser extensions from the Google Play, Chrome Web, and Apple App stores to identify common core design features and intervention strategies afforded by current tools for digital self-control. Second, we adapt and apply an integrative dual systems model of self-regulation as a framework for organising and evaluating the design features found. Our analysis aims to help the design of better tools in two ways: (i) by identifying how, through a well-established model of self-regulation, current tools overlap and differ in how they support self-control; and (ii) by using the model to reveal underexplored cognitive mechanisms that could aid the design of new tools. Comment: 11.5 pages (excl. references), 6 figures, 1 table

    Development and Evaluation of the Nebraska Assessment of Computing Knowledge

    One way to increase the quality of computing education research is to increase the quality of the measurement tools that are available to researchers, especially measures of students’ knowledge and skills. This paper represents a step toward increasing the number of available, thoroughly evaluated tests that can be used in computing education research by evaluating the psychometric properties of a multiple-choice test designed to differentiate undergraduate students in terms of their mastery of foundational computing concepts. Classical test theory and item response theory analyses are reported and indicate that the test is a reliable, psychometrically sound instrument suitable for research with undergraduate students. Limitations and the importance of using standardized measures of learning in education research are discussed.
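
    As a rough illustration of the classical test theory analyses such a paper typically reports, the Python sketch below computes item difficulty, item-rest discrimination, and KR-20 reliability for dichotomously scored items. The response matrix is random stand-in data and all dimensions are arbitrary assumptions, not figures from the Nebraska Assessment of Computing Knowledge.

        import numpy as np

        # Stand-in data: 200 students answering 30 dichotomously scored items.
        rng = np.random.default_rng(42)
        responses = (rng.random((200, 30)) < 0.6).astype(int)

        totals = responses.sum(axis=1)
        difficulty = responses.mean(axis=0)          # proportion correct per item

        # Point-biserial discrimination: correlation of each item with the rest-score.
        discrimination = np.array([
            np.corrcoef(responses[:, i], totals - responses[:, i])[0, 1]
            for i in range(responses.shape[1])
        ])

        # KR-20 reliability (Cronbach's alpha for dichotomous items).
        k = responses.shape[1]
        item_var = (difficulty * (1 - difficulty)).sum()
        kr20 = k / (k - 1) * (1 - item_var / totals.var(ddof=1))

        print(f"mean difficulty {difficulty.mean():.2f}, "
              f"median discrimination {np.median(discrimination):.2f}, KR-20 {kr20:.2f}")

    With independent random responses, KR-20 comes out near zero; on real test data these same statistics are what support (or undermine) a claim of reliability.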

    Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    In the finite difference method commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available; however, at distances far from the cuboidal cell, its numerical evaluation can be very inaccurate. Because of this large-distance inaccuracy, numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but switch to a formula based on an asymptotic expansion at distances far from the originating cell. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression, which is best for short distances, (ii) the asymptotic expansion, best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) at intermediate distances. For all three methods, we show measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii).
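
    The core of the proposed method is the numerical evaluation of a multidimensional integral per tensor element. The sketch below illustrates the idea in Python with a plain tensor-product Gauss-Legendre rule rather than the paper's sparse grid scheme, and with the scalar 1/|r| kernel over two unit cells rather than the second-derivative demagnetizing tensor components actually required; the cell sizes, quadrature order, and displacement are assumptions for illustration only.

        import numpy as np
        from itertools import product

        def cell_pair_integral(R, order=4):
            """Integrate 1/|R + r_src - r_tgt| with r_src in a unit source cell and
            r_tgt in a unit target cell, for a cell displacement R, using a
            6-dimensional tensor-product Gauss-Legendre rule."""
            nodes, weights = np.polynomial.legendre.leggauss(order)
            x = 0.5 * (nodes + 1.0)        # map quadrature nodes from [-1, 1] to [0, 1]
            w = 0.5 * weights              # rescale weights for the unit interval
            R = np.asarray(R, dtype=float)
            total = 0.0
            for idx in product(range(order), repeat=6):
                r_src = np.array([x[idx[0]], x[idx[1]], x[idx[2]]])
                r_tgt = np.array([x[idx[3]], x[idx[4]], x[idx[5]]])
                weight = np.prod([w[k] for k in idx])
                total += weight / np.linalg.norm(R + r_src - r_tgt)
            return total

        # Ten cell sizes away, the result approaches 1/|R| = 0.1 for unit cells.
        print(cell_pair_integral([10.0, 0.0, 0.0]))

    A sparse grid (Smolyak) rule serves the same role as the tensor-product rule here, but needs far fewer kernel evaluations in six dimensions, which is why it is attractive for the intermediate-distance regime.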

    Multi-layer Architecture For Storing Visual Data Based on WCF and Microsoft SQL Server Database

    In this paper we present a novel architecture for storing visual data. Effective storing, browsing, and searching of image collections is one of the most important challenges of computer science. The design of an architecture for storing such data requires a set of tools and frameworks, such as SQL database management systems and service-oriented frameworks. The proposed solution is based on a multi-layer architecture, which allows any component to be replaced without recompiling the others. The approach comprises five components: Model, Base Engine, Concrete Engine, CBIR service, and Presentation. They are based on two well-known design patterns: Dependency Injection and Inversion of Control. For experimental purposes we implemented the SURF local interest point detector as the feature extractor and K-means clustering as the indexer. The presented architecture is intended for simulation of content-based retrieval systems as well as for real-world CBIR tasks. Comment: Accepted for the 14th International Conference on Artificial Intelligence and Soft Computing, ICAISC, June 14-18, 2015, Zakopane, Poland
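
    The "replace any component without recompiling the others" property comes from the two design patterns named above. The Python sketch below shows the constructor-injection idea only; the original stack is WCF/C# with SQL Server, and every class and method name here is invented for illustration, not taken from the paper.

        from typing import List, Protocol

        class FeatureExtractor(Protocol):
            def extract(self, image_path: str) -> List[float]: ...

        class Indexer(Protocol):
            def assign(self, descriptor: List[float]) -> int: ...

        class SurfLikeExtractor:
            def extract(self, image_path: str) -> List[float]:
                # Stand-in for a real SURF descriptor computation.
                return [float(len(image_path)), 0.5, 0.25]

        class KMeansLikeIndexer:
            def assign(self, descriptor: List[float]) -> int:
                # Stand-in for assigning a descriptor to its nearest cluster.
                return int(sum(descriptor)) % 8

        class CbirService:
            """Depends only on the two interfaces above, so either component can be
            swapped without touching (or recompiling) this class."""
            def __init__(self, extractor: FeatureExtractor, indexer: Indexer):
                self.extractor = extractor
                self.indexer = indexer

            def index_image(self, image_path: str) -> int:
                return self.indexer.assign(self.extractor.extract(image_path))

        service = CbirService(SurfLikeExtractor(), KMeansLikeIndexer())
        print(service.index_image("example.jpg"))

    In the paper's terms, the concrete extractor and indexer would live in the Concrete Engine layer, while the CBIR service only ever sees the abstractions.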

    Chiral phase transition of (2+1)-flavor QCD

    We present here results on the determination of the critical temperature in the chiral limit for (2+1)-flavor QCD. We propose two novel estimators of the chiral critical temperature for which the quark mass dependence is strongly suppressed compared to the conventional estimator based on pseudo-critical temperatures. We have used the HISQ/tree action for the numerical simulations on lattices with three different temporal extents, $N_{\tau} = 6$, 8, 12, and varied the aspect ratio over the range $4 \leq N_{\sigma}/N_{\tau} \leq 8$. To approach the chiral limit, the light quark mass has been decreased while keeping the strange quark mass fixed at its physical value. Our simulations correspond to the range of pion masses 55 MeV $\leq m_{\pi} \leq$ 160 MeV. Comment: Prepared for the proceedings of Quark Matter 201
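
    For context on why suppressed quark mass dependence matters: in the standard scaling picture (a general textbook form, not the specific estimators proposed in this contribution), pseudo-critical temperatures defined from chiral observables approach the chiral-limit value $T_c^{0}$ only as a fractional power of the quark mass ratio, so the conventional approach needs a delicate extrapolation:

        % Hedged sketch of the conventional quark-mass dependence; the estimators
        % and fit forms actually used in the proceedings may differ.
        T_{pc}(H) \simeq T_c^{0}\,\bigl(1 + c\,H^{1/(\beta\delta)}\bigr) + \text{regular terms},
        \qquad H = \frac{m_l}{m_s}

    where $\beta$ and $\delta$ are critical exponents of the relevant 3d universality class. Estimators whose leading $H$-dependence cancels make the extrapolation to $H \to 0$ considerably more stable.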

    Microservice Transition and its Granularity Problem: A Systematic Mapping Study

    Microservices have gained wide recognition and acceptance in the software industry as an emerging architectural style for autonomic, scalable, and more reliable computing. The transition to microservices has been highly motivated by the need for better alignment of technical design decisions with improving the value potential of architectures. Despite microservices' popularity, research still lacks a disciplined understanding of the transition and consensus on the principles and activities underlying "micro-ing" architectures. In this paper, we report on a systematic mapping study that consolidates various views, approaches, and activities that commonly assist in the transition to microservices. The study aims to provide a better understanding of the transition; it also contributes a working definition of the transition and the technical activities underlying it. We term the transition and the technical activities leading to microservice architectures microservitization. We then shed light on a fundamental problem of microservitization: microservice granularity and reasoning about its adaptation as a first-class entity. This study reviews the state of the art and practice related to reasoning about microservice granularity; it reviews the modelling approaches, aspects considered, guidelines, and processes used to reason about microservice granularity. It also identifies opportunities for future research and development related to reasoning about microservice granularity. Comment: 36 pages including references, 6 figures, and 3 tables