
    Using Augmentations as Bridges from Concrete to Abstract Representations

    We describe a pedagogical approach that supports the acquisition of abstraction skills by apprentices in logistics. Apprentices start with a concrete representation, a small-scale model designed to engage them in learning activities. Multiple External Representations are then used to progressively introduce more abstract representations displayed on paper-based forms called TinkerSheets. We present the implementation of this approach on the TinkerTable, a tabletop learning environment used in two professional schools by four different teachers, and report observations of its use at different stages of the curriculum with first- and second-year apprentices.

    Personalized Prompt for Sequential Recommendation

    Pre-training models have shown their power in sequential recommendation. Recently, prompts have been widely explored and verified for tuning NLP pre-training models, helping to extract useful knowledge from them more effectively and efficiently for downstream tasks, especially in cold-start scenarios. However, bringing prompt-tuning from NLP to recommendation is challenging, since the tokens in recommendation (i.e., items) do not have explicit, explainable semantics, and the sequence modeling should be personalized. In this work, we first introduce prompts to recommendation and propose a novel Personalized prompt-based recommendation (PPR) framework for cold-start recommendation. Specifically, we build a personalized soft prefix prompt via a prompt generator based on user profiles and enable sufficient training of prompts via prompt-oriented contrastive learning with both prompt- and behavior-based augmentations. We conduct extensive evaluations on various tasks. In both few-shot and zero-shot recommendation, PPR models achieve significant improvements over baselines on various metrics across three large-scale open datasets. We also conduct ablation tests and a sparsity analysis for a better understanding of PPR. Moreover, we verify PPR's universality on different pre-training models and explore other promising downstream tasks, including cross-domain recommendation and user profile prediction.
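The personalized soft prefix prompt described above can be sketched as a small generator network that maps a user-profile vector to a few learnable prefix embeddings prepended to the embedded behavior sequence. The following is a minimal illustrative sketch, assuming a two-layer MLP generator and made-up dimensions; it is not the paper's actual PPR architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def prompt_generator(profile, W1, W2, prompt_len, hidden_dim):
    # Two-layer MLP: user-profile vector -> prompt_len soft prefix embeddings.
    h = np.maximum(profile @ W1, 0.0)              # ReLU hidden layer
    return (h @ W2).reshape(prompt_len, hidden_dim)

profile_dim, hidden_dim, prompt_len, seq_len = 8, 16, 4, 10
W1 = rng.normal(size=(profile_dim, hidden_dim))
W2 = rng.normal(size=(hidden_dim, prompt_len * hidden_dim))

profile = rng.normal(size=profile_dim)               # one user's profile features
item_seq = rng.normal(size=(seq_len, hidden_dim))    # embedded behavior sequence

prompts = prompt_generator(profile, W1, W2, prompt_len, hidden_dim)
prompted_seq = np.vstack([prompts, item_seq])        # prefix prompts + behaviors
print(prompted_seq.shape)  # (14, 16)
```

In a full system the prompted sequence would be fed to the frozen pre-trained sequence model, with only the generator's weights updated during prompt-tuning.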

    Knot contact homology and representations of knot groups

    We study certain linear representations of the knot group that induce augmentations of knot contact homology. This perspective on augmentations enhances our understanding of the relationship between the augmentation polynomial and the A-polynomial of the knot. For example, we show that for 2-bridge knots the polynomials agree, and that this is never the case for (non-2-bridge) torus knots, nor for a family of 3-bridge pretzel knots. In addition, we obtain a lower bound on the meridional rank of the knot. As a consequence, our results give another proof that torus knots and a family of pretzel knots have meridional rank equal to their bridge number. (Comment: revision, published in J. Topology)

    Dividing Complexity to Conquer New Dimensions – Towards a Framework for Designing Augmented Reality Solutions

    Augmented reality (AR) can foster service innovation and thus address some of the most urgent challenges in the service science domain, namely supporting frontline workers while ensuring high safety standards. However, AR remains a complex technology with specific requirements and preconditions that demand expertise to overcome. Based on a case study, we derive a framework for designing AR solutions that helps divide the complexity of designing and developing AR-based services, supporting the adoption and diffusion of AR applications. Such an encompassing perspective on initial AR explorations helps to transform the acquired information into a thorough proof of concept, pilot implementations, and ultimately productive software.

    The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models

    The computer vision world has been regaining enthusiasm for various pre-trained models, including both classical ImageNet supervised pre-training and recently emerged self-supervised pre-training such as SimCLR and MoCo. Pre-trained weights often boost a wide range of downstream tasks, including classification, detection, and segmentation. Recent studies suggest that pre-training benefits from gigantic model capacity. We are hereby curious and ask: after pre-training, does a pre-trained model indeed have to stay large for its downstream transferability? In this paper, we examine supervised and self-supervised pre-trained models through the lens of the lottery ticket hypothesis (LTH). LTH identifies highly sparse matching subnetworks that can be trained in isolation from (nearly) scratch yet still reach the full models' performance. We extend the scope of LTH and ask whether matching subnetworks that enjoy the same downstream transfer performance still exist in pre-trained computer vision models. Our extensive experiments convey an overall positive message: from all pre-trained weights obtained by ImageNet classification, SimCLR, and MoCo, we are consistently able to locate such matching subnetworks at 59.04% to 96.48% sparsity that transfer universally to multiple downstream tasks, with no performance degradation compared to using the full pre-trained weights. Further analyses reveal that subnetworks found from different pre-training tend to yield diverse mask structures and perturbation sensitivities. We conclude that the core LTH observations remain generally relevant in the pre-training paradigm of computer vision, but more delicate discussions are needed in some cases. Code and pre-trained models are available at: https://github.com/VITA-Group/CV_LTH_Pre-training. (Comment: CVPR 2021)
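Matching subnetworks in LTH-style work are typically identified by global magnitude pruning: keep the largest-magnitude fraction of weights and zero out the rest, yielding a binary mask over the pre-trained weights. A minimal sketch of that masking step (simplified to a single weight matrix; the paper's pipeline also rewinds and retrains the surviving weights):

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Global magnitude pruning: prune the smallest-|w| fraction `sparsity`
    of weights, keeping the rest. A simplified illustration, not the
    repository's actual pruning code."""
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))       # number of weights to prune
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(weights) > threshold

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))                # stand-in pre-trained layer
mask = magnitude_mask(w, sparsity=0.9)
subnetwork = w * mask                          # weights surviving pruning
print(round(1 - mask.mean(), 2))               # achieved sparsity
```

The mask is then held fixed while the surviving weights are fine-tuned on a downstream task to test whether the subnetwork "matches" the full model's transfer performance.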

    Towards Parallel Educational Worlds

    Proceedings of: 2011 IEEE Global Engineering Education Conference (EDUCON 2011): Learning Environments and Ecosystems in Engineering Education. Amman, Jordan, 4-6 April 2011.
    Augmented reality, 3D virtual worlds, etc.: the technology has evolved tremendously, and so has its application to the field of education. Digital technologies have advanced to the point where we are reproducing more and more aspects of our life digitally. We have parallel worlds: on the one hand the real world, and on the other virtual worlds that can in fact be linked to the real one. They have different properties, but they can enrich and complement each other. In this paper, we explore the possibilities and challenges of these parallel worlds for educational uses.
    The eMadrid Excellence Network is funded by the Madrid Regional Government (Comunidad de Madrid) under grant No. S2009/TIC-165. We wish to acknowledge stimulating discussions with our partners in the context of the network. Partial support has also been received from the Learn3 project (TIN2008-05163) and the SOLITE project (CYTED 508AC0341).

    Learning-based Image Scale Estimation for Quantitative Visual Inspection of Civil Structures

    The number of civil infrastructure assets (e.g., bridges or roads) has been increasing to meet the demands of growing populations around the world. However, these assets degrade over time due to environmental factors and must be maintained and monitored to ensure the safety of their users. The increasing number of infrastructure assets that deteriorate over time is fast outpacing the rate at which they are inspected and rehabilitated. Currently, the main mode of structure condition assessment is visual inspection, where human inspectors manually identify, classify, track, and measure, as needed, deterioration over time to make assessments of a structure's overall condition. However, the current process is highly time-consuming, expensive, and subject to the inspector's judgement and expertise, which can lead to inconsistent assessments of a given structure when surveyed by several different inspectors over a period of time. As a result, there is a clear need for the current inspection process to be improved in terms of efficiency and consistency. Developments in computer vision algorithms, vision sensors, sensing platforms, and high-performance computing have shown promise in improving current inspection processes to enable consistent and rapid structural assessments. Recent work often involves rapid collection and/or analysis of imagery captured from personnel or mobile data collection platforms (e.g., smart phones, unmanned aerial or ground vehicles) to detect and classify visual features (e.g., structural components or deterioration). These works often use advanced image processing or computer vision algorithms such as convolutional neural networks to detect and/or classify regions of interest. However, a major shortfall of vision-based inspection is the inability to deduce physical measurements (e.g., mm or cm) from the collected images. The lack of an image scale (e.g., pixel/mm) on 2D images does not permit quantitative inspection.
To address this challenge, a learning-based scale estimation technique is proposed. The underlying assumption is that the surface texture of structures, as captured in images, contains enough information to estimate the scale of each corresponding image (e.g., pixel/mm). This permits training a regression model to establish the relationship between surface textures in images and their scales. A convolutional neural network was trained to extract scale-related features from textures captured in images. The trained model is then used to estimate scales for all images captured from surfaces of a structure with similar textures in subsequent inspections. The capability of the proposed technique was demonstrated using data collected from surface textures of three different structures. The average scale estimation error for images of each structure is less than 15%, which is acceptable in typical visual inspection settings. The source code and data are available from a data repository (GitHub).
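The core idea, regressing image scale from texture features, can be illustrated end-to-end with a toy example. The sketch below substitutes a synthetic scalar feature for the CNN's learned features (an assumption for illustration only) and fits a least-squares regression from feature to scale, then checks the relative error against the ~15% figure the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: assume one texture feature per image (the actual work
# extracts features with a CNN) that varies roughly linearly with scale.
true_scales = rng.uniform(2.0, 10.0, size=50)        # ground-truth pixel/mm
features = 0.8 * true_scales + rng.normal(scale=0.2, size=50)

# Fit a least-squares regression mapping feature -> scale.
A = np.column_stack([features, np.ones_like(features)])
coef, *_ = np.linalg.lstsq(A, true_scales, rcond=None)

pred = A @ coef                                      # estimated scales
rel_err = np.abs(pred - true_scales) / true_scales
print(rel_err.mean() < 0.15)                         # within tolerance?
```

A CNN replaces the hand-picked feature precisely because real surface textures do not reduce to a single scalar; the regression structure, however, is the same.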