    What Is sharing: Sustainable lifestyle and neighbor friendship in student community

    Over the last decade, the sharing economy, under this and other names, has been widely discussed. Beyond convenience and saving money, a sustainable lifestyle and friendship with neighbors are among the finer things sharing offers us. On the other hand, it also leaves considerable confusion: researchers and entrepreneurs still argue over how to define it. In this research, I trace the origin and evolution of the sharing economy and give my own definition. Through case studies of NeighborGoods, Peerby and Pumpipumpe, I summarize valuable lessons from their practice and some drawbacks that need to be overcome. I then conducted user research in Helsinki student communities to understand how people there think about sharing. As an output, I made Shrgrp, a service that helps community members share. Through this design research and practice, I came to understand how sharing is reshaping our lifestyle, our relationships with neighbors, and the communities we live in.

    Cohomologies, extensions and deformations of differential algebras with any weights

    As an algebraic study of differential equations, differential algebras have been studied for a century and have become an important area of mathematics. In recent years the area has been extended to the noncommutative associative and Lie algebra contexts, and to the case where the operator identity carries a weight, in order to include difference operators and difference algebras. This paper provides a cohomology theory for differential algebras of any weight. This gives a uniform approach to both the zero-weight case, which is similar to the earlier study of differential Lie algebras, and the nonzero-weight case, which poses new challenges. As applications, abelian extensions of a differential algebra are classified by the second cohomology group. Furthermore, formal deformations of differential algebras are obtained, and the rigidity of a differential algebra is characterized by the vanishing of the second cohomology group. Comment: 21 pages
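
    For orientation, the weight convention here is standard; the following LaTeX sketch states the defining identity of a differential operator of weight $\lambda$ (a standard definition from this literature, not quoted from the paper itself):

        % differential operator of weight \lambda on an associative algebra A
        \[
          d(xy) = d(x)\,y + x\,d(y) + \lambda\, d(x)\,d(y), \qquad x, y \in A.
        \]

    Weight $\lambda = 0$ recovers the classical Leibniz rule, while the forward difference $(df)(x) = f(x+1) - f(x)$ satisfies the identity with weight $\lambda = 1$, which is how difference operators and difference algebras enter the framework.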

    Homotopy Rota-Baxter operators, homotopy $\mathcal{O}$-operators and homotopy post-Lie algebras

    Rota-Baxter operators, $\mathcal{O}$-operators on Lie algebras and their interconnected pre-Lie and post-Lie algebras are important algebraic structures with applications in mathematical physics. This paper introduces the notions of a homotopy Rota-Baxter operator and a homotopy $\mathcal{O}$-operator on a symmetric graded Lie algebra. Their characterization by Maurer-Cartan elements of suitable differential graded Lie algebras is provided. Through the action of a homotopy $\mathcal{O}$-operator on a symmetric graded Lie algebra, we arrive at the notion of an operator homotopy post-Lie algebra, together with its characterization in terms of Maurer-Cartan elements. A cohomology theory of post-Lie algebras is established, with an application to 2-term skeletal operator homotopy post-Lie algebras. Comment: 29 pages
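
    For reference, the strict identities that the homotopy notions lift are standard; in LaTeX (standard definitions from the Rota-Baxter literature, not quoted from the paper): a Rota-Baxter operator of weight $\lambda$ on a Lie algebra $\mathfrak{g}$ is a linear map $R \colon \mathfrak{g} \to \mathfrak{g}$ satisfying

        \[
          [R(x), R(y)] = R\big([R(x), y] + [x, R(y)] + \lambda\,[R(x), R(y)]\big),
        \]

    and an $\mathcal{O}$-operator relative to a representation $\rho \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is a linear map $T \colon V \to \mathfrak{g}$ satisfying

        \[
          [T(u), T(v)] = T\big(\rho(T(u))\,v - \rho(T(v))\,u\big),
        \]

    which recovers a weight-zero Rota-Baxter operator when $V$ is the adjoint representation. The homotopy versions replace these strict identities with coherent families of higher operations on a graded Lie algebra.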

    Towards Higher Ranks via Adversarial Weight Pruning

    Convolutional Neural Networks (CNNs) are hard to deploy on edge devices due to their high computation and storage complexity. As a common practice for model compression, network pruning falls into two major categories, unstructured and structured pruning, of which unstructured pruning consistently performs better. However, unstructured pruning tends toward a structured pattern at high pruning rates, which limits its performance. To this end, we propose a Rank-based PruninG (RPG) method that maintains the ranks of sparse weights in an adversarial manner. In each step, we minimize the low-rank approximation error for the weight matrices using singular value decomposition, and maximize their distance from that approximation by pushing the weight matrices away from their low-rank approximations. This rank-based optimization objective guides sparse weights towards a high-rank topology. The proposed method is applied in a gradual-pruning fashion to stabilize the change of rank during training. Experimental results on various datasets and tasks demonstrate the effectiveness of our algorithm at high sparsity. RPG outperforms the state of the art by 1.13% top-1 accuracy on ImageNet with ResNet-50 at 98% sparsity. The code is available at https://github.com/huawei-noah/Efficient-Computing/tree/master/Pruning/RPG and https://gitee.com/mindspore/models/tree/master/research/cv/RPG. Comment: NeurIPS 2023 Accepted
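
    To make the adversarial rank objective concrete, here is a minimal PyTorch sketch of the idea as described above; the function names, the rank r, and the loss weighting are illustrative assumptions, not the authors' released implementation (see the linked repositories for that):

        import torch

        def low_rank_approx(W, r):
            # Best rank-r approximation of a matrix via truncated SVD
            U, S, Vh = torch.linalg.svd(W, full_matrices=False)
            return U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]

        def rank_penalty(weight, r=8):
            # Hypothetical adversarial rank term: treat the (detached)
            # low-rank approximation as a fixed target and push the sparse
            # weight away from it, i.e. maximize the approximation error.
            W = weight.flatten(1)                 # conv kernel -> 2-D matrix
            W_r = low_rank_approx(W.detach(), r)  # no gradient through the SVD
            return -torch.linalg.norm(W - W_r, ord='fro')

        # Usage sketch: add the penalty for each prunable layer to the task loss,
        # e.g. loss = task_loss + alpha * sum(rank_penalty(m.weight) for m in layers)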

    One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation

    Knowledge distillation (KD) has proven to be a highly effective approach for enhancing model performance through a teacher-student training scheme. However, most existing distillation methods are designed under the assumption that the teacher and student models belong to the same model family, particularly the hint-based approaches. Using centered kernel alignment (CKA) to compare the features learned by heterogeneous teacher and student models, we observe significant feature divergence. This divergence illustrates the ineffectiveness of previous hint-based methods in cross-architecture distillation. To tackle the challenge of distilling heterogeneous models, we propose a simple yet effective one-for-all KD framework, called OFA-KD, which significantly improves distillation performance between heterogeneous architectures. Specifically, we project intermediate features into an aligned latent space, such as the logits space, where architecture-specific information is discarded. Additionally, we introduce an adaptive target enhancement scheme to prevent the student from being disturbed by irrelevant information. Extensive experiments with various architectures, including CNNs, Transformers, and MLPs, demonstrate the superiority of the OFA-KD framework in enabling distillation between heterogeneous architectures. When equipped with OFA-KD, the student models achieve notable performance improvements, with a maximum gain of 8.0% on the CIFAR-100 dataset and 0.7% on the ImageNet-1K dataset. PyTorch code and checkpoints can be found at https://github.com/Hao840/OFAKD
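
    As a rough illustration of projecting intermediate student features into the logits space, here is a minimal PyTorch sketch; the projector architecture, the names, and the temperature are illustrative assumptions rather than the released OFAKD code:

        import torch
        import torch.nn.functional as F

        # Hypothetical projector: map an intermediate CNN feature map into the
        # logits space, where architecture-specific layout is discarded.
        projector = torch.nn.Sequential(
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(512, 100),  # assume 512-dim features, 100 classes
        )

        def logits_space_hint_loss(student_feat, teacher_logits, T=4.0):
            # Match the teacher's softened class distribution from a projected
            # intermediate student feature (standard KD-style KL divergence).
            s_logits = projector(student_feat)
            log_p_s = F.log_softmax(s_logits / T, dim=-1)
            p_t = F.softmax(teacher_logits / T, dim=-1)
            return F.kl_div(log_p_s, p_t, reduction='batchmean') * (T * T)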