100 research outputs found

    Andeta

    Andeta is an interactive novel created to explore the proper balance between story depth and player freedom when gamifying fiction. By placing the reader inside a short story and letting them decide how to act when presented with various situations, Andeta changes and reacts according to the reader's decisions.
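
    As an illustrative aside (not Andeta's actual code), the sketch below shows one minimal way such a choice-driven story can be structured: each scene holds text plus a map from reader choices to follow-up scenes, so the narrative branches with the reader's decisions. All names (`Scene`, `play`, the two-scene example) are hypothetical.

```python
# Minimal sketch of a branching interactive-fiction engine (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Scene:
    text: str
    # Maps the choice shown to the reader -> id of the next scene.
    choices: dict[str, str] = field(default_factory=dict)

def play(scenes: dict[str, Scene], start: str) -> None:
    current = start
    while True:
        scene = scenes[current]
        print(scene.text)
        if not scene.choices:
            break  # terminal scene: the story ends here
        for i, option in enumerate(scene.choices, 1):
            print(f"  {i}. {option}")
        picked = list(scene.choices)[int(input("> ")) - 1]
        current = scene.choices[picked]

# Example: a two-scene story where the reader's choice changes the ending.
story = {
    "intro": Scene("A stranger offers you a sealed letter.",
                   {"Take the letter": "accept", "Refuse it": "refuse"}),
    "accept": Scene("You open it, and the story follows the letter's secret."),
    "refuse": Scene("The stranger leaves; you never learn what it said."),
}
# play(story, "intro")
```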

    The temporary and typological potential for new interaction and publicness

    As a portable typology, pop-up stores appeared in China in 2006 and quickly spread, creating opportunities for meeting up and growing fashion and brand activities. Most are located inside shopping centres or on streets with retail shops. However, they are generally not considered public spaces or positive urban elements. Residents and urban agencies tend to perceive them as temporary and purely functional places that serve commercial interests and lack social and spatial possibilities. This research first aims to provide a classification of the typologies of pop-up stores through a literature review and a field survey in Shanghai. Through observation and documentation of people's behaviours in selected pop-up stores, the research explores whether such shops' temporality, their limited hours of operation, and their spatial configurations have affected people's interactions and activities. We argue that, depending on their location, these stores concurrently offer a new sense of publicness to people immersing themselves in the spaces and places inside and outside them. In addition, this research innovates and advances the understanding of these portable commercial areas by considering their social dimensions and their relation to the larger context. It further investigates how the temporality and flexible needs of the spaces have influenced their design. By studying Shanghai's pop-up stores as representative cases, the study aims to shed light on design strategies for retail pop-up stores that strengthen the positive impact of the new publicness brought by such innovative temporary public spaces.

    Language Model Pre-Training with Sparse Latent Typing

    Modern large-scale Pre-trained Language Models (PLMs) have achieved tremendous success on a wide range of downstream tasks. However, most LM pre-training objectives focus only on text reconstruction and have not sought to learn latent-level interpretable representations of sentences. In this paper, we push language models toward a deeper understanding of sentences by proposing a new pre-training objective, Sparse Latent Typing, which enables the model to sparsely extract sentence-level keywords with diverse latent types. Experimental results show that our model is able to learn interpretable latent type categories in a self-supervised manner without using any external knowledge. Moreover, the language model pre-trained with this objective also significantly improves Information Extraction related downstream tasks in both supervised and few-shot settings. Our code is publicly available at https://github.com/renll/SparseLT. (EMNLP 2022, Oral)
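
    A hedged sketch of the general idea (an assumed design, not the authors' released code): a lightweight head assigns each token either one of K latent types or a reserved null type via Gumbel-Softmax, and a sparsity penalty keeps the number of non-null (keyword) assignments small. The class name `SparseTyperHead` and all hyperparameters below are illustrative.

```python
# Illustrative sketch of a sparse latent typing head (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseTyperHead(nn.Module):
    def __init__(self, hidden: int, num_types: int):
        super().__init__()
        # Index 0 is reserved as the "null" type (token is not a keyword).
        self.scorer = nn.Linear(hidden, num_types + 1)

    def forward(self, token_states: torch.Tensor, tau: float = 1.0):
        logits = self.scorer(token_states)            # (B, T, K+1)
        # Differentiable discrete type assignment via Gumbel-Softmax.
        assign = F.gumbel_softmax(logits, tau=tau, hard=True)
        keyword_mask = 1.0 - assign[..., 0]           # 1 where a real type was chosen
        sparsity_loss = keyword_mask.mean()           # encourages few keywords
        return assign, sparsity_loss

head = SparseTyperHead(hidden=768, num_types=16)
states = torch.randn(2, 12, 768)                      # e.g. encoder outputs
assign, sparsity = head(states)
print(assign.shape, sparsity.item())
```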

    Exploring Structured Semantic Prior for Multi Label Recognition with Incomplete Labels

    Multi-label recognition (MLR) with incomplete labels is very challenging. Recent works strive to explore the image-to-label correspondence in a vision-language model, i.e., CLIP, to compensate for insufficient annotations. Despite promising performance, they generally overlook the valuable prior about the label-to-label correspondence. In this paper, we advocate remedying the deficiency of label supervision for MLR with incomplete labels by deriving a structured semantic prior about the label-to-label correspondence via a semantic prior prompter. We then present a novel Semantic Correspondence Prompt Network (SCPNet), which can thoroughly explore the structured semantic prior. A Prior-Enhanced Self-Supervised Learning method is further introduced to enhance the use of the prior. Comprehensive experiments and analyses on several widely used benchmark datasets show that our method significantly outperforms existing methods on all datasets, demonstrating the effectiveness and superiority of our approach. Our code will be available at https://github.com/jameslahm/SCPNet. (Accepted by the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 202)
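
    As a rough, hedged illustration of what a label-to-label semantic prior could look like (this is not SCPNet's code; see the linked repository for the actual method), one can embed label prompts with CLIP's text encoder and take pairwise cosine similarities, then use the resulting matrix to propagate scores from observed labels toward missing ones. The prompt template, label set, and propagation rule below are assumptions.

```python
# Hedged sketch: a label-to-label prior from CLIP text embeddings (illustrative).
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

labels = ["person", "dog", "leash", "frisbee"]          # example label set
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    tokens = clip.tokenize([f"a photo of a {c}" for c in labels]).to(device)
    text_feat = model.encode_text(tokens).float()
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    prior = text_feat @ text_feat.t()                   # (L, L) label-to-label prior

# Example use: pseudo-score an unannotated label from observed label scores.
observed = torch.tensor([0.9, 0.8, 0.0, 0.1], device=device)  # hypothetical logits
pseudo = prior @ observed / prior.sum(dim=-1)           # prior-weighted propagation
print(prior.shape, pseudo)
```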

    DOTA: A Dynamically-Operated Photonic Tensor Core for Energy-Efficient Transformer Accelerator

    The wide adoption and significant computing resource consumption of attention-based Transformers, e.g., Vision Transformers and large language models, have driven demand for efficient hardware accelerators. While electronic accelerators have been commonly used, there is growing interest in photonics as an alternative technology due to its high energy efficiency and ultra-fast processing speed. Optical neural networks (ONNs) have demonstrated promising results for convolutional neural network (CNN) workloads that require only weight-static linear operations. However, they fail to efficiently support Transformer architectures with attention operations because they lack the ability to process dynamic full-range tensor multiplication. In this work, we propose a customized high-performance and energy-efficient photonic Transformer accelerator, DOTA. To overcome the fundamental limitation of existing ONNs, we introduce a novel photonic tensor core, consisting of a crossbar array of interference-based optical vector dot-product engines, that supports highly parallel, dynamic, and full-range matrix-matrix multiplication. Our comprehensive evaluation demonstrates that DOTA achieves a >4x energy and a >10x latency reduction compared to prior photonic accelerators, and delivers over 20x energy reduction and 2 to 3 orders of magnitude lower latency compared to the electronic Transformer accelerator. Our work highlights the immense potential of photonic computing for efficient hardware accelerators, particularly for advanced machine learning workloads. (A short version was accepted by the Next-Gen AI System Workshop at MLSys 202)
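
    To make the crossbar idea concrete, here is a purely behavioral sketch (an assumption for illustration, not a model of DOTA's photonic hardware): an R x C array of vector dot-product engines produces one R x C output tile per parallel "cycle", so an (M, K) x (K, N) multiplication needs ceil(M/R) * ceil(N/C) cycles.

```python
# Behavioral sketch of tiling a matmul over a crossbar of dot-product engines.
import numpy as np

def crossbar_matmul(A: np.ndarray, B: np.ndarray, rows: int, cols: int):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    out = np.zeros((M, N))
    cycles = 0
    for i0 in range(0, M, rows):             # tile output rows over engine rows
        for j0 in range(0, N, cols):         # tile output cols over engine cols
            for i in range(i0, min(i0 + rows, M)):
                for j in range(j0, min(j0 + cols, N)):
                    # one dot-product engine handles one output element
                    out[i, j] = A[i] @ B[:, j]
            cycles += 1                       # the whole tile fires in parallel
    return out, cycles

A, B = np.random.rand(64, 32), np.random.rand(32, 48)
out, cycles = crossbar_matmul(A, B, rows=8, cols=8)
assert np.allclose(out, A @ B)
print("parallel cycles:", cycles)             # 8 row tiles * 6 col tiles = 48
```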

    Utility of chest CT in diagnosis of COVID-19 pneumonia

    PURPOSE: We aimed to explore the imaging findings of computed tomography (CT) in diagnosing coronavirus disease 2019 (COVID-19) and its clinical value for further evaluation of suspected cases. METHODS: The files of 155 patients who visited the fever clinics at our hospital and affiliated hospitals from January 20 to February 9, 2020 were reviewed. Among them, 140 cases (82 males and 58 females) were included as suspected COVID-19 cases based on clinical and epidemiological history; the CT image features of 70 cases with suggestive findings on CT, confirmed by a positive nucleic acid test, were analyzed and evaluated. The sensitivity and specificity of CT in diagnosing COVID-19 were evaluated in patients with an epidemiological history. RESULTS: Of the 70 patients, 84.3% showed bilateral lung involvement on CT; 27 cases (38.6%) showed ground-glass opacity (GGO), mostly distributed in the subpleural area (55.7%), and this sign was mainly observed in early COVID-19 patients. In addition, 41 cases (58.6%) manifested GGO combined with focal consolidation opacity and 2 (2.8%) had flake-like consolidation opacity, with involvement of both the peripheral lung fields and the central zone (44.3%); this sign was mostly observed in severe or critical patients. Concomitant signs such as pleural effusion and mediastinal lymph node enlargement were rare. Among patients with an epidemiological history, the sensitivity of CT in diagnosing COVID-19 was 89.7% (70/78), and the specificity was 88.7% (55/62). CONCLUSION: CT shows high sensitivity and specificity in diagnosing COVID-19. CT is an important examination method for evaluating suspected cases and assessing disease severity.
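
    For readers who want to verify the reported figures, the short snippet below recomputes sensitivity and specificity from the counts quoted in the abstract (70/78 and 55/62); the helper names are illustrative.

```python
# Worked check: sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# 70 of 78 nucleic-acid-positive patients had suggestive CT findings;
# 55 of 62 negative patients had unremarkable CT.
print(f"sensitivity = {sensitivity(70, 8):.1%}")   # 89.7%
print(f"specificity = {specificity(55, 7):.1%}")   # 88.7%
```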

    YOLOv8-ACU: improved YOLOv8-pose for facial acupoint detection

    Introduction: Acupoint localization is integral to Traditional Chinese Medicine (TCM) acupuncture diagnosis and treatment. Employing intelligent detection models for recognizing facial acupoints can substantially enhance localization accuracy. Methods: This study introduces an advancement of the YOLOv8-pose keypoint detection algorithm tailored for facial acupoints, named YOLOv8-ACU. The model enhances acupoint feature extraction by integrating ECA attention, replaces the original neck module with a lighter Slim-neck module, and improves the loss function using GIoU. Results: The YOLOv8-ACU model achieves impressive accuracy, with an mAP@0.5 of 97.5% and an mAP@0.5–0.95 of 76.9% on our self-constructed datasets. It also reduces model parameters by 0.44M, model size by 0.82 MB, and GFLOPs by 9.3%. Discussion: With its enhanced recognition accuracy and efficiency, along with good generalization ability, YOLOv8-ACU provides significant reference value for facial acupoint localization and detection. This is particularly beneficial for Chinese medicine practitioners engaged in facial acupoint research and intelligent detection.
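
    As a hedged illustration of the attention component mentioned above (the generic ECA formulation, not the authors' exact module), the sketch below shows an Efficient Channel Attention block: global average pooling, a 1D convolution across channels, and a sigmoid gate that reweights each channel of the feature map.

```python
# Minimal sketch of a generic ECA (Efficient Channel Attention) block.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # 1D conv over the channel dimension captures local cross-channel interaction.
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)        # (B, 1, C): per-channel descriptor
        y = self.act(self.conv(y)).view(b, c, 1, 1)
        return x * y                           # reweight each channel

feat = torch.randn(2, 64, 80, 80)              # e.g. a neck feature map
print(ECA()(feat).shape)                       # torch.Size([2, 64, 80, 80])
```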