84 research outputs found

    Predictions of Entropy Reduction Theory on Chinese Relative Clauses

    Get PDF
    Processing difficulty in Chinese has been a challenge for theories of sentence parsing because of the language's sparse morphology and structural ambiguity. I use entropy reduction (ER) theory, based on a minimalist analysis of relative clauses (RCs) and a treebank-based probabilistic grammar, to make predictions about Chinese RCs with varying amounts of temporary ambiguity. The predictions match the results of previous human experiments on Chinese. This provides supporting evidence for the reliability of ER theory and highlights the theory's ability to handle ambiguous sentences given the right grammar and analysis.
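    To make the entropy-reduction metric concrete, the following is a minimal sketch (not the author's implementation) of how per-word entropy reduction can be computed from a toy prefix-conditioned distribution over candidate analyses; the probabilities are invented for illustration.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_reductions(prefix_dists):
    """Given distributions over the remaining analyses after each successive
    word (index 0 = before any word), return the per-word entropy reduction
    max(0, H(i-1) - H(i)), the predicted processing difficulty at each word."""
    return [max(0.0, entropy(prev) - entropy(curr))
            for prev, curr in zip(prefix_dists, prefix_dists[1:])]

# Toy example: three candidate analyses whose conditional probabilities
# sharpen as words come in (numbers are illustrative, not from the paper).
dists = [
    [0.5, 0.3, 0.2],   # before word 1
    [0.7, 0.2, 0.1],   # after word 1
    [0.9, 0.1],        # after word 2 (one analysis ruled out, renormalized)
]
print(entropy_reductions(dists))  # predicted difficulty at words 1 and 2
```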

    An Acoustic Phonetic Portfolio of a Chinese-Accented English Idiolect

    Get PDF
    This acoustic portfolio contains four sections comprising nine voice data analysis projects. The first section represents my pronunciation of English using the International Phonetic Alphabet (IPA). The second section describes spectrogram analysis of the vowels and consonants as I pronounce them. The third section focuses on the acoustic correlates I use to express lexical stress in homographic and multi-syllabic words. The fourth and final section investigates the phonological rules that apply in my pronunciation of the word . Praat and NORM are the two acoustic analysis software packages used in this study.
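    As a rough, hedged illustration of the kind of spectrogram analysis described in the second section (the portfolio itself uses Praat and NORM, not this code), the sketch below computes a wideband spectrogram and a crude per-frame spectral peak with SciPy; the file name is a placeholder.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load a mono recording (the path is a placeholder, not from the portfolio).
rate, samples = wavfile.read("vowel_recording.wav")
if samples.ndim > 1:                      # fold stereo down to mono
    samples = samples.mean(axis=1)

# Wideband spectrogram: a short analysis window (~5 ms) smears harmonics
# and makes vowel formant bands visible, similar to Praat's default view.
freqs, times, power = spectrogram(
    samples, fs=rate, nperseg=int(0.005 * rate), noverlap=int(0.004 * rate)
)

# Crude per-frame spectral peak: the frequency bin with most energy below
# 5 kHz (Praat's LPC-based formant tracker is far more robust than this).
band = freqs < 5000
peak_freqs = freqs[band][np.argmax(power[band], axis=0)]
print(peak_freqs[:10])
```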

    Adverbial Phrase Placements in L1-Chinese ESL Learners' Writing

    Get PDF
    The authors of this study examined 49 samples of English writing from 14 Chinese students enrolled in College ESL. Adverbial placements were recorded, categorized, and analyzed. Although some types of adverbials are positioned differently in Chinese than in English, the authors found that this difference was not a primary factor in the students' English writing. Students greatly favored sentence-initial and post-verbal placement of adverbs and made extensive use of modality adverbs. Possible reasons for these placement and usage patterns include ease of transfer, explicit instruction, and a perception of prestige.

    SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction

    Full text link
    3D occupancy prediction is an important task for the robustness of vision-centric autonomous driving; it aims to predict whether each point in the surrounding 3D space is occupied. Existing methods usually require 3D occupancy labels to produce meaningful results, but annotating the occupancy status of each voxel is very laborious. In this paper, we propose SelfOcc, which explores a self-supervised way to learn 3D occupancy using only video sequences. We first transform the images into 3D space (e.g., a bird's-eye view) to obtain a 3D representation of the scene. We impose constraints directly on the 3D representations by treating them as signed distance fields. We can then render 2D images of previous and future frames as self-supervision signals to learn the 3D representations. We also propose an MVS-embedded strategy to directly optimize the SDF-induced weights with multiple depth proposals. SelfOcc outperforms the previous best method, SceneRF, by 58.7% with a single frame as input on SemanticKITTI and is the first self-supervised work that produces reasonable 3D occupancy for surround cameras on nuScenes. SelfOcc also produces high-quality depth and achieves state-of-the-art results for novel depth synthesis, monocular depth estimation, and surround-view depth estimation on SemanticKITTI, KITTI-2015, and nuScenes, respectively. Code is available at: https://github.com/huang-yh/SelfOcc.
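    As a hedged illustration of the SDF-based rendering idea described above (not SelfOcc's exact formulation), the sketch below converts SDF samples along a single camera ray into NeuS-style volume-rendering weights; the sharpness parameter and the toy ray are invented for illustration.

```python
import numpy as np

def sdf_to_render_weights(sdf, s=50.0):
    """Convert SDF samples along one ray into volume-rendering weights,
    NeuS-style: sigmoid(s * sdf) acts as an occupancy-like CDF, and its
    decrease between consecutive samples becomes the per-interval alpha.
    This is an illustrative approximation, not SelfOcc's implementation."""
    cdf = 1.0 / (1.0 + np.exp(-s * sdf))                  # Phi_s(sdf)
    alpha = np.clip((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-6), 0.0, 1.0)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]
    return transmittance * alpha                          # weight per interval

# A ray passing through a surface located at depth ~2.0 m.
depths = np.linspace(0.5, 4.0, 64)
sdf = 2.0 - depths                                        # positive outside, negative inside
w = sdf_to_render_weights(sdf)
print(depths[:-1][np.argmax(w)])                          # peak weight lands near 2.0
```

    The rendering weights concentrate near the zero crossing of the SDF, which is how photometric or depth losses rendered from such weights can supervise the 3D representation without voxel-level occupancy labels.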

    Embedding, covert movement, and intervention in Kathmandu Newari

    Get PDF
    In this paper, we explore the syntax of wh-dependencies in Newari (Sino-Tibetan). We examine the patterns of intervention and island effects in wh-in-situ configurations and find that sensitivity to these constraints often co-occurs. We thus argue that Newari permits wh-operators either to move covertly to fix their scope or to take scope in situ via focus alternative composition. Additionally, we argue that clausal complements to verbs (“verbal argument CPs”) may be islands for covert movement in this language.

    Exploring Unified Perspective For Fast Shapley Value Estimation

    Full text link
    Shapley values have emerged as a widely accepted and trustworthy tool, grounded in theoretical axioms, for addressing the challenges posed by black-box models such as deep neural networks. However, computing Shapley values has exponential complexity in the number of features. Various approaches, including ApproSemivalue, KernelSHAP, and FastSHAP, have been explored to expedite the computation. We analyze the consistency of existing works and conclude that stochastic estimators can be unified as linear transformations of importance sampling over feature subsets. Based on this, we investigate the possibility of designing simple amortized estimators and propose a straightforward and efficient one, SimSHAP, by eliminating redundant techniques. Extensive experiments on tabular and image datasets validate the effectiveness of SimSHAP, which significantly accelerates the computation of accurate Shapley values.
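    For readers unfamiliar with sampling-based Shapley estimation, the sketch below shows a generic Monte Carlo permutation estimator over feature subsets; it is a baseline for illustration, not SimSHAP, and `model`, `x`, and `baseline` are placeholders.

```python
import numpy as np

def permutation_shapley(model, x, baseline, n_permutations=200, rng=None):
    """Baseline Monte Carlo estimator of per-feature Shapley values for a
    single input x. Features not yet added to the coalition are masked with
    a baseline value. Generic estimator for illustration, not SimSHAP."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_permutations):
        order = rng.permutation(d)
        current = baseline.copy()
        prev_out = model(current)
        for j in order:
            current[j] = x[j]                  # add feature j to the coalition
            out = model(current)
            phi[j] += out - prev_out           # marginal contribution of j
            prev_out = out
    return phi / n_permutations

# Toy model: a weighted sum, whose exact Shapley values are w * (x - baseline).
w = np.array([1.0, -2.0, 0.5])
model = lambda v: float(w @ v)
x, baseline = np.array([1.0, 1.0, 1.0]), np.zeros(3)
print(permutation_shapley(model, x, baseline))   # approx. [1.0, -2.0, 0.5]
```

    For a linear model this estimator recovers the exact values w * (x - baseline), which makes it a convenient sanity check when evaluating faster amortized estimators.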

    OccWorld: Learning a 3D Occupancy World Model for Autonomous Driving

    Full text link
    Understanding how a 3D scene evolves is vital for decision making in autonomous driving. Most existing methods achieve this by predicting the movements of object boxes, which cannot capture finer-grained scene information. In this paper, we explore a new framework, OccWorld, that learns a world model in the 3D occupancy space to simultaneously predict the movement of the ego car and the evolution of the surrounding scene. We propose to learn a world model based on 3D occupancy rather than 3D bounding boxes and segmentation maps for three reasons: 1) expressiveness: 3D occupancy can describe the fine-grained 3D structure of the scene; 2) efficiency: 3D occupancy is more economical to obtain (e.g., from sparse LiDAR points); 3) versatility: 3D occupancy can adapt to both vision and LiDAR. To facilitate modeling of the world's evolution, we learn a reconstruction-based scene tokenizer on the 3D occupancy to obtain discrete scene tokens that describe the surrounding scenes. We then adopt a GPT-like spatial-temporal generative transformer to generate subsequent scene and ego tokens, which are decoded into the future occupancy and ego trajectory. Extensive experiments on the widely used nuScenes benchmark demonstrate the ability of OccWorld to effectively model the evolution of driving scenes. OccWorld also produces competitive planning results without using instance and map supervision. Code is available at: https://github.com/wzzheng/OccWorld.
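    As a hedged sketch of the GPT-like next-scene-token idea (not OccWorld's actual tokenizer or architecture), the code below autoregressively generates the discrete tokens of the next frame from past frame tokens with a small causal transformer; all sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinySceneForecaster(nn.Module):
    """Minimal stand-in for a GPT-like model over discrete scene tokens:
    predicts the next token code from past tokens. Sizes and architecture
    are illustrative, not OccWorld's."""
    def __init__(self, vocab=512, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab)

    def forward(self, token_ids):                        # (B, sequence_length)
        h = self.embed(token_ids)
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1))
        h = self.backbone(h, mask=mask)                  # causal: only past context
        return self.head(h)                              # next-token logits

model = TinySceneForecaster()
tokens_per_frame = 16
seq = torch.randint(0, 512, (2, 3 * tokens_per_frame))  # 2 sequences, 3 past frames
for _ in range(tokens_per_frame):                        # generate the next frame token by token
    logits = model(seq)
    nxt = logits[:, -1].argmax(-1, keepdim=True)         # last position predicts the next token
    seq = torch.cat([seq, nxt], dim=1)
next_frame = seq[:, -tokens_per_frame:]
print(next_frame.shape)                                  # torch.Size([2, 16])
```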

    Temporal Knowledge Graph Completion: A Survey

    Full text link
    Knowledge graph completion (KGC) can predict missing links and is crucial for real-world knowledge graphs, which widely suffer from incompleteness. KGC methods assume the knowledge graph is static, which may lead to inaccurate predictions because many facts in knowledge graphs change over time. Recently, emerging methods have shown improved predictive results by further incorporating the timestamps of facts; this line of work is known as temporal knowledge graph completion (TKGC). With this temporal information, TKGC methods can learn the dynamic evolution of the knowledge graph that KGC methods fail to capture. In this paper, we summarize the recent advances in TKGC research for the first time. First, we detail the background of TKGC, including the problem definition, benchmark datasets, and evaluation metrics. Then, we summarize existing TKGC methods based on how the timestamps of facts are used to capture the temporal dynamics. Finally, we conclude the paper and present future research directions for TKGC.
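    As one concrete example of the timestamp-aware scoring functions such surveys cover, the sketch below implements a TTransE-style translation score that adds a time embedding to the relation; the embeddings are random here, so the ranking is only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 32
# Toy embedding tables (learned by gradient descent in a real model).
entity_emb = {e: rng.normal(size=dim) for e in ["Obama", "USA", "Illinois"]}
relation_emb = {"presidentOf": rng.normal(size=dim)}
time_emb = {"2010": rng.normal(size=dim), "2020": rng.normal(size=dim)}

def ttranse_score(head, relation, tail, timestamp):
    """TTransE-style plausibility score for a timestamped triple:
    -||h + r + tau - t||. Higher (closer to 0) means more plausible."""
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    tau = time_emb[timestamp]
    return -np.linalg.norm(h + r + tau - t)

# Rank candidate tails for the temporal query ("Obama", "presidentOf", ?, "2010").
candidates = ["USA", "Illinois"]
scores = {c: ttranse_score("Obama", "presidentOf", c, "2010") for c in candidates}
print(max(scores, key=scores.get))   # best candidate (untrained embeddings, so arbitrary here)
```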

    Analysis on Influencing Factors of Adolescents’ Physical Activity from the Perspective of Social Cognitive Theory

    Get PDF
    Objective: To conduct a systematic review and meta-analysis of the literature on the factors influencing adolescent physical activity from the perspective of the social cognitive theory (SCT) model. Methods: Domestic and international databases were searched, and 18 studies meeting the inclusion criteria were included. Effect sizes were pooled with Stata 15.0 and analyzed by subgroup. Results: (1) The SCT model predicted physical activity to a moderate degree (R2 = 17%, P < 0.01, z = 7.59). (2) Meta-analysis of the studies covering self-efficacy, barrier self-efficacy, social support, and social status showed that these factors were significantly correlated with physical activity (N ≥ 75%). (3) The results were heterogeneous across regions, genders, and statistical methods. Conclusion: The SCT model can predict adolescent physical activity to a moderate extent; self-efficacy, barrier self-efficacy, social support, and social status are key predictors of physical activity; and the predictions of the SCT model for adolescent physical activity differ across regions, genders, and cultural environments.
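    As a hedged sketch of the effect-size pooling step (the authors use Stata 15.0; this is not their workflow), the code below performs fixed-effect inverse-variance pooling of Fisher-transformed correlations and reports Cochran's Q as a heterogeneity check; the study numbers are invented.

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes,
    with Cochran's Q as a simple heterogeneity statistic (illustrative only)."""
    effects = np.asarray(effects)
    w = 1.0 / np.asarray(variances)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (effects - pooled) ** 2)   # heterogeneity across studies
    return pooled, se, q

# Illustrative correlations (Fisher z-transformed) from three hypothetical studies.
effects = np.arctanh([0.30, 0.42, 0.25])
variances = [1 / (50 - 3), 1 / (80 - 3), 1 / (60 - 3)]   # var(z) = 1 / (n - 3)
pooled_z, se, q = fixed_effect_pool(effects, variances)
print(np.tanh(pooled_z), se, q)                           # pooled r, SE on the z-scale, Q
```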

    From Wide to Deep: Dimension Lifting Network for Parameter-efficient Knowledge Graph Embedding

    Full text link
    Knowledge graph embedding (KGE), which maps entities and relations into vector representations, is essential for downstream applications. Conventional KGE methods require high-dimensional representations to learn the complex structure of a knowledge graph, but this leads to oversized model parameters. Recent advances reduce parameters via low-dimensional entity representations while developing techniques (e.g., knowledge distillation or reinvented representation forms) to compensate for the reduced dimension. However, such operations introduce complicated computations and model designs that may not benefit large knowledge graphs. To seek a simple strategy for improving the parameter efficiency of conventional KGE models, we take inspiration from the observation that, for compositional structures, deeper neural networks require exponentially fewer parameters than wider networks to achieve comparable expressiveness. We view all entity representations as a single-layer embedding network; conventional KGE methods that adopt high-dimensional entity representations effectively widen the embedding network to gain expressiveness. To achieve parameter efficiency, we instead propose a deeper embedding network for entity representations, i.e., a narrow entity embedding layer plus a multi-layer dimension lifting network (LiftNet). Experiments on three public datasets show that, by integrating LiftNet, four conventional KGE methods with 16-dimensional representations achieve link prediction accuracy comparable to the original models with 512-dimensional representations, saving 68.4% to 96.9% of parameters.
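    As a hedged sketch of the dimension-lifting idea (not the paper's exact LiftNet), the code below pairs a narrow entity embedding table with a small shared MLP that lifts embeddings to the width a TransE-style scorer expects; all sizes are illustrative.

```python
import torch
import torch.nn as nn

class LiftedEntityEmbedding(nn.Module):
    """Narrow entity embedding table plus a small MLP that lifts it to the
    dimension a conventional KGE scorer expects. Illustrative sketch of the
    dimension-lifting idea, not the paper's exact LiftNet architecture."""
    def __init__(self, num_entities, narrow_dim=16, lifted_dim=512):
        super().__init__()
        self.table = nn.Embedding(num_entities, narrow_dim)   # few parameters per entity
        self.lift = nn.Sequential(                            # shared across all entities
            nn.Linear(narrow_dim, 128), nn.ReLU(),
            nn.Linear(128, lifted_dim),
        )

    def forward(self, entity_ids):
        return self.lift(self.table(entity_ids))

num_entities, num_relations = 10_000, 200
entities = LiftedEntityEmbedding(num_entities)
relations = nn.Embedding(num_relations, 512)                  # relations stay full-width

def transe_score(h_ids, r_ids, t_ids):
    """TransE plausibility -||h + r - t|| computed on the lifted representations."""
    h, t = entities(h_ids), entities(t_ids)
    r = relations(r_ids)
    return -torch.norm(h + r - t, dim=-1)

print(transe_score(torch.tensor([0]), torch.tensor([3]), torch.tensor([42])))
```

    With these illustrative sizes, the entity table costs 10,000 × 16 parameters plus one shared lifting MLP, instead of 10,000 × 512 for a conventional full-width table, which is the source of the parameter savings described above.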