
    Patent Citation Dynamics Modeling via Multi-Attention Recurrent Networks

    Modeling and forecasting forward citations to a patent is a central task for discovering emerging technologies and measuring the pulse of inventive progress. Conventional methods cast forward-citation forecasting as a temporal point process whose conditional intensity depends on previously received citations. Recent approaches model the conditional intensity with chains of recurrent neural networks to capture memory dependencies, relaxing the restrictions imposed by parametric intensity functions. For patent citations, we observe that forecasting a patent's chain of citations benefits not only from the patent's own history but also from the historical citations of the assignees and inventors associated with that patent. In this paper, we propose a sequence-to-sequence model that employs an attention-of-attention mechanism to capture the dependencies among these multiple time sequences. Furthermore, the proposed model forecasts both the timestamp and the category of a patent's next citation. Extensive experiments on a large patent citation dataset collected from the USPTO demonstrate that the proposed model outperforms state-of-the-art models at forward-citation forecasting.
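The attention-of-attention idea, attending within each related history (the patent's own citations, its assignees', its inventors') and then attending across the per-history summaries, can be illustrated with a minimal NumPy sketch. The function names, dimensions, and the simple two-level dot-product scheme here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Standard scaled dot-product attention over one event sequence."""
    scores = keys @ query / np.sqrt(len(query))
    return softmax(scores) @ values

def attention_of_attention(query, sequences):
    """First attend within each history (patent / assignee / inventor),
    then attend across the resulting per-sequence summaries."""
    summaries = np.stack([attend(query, K, V) for K, V in sequences])
    return attend(query, summaries, summaries)

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=d)
# three histories of different lengths: the patent's own citations,
# the assignee's, and the inventor's
seqs = [(rng.normal(size=(n, d)), rng.normal(size=(n, d))) for n in (5, 7, 3)]
context = attention_of_attention(query, seqs)
print(context.shape)  # (8,)
```

The outer attention lets the model learn how much each history matters for a given patent, rather than fixing their relative weights.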

    Detecting Female Students Transforming Entrepreneurial Competency, Mindset, and Intention into Sustainable Entrepreneurship

    Entrepreneurship has been viewed as an opportunity for economic development and a changing economic landscape in global markets. Women are viewed as a reservoir of entrepreneurial talent, so they can be growth engines in novel markets. Previous studies have considered entrepreneurship the most effective path towards the economic empowerment of women. Female students engaged in entrepreneurial education have been studied persistently, yet what transforms them during the educational process remains unclear. Considering the transforming global economy and its influence on higher education, this study aims to detect how female students transform entrepreneurial competency, mindset, and intention into sustainable entrepreneurship. Using a self-compiled survey, we targeted 752 female students to investigate their entrepreneurial competency, mindset, and intention. SPSS and AMOS were used to process the data for interpretation. We hypothesized that the impact of female students' entrepreneurial competency could be modified by an entrepreneurial mindset and result in entrepreneurial intention. To detect this causal relationship, this study employed reliability analysis, factor analysis, structural equation modeling (SEM), and bootstrapping to verify the evidence. The result of the SEM confirms that female students' entrepreneurial competency impacts entrepreneurial intention through their entrepreneurial mindset. With bootstrapping, 5000 samples were drawn, demonstrating that the measured constructs remained reliable in the model. This study found a mediation effect between entrepreneurial competency and entrepreneurial intention: the entrepreneurial mindset plays a crucial role in the transformation process. Without an entrepreneurial mindset, entrepreneurial competency cannot exert a significant effect on entrepreneurial intention. The findings can help reinvent related entrepreneurial education in higher education.
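The bootstrapped mediation test (competency → mindset → intention) follows the standard percentile-bootstrap indirect-effect procedure. The study itself used SPSS and AMOS, so the NumPy sketch below, run on synthetic data with assumed effect sizes, only illustrates the generic method:

```python
import numpy as np

def ols_slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(competency, mindset, intention):
    """a*b indirect effect: competency -> mindset (a),
    mindset -> intention controlling for competency (b)."""
    a = ols_slope(competency, mindset)
    X = np.column_stack([np.ones_like(competency), competency, mindset])
    b = np.linalg.lstsq(X, intention, rcond=None)[0][2]
    return a * b

def bootstrap_ci(competency, mindset, intention, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(competency)
    effects = [indirect_effect(*(v[rng.integers(0, n, n)]
                                 for v in (competency, mindset, intention)))
               for _ in range(n_boot)]
    return np.percentile(effects, [2.5, 97.5])

# synthetic data with a true mediation path (effect sizes are assumptions)
rng = np.random.default_rng(1)
n = 300
competency = rng.normal(size=n)
mindset = 0.6 * competency + rng.normal(scale=0.5, size=n)
intention = 0.7 * mindset + rng.normal(scale=0.5, size=n)
lo, hi = bootstrap_ci(competency, mindset, intention)
print(lo, hi)  # a CI excluding zero indicates a significant indirect effect
```

Note the resampling bug to avoid: each bootstrap draw must reuse the *same* row indices across all three variables, which the generator expression above does not do; in practice one draws `idx = rng.integers(0, n, n)` once per replicate and indexes all three arrays with it.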

    LLM4TS: Two-Stage Fine-Tuning for Time-Series Forecasting with Pre-Trained LLMs

    In this work, we leverage pre-trained Large Language Models (LLMs) to enhance time-series forecasting. Mirroring the growing interest in unifying models for Natural Language Processing and Computer Vision, we envision creating an analogous model for long-term time-series forecasting. Because large-scale time-series data for building robust foundation models are limited, our approach, LLM4TS, focuses on leveraging the strengths of pre-trained LLMs. By combining time-series patching with temporal encoding, we have enhanced the capability of LLMs to handle time-series data effectively. Inspired by supervised fine-tuning in the chatbot domain, we prioritize a two-stage fine-tuning process: first conducting supervised fine-tuning to orient the LLM towards time-series data, followed by task-specific downstream fine-tuning. Furthermore, to unlock the flexibility of pre-trained LLMs without extensive parameter adjustments, we adopt several Parameter-Efficient Fine-Tuning (PEFT) techniques. Drawing on these innovations, LLM4TS has yielded state-of-the-art results in long-term forecasting. Our model has also shown exceptional capabilities as both a robust representation learner and an effective few-shot learner, thanks to the knowledge transferred from the pre-trained LLM.
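Time-series patching, segmenting a series into fixed-length windows that play the role of tokens for the LLM, can be sketched as follows. The patch length and stride here are illustrative choices, not the paper's settings:

```python
import numpy as np

def patchify(series, patch_len, stride):
    """Split a 1-D series into (possibly overlapping) patches,
    each of which becomes one "token" for the downstream model."""
    n_patches = (len(series) - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len]
                     for i in range(n_patches)])

series = np.arange(16, dtype=float)
patches = patchify(series, patch_len=4, stride=2)
print(patches.shape)  # (7, 4)
```

Each patch is then linearly projected into the LLM's embedding space and combined with a temporal encoding, so the frozen backbone sees a sequence of patch embeddings rather than raw scalar values.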

    A Hierarchical Context-aware Modeling Approach for Multi-aspect and Multi-granular Pronunciation Assessment

    Automatic Pronunciation Assessment (APA) plays a vital role in Computer-assisted Pronunciation Training (CAPT) when evaluating a second-language (L2) learner's speaking proficiency. However, an apparent downside of most de facto methods is that they parallelize the modeling process across different speech granularities without accounting for the hierarchical and local contextual relationships among them. In light of this, a novel hierarchical approach is proposed in this paper for multi-aspect and multi-granular APA. Specifically, we first introduce the notion of sup-phonemes to explore more subtle semantic traits of L2 speakers. Second, a depth-wise separable convolution layer is exploited to better encapsulate the local context cues at the sub-word level. Finally, we use a score-restraint attention pooling mechanism to predict sentence-level scores and optimize the component models within a multi-task learning (MTL) framework. Extensive experiments carried out on a publicly available benchmark dataset, viz. speechocean762, demonstrate the efficacy of our approach in relation to some cutting-edge baselines.
    Comment: Accepted to Interspeech 202
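A depth-wise separable convolution factors a full convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel mixer, which is what makes it a cheap way to capture local context. A minimal 1-D NumPy sketch, with illustrative shapes rather than the paper's configuration:

```python
import numpy as np

def depthwise_separable_conv1d(x, dw_kernels, pw_weights):
    """x: (T, C) feature sequence; dw_kernels: (C, K), one filter per channel;
    pw_weights: (C, C_out), the 1x1 pointwise mixing across channels."""
    T, C = x.shape
    K = dw_kernels.shape[1]
    out_T = T - K + 1  # 'valid' convolution output length
    dw = np.empty((out_T, C))
    for c in range(C):
        # reverse the kernel so np.convolve computes cross-correlation
        dw[:, c] = np.convolve(x[:, c], dw_kernels[c][::-1], mode="valid")
    # pointwise 1x1 convolution: mix channels at each time step
    return dw @ pw_weights

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 4))   # 10 frames, 4 input channels
dw = rng.normal(size=(4, 3))   # kernel size 3, one kernel per channel
pw = rng.normal(size=(4, 6))   # expand to 6 output channels
y = depthwise_separable_conv1d(x, dw, pw)
print(y.shape)  # (8, 6)
```

Compared with a full convolution's C*C_out*K weights, this uses only C*K + C*C_out, which suits the relatively small sub-word-level modeling described above.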

    Power-efficient memory bus encoding using stride-based stream reconstruction

    With the rapid increase in chip complexity and the popularity of portable devices, performance is no longer the only important constraint in embedded systems. Instead, energy consumption has become one of the main design issues for contemporary embedded systems, especially for the I/O interface, due to the high capacitance involved in bus transitions. In this paper, we propose a bus encoding scheme that may reduce transitions by reconstructing active address streams with variable cached strides. The key idea is to obtain the variable strides for different sets of active address streams so that the decoder can reconstruct these interlaced streams from those strides. Instead of sending the full address, the encoder may send only a partial address or a stride, using either one-hot or binary-inversion encoding. To exploit locality and dynamically adjust the stride value of active address streams, we partially compare the previous addresses of existing streams with the current address. Hence, the data transmitted on the bus can be minimally encoded. Experiments with several MediaBench benchmarks show that the scheme achieves an average 60% reduction in bus switching activity.
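The stride-reconstruction idea, where encoder and decoder keep mirrored state so that a correctly predicted address needs only a short "hit" code instead of the full address, can be sketched in Python. This single-stream version with a plain `"HIT"` symbol is a simplification: the actual scheme tracks multiple interlaced streams and encodes misses with one-hot or binary-inversion codes.

```python
def encode(addresses):
    """Emit 'HIT' when the cached stride predicts the next address,
    otherwise the full address; the decoder mirrors this state."""
    out, last, stride = [], None, 0
    for a in addresses:
        if last is not None and a == last + stride:
            out.append("HIT")
        else:
            if last is not None:
                stride = a - last  # re-learn the stride on a miss
            out.append(a)
        last = a
    return out

def decode(symbols):
    """Reconstruct the address stream from the encoded symbols."""
    out, last, stride = [], None, 0
    for s in symbols:
        if s == "HIT":
            a = last + stride
        else:
            a = s
            if last is not None:
                stride = a - last
        out.append(a)
        last = a
    return out

stream = [100, 104, 108, 112, 200, 204, 208]
coded = encode(stream)
print(coded)  # [100, 104, 'HIT', 'HIT', 200, 204, 'HIT']
assert decode(coded) == stream
```

Note how the jump to 200 costs two full transmissions before the stride is re-learned; per-stream stride tables, as in the paper, avoid exactly this penalty for interlaced streams.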