217 research outputs found

    Effective CPU and Battery Utilization and Improved Familiarity Based on a Consciousness Model and Face-Memory Function for Mobile Robots

    University of Tsukuba master's thesis (Informatics), conferred March 25, 2019 (no. 41292)

    Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits

    We consider the adversarial linear contextual bandit problem, where the loss vectors are selected fully adversarially and the per-round action set (i.e. the context) is drawn from a fixed distribution. Existing methods for this problem either require access to a simulator to generate free i.i.d. contexts, achieve a sub-optimal regret no better than $\widetilde{O}(T^{5/6})$, or are computationally inefficient. We greatly improve these results by achieving a regret of $\widetilde{O}(\sqrt{T})$ without a simulator, while maintaining computational efficiency when the action set in each round is small. In the special case of sleeping bandits with adversarial loss and stochastic arm availability, our result answers affirmatively the open question by Saha et al. [2020] on whether there exists a polynomial-time algorithm with $\mathrm{poly}(d)\sqrt{T}$ regret. Our approach naturally handles the case where the loss is linear up to an additive misspecification error, and our regret shows near-optimal dependence on the magnitude of the error.
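As a concrete illustration of the protocol described above (not the paper's algorithm), the following sketch simulates the setting with a placeholder uniform-random policy and measures its regret against the per-round best action; the environment, dimensions, and policy are all assumptions for illustration:

```python
import random

random.seed(0)
d, T = 3, 200  # feature dimension, number of rounds

def linear_loss(theta, a):
    """Loss of action a under loss vector theta (inner product)."""
    return sum(t * x for t, x in zip(theta, a))

learner_loss = best_loss = 0.0
for _ in range(T):
    # context: a small action set drawn fresh each round (i.i.d. contexts)
    actions = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(4)]
    # the adversary may pick the loss vector after seeing the action set
    theta = [random.uniform(-1, 1) for _ in range(d)]
    chosen = random.choice(actions)  # placeholder policy, not the paper's
    learner_loss += linear_loss(theta, chosen)
    best_loss += min(linear_loss(theta, a) for a in actions)

regret = learner_loss - best_loss  # non-negative against this benchmark
```

The paper's contribution is a policy whose regret against such benchmarks grows only as $\widetilde{O}(\sqrt{T})$; the random policy above incurs linear regret.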

    Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback

    We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback, without prior knowledge of transitions or access to simulators. We introduce two algorithms that achieve improved regret performance compared to existing approaches. The first algorithm, although computationally inefficient, ensures a regret of $\widetilde{\mathcal{O}}(\sqrt{K})$, where $K$ is the number of episodes. This is the first result with the optimal $K$ dependence in the considered setting. The second algorithm, which is based on the policy optimization framework, guarantees a regret of $\widetilde{\mathcal{O}}(K^{3/4})$ and is computationally efficient. Both our results significantly improve over the state of the art: a computationally inefficient algorithm by Kong et al. [2023] with $\widetilde{\mathcal{O}}(K^{4/5}+\mathrm{poly}(1/\lambda_{\min}))$ regret, for some problem-dependent constant $\lambda_{\min}$ that can be arbitrarily close to zero, and a computationally efficient algorithm by Sherman et al. [2023b] with $\widetilde{\mathcal{O}}(K^{6/7})$ regret.

    An End-to-End Multi-Task Learning to Link Framework for Emotion-Cause Pair Extraction

    Emotion-cause pair extraction (ECPE), as an emergent natural language processing task, aims at jointly investigating emotions and their underlying causes in documents. It extends the previous emotion cause extraction (ECE) task, yet without requiring a set of pre-given emotion clauses as in ECE. Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes. Such a pipeline method, while intuitive, suffers from two critical issues: error propagation across stages that may hinder effectiveness, and high computational cost that limits practical application. To tackle these issues, we propose a multi-task learning model that can extract emotions, causes, and emotion-cause pairs simultaneously in an end-to-end manner. Specifically, our model regards pair extraction as a link prediction task and learns to link from emotion clauses to cause clauses, i.e., the links are directional. Emotion extraction and cause extraction are incorporated into the model as auxiliary tasks, which further boost pair extraction. Experiments are conducted on an ECPE benchmarking dataset. The results show that our proposed model outperforms a range of state-of-the-art approaches.
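The link-prediction view can be sketched minimally as follows (the clause vectors and threshold are hypothetical, not the authors' architecture): every directed emotion-to-cause clause pair is scored, and pairs above a threshold are extracted.

```python
def link_scores(emo_vecs, cause_vecs):
    """Score every directed (emotion -> cause) clause pair by dot product."""
    return [[sum(e * c for e, c in zip(ev, cv)) for cv in cause_vecs]
            for ev in emo_vecs]

def extract_pairs(scores, threshold=0.0):
    """Keep the directional links whose score clears the threshold."""
    return [(i, j) for i, row in enumerate(scores)
            for j, s in enumerate(row) if s > threshold]
```

Because the scoring is directional (emotion rows, cause columns), the pair (i, j) is not the same link as (j, i), matching the abstract's point that the links are directional.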

    Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes

    The task of deepfake detection is far from being solved by speech or vision researchers. Several publicly available databases of fake synthetic video and speech were built to aid the development of detection methods. However, existing databases typically focus on visual or voice modalities and provide no proof that their deepfakes can in fact impersonate any real person. In this paper, we present the first realistic audio-visual database of deepfakes, SWAN-DF, where lips and speech are well synchronized and the videos have high visual and audio quality. We took the publicly available SWAN dataset of real videos with different identities and created audio-visual deepfakes using several models from DeepFaceLab and blending techniques for face swapping, and the HiFiVC, DiffVC, YourTTS, and FreeVC models for voice conversion. From the publicly available speech dataset LibriTTS, we also created a separate database of audio-only deepfakes, LibriTTS-DF, using several recent text-to-speech methods: YourTTS, Adaspeech, and TorToiSe. We demonstrate the vulnerability of a state-of-the-art speaker recognition system, the ECAPA-TDNN-based model from SpeechBrain, to the synthetic voices. Similarly, we tested a face recognition system based on the MobileFaceNet architecture against several variants of our visual deepfakes. The vulnerability assessment shows that by tuning existing pretrained deepfake models to specific identities, one can successfully spoof the face and speaker recognition systems more than 90% of the time and achieve very realistic-looking and -sounding fake video of a given person.
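The attack surface can be illustrated with a toy cosine-similarity verifier (the embeddings and threshold here are hypothetical; real systems such as the ECAPA-TDNN model compare learned embeddings in a similar way): a deepfake succeeds when its embedding lands within the acceptance threshold of the enrolled identity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def verify(enrolled, probe, threshold=0.7):
    """Accept the claimed identity if similarity clears the threshold."""
    return cosine(enrolled, probe) >= threshold
```

Tuning a deepfake model to a specific identity amounts to pushing the synthetic probe's embedding toward the enrolled one until `verify` accepts it.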

    Effects of aging and macrophages on mice stem Leydig cell proliferation and differentiation in vitro

    Background: Testosterone plays a critical role in maintaining the reproductive functions and well-being of males. Adult testicular Leydig cells (LCs) produce testosterone and are generated from stem Leydig cells (SLCs) from puberty through adulthood. In addition, macrophages are critical in the SLC regulatory niche for normal testicular function. Age-related reduction in serum testosterone contributes to a number of metabolic and quality-of-life changes in males, as well as age-related changes in immunological functions. How aging and testicular macrophages may affect SLC function is still unclear. Methods: SLCs and macrophages were purified from adult and aged mice via FACS using CD51 as a marker protein. The sorted cells were first characterized and then co-cultured in vitro to examine how aging and macrophages may affect SLC proliferation and differentiation. To elucidate specific aging effects on both cell types, co-cultures of sorted SLCs and macrophages were also carried out across the two ages. Results: CD51+ (weakly positive) and CD51++ (strongly positive) cells expressed typical SLC and macrophage markers, respectively. However, with aging, both cell types increased expression of multiple cytokine genes, such as IL-1b, IL-6, and IL-8. Moreover, old CD51+ SLCs showed reduced proliferation and differentiation, with a more significant reduction in differentiation (2-fold) than in proliferation (30%). Age-matched CD51++ macrophages inhibited CD51+ SLC development, with a more significant reduction by old cells (60%) than by young cells (40%). Cross-age co-culture experiments indicated that the age of the CD51+ SLCs plays the more significant role in determining the age-related inhibitory effects. In LC lineage formation, CD51+ SLCs showed both reduced LC lineage markers and increased myoid cell lineage markers, suggesting an age-related lineage shift for SLCs. Conclusion: The results suggest that aging affects both SLC function and their regulatory niche cells, macrophages.

    Exploration of Problems and Key Points in Database Design in Software Development

    Starting from the necessity and principles of database design, this article explores optimization issues. First, it analyzes the necessity of database design, elaborating on effective management, maintainability, resource utilization, and running speed; then it discusses a series of issues in database management, such as user management, data-object design specifications, and overall design ideas; finally, it elaborates in detail on optimization topics such as normalization rules, inter-table redundancy handling, query optimization, indexing, and transactions. Database design is indispensable in the software development lifecycle: its role is not only to ensure the safety and reliability of data, but also the overall stability and speed of the system. Strengthening the rationality and optimization of the design is key to improving software quality.
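One of the optimization points above, indexing, can be demonstrated with a small sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration): the query plan confirms that the lookup uses the index rather than a full table scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
cur.executemany("INSERT INTO users (email) VALUES (?)",
                [(f"user{i}@example.com",) for i in range(1000)])
# The index turns the email lookup from a full scan into a B-tree search
cur.execute("CREATE INDEX idx_users_email ON users(email)")
plan = cur.execute("EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
                   ("user500@example.com",)).fetchall()
```

On large tables the same design choice, indexing the columns that appear in WHERE clauses, is what keeps query latency flat as the data grows.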

    Boosting Semi-Supervised Learning with Contrastive Complementary Labeling

    Semi-supervised learning (SSL) has achieved great success in leveraging large amounts of unlabeled data to learn a promising classifier. A popular approach is pseudo-labeling, which generates pseudo labels only for those unlabeled data with high-confidence predictions. As for the low-confidence ones, existing methods often simply discard them because these unreliable pseudo labels may mislead the model. Nevertheless, we highlight that data with low-confidence pseudo labels can still be beneficial to the training process. Specifically, although the class with the highest probability in the prediction is unreliable, we can assume that the sample is very unlikely to belong to the classes with the lowest probabilities. In this way, such data can also be very informative if we can effectively exploit these complementary labels, i.e., the classes that a sample does not belong to. Inspired by this, we propose a novel Contrastive Complementary Labeling (CCL) method that constructs a large number of reliable negative pairs based on the complementary labels and adopts contrastive learning to make use of all the unlabeled data. Extensive experiments demonstrate that CCL significantly improves performance on top of existing methods. More critically, CCL is particularly effective in label-scarce settings. For example, it yields an improvement of 2.43% over FixMatch on CIFAR-10 with only 40 labeled samples.
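The core idea, reading complementary labels off a low-confidence prediction, can be sketched as follows (taking the k lowest-probability classes; k and the probabilities are illustrative, and the pairing rule is a simplification of CCL's construction):

```python
def complementary_labels(probs, k):
    """Return the k classes the sample is least likely to belong to."""
    order = sorted(range(len(probs)), key=lambda c: probs[c])
    return set(order[:k])

def is_negative_pair(probs_a, probs_b, k):
    """Treat (a, b) as a reliable negative pair when b's predicted class
    falls inside a's complementary-label set."""
    pred_b = max(range(len(probs_b)), key=lambda c: probs_b[c])
    return pred_b in complementary_labels(probs_a, k)
```

Pairs flagged this way can then serve as negatives in a contrastive loss, which is how low-confidence samples contribute to training instead of being discarded.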

    Temporal Interest Network for Click-Through Rate Prediction

    The history of user behaviors constitutes one of the most significant characteristics in predicting the click-through rate (CTR), owing to its strong semantic and temporal correlation with the target item. While the literature has individually examined each of these correlations, research has yet to analyze them in combination, that is, the quadruple correlation of (behavior semantics, target semantics, behavior temporal, target temporal). The effect of this correlation on performance and the extent to which existing methods learn it remain unknown. To address this gap, we empirically measure the quadruple correlation and observe intuitive yet robust quadruple patterns. We measure the correlation learned by several representative user behavior methods and find, to our surprise, that none of them learns such a pattern, especially the temporal one. In this paper, we propose the Temporal Interest Network (TIN) to capture the quadruple semantic and temporal correlation between behaviors and the target. We achieve this by incorporating target-aware temporal encoding, in addition to semantic embedding, to represent behaviors and the target. Furthermore, we deploy target-aware attention, along with target-aware representation, to explicitly conduct the four-way interaction. We performed comprehensive evaluations on the Amazon and Alibaba datasets. Our proposed TIN outperforms the best-performing baselines by 0.43% and 0.29% on the two datasets, respectively. Comprehensive analysis and visualization show that TIN is indeed capable of learning the quadruple correlation effectively, while all existing methods fail to do so. We provide our implementation of TIN in TensorFlow.
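A toy sketch of target-aware attention with a temporal term (the additive time-gap decay is a hand-written simplification, not TIN's learned temporal encoding): each behavior's attention score combines semantic agreement with the target and its time gap to the target.

```python
import math

def target_aware_attention(behaviors, times, target, t_target):
    """Softmax attention over behaviors: each score combines a semantic
    term (behavior . target) with a temporal term (time-gap decay)."""
    def score(b, t):
        semantic = sum(x * y for x, y in zip(b, target))
        temporal = -abs(t_target - t)  # toy stand-in for learned encoding
        return semantic + temporal
    raw = [score(b, t) for b, t in zip(behaviors, times)]
    m = max(raw)  # max-shift for a numerically stable softmax
    w = [math.exp(r - m) for r in raw]
    z = sum(w)
    return [x / z for x in w]
```

With identical item embeddings, the behavior closer in time to the target receives the larger weight, which is exactly the temporal side of the quadruple correlation the abstract describes.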