
    Credit Risk Measurement with Wrong Way Risk

    I start by introducing the corporate bond and several of its important components. Existing credit risk models can be categorized into two groups: structural (firm-value) models and reduced-form (intensity-based) models. I then introduce the risk measure, Value at Risk (VaR), and its computation. Next, I apply this material to the given portfolio to calculate its credit VaR using two methods, the S-critical approach and Monte Carlo simulation. Finally, I present some advanced credit risk models with stochastic interest rates.
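    As a rough illustration of the Monte Carlo approach mentioned above, the sketch below simulates portfolio default losses under a one-factor Gaussian copula and reads the credit VaR off the simulated loss distribution. The exposures, default probabilities, correlation, and the one-factor model itself are illustrative assumptions, not the portfolio or the S-critical method from the thesis.

```python
# Hypothetical Monte Carlo credit VaR sketch for a small bond portfolio.
# All parameters below are illustrative, not taken from the thesis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

exposures = np.array([1_000_000.0, 750_000.0, 500_000.0])  # bond face values
pd = np.array([0.02, 0.05, 0.10])   # 1-year default probabilities
lgd = 0.6                           # loss given default
rho = 0.3                           # common-factor correlation
n_sims = 100_000

# One-factor Gaussian copula: latent asset value = sqrt(rho)*Z + sqrt(1-rho)*eps
z = rng.standard_normal(n_sims)[:, None]
eps = rng.standard_normal((n_sims, len(exposures)))
asset = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps

# Default occurs when the latent variable falls below the PD-implied threshold
thresholds = norm.ppf(pd)
defaults = asset < thresholds

# Simulated portfolio loss distribution and its 99% quantile
losses = (defaults * exposures * lgd).sum(axis=1)
var_99 = np.quantile(losses, 0.99)
cvar_99 = var_99 - losses.mean()    # one common convention: quantile minus expected loss
print(f"99% loss quantile: {var_99:,.0f}, credit VaR (unexpected loss): {cvar_99:,.0f}")
```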

    SAPI: Surroundings-Aware Vehicle Trajectory Prediction at Intersections

    In this work we propose a deep learning model, SAPI, to predict vehicle trajectories at intersections. SAPI represents and encodes the surrounding environment in an abstract way, utilizing information from a real-time map, right-of-way, and surrounding traffic. The proposed model consists of two encoders, based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and one decoder. A refiner is proposed to conduct a look-back operation inside the model, in order to make full use of the raw history-trajectory information. We evaluate SAPI on a proprietary dataset collected by autonomous vehicles at real-world intersections. We demonstrate that SAPI shows promising performance when predicting vehicle trajectories at intersections and outperforms benchmark methods. The average displacement error (ADE) and final displacement error (FDE) for 6-second prediction are 1.84 m and 4.32 m, respectively. We also show that the proposed model can accurately predict vehicle trajectories in different scenarios.
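    The abstract does not give SAPI's exact layer configuration; the following PyTorch sketch only illustrates the described structure: a CNN encoder for the rasterized surroundings, an RNN encoder for the history trajectory, an RNN decoder, and a refiner that looks back at the raw history. All layer sizes, the raster format, and the attention-based refiner are assumptions rather than the authors' design.

```python
# Minimal sketch of a SAPI-style encoder-decoder with a look-back refiner.
# Sizes and the refiner mechanism are assumed, not the published architecture.
import torch
import torch.nn as nn

class SAPISketch(nn.Module):
    def __init__(self, map_channels=3, traj_dim=2, hidden=64, horizon=60):
        super().__init__()
        # CNN encoder for the rasterized surroundings (map, right-of-way, traffic)
        self.cnn = nn.Sequential(
            nn.Conv2d(map_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, hidden),
        )
        # RNN encoder for the raw history trajectory
        self.rnn_enc = nn.GRU(traj_dim, hidden, batch_first=True)
        # RNN decoder that rolls out future positions
        self.decoder = nn.GRU(hidden * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, traj_dim)
        # "Refiner": re-attends to encoded history to adjust the rollout (assumed form)
        self.refiner = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.horizon = horizon

    def forward(self, raster, history):
        ctx = self.cnn(raster)                          # (B, hidden) surroundings feature
        hist_feats, h = self.rnn_enc(history)           # (B, T, hidden), (1, B, hidden)
        dec_in = torch.cat([ctx, h[-1]], dim=-1)        # fuse surroundings + motion
        dec_in = dec_in.unsqueeze(1).repeat(1, self.horizon, 1)
        out, _ = self.decoder(dec_in)
        # Look-back: refine decoded states against the encoded history features
        refined, _ = self.refiner(out, hist_feats, hist_feats)
        return self.head(out + refined)                 # (B, horizon, 2) future x/y offsets

# Usage with dummy tensors
model = SAPISketch()
pred = model(torch.randn(4, 3, 64, 64), torch.randn(4, 20, 2))
print(pred.shape)  # torch.Size([4, 60, 2])
```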

    Capacity Constrained Influence Maximization in Social Networks

    Influence maximization (IM) aims to identify a small number of influential individuals to maximize the information spread and finds applications in various fields. It was first introduced in the context of viral marketing, where a company pays a few influencers to promote the product. However, apart from the cost factor, the capacity of individuals to consume content poses challenges for implementing IM in real-world scenarios. For example, players on online gaming platforms can only interact with a limited number of friends. In addition, we observe that in these scenarios, (i) the initial adopters of promotion are likely to be the friends of influencers rather than the influencers themselves, and (ii) existing IM solutions produce sub-par results with high computational demands. Motivated by these observations, we propose a new IM variant called capacity constrained influence maximization (CIM), which aims to select a limited number of influential friends for each initial adopter such that the promotion can reach more users. To solve CIM effectively, we design two greedy algorithms, MG-Greedy and RR-Greedy, ensuring the 1/2-approximation ratio. To improve the efficiency, we devise the scalable implementation named RR-OPIM+ with (1/2 − ε)-approximation and near-linear running time. We extensively evaluate the performance of 9 approaches on 6 real-world networks, and our solutions outperform all competitors in terms of result quality and running time. Additionally, we deploy RR-OPIM+ to online game scenarios, which improves the baseline considerably. Comment: The technical report of the paper entitled 'Capacity Constrained Influence Maximization in Social Networks' in SIGKDD'2
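    To make the greedy idea concrete, here is a hedged sketch of a marginal-gain greedy for a CIM-style setting: each influencer may activate at most `capacity` of its friends as initial adopters, and (influencer, friend) pairs are added by largest estimated spread gain. The Monte Carlo spread oracle below is a stand-in for the paper's algorithms (MG-Greedy, RR-Greedy, RR-OPIM+) and is illustrative only.

```python
# Illustrative capacity-constrained greedy; not the paper's MG-Greedy/RR-Greedy.
import random
import networkx as nx

def estimate_spread(graph, seeds, p=0.1, trials=200):
    """Crude independent-cascade spread estimate via Monte Carlo simulation."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.neighbors(u):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def cim_greedy(graph, influencers, capacity, p=0.1):
    """Greedily pick at most `capacity` friends per influencer as initial adopters."""
    chosen = {u: [] for u in influencers}
    seeds, base = set(), 0.0
    candidates = {(u, f) for u in influencers for f in graph.neighbors(u)}
    while candidates:
        best, best_gain = None, 0.0
        for u, f in candidates:
            if len(chosen[u]) >= capacity or f in seeds:
                continue  # respect per-influencer capacity; skip already-seeded friends
            gain = estimate_spread(graph, seeds | {f}, p) - base
            if gain > best_gain:
                best, best_gain = (u, f), gain
        if best is None:
            break  # no remaining pair with positive marginal gain
        u, f = best
        chosen[u].append(f)
        seeds.add(f)
        base += best_gain
        candidates.discard(best)
    return chosen

# Toy usage on a small random graph
G = nx.erdos_renyi_graph(50, 0.1, seed=1)
print(cim_greedy(G, influencers=[0, 1], capacity=2))
```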

    Understand Group Interaction and Cognitive State in Online Collaborative Problem Solving: Leveraging Brain-to-Brain Synchrony Data

    This study aimed to analyze the process of online collaborative problem solving (CPS) via brain-to-brain synchrony (BS) at the problem-understanding and problem-solving stages. BS refers to the synchronization of brain activity between two or more people and serves as an indicator of interpersonal interaction or common attention, offering insights beyond traditional approaches such as surveys and observation. Thirty-six undergraduate students participated. Results indicate that the problem-understanding stage showed a higher level of BS than the problem-solving stage. Moreover, the level of BS at the problem-solving stage was significantly correlated with task performance. Groups composed entirely of students with high CPS skills had the highest level of BS, while some mixed groups achieved a comparable level. BS is an effective indicator of group performance and individual interaction in CPS. Implications for online CPS design and possible supports for the process of online CPS activities are also discussed.
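    The abstract does not specify how BS was quantified; as a purely illustrative sketch, one common way to measure inter-brain synchrony is the windowed correlation of two participants' EEG band-power time series, as below. The sampling rate, window length, and metric are assumptions, not the study's method.

```python
# Toy brain-to-brain synchrony index: mean windowed Pearson correlation.
# This is a generic illustration, not the metric used in the study.
import numpy as np
from scipy.stats import pearsonr

def windowed_synchrony(sig_a, sig_b, fs=128, win_sec=5):
    """Mean Pearson correlation over non-overlapping windows of two signals."""
    win = fs * win_sec
    n = min(len(sig_a), len(sig_b)) // win
    rs = [pearsonr(sig_a[i * win:(i + 1) * win], sig_b[i * win:(i + 1) * win])[0]
          for i in range(n)]
    return float(np.mean(rs))

# Dummy data standing in for two group members' band-power traces in one stage
rng = np.random.default_rng(0)
shared = rng.standard_normal(128 * 60)                  # shared component drives synchrony
a = shared + 0.5 * rng.standard_normal(len(shared))
b = shared + 0.5 * rng.standard_normal(len(shared))
print(f"BS (toy pair): {windowed_synchrony(a, b):.2f}")
```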

    All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment

    Current mainstream vision-language (VL) tracking frameworks consist of three parts, i.e., a visual feature extractor, a language feature extractor, and a fusion model. To pursue better performance, a natural modus operandi for VL tracking is to employ customized and heavier unimodal encoders and multi-modal fusion models. Albeit effective, existing VL trackers separate feature extraction and feature integration, resulting in extracted features that lack semantic guidance and have limited target-aware capability in complex scenarios, e.g., similar distractors and extreme illumination. In this work, inspired by the recent success of exploring foundation models with a unified architecture for both natural language and computer vision tasks, we propose an All-in-One framework, which learns joint feature extraction and interaction by adopting a unified transformer backbone. Specifically, we mix raw vision and language signals to generate language-injected vision tokens, which we then concatenate before feeding into the unified backbone architecture. This approach achieves feature integration in a unified backbone, removing the need for carefully designed fusion modules and resulting in a more effective and efficient VL tracking framework. To further improve the learning efficiency, we introduce a multi-modal alignment module based on cross-modal and intra-modal contrastive objectives, providing more reasonable representations for the unified All-in-One transformer backbone. Extensive experiments on five benchmarks, i.e., OTB99-L, TNL2K, LaSOT, LaSOT_Ext, and WebUAV-3M, demonstrate the superiority of the proposed tracker against existing state-of-the-art methods on VL tracking. Code will be made publicly available. Comment: Work in progress.
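    A hedged PyTorch sketch of the idea as described: mix a language signal into the vision tokens, concatenate both token streams, and run a single unified transformer backbone, with a cross-modal contrastive term for alignment. Token dimensions, the injection operator, and the loss form are assumptions rather than the authors' implementation.

```python
# Illustrative unified VL backbone with language-injected vision tokens.
# Dimensions, the injection step, and the contrastive loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AllInOneSketch(nn.Module):
    def __init__(self, dim=256, patch=16, vocab=30522, layers=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.text_embed = nn.Embedding(vocab, dim)
        self.inject = nn.Linear(dim, dim)  # language-to-vision injection (assumed form)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, image, token_ids):
        vis = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, Nv, D) patch tokens
        txt = self.text_embed(token_ids)                          # (B, Nt, D) word tokens
        # Language-injected vision tokens: add a pooled language signal to each patch
        vis = vis + self.inject(txt.mean(dim=1, keepdim=True))
        tokens = torch.cat([vis, txt], dim=1)                     # joint token sequence
        feats = self.backbone(tokens)                             # one unified backbone
        return feats[:, :vis.size(1)], feats[:, vis.size(1):]     # vision / language features

def contrastive_alignment(vis_feats, txt_feats, temperature=0.07):
    """Cross-modal InfoNCE between pooled vision and language features (illustrative)."""
    v = F.normalize(vis_feats.mean(dim=1), dim=-1)
    t = F.normalize(txt_feats.mean(dim=1), dim=-1)
    logits = v @ t.T / temperature
    labels = torch.arange(v.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

# Dummy forward pass and alignment loss
model = AllInOneSketch()
vis, txt = model(torch.randn(2, 3, 128, 128), torch.randint(0, 30522, (2, 8)))
print(contrastive_alignment(vis, txt).item())
```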

    EyelashNet: A Dataset and A Baseline Method for Eyelash Matting

    Eyelashes play a crucial part in the human facial structure and largely affect facial attractiveness in modern cosmetic design. However, the appearance and structure of eyelashes can easily induce severe artifacts in high-fidelity multi-view 3D face reconstruction. Unfortunately, it is highly challenging to remove eyelashes from portrait images using either traditional or learning-based matting methods, due to the delicate nature of eyelashes and the lack of an eyelash matting dataset. To this end, we present EyelashNet, the first eyelash matting dataset, which contains 5,400 high-quality eyelash matting samples captured from the real world and 5,272 virtual eyelash matting samples created by rendering avatars. Our work consists of a capture stage and an inference stage to automatically capture and annotate eyelashes instead of relying on tedious manual effort. The capture is based on a specifically designed fluorescent labeling system. By coloring the eyelashes with a safe and invisible fluorescent substance, our system takes paired photos with colored and normal eyelashes by turning the equipped ultraviolet (UVA) flash on and off. We further correct the alignment between each pair of photos and use a novel alpha matte inference network to extract the eyelash alpha matte. As there is no prior eyelash dataset, we propose a progressive training strategy that progressively fuses captured eyelash data with virtual eyelash data to learn the latent semantics of real eyelashes. As a result, our method can accurately extract eyelash alpha mattes from fuzzy and self-shadowed regions such as pupils, which is almost impossible with manual annotation. To validate the advantage of EyelashNet, we present a baseline method based on deep learning that achieves state-of-the-art eyelash matting performance with RGB portrait images as input. We also demonstrate that our work can largely benefit important real applications, including high-fidelity personalized avatars and cosmetic design.
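    As an illustration of the progressive training strategy described above, the sketch below linearly increases the fraction of real captured samples mixed into each batch over training. The ratio schedule and sampler are assumptions, not the authors' exact recipe.

```python
# Toy progressive real/virtual data mixing schedule; illustrative only.
import random

def real_data_ratio(epoch, total_epochs, start=0.1, end=0.9):
    """Linearly increase the fraction of real captured samples per batch."""
    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return start + t * (end - start)

def sample_batch(real_pool, virtual_pool, batch_size, ratio):
    """Draw a batch mixing real and virtual samples according to `ratio`."""
    n_real = round(batch_size * ratio)
    batch = random.sample(real_pool, min(n_real, len(real_pool)))
    batch += random.sample(virtual_pool, batch_size - len(batch))
    random.shuffle(batch)
    return batch

# Toy pools standing in for the 5,400 captured and 5,272 rendered samples
real_pool = [f"real_{i}" for i in range(5400)]
virtual_pool = [f"virtual_{i}" for i in range(5272)]
for epoch in range(0, 10, 3):
    r = real_data_ratio(epoch, 10)
    print(epoch, round(r, 2), sample_batch(real_pool, virtual_pool, 8, r)[:3])
```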