
    Markov Model of Word-of-Mouth Effect and Stock Market Participation

    The determinants of stock market participation have long been a central question for financial economists. Most notably, Hong, Kubik, and Stein (2001) argue that social interactions affect the investment decisions of potential stock market investors through two channels: word-of-mouth and pleasure in talking about the stock market. In this paper, I extend Hong et al.'s model of social interactions so that the effect of these two channels on stock market participation depends on the current market situation. The idea is intuitive: when potential investors observe a bull (bear) market, the word-of-mouth and pleasure-in-talk effects work positively (negatively) toward stock market participation, because more of their peers have benefited from (lost wealth in) the bull (bear) market. In a Markov chain framework, I model stock market participation as depending on the current market situation and discuss the empirical implications of the model.
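
    To make the regime-dependent channel concrete, the following toy simulation sketches a two-state bull/bear Markov chain in which the word-of-mouth effect shifts the participation rate up in a bull state and down in a bear state. All transition probabilities and effect sizes are invented for illustration; this is not the paper's calibrated model.

    # Illustrative sketch only: a two-state (bull/bear) Markov chain where the
    # word-of-mouth / pleasure-in-talk effect raises participation in a bull
    # market and lowers it in a bear market. All numbers are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    STATES = ["bull", "bear"]
    # P[i, j] = probability of moving from state i to state j next period (assumed).
    P = np.array([[0.8, 0.2],
                  [0.4, 0.6]])

    BASE_PARTICIPATION = 0.30   # participation rate absent social interaction (assumed)
    WOM_EFFECT = 0.10           # word-of-mouth effect size (assumed)

    def participation_rate(state: str) -> float:
        """Participation rate conditional on the current market state."""
        sign = 1.0 if state == "bull" else -1.0
        return float(np.clip(BASE_PARTICIPATION + sign * WOM_EFFECT, 0.0, 1.0))

    # Simulate the market regime and the implied participation path.
    state = 0  # start in the bull state
    for t in range(10):
        print(t, STATES[state], f"participation={participation_rate(STATES[state]):.2f}")
        state = rng.choice(2, p=P[state])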

    Learning Optical Flow, Depth, and Scene Flow without Real-World Labels

    Self-supervised monocular depth estimation enables robots to learn 3D perception from raw video streams. This scalable approach leverages projective geometry and ego-motion to learn via view synthesis, assuming the world is mostly static. Dynamic scenes, which are common in autonomous driving and human-robot interaction, violate this assumption and therefore require modeling dynamic objects explicitly, for instance by estimating pixel-wise 3D motion, i.e. scene flow. However, simultaneous self-supervised learning of depth and scene flow is ill-posed, as there are infinitely many combinations that result in the same 3D point. In this paper we propose DRAFT, a new method capable of jointly learning depth, optical flow, and scene flow by combining synthetic data with geometric self-supervision. Building upon the RAFT architecture, we learn optical flow as an intermediate task to bootstrap depth and scene flow learning via triangulation. Our algorithm also leverages temporal and geometric consistency losses across tasks to improve multi-task learning. Our DRAFT architecture simultaneously establishes a new state of the art in all three tasks in the self-supervised monocular setting on the standard KITTI benchmark. Project page: https://sites.google.com/tri.global/draft. Comment: Accepted to RA-L + ICRA 2022.
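
    As a rough illustration of why optical flow can bootstrap depth learning, the sketch below triangulates one pixel's 3D position from its flow-displaced correspondence under a known ego-motion, using standard linear (DLT) triangulation. The intrinsics, pose, and flow values are made-up numbers and do not reflect DRAFT's actual pipeline.

    # Sketch: with known ego-motion, an optical-flow correspondence pins down
    # depth via linear triangulation. All values below are assumed.
    import numpy as np

    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])       # assumed camera intrinsics

    # Assumed ego-motion between frames t and t+1: no rotation, 0.5 m forward.
    R = np.eye(3)
    t = np.array([0.0, 0.0, -0.5])

    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # projection matrix, frame t
    P1 = K @ np.hstack([R, t.reshape(3, 1)])            # projection matrix, frame t+1

    def triangulate(p0, p1):
        """Linear (DLT) triangulation of one correspondence (p0 in frame t, p1 in t+1)."""
        A = np.vstack([
            p0[0] * P0[2] - P0[0],
            p0[1] * P0[2] - P0[1],
            p1[0] * P1[2] - P1[0],
            p1[1] * P1[2] - P1[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]              # 3D point in frame-t camera coordinates

    pixel_t = np.array([400.0, 260.0])   # a pixel in frame t (assumed)
    flow = np.array([6.0, 1.5])          # optical-flow displacement (assumed)
    point_3d = triangulate(pixel_t, pixel_t + flow)
    print("triangulated point:", point_3d, "depth:", point_3d[2])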

    Regulation of Skp2 Expression and Activity and Its Role in Cancer Progression

    The regulation of cell cycle entry is critical for cell proliferation and tumorigenesis. One of the key players regulating cell cycle progression is the F-box protein Skp2. Skp2 forms an SCF complex with Skp1, Cul-1, and Rbx1, constituting an E3 ubiquitin ligase through its F-box domain. Skp2 protein levels are regulated during the cell cycle, and recent studies reveal that Skp2 stability, subcellular localization, and activity are regulated by its phosphorylation. Overexpression of Skp2 is associated with a variety of human cancers, indicating that Skp2 may contribute to the development of human cancers. This notion is supported by various genetic mouse models that demonstrate an oncogenic activity of Skp2 and its requirement in cancer progression, suggesting that Skp2 may be a novel and attractive therapeutic target for cancers.

    Disentangling Human Dynamics for Pedestrian Locomotion Forecasting with Noisy Supervision

    We tackle the problem of human locomotion forecasting, the task of jointly predicting the spatial positions of several keypoints on the human body in the near future in an egocentric setting. In contrast to previous work that solves either pose prediction or trajectory forecasting in isolation, we propose a framework that unifies the two problems and addresses the practically useful task of pedestrian locomotion prediction in the wild. Among the major challenges in this task is the scarcity of egocentric video datasets with dense annotations for pose, depth, or egomotion. To surmount this difficulty, we use state-of-the-art models to generate (noisy) annotations and propose robust forecasting models that can learn from this noisy supervision. We present a method to disentangle the overall pedestrian motion into easier-to-learn subparts by using a pose completion module and a decomposition module. The completion module fills in missing keypoint annotations, and the decomposition module breaks the cleaned locomotion down into a global component (trajectory) and a local component (pose keypoint movements). Further, with a Quasi-RNN as our backbone, we propose a novel hierarchical trajectory forecasting network that uses low-level, vision-domain-specific signals such as egomotion and depth to predict the global trajectory. Our method achieves state-of-the-art results for the prediction of human locomotion in the egocentric view. Project page: https://karttikeya.github.io/publication/plf/ Comment: Accepted to WACV 2020 (Oral).
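
    To illustrate the global/local decomposition idea, the sketch below splits a sequence of 2D pose keypoints into a global trajectory and residual local pose offsets. Using the per-frame keypoint centroid as the trajectory is an assumption for illustration, not necessarily the paper's exact formulation.

    # Sketch of the decomposition: global trajectory + local pose offsets.
    import numpy as np

    def decompose(poses: np.ndarray):
        """poses: (T, K, 2) array of K 2D keypoints per frame over T frames."""
        trajectory = poses.mean(axis=1)                  # (T, 2) global path (centroid)
        local_pose = poses - trajectory[:, None, :]      # (T, K, 2) offsets from the path
        return trajectory, local_pose

    def recompose(trajectory: np.ndarray, local_pose: np.ndarray) -> np.ndarray:
        """Invert the decomposition: add the (forecast) trajectory back onto the pose."""
        return local_pose + trajectory[:, None, :]

    # Tiny usage example with random keypoints: decompose, then recompose exactly.
    rng = np.random.default_rng(0)
    poses = rng.uniform(0, 640, size=(8, 17, 2))         # 8 frames, 17 keypoints
    traj, local = decompose(poses)
    assert np.allclose(recompose(traj, local), poses)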