
    How Inpatriates Internalize Corporate Values at Headquarters: The Role of Developmental Job Assignments and Psychosocial Mentoring

    Multinational companies (MNCs) often invite foreign subsidiary employees, or inpatriates, to their headquarters (HQ) to internalize the MNCs’ corporate values and transfer those values to their subsidiaries after repatriation. However, there is a lack of understanding about how and why inpatriates internalize these corporate values during their HQ experiences. By integrating the perspectives of international adjustment and organizational socialization with that of on-the-job learning, we develop a model wherein the job-related and psychosocial factors that inpatriates encounter at HQ promote their internalization of corporate values. Using a sample of 110 foreign subsidiary employee–supervisor dyads from the HQ of a Japanese MNC to which the employees were assigned as inpatriates, we found that developmental job assignments and psychosocial mentoring during inpatriation influenced the internalization of corporate values, which was partially and sequentially mediated by proactive socialization behavior and organizational identification. This study’s findings have significant implications for the theory and practice of inpatriation management, particularly with regard to how MNCs promote the internalization of corporate values among inpatriates.

    Bridging High-Quality Audio and Video via Language for Sound Effects Retrieval from Visual Queries

    Finding the right sound effects (SFX) to match moments in a video is a difficult and time-consuming task, and relies heavily on the quality and completeness of text metadata. Retrieving high-quality (HQ) SFX using a video frame directly as the query is an attractive alternative, removing the reliance on text metadata and providing a low barrier to entry for non-experts. Due to the lack of HQ audio-visual training data, previous work on audio-visual retrieval relies on YouTube (in-the-wild) videos of varied quality for training, where the audio is often noisy and the video of amateur quality. As such, it is unclear whether these systems would generalize to the task of matching HQ audio to production-quality video. To address this, we propose a multimodal framework for recommending HQ SFX given a video frame by (1) leveraging large language models and foundational vision-language models to bridge HQ audio and video to create audio-visual pairs, resulting in a highly scalable automatic audio-visual data curation pipeline; and (2) using pre-trained audio and visual encoders to train a contrastive learning-based retrieval system. We show that our system, trained using our automatic data curation pipeline, significantly outperforms baselines trained on in-the-wild data on the task of HQ SFX retrieval for video. Furthermore, while the baselines fail to generalize to this task, our system generalizes well from clean to in-the-wild data, outperforming the baselines on a dataset of YouTube videos despite only being trained on the HQ audio-visual pairs. A user study confirms that people prefer SFX retrieved by our system over the baseline 67% of the time both for HQ and in-the-wild data. Finally, we present ablations to determine the impact of model and data pipeline design choices on downstream retrieval performance. Please visit our project website to listen to and view our SFX retrieval results.
    Comment: WASPAA 2023. Project page: https://juliawilkins.github.io/sound-effects-retrieval-from-video/. 4 pages, 2 figures, 2 tables.
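    The contrastive retrieval training described above can be sketched with an InfoNCE-style loss that treats matched audio-visual pairs as positives and all other pairs in the batch as negatives. The function name, NumPy formulation, and temperature value below are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def info_nce_loss(audio_emb, video_emb, temperature=0.07):
        # L2-normalize both sets of embeddings so the dot product is
        # a cosine similarity.
        a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
        v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
        # Pairwise similarity matrix; row i scores audio i against every video.
        logits = a @ v.T / temperature
        # Cross-entropy with targets on the diagonal (audio i matches video i).
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))
    ```

    Minimizing this loss pulls each curated audio-visual pair together in the shared embedding space while pushing mismatched pairs apart, so at retrieval time a video-frame embedding can rank SFX by cosine similarity.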

    Hessian-aware Quantized Node Embeddings for Recommendation

    Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, the process of searching and ranking from a large item corpus usually requires high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into the binary embedding space to reduce space requirements and accelerate inference. Also, they use the Straight-through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes the gradient mismatch problem, leading to sub-optimal results. In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for discrete representations of users/items that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder for learning continuous node embeddings and a quantized module for compressing full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference speeds compared to vanilla GNNs. To address the gradient mismatch problem in STE, we further consider the quantization errors and their second-order derivatives for better stability. The experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.
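    The vanilla Straight-through Estimator that HQ-GNN improves upon can be illustrated with a minimal sketch: the forward pass quantizes an embedding to its sign, and the backward pass copies the incoming gradient through the non-differentiable sign function (clipped to a trust region). The helper names are hypothetical, and the Hessian-aware correction itself is not reproduced here:

    ```python
    import numpy as np

    def binarize_ste_forward(x):
        # Forward pass: sign quantization of continuous embeddings to {-1, +1}.
        return np.where(x >= 0, 1.0, -1.0)

    def binarize_ste_backward(grad_out, x, clip=1.0):
        # Straight-through estimator: pretend the quantizer is the identity,
        # but zero the gradient outside |x| <= clip to limit the mismatch
        # between the true (zero almost everywhere) and surrogate gradients.
        return grad_out * (np.abs(x) <= clip)
    ```

    Because the surrogate identity gradient ignores the quantization error entirely, it is exactly this mismatch that the abstract's second-order (Hessian-aware) term is introduced to stabilize.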

    Inverse and forward modeling tools for biophotonic data

    Biophotonic data require specific treatments due to the difficulty of directly extracting information from them. Therefore, artificial intelligence tools, including machine learning and deep learning, were brought into play. These tools can be grouped into inverse modeling, preprocessing, and data modeling categories. In each of these three categories, one research question was investigated. First, the aim was to develop a method that can acquire Raman-like spectra from coherent anti-Stokes Raman scattering (CARS) spectra without a priori knowledge. In general, CARS spectra suffer from the non-resonant background (NRB) contribution, and existing methods were commonly implemented to remove it. However, these methods were not able to completely remove the NRB and need additional preprocessing afterward. Therefore, deep learning via the long short-term memory network was applied and outperformed these existing methods. Then, a denoising technique via deep learning was developed for reconstructing high-quality (HQ) multimodal (MM) images from low-quality (LQ) ones. Since the measurement of HQ MM images is time-consuming, which is impractical for clinical applications, we developed a network, namely incSRCNN, to directly predict HQ images using only LQ ones. This network shows better performance when compared with standard methods. Finally, we intended to improve the accuracy of the classification model, in particular when LQ Raman data or Raman data with varying quality are obtained. Therefore, a novel method based on functional data analysis was implemented, which converts the Raman data into functions and then applies functional dimension reduction followed by a classification method. The results showed better performance for the functional approach in comparison with the classical method.
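    The functional-data step described last can be sketched as fitting each spectrum with a smooth basis and reducing the resulting coefficients before classification. A low-degree polynomial basis here stands in for the B-spline smoothing typically used in functional data analysis, and all names are illustrative:

    ```python
    import numpy as np

    def functional_coefficients(spectra, x, degree=8):
        # Represent each discrete spectrum as a smooth function by fitting
        # a low-degree polynomial basis (a stand-in for B-spline smoothing).
        return np.array(
            [np.polynomial.polynomial.polyfit(x, s, degree) for s in spectra]
        )

    def functional_pca(coeffs, n_components=2):
        # Functional dimension reduction: PCA on the basis coefficients
        # via SVD of the centered coefficient matrix.
        centered = coeffs - coeffs.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T
    ```

    The low-dimensional scores returned by the second step would then be fed to any standard classifier; because the basis fit smooths out measurement noise, the approach is more tolerant of varying spectral quality than classifying the raw intensities directly.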

    Achieving Orbit

    In this Engineering Design Challenge activity, participants use balloons to investigate how a two-stage rocket, like that used in the IBEX mission, can propel a satellite to a specific orbit. Participants will construct a two-stage balloon that will be required to reach a particular location on the balloon track, simulating the proper orbit to be reached by the IBEX satellite. This activity is adapted from the NASA Rockets Educators Guide (EG-2003-01-108-HQ) and the NASA Glenn Research Center’s online Learning Technologies Project for facilitation with an informal museum audience. Each short activity/product helps to build awareness of and engagement in the science and engineering aspects of the mission, which is reinforced as visitors choose to participate in more activities, including viewing the planetarium show and mission Web site. Educational levels: Informal education, General public