36 research outputs found

    The Impact of Entrepreneurs' Characteristics on the Performance of Venture Businesses

    In South Korea, venture businesses play a key role in commercializing new technology and revitalizing the economy, and the Korean government directly implements various policies and support programs for them. Evaluating growth potential and selecting promising venture businesses is therefore critical to the effectiveness and efficiency of these programs. Many studies have sought the factors behind the success of venture businesses, and entrepreneurs' characteristics are known to be a major one. In this study, the impact of entrepreneurs' characteristics on the performance of venture businesses is analyzed using survey data from 2,049 Korean venture businesses. Entrepreneurs' human capital and demographics, skills, and motivation serve as independent variables for measuring the general and financial performance of the businesses. Regression analysis shows that entrepreneurs' education positively affects the size, innovativeness, and net sales of venture businesses. In contrast, entrepreneurs' skills, such as entrepreneurial experience and working experience, generally show a negative impact. Networking activity, however, has a positive impact on size and innovativeness. R&D activity has a positive impact only on innovativeness and a significantly negative impact on size and net sales, while external funding has a positive impact on all performance indicators. These results suggest that factors such as education, networking, and external funding should be considered in the evaluation process, and that networking and cooperation with others should be promoted in order to increase the effectiveness and efficiency of support programs.
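    As a hedged illustration of the analysis described above, the sketch below fits an ordinary least squares regression of one performance indicator on entrepreneur characteristics. The file and column names are hypothetical, since the abstract does not specify the survey's variable coding.

    ```python
    # Hedged sketch of the regression setup; the CSV and column names are
    # hypothetical, as the abstract does not give the survey's variable coding.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("venture_survey.csv")  # 2,049 survey responses (hypothetical file)

    # One of several reported models: net sales on entrepreneur characteristics.
    model = smf.ols(
        "net_sales ~ education + entrepreneurial_experience + working_experience"
        " + networking_activity + rnd_activity + external_funding",
        data=df,
    )
    result = model.fit()
    print(result.summary())  # coefficient signs correspond to the reported effects
    ```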

    A Large-Scale 3D Face Mesh Video Dataset via Neural Re-parameterized Optimization

    We propose NeuFace, a 3D face mesh pseudo-annotation method for videos via neural re-parameterized optimization. Despite huge progress in 3D face reconstruction methods, generating reliable 3D face labels for in-the-wild dynamic videos remains challenging. Using NeuFace optimization, we annotate per-view and per-frame accurate, consistent face meshes on large-scale face videos, forming the NeuFace-dataset. We investigate, via gradient analysis, how neural re-parameterization helps reconstruct image-aligned facial details on 3D meshes. By exploiting the naturalness and diversity of the 3D faces in our dataset, we demonstrate its usefulness for 3D face-related tasks: improving the reconstruction accuracy of an existing 3D face reconstruction model and learning a 3D facial motion prior. Code and datasets will be available at https://neuface-dataset.github.io. Comment: 9 pages, 7 figures, and 3 tables for the main paper; 8 pages, 6 figures, and 3 tables for the appendix.
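    The abstract does not detail the optimization, but the core idea of neural re-parameterization can be sketched as follows: instead of optimizing mesh vertices directly, one optimizes the weights of a small network (plus per-frame latents) whose output is the vertices, which implicitly regularizes the fit. Everything here (network, vertex count, loss) is an illustrative assumption, not the paper's implementation.

    ```python
    # Illustrative sketch only: re-parameterize mesh vertices as the output of a
    # network and optimize its weights, rather than the vertices themselves.
    import torch
    import torch.nn as nn

    N_VERTS = 5023  # FLAME-style vertex count, chosen here for illustration

    class VertexNet(nn.Module):
        """Maps a per-frame latent code to mesh vertex positions."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(latent_dim, 512), nn.ReLU(),
                nn.Linear(512, N_VERTS * 3),
            )

        def forward(self, z):
            return self.mlp(z).view(-1, N_VERTS, 3)

    def fitting_loss(verts):
        # Placeholder for the image-alignment terms (landmark, photometric, ...).
        return verts.square().mean()

    net = VertexNet()
    z = torch.randn(8, 256, requires_grad=True)  # one latent per video frame
    opt = torch.optim.Adam(list(net.parameters()) + [z], lr=1e-4)

    for step in range(200):
        opt.zero_grad()
        loss = fitting_loss(net(z))
        loss.backward()
        opt.step()
    # Because all frames share the network weights, the recovered meshes stay
    # consistent across frames while still fitting each frame's evidence.
    ```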

    Scene-Adaptive Video Frame Interpolation via Meta-Learning

    Video frame interpolation is a challenging problem because each video presents a different scenario depending on the variety of foreground and background motion, frame rate, and occlusion. It is therefore difficult for a single network with fixed parameters to generalize across different videos. Ideally, one would have a different network for each scenario, but this is computationally infeasible for practical applications. In this work, we propose to adapt the model to each video by making use of additional information that is readily available at test time yet has not been exploited in previous work. We first show the benefits of `test-time adaptation' through simple fine-tuning of a network, and then greatly improve its efficiency by incorporating meta-learning. We obtain significant performance gains with only a single gradient update and without any additional parameters. Finally, we show that our meta-learning framework can be easily applied to any video frame interpolation network and consistently improves its performance on multiple benchmark datasets. Comment: CVPR 2020.
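    A minimal sketch of the test-time adaptation idea follows: the network is fine-tuned with a single gradient update on a frame triplet the video already contains before predicting the unseen intermediate frame. The `net` interface, triplet layout, and learning rate are assumptions; the paper's meta-learning outer loop, which trains initial weights so that this single update helps, is only noted in comments.

    ```python
    # Hedged sketch of single-update test-time adaptation. `net` stands for any
    # frame interpolation model that takes two frames and predicts the middle
    # one. In the paper's meta-learning setting, the initial weights are trained
    # (MAML-style) so that this one gradient step yields a large gain.
    import torch
    import torch.nn.functional as F

    def adapt_and_interpolate(net, f0, f1, f2, lr=1e-5):
        """Fine-tune on a triplet the video already contains (f0, f2 -> f1),
        then interpolate between f1 and f2 with the adapted weights."""
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        opt.zero_grad()
        loss = F.l1_loss(net(f0, f2), f1)  # self-supervised inner loss
        loss.backward()
        opt.step()                          # the single gradient update
        with torch.no_grad():
            return net(f1, f2)              # predict the genuinely unseen frame
    ```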

    LaughTalk: Expressive 3D Talking Head Generation with Laughter

    Laughter is a unique expression, essential to affirmative social interactions among humans. Although current 3D talking head generation methods produce convincing verbal articulation, they often fail to capture the vitality and subtleties of laughter and smiles, despite their importance in social contexts. In this paper, we introduce a novel task: generating 3D talking heads capable of both articulate speech and authentic laughter. Our newly curated dataset comprises 2D laughing videos paired with pseudo-annotated and human-validated 3D FLAME parameters and vertices. Given our proposed dataset, we present a strong baseline with a two-stage training scheme: the model first learns to talk and then acquires the ability to express laughter. Extensive experiments demonstrate that our method performs favorably compared to existing approaches in both talking head generation and expressing laughter signals. We further explore potential applications of the proposed method for rigging realistic avatars. Comment: Accepted to WACV 2024.
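    The two-stage training scheme can be sketched roughly as below; the model, data loaders, losses, and hyperparameters are placeholders, as the abstract gives no specifics.

    ```python
    # Placeholder two-stage schedule; model, loaders, and losses are hypothetical.
    import torch

    def train_stage(model, loader, loss_fn, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for audio, flame_target in loader:
                opt.zero_grad()
                loss = loss_fn(model(audio), flame_target)
                loss.backward()
                opt.step()

    # Stage 1: learn verbal articulation from speech-only data.
    #   train_stage(model, speech_loader, speech_loss, epochs=50, lr=1e-4)
    # Stage 2: fine-tune on laughter data (e.g., at a lower learning rate) so
    # the model acquires laughter without forgetting how to talk.
    #   train_stage(model, laughter_loader, laughter_loss, epochs=20, lr=1e-5)
    ```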

    Solution processed flexible organic thin film back-gated transistors based on polyimide dielectric films

    An organic thin-film back-gated transistor (OBGT) was fabricated and characterized. The gate electrode was printed on the back side of the substrate, and a separate dielectric layer was omitted by using the polyimide (PI) film substrate itself as the dielectric. Roll-to-roll (R2R) gravure printing, doctor blading, and drop casting were used to fabricate the OBGT. The printed OBGT device shows better performance than an OTFT device based on a BaTiO3 dielectric layer. Additionally, a calendering process enhanced the performance by a factor of 3 to 7 (mobility: 0.016 cm2/V·s; on/off ratio: 9.17×103). A bending test was conducted to confirm the flexibility and durability of the OBGT device; the fabricated device endures 20,000 bending cycles. The realized OBGT device was successfully fabricated and operational, which is meaningful for production engineering from the viewpoint of process development.

    Visual Tracking by TridentAlign and Context Embedding

    Recent advances in Siamese network-based visual tracking methods have enabled high performance on numerous tracking benchmarks. However, extensive scale variations of the target object and distractor objects of similar categories have consistently posed challenges in visual tracking. To address these persisting issues, we propose novel TridentAlign and context embedding modules for Siamese network-based visual tracking. The TridentAlign module facilitates adaptability to extensive scale variations and large deformations of the target: it pools the feature representation of the target object into multiple spatial dimensions to form a feature pyramid, which is then utilized in the region proposal stage. Meanwhile, the context embedding module aims to discriminate the target from distractor objects by accounting for the global context information among objects; it extracts and embeds the global context information of a given frame into a local feature representation so that this information can be utilized in the final classification stage. Experimental results on multiple benchmark datasets show that the performance of the proposed tracker is comparable to that of state-of-the-art trackers while running at real-time speed. (Code available at https://github.com/JanghoonChoi/TACT.)
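    A rough sketch of the TridentAlign pooling idea as stated in the abstract: the target feature is pooled to several spatial resolutions to form a small feature pyramid. The pool sizes and the use of adaptive average pooling are assumptions about the module's described intent, not the released implementation.

    ```python
    # Sketch only: pool the target feature to multiple resolutions; pool sizes
    # and adaptive average pooling are assumptions.
    import torch
    import torch.nn.functional as F

    def trident_align(target_feat, pool_sizes=(3, 5, 7)):
        """target_feat: (B, C, H, W) feature of the target region."""
        return [F.adaptive_avg_pool2d(target_feat, s) for s in pool_sizes]

    feat = torch.randn(1, 256, 16, 16)
    pyramid = trident_align(feat)
    print([tuple(p.shape) for p in pyramid])
    # -> [(1, 256, 3, 3), (1, 256, 5, 5), (1, 256, 7, 7)]: multi-scale templates
    #    that the region proposal stage can match against at different scales.
    ```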