
    Indole diterpenoid natural products as the inspiration for new synthetic methods and strategies.

    Indole terpenoids comprise a large class of natural products with diverse structural topologies and a broad range of biological activities. Accordingly, indole terpenoids have served, and continue to serve, as attractive targets for chemical synthesis. Many synthetic efforts over the past few years have focused on a subclass of this family, the indole diterpenoids. This minireview showcases the role indole diterpenoids have played in inspiring the recent development of clever synthetic strategies and new chemical reactions.

    Three Essays on Market Anomalies and Financial Econometrics

    This dissertation consists of two chapters on the momentum and idiosyncratic volatility anomalies, respectively, and one chapter on estimating clustered standard errors.

    Chapter 1, Flights to Quality and Momentum Crashes, relates crashes of momentum strategies in stock markets around the world to the investor behavior known as flights to quality (FTQ). Momentum crashes, defined as extremely negative returns of momentum portfolios, occur in most developed stock markets and are concentrated in economic recovery periods after recessions. I find that their negative returns and negative market betas are associated with FTQ. Low-quality (i.e., high default risk) stocks experience larger investor withdrawals and consequent stock price plunges during financial market collapses, and they feature higher market betas, particularly during recessions. Momentum strategies, which tend to sell these plunging stocks, therefore exhibit negative market betas before their crashes and underperform once those stocks bounce back in the economic recovery phase. Worldwide momentum returns and two FTQ proxies, US institutional ownership changes and stock market-bond market disagreements, yield consistent results.

    Chapter 2, Which Volatility Drives the Anomaly? Cash Flow Versus Discount Rate, examines whether the cross-sectional idiosyncratic volatility anomaly is driven by the cash flow news component of volatility or by its discount rate news component. Specifically, I reexamine the idiosyncratic volatility anomaly of Ang et al. (2006) and investigate the relative importance of cash flow news and discount rate news in driving this anomaly using the news decomposition of Vuolteenaho (2002). Results from idiosyncratic volatility-sorted portfolios show that the arbitrage portfolio formed from the two extreme portfolios earns a quarterly alpha of about 1.3 (1.2) percent after adjusting for the market factor (the Fama–French factors). I also form decile portfolios sorted on discount rate news volatility and on cash flow news volatility. While the average return of the arbitrage portfolio from discount rate news volatility is insignificant, its cash flow news counterpart exhibits a quarterly alpha of about 1.5 (1.2) percent after the market factor (the Fama–French factors). These findings indicate that cash flow news, rather than discount rate news, governs most of the anomaly. The results suggest that investors prefer cash flow news volatility to its discount rate news counterpart, and hence not all idiosyncratic volatilities are equally priced in the cross-section.

    Chapter 3, Multiway Clustered Standard Errors in Finite Samples, proposes new clustered standard error estimators that are less biased in finite samples than existing ones. Specifically, I demonstrate the downward bias of existing one-way and two-way clustered standard error estimators (Petersen, 2009; Thompson, 2011) in finite samples using Monte Carlo simulations. When both firm and time effects exist in a panel regression with N≫T, a standard error clustered by firm is always the worst. A standard error clustered by time is the third best, but worsens as T increases. A standard error clustered by both firm and time is the second best, but is biased downward in finite samples. I propose two first-best standard error estimators that always outperform these competitors.
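
    A minimal sketch of the finite-sample setting studied in the third chapter: a simulated panel with both firm and time effects, comparing one-way and two-way (Cameron-Gelbach-Miller style) clustered standard errors for the slope coefficient. The panel dimensions, effect magnitudes, and data-generating process are illustrative assumptions, not the dissertation's actual simulation design.

        # Illustrative Monte Carlo draw: one-way vs. two-way clustered standard errors
        # in a panel with both firm and time effects (all parameter values are assumptions).
        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 500, 10                                   # N >> T, as in the finite-sample setting
        firm = np.repeat(np.arange(N), T)
        time = np.tile(np.arange(T), N)

        # Regressor and error both contain firm and time components, inducing clustering
        x = rng.normal(size=N*T) + rng.normal(size=N)[firm] + rng.normal(size=T)[time]
        e = rng.normal(size=N*T) + rng.normal(size=N)[firm] + rng.normal(size=T)[time]
        y = 1.0 * x + e

        X = np.column_stack([np.ones(N*T), x])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        u = y - X @ beta

        def cluster_cov(groups):
            """Sandwich covariance with scores summed within each cluster."""
            meat = np.zeros((X.shape[1], X.shape[1]))
            for g in np.unique(groups):
                idx = groups == g
                s = X[idx].T @ u[idx]
                meat += np.outer(s, s)
            return XtX_inv @ meat @ XtX_inv

        V_firm = cluster_cov(firm)
        V_time = cluster_cov(time)
        V_cell = cluster_cov(firm * T + time)            # firm-time intersection (one obs per cell)
        V_two  = V_firm + V_time - V_cell                # Cameron-Gelbach-Miller two-way formula

        for name, V in [("firm", V_firm), ("time", V_time), ("two-way", V_two)]:
            print(name, np.sqrt(V[1, 1]))                # standard error of the slope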

    High dimensional discriminant rules with shrinkage estimators of covariance matrix and mean vector

    Linear discriminant analysis is a typical method used in settings with large dimension and small sample size. There are various types of linear discriminant analysis methods, which are based on estimates of the covariance matrix and the mean vectors. Although there are many methods for estimating the inverse covariance matrix and the mean vectors, we consider shrinkage methods based on a non-parametric approach. For the precision matrix, methods based on either a sparsity structure or data splitting are considered. For the mean vectors, the nonparametric empirical Bayes (NPEB) estimator and the nonparametric maximum likelihood estimation (NPMLE) method, also called f-modeling and g-modeling respectively, are adopted. We analyze the performance of linear discriminant rules based on combined estimation strategies for the covariance matrix and the mean vectors. In particular, we present a theoretical result on the performance of the NPEB method and compare it with results from other methods in previous studies. We provide simulation studies for various structures of covariance matrices and mean vectors to evaluate the methods considered in this paper. In addition, real data examples such as gene expression and EEG data are presented.
    Comment: 39 pages, 3 figures
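
    The following sketch illustrates the kind of plug-in discriminant rule the paper studies: a shrinkage precision matrix combined with shrunken class means in the usual linear discriminant score. It uses Ledoit-Wolf covariance shrinkage and a simple shrink-toward-the-grand-mean rule as stand-ins; it is not the paper's NPEB or NPMLE estimator.

        # Plug-in linear discriminant rule with a shrinkage precision matrix and shrunken means.
        # The shrinkage choices here are illustrative substitutes for the paper's estimators.
        import numpy as np
        from sklearn.covariance import LedoitWolf

        def fit_shrinkage_lda(X, y, mean_shrinkage=0.5):
            classes = np.unique(y)
            # pool within-class deviations, then estimate a shrinkage precision matrix
            pooled = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
            precision = LedoitWolf().fit(pooled).precision_
            grand = X.mean(axis=0)
            # shrink each class mean toward the grand mean (illustrative rule)
            means = {c: (1 - mean_shrinkage) * X[y == c].mean(axis=0) + mean_shrinkage * grand
                     for c in classes}
            priors = {c: np.mean(y == c) for c in classes}
            return classes, precision, means, priors

        def predict(X, model):
            classes, P, means, priors = model
            # linear discriminant score: x' P mu_k - 0.5 mu_k' P mu_k + log pi_k
            scores = np.column_stack([
                X @ P @ means[c] - 0.5 * means[c] @ P @ means[c] + np.log(priors[c])
                for c in classes])
            return classes[np.argmax(scores, axis=1)]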

    A Comparative Analysis of International and Chinese Electronic Commerce Research

    Due to the growth of the Internet and e-commerce, both practitioners and researchers are in the midst of a social, business, and cultural revolution. Internet- and e-commerce-related research has largely been developed in the United States, but China has become one of the most exciting research areas. This literature review covers 1044 journal articles published between 1993 and 2003 in fourteen international and Chinese journals. The articles are classified by a scheme that consists of four main categories: application areas, technological issues, support and implementation, and others. Based on this classification and analysis of e-commerce-related research, we present the current state of international and Chinese research and discuss the differences between them.

    Controller Area Network With Flexible Data Rate (CAN FD) Eye Diagram Prediction

    A method for predicting the eye diagram of a controller area network with flexible data rate (CAN FD) is proposed in this article. CAN FD changes its data rate according to bus status in order to overcome latency limitations; when data to be transmitted accumulate, CAN FD increases the data rate up to 5 Mb/s. CAN FD has a bus topology consisting of multiple electronic control units, which results in a significant amount of signal reflection and makes signal integrity analysis uncertain. To address this, this article proposes a simplified model of the CAN FD bus and an eye diagram prediction method based on it. The proposed method has a deterministic part and a statistical part: the deterministic part uses an iterative single-bit response method for the bit probabilities of a CAN FD packet, and the statistical part uses a modified double edge response method to handle the flexible data rate. For verification, this article compares the predicted eye diagram with the measured eye diagram; the two are nearly identical when the CAN FD bus operates at the nominal data rate of 1 Mb/s and the optional data rate of 2 Mb/s.
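
    As a rough illustration of how an eye diagram can be predicted from a single-bit response, the sketch below superposes shifted copies of a pulse response for a random bit pattern and folds the resulting waveform into overlapping traces. The pulse shape and parameters are assumptions for illustration; this is generic linear superposition, not the article's simplified CAN FD model or its modified double edge response method.

        # Build eye-diagram traces by superposing shifted single-bit (pulse) responses.
        import numpy as np

        def eye_from_single_bit_response(sbr, bit_samples, n_bits=2000, seed=0):
            """sbr: sampled single-bit response; bit_samples: samples per bit period."""
            rng = np.random.default_rng(seed)
            bits = rng.integers(0, 2, n_bits)
            wave = np.zeros(n_bits * bit_samples + len(sbr))
            for i, b in enumerate(bits):
                if b:
                    wave[i * bit_samples : i * bit_samples + len(sbr)] += sbr
            # Fold the waveform into two-bit-wide overlapping traces to form the eye
            width = 2 * bit_samples
            n_traces = (len(wave) - width) // bit_samples
            return np.stack([wave[k * bit_samples : k * bit_samples + width]
                             for k in range(n_traces)])

        # Example: a crude rise-and-decay pulse as the single-bit response (assumption)
        bit_samples = 50
        t = np.arange(3 * bit_samples)
        sbr = np.clip(1 - np.exp(-t / 10), 0, 1) * (t < bit_samples) \
            + np.exp(-(t - bit_samples) / 10) * (t >= bit_samples)
        traces = eye_from_single_bit_response(sbr, bit_samples)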

    Instance-Aware Group Quantization for Vision Transformers

    Post-training quantization (PTQ) is an efficient model compression technique that quantizes a pretrained full-precision model using only a small calibration set of unlabeled samples, without retraining. PTQ methods for convolutional neural networks (CNNs) provide quantization results comparable to their full-precision counterparts. Directly applying them to vision transformers (ViTs), however, incurs severe performance degradation, mainly due to architectural differences between CNNs and ViTs. In particular, the distribution of activations for each channel varies drastically across input instances, making PTQ methods for CNNs inappropriate for ViTs. To address this, we introduce instance-aware group quantization for ViTs (IGQ-ViT). We propose to split the channels of activation maps into multiple groups dynamically for each input instance, such that activations within each group share similar statistical properties. We also extend our scheme to quantize softmax attentions across tokens. In addition, the number of groups for each layer is adjusted to minimize the discrepancy between predictions from the quantized and full-precision models, under a bit-operation (BOP) constraint. We show extensive experimental results on image classification, object detection, and instance segmentation with various transformer architectures, demonstrating the effectiveness of our approach.
    Comment: CVPR 202
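
    A minimal sketch of the core idea: for each input instance, activation channels are grouped so that channels within a group have similar dynamic ranges, and each group is quantized with its own scale. Grouping by sorted per-channel range and the uniform quantizer below are illustrative simplifications, not necessarily the exact assignment rule or quantizer used in IGQ-ViT.

        # Per-instance channel grouping followed by per-group uniform quantization.
        import numpy as np

        def instance_aware_group_quant(x, num_groups=8, num_bits=4):
            """x: activation map of shape (tokens, channels) for a single instance."""
            ch_min, ch_max = x.min(axis=0), x.max(axis=0)
            order = np.argsort(ch_max - ch_min)           # channels with similar ranges end up together
            groups = np.array_split(order, num_groups)
            qmax = 2 ** num_bits - 1
            x_q = np.empty_like(x)
            for g in groups:
                lo, hi = x[:, g].min(), x[:, g].max()
                scale = (hi - lo) / qmax if hi > lo else 1.0   # per-group scale and zero point
                x_q[:, g] = np.round((x[:, g] - lo) / scale) * scale + lo
            return x_q

        # Example usage on a random "activation map" (shapes are illustrative)
        x = np.random.randn(197, 768) * np.random.rand(768) * 5
        x_hat = instance_aware_group_quant(x, num_groups=8, num_bits=4)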

    NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior

    Audio-driven talking head generation is advancing from 2D to 3D content. Notably, the Neural Radiance Field (NeRF) is in the spotlight as a means to synthesize high-quality 3D talking head outputs. Unfortunately, NeRF-based approaches typically require a large amount of paired audio-visual data for each identity, limiting their scalability. Although there have been attempts to generate audio-driven 3D talking head animations from a single image, the results are often unsatisfactory due to insufficient information about obscured regions in the image. In this paper, we focus on the overlooked aspect of 3D consistency in the one-shot, audio-driven domain, where facial animations are synthesized primarily from front-facing perspectives. We propose a novel method, NeRFFaceSpeech, which produces high-quality, 3D-aware talking heads. Using prior knowledge of generative models combined with NeRF, our method can craft a 3D-consistent facial feature space corresponding to a single image. Our spatial synchronization method employs audio-correlated vertex dynamics of a parametric face model to transform static image features into dynamic visuals through ray deformation, ensuring realistic 3D facial motion. Moreover, we introduce LipaintNet, which replenishes the missing information in the inner-mouth area that cannot be obtained from a single given image. The network is trained in a self-supervised manner by exploiting the generative capabilities of the prior, without additional data. Comprehensive experiments demonstrate the superiority of our method in generating audio-driven talking heads from a single image with enhanced 3D consistency compared to previous approaches. In addition, we introduce a quantitative way of measuring a model's robustness to pose changes for the first time, which previously could be assessed only qualitatively.
    Comment: 11 pages, 5 figures
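
    A minimal sketch of ray deformation driven by a parametric face model, under the assumption that each sample point along a camera ray is offset by the displacement of its nearest canonical mesh vertex before querying the static radiance field; the nearest-vertex warp is an illustrative simplification, not necessarily the paper's exact scheme.

        # Warp ray sample points using audio-driven vertex displacements of a face mesh.
        import numpy as np
        from scipy.spatial import cKDTree

        def deform_ray_samples(points, verts_canonical, verts_animated):
            """points: (P, 3) ray sample points; verts_*: (V, 3) mesh vertices."""
            tree = cKDTree(verts_canonical)
            _, idx = tree.query(points)                       # nearest canonical vertex per point
            displacement = verts_animated[idx] - verts_canonical[idx]
            return points + displacement                      # warped points fed to the static NeRF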

    R-Pred: Two-Stage Motion Prediction Via Tube-Query Attention-Based Trajectory Refinement

    Predicting the future motion of dynamic agents is of paramount importance for ensuring safety and assessing risk in motion planning for autonomous robots. In this paper, we propose a two-stage motion prediction method, referred to as R-Pred, that effectively utilizes both scene and interaction context using a cascade of an initial trajectory proposal network and a trajectory refinement network. The initial trajectory proposal network produces M trajectory proposals corresponding to the M modes of a future trajectory distribution. The trajectory refinement network enhances each of the M proposals using 1) tube-query scene attention (TQSA) and 2) proposal-level interaction attention (PIA). TQSA uses tube queries to aggregate local scene context features pooled from the vicinity of the trajectory proposals of interest. PIA further enhances the trajectory proposals by modeling inter-agent interactions using a group of trajectory proposals selected based on their distances from neighboring agents. Our experiments on the Argoverse and nuScenes datasets demonstrate that the proposed refinement network provides significant performance improvements over the single-stage baseline and that R-Pred achieves state-of-the-art performance in some categories of the benchmarks.
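
    A minimal sketch of tube-query pooling as described above: for each trajectory proposal, scene feature points lying within a fixed-radius tube around the proposal's waypoints are gathered and averaged into a proposal-level context vector. The radius and the mean-pooling operator are illustrative assumptions, not the paper's exact design.

        # Pool scene features inside a fixed-radius tube around each trajectory proposal.
        import numpy as np

        def tube_query_pool(proposals, scene_xy, scene_feat, radius=2.0):
            """proposals: (M, T, 2) waypoints; scene_xy: (S, 2); scene_feat: (S, D)."""
            M = proposals.shape[0]
            pooled = np.zeros((M, scene_feat.shape[1]))
            for m in range(M):
                # distance from every scene point to its nearest waypoint of proposal m
                d = np.linalg.norm(scene_xy[:, None, :] - proposals[m][None, :, :],
                                   axis=-1).min(axis=1)
                inside = d < radius                           # points inside the tube
                if inside.any():
                    pooled[m] = scene_feat[inside].mean(axis=0)
            return pooled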

    Few-shot Neural Radiance Fields Under Unconstrained Illumination

    In this paper, we introduce a new challenge: synthesizing novel view images in practical environments with limited input multi-view images and varying lighting conditions. Neural radiance fields (NeRF), one of the pioneering works for this task, demand an extensive set of multi-view images taken under constrained illumination, which is often unattainable in real-world settings. While some previous works have managed to synthesize novel views from images with different illumination, their performance still relies on a substantial number of input multi-view images. To address this problem, we propose ExtremeNeRF, which utilizes multi-view albedo consistency supported by geometric alignment. Specifically, we extract intrinsic image components that should be illumination-invariant across different views, enabling direct appearance comparison between the input and novel views under unconstrained illumination. We provide thorough experimental results for task evaluation, employing the newly created NeRF Extreme benchmark, the first in-the-wild benchmark for novel view synthesis under multiple viewing directions and varying illumination.
    Comment: Project Page: https://seokyeong94.github.io/ExtremeNeRF
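
    A minimal sketch of a multi-view albedo consistency objective under assumed inputs: per-view albedo maps and pixel correspondences obtained from the estimated geometry, with the albedos at matched pixels encouraged to agree since albedo should be illumination-invariant. The matching interface and the scale-normalized L1 form are assumptions, not the paper's exact formulation.

        # Penalize disagreement between albedos of geometrically corresponding pixels.
        import numpy as np

        def albedo_consistency_loss(albedo_a, albedo_b, matches_a, matches_b, eps=1e-6):
            """albedo_*: (H, W, 3) per-view albedo; matches_*: (K, 2) row/col indices of
            pixels that the estimated geometry puts into correspondence across the views."""
            a = albedo_a[matches_a[:, 0], matches_a[:, 1]]
            b = albedo_b[matches_b[:, 0], matches_b[:, 1]]
            # scale-invariant comparison: normalize each pixel's albedo by its intensity
            a = a / (a.sum(axis=-1, keepdims=True) + eps)
            b = b / (b.sum(axis=-1, keepdims=True) + eps)
            return float(np.mean(np.abs(a - b)))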