Analysis of the Valuation of Tesla Inc
Tesla, a world-famous electric vehicle brand, has continuously innovated and led its industry, and in recent years its share price has remained at a high level. Using SWOT analysis and related tools to examine Tesla's internal, external, and industry environment, together with the relative valuation method, this paper evaluates Tesla and concludes that its share price is overvalued and will likely trend downward in the near future. However, from a medium- to long-term perspective, Tesla is still worth investing in, so the paper recommends that investors buy at that lower price.
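The relative valuation method the abstract mentions can be sketched as a comparable-multiples calculation: price the target by applying peers' average price multiple to the target's own fundamental. All figures below are made-up illustrations, not data from the paper.

```python
def relative_valuation(peer_prices_and_eps, target_eps):
    """Estimate a fair share price from peers' average P/E ratio."""
    pe_ratios = [price / eps for price, eps in peer_prices_and_eps]
    avg_pe = sum(pe_ratios) / len(pe_ratios)
    return avg_pe * target_eps

# Hypothetical peer data: (share price, earnings per share).
peers = [(120.0, 6.0), (80.0, 5.0), (200.0, 8.0)]
fair_price = relative_valuation(peers, target_eps=4.0)
# If the market price exceeds fair_price, the stock looks overvalued
# under this (simplified) relative method.
```

In practice the paper would use several multiples (P/E, P/S, EV/EBITDA) and a curated peer set; this sketch shows only the mechanics of one multiple.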
ECM-OPCC: Efficient Context Model for Octree-based Point Cloud Compression
Recently, deep learning methods have shown promising results in point cloud
compression. For octree-based point cloud compression, previous works show that
the information of ancestor nodes and sibling nodes is equally important for
predicting the current node. However, those works either adopt insufficient
context or incur intolerable decoding complexity (e.g., >600 s). To address this problem,
we propose a sufficient yet efficient context model and design an efficient
deep learning codec for point clouds. Specifically, we first propose a
window-constrained multi-group coding strategy to exploit the autoregressive
context while maintaining decoding efficiency. Then, we propose a dual
transformer architecture to utilize the dependency of the current node on its
ancestors and siblings. We also propose a random-masking pre-training method to
enhance our model. Experimental results show that our approach achieves
state-of-the-art performance for both lossy and lossless point cloud
compression. Moreover, our multi-group coding strategy saves 98% of the decoding
time compared with previous octree-based compression methods.
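A minimal sketch (not the paper's implementation) of a window-constrained multi-group decoding schedule: sibling nodes inside each window are split into groups; groups decode sequentially, so earlier groups provide autoregressive context, while all nodes within one group decode in parallel. The window size and group count are assumed tuning knobs here.

```python
def coding_schedule(node_ids, window_size, num_groups):
    """Return a list of decoding passes; each pass is a list of node ids
    that can be decoded in parallel."""
    passes = [[] for _ in range(num_groups)]
    for start in range(0, len(node_ids), window_size):
        window = node_ids[start:start + window_size]
        for i, node in enumerate(window):
            # Round-robin assignment within the window: nodes in pass k
            # may condition on everything decoded in passes 0..k-1.
            passes[i % num_groups].append(node)
    return passes

schedule = coding_schedule(list(range(10)), window_size=4, num_groups=2)
```

Only `num_groups` sequential passes are needed regardless of node count, which is how this style of grouping caps decoding latency instead of decoding every node one by one.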
Spectral Representation Learning for Conditional Moment Models
Many problems in causal inference and economics can be formulated in the
framework of conditional moment models, which characterize the target function
through a collection of conditional moment restrictions. For nonparametric
conditional moment models, efficient estimation often relies on preimposed
conditions on various measures of ill-posedness of the hypothesis space, which
are hard to validate when flexible models are used. In this work, we address
this issue by proposing a procedure that automatically learns representations
with controlled measures of ill-posedness. Our method approximates a linear
representation defined by the spectral decomposition of a conditional
expectation operator, which can be used for kernelized estimators and is known
to facilitate minimax optimal estimation in certain settings. We show this
representation can be efficiently estimated from data, and establish L2
consistency for the resulting estimator. We evaluate the proposed method on
proximal causal inference tasks, exhibiting promising performance on
high-dimensional, semi-synthetic data.
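The core object here can be illustrated in finite dimensions: estimate the conditional expectation operator E[f(X) | Z] by ridge regression between fixed feature maps, then take its SVD; the leading singular pairs give a linear representation, and the singular-value decay reflects the ill-posedness the method controls. The cosine features, data-generating process, and ridge parameter below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 1))
x = z + 0.1 * rng.normal(size=(n, 1))  # X depends strongly on Z

def features(v, num_feats=16):
    """Simple cosine features as a stand-in feature map."""
    w = np.linspace(0.5, 4.0, num_feats)
    return np.cos(v @ w[None, :])

phi_x, phi_z = features(x), features(z)

# Ridge-regress phi_x on phi_z: a finite-dimensional estimate of the
# conditional expectation operator restricted to these feature spaces.
lam = 1e-2
A = np.linalg.solve(phi_z.T @ phi_z / n + lam * np.eye(phi_z.shape[1]),
                    phi_z.T @ phi_x / n)

# Spectral decomposition: leading singular pairs define the learned
# linear representation; the decay of s measures ill-posedness.
U, s, Vt = np.linalg.svd(A)
```

Kernelized estimators would replace these fixed features with data-adaptive ones, but the regress-then-decompose structure is the same.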
Statistical Guarantees of Group-Invariant GANs
Group-invariant generative adversarial networks (GANs) are a type of GANs in
which the generators and discriminators are hardwired with group symmetries.
Empirical studies have shown that these networks are capable of learning
group-invariant distributions with significantly improved data efficiency. In
this study, we aim to rigorously quantify this improvement by analyzing the
reduction in sample complexity for group-invariant GANs. Our findings indicate
that when learning group-invariant distributions, the number of samples
required for group-invariant GANs decreases proportionally with a power of the
group size, and this power depends on the intrinsic dimension of the
distribution's support. To our knowledge, this work presents the first
statistical estimation for group-invariant generative models, specifically for
GANs, and it may shed light on the study of other group-invariant generative
models.
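The stated gain can be written schematically; the exponent below is a placeholder standing in for the paper's dimension-dependent power, not its exact value:

```latex
% Schematic form of the claimed sample-complexity reduction:
% n_vanilla is the sample count a plain GAN needs for accuracy eps,
% |G| is the group size, and alpha(d) is some positive exponent
% determined by the intrinsic dimension d of the support.
n_{\text{invariant}}(\varepsilon) \;\asymp\; \frac{n_{\text{vanilla}}(\varepsilon)}{|G|^{\alpha(d)}}
```

Larger groups and lower intrinsic dimension thus yield bigger savings, matching the abstract's qualitative statement.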
3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment
3D vision-language grounding (3D-VL) is an emerging field that aims to
connect the 3D physical world with natural language, which is crucial for
achieving embodied intelligence. Current 3D-VL models rely heavily on
sophisticated modules, auxiliary losses, and optimization tricks, which calls
for a simple and unified model. In this paper, we propose 3D-VisTA, a
pre-trained Transformer for 3D Vision and Text Alignment that can be easily
adapted to various downstream tasks. 3D-VisTA simply utilizes self-attention
layers for both single-modal modeling and multi-modal fusion without any
sophisticated task-specific design. To further enhance its performance on 3D-VL
tasks, we construct ScanScribe, the first large-scale 3D scene-text pairs
dataset for 3D-VL pre-training. ScanScribe contains 2,995 RGB-D scans for 1,185
unique indoor scenes originating from ScanNet and 3R-Scan datasets, along with
278K paired scene descriptions generated from existing 3D-VL tasks, templates,
and GPT-3. 3D-VisTA is pre-trained on ScanScribe via masked language/object
modeling and scene-text matching. It achieves state-of-the-art results on
various 3D-VL tasks, ranging from visual grounding and dense captioning to
question answering and situated reasoning. Moreover, 3D-VisTA demonstrates
superior data efficiency, obtaining strong performance even with limited
annotations during downstream task fine-tuning.
Matched Shrunken Cone Detector (MSCD): Bayesian Derivations and Case Studies for Hyperspectral Target Detection
Hyperspectral images (HSIs) possess non-negative properties for both hyperspectral signatures and abundance coefficients, which can be naturally modeled using a cone-based representation. However, in hyperspectral target detection, cone-based methods have barely been studied. In this paper, we propose a new regularized cone-based representation approach to hyperspectral target detection, as well as its two working models, obtained by incorporating l2-norm and l1-norm regularizations, respectively, into the cone representation. We call the new approach the matched shrunken cone detector (MSCD). Equally important, we provide principled derivations of the proposed MSCD from the Bayesian perspective: we show that MSCD can be derived by assuming a multivariate half-Gaussian distribution or a multivariate half-Laplace distribution as the prior distribution of the models' coefficients. In the experimental studies, we compare the proposed MSCD with subspace methods and sparse representation-based methods for HSI target detection. Two real hyperspectral data sets are used to evaluate detection performance on sub-pixel targets and full-pixel targets, respectively. Results show that the proposed MSCD outperforms the other methods in both cases, demonstrating the competitiveness of the regularized cone-based representation.
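The l2-regularized working model amounts to a nonnegativity-constrained ridge fit: minimize ||y - S a||^2 + lam ||a||^2 subject to a >= 0, where S stacks signature vectors and the nonnegativity constraint keeps the fit inside the cone. A hedged sketch under those assumptions, using the standard augmentation trick (stack sqrt(lam) * I under S) and simple projected gradient descent; the synthetic signatures and spectrum are illustrative, not the paper's data:

```python
import numpy as np

def nnls_ridge(S, y, lam, steps=3000):
    """Projected gradient descent for nonnegative ridge regression."""
    # Augmentation: the ridge penalty becomes extra least-squares rows.
    Sa = np.vstack([S, np.sqrt(lam) * np.eye(S.shape[1])])
    ya = np.concatenate([y, np.zeros(S.shape[1])])
    L = np.linalg.norm(Sa, 2) ** 2          # Lipschitz constant of gradient
    lr = 1.0 / L
    a = np.zeros(S.shape[1])
    for _ in range(steps):
        grad = Sa.T @ (Sa @ a - ya)
        a = np.maximum(a - lr * grad, 0.0)  # project onto nonnegative cone
    return a

rng = np.random.default_rng(1)
S = np.abs(rng.normal(size=(20, 3)))        # nonnegative signature matrix
a_true = np.array([0.7, 0.0, 0.3])
y = S @ a_true                              # synthetic observed spectrum
a_hat = nnls_ridge(S, y, lam=1e-3)
```

A detector built on such fits would then compare residuals under the target-plus-background cone against the background-only cone (omitted here); the l1-norm variant swaps the ridge rows for a soft-thresholding step.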