
    On spectra and Brown's spectral measures of elements in free products of matrix algebras

    We compute spectra and Brown measures of some non-self-adjoint operators in (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}) * (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}), the reduced free product von Neumann algebra of M_2(\mathbb{C}) with M_2(\mathbb{C}). Examples include AB and A+B, where A and B are matrices in (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}) * 1 and 1 * (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}), respectively. We prove that AB is an R-diagonal operator (in the sense of Nica and Speicher \cite{N-S1}) if and only if \mathrm{Tr}(A) = \mathrm{Tr}(B) = 0. We show that if X = AB or X = A+B and A, B are not scalar matrices, then the Brown measure of X is not concentrated on a single point. By a theorem of Haagerup and Schultz \cite{H-S1}, we obtain that if X = AB or X = A+B and X \neq \lambda 1, then X has a nontrivial hyperinvariant subspace affiliated with (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}) * (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}).
    Comment: final version. To appear in Math. Scand.
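
    For quick reference, the main statements can be summarized in clean notation as follows (a sketch; the shorthand \mathcal{M} and \mu_X below are introduced here and are not the abstract's notation):

```latex
% \mathcal{M} := (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}) * (M_2(\mathbb{C}), \tfrac{1}{2}\mathrm{Tr}),
% with A in the first free copy of M_2(\mathbb{C}), B in the second,
% and \mu_X the Brown measure of X.
\begin{gather*}
  AB \text{ is } R\text{-diagonal} \iff \mathrm{Tr}(A) = \mathrm{Tr}(B) = 0, \\
  A, B \notin \mathbb{C}1 \;\Longrightarrow\; \mu_{AB} \text{ and } \mu_{A+B} \text{ are not Dirac measures}, \\
  X \in \{AB,\ A+B\},\ X \neq \lambda 1 \;\Longrightarrow\; X \text{ has a nontrivial hyperinvariant subspace affiliated with } \mathcal{M}.
\end{gather*}
```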

    GSA-Net: gated scaled dot-product attention based neural network for reading comprehension

    Reading Comprehension (RC) is concerned with building systems that automatically answer questions about a given context passage. The interactions between the context and the question are very important for locating the correct answer. In this paper, we propose a Gated Scaled Dot-Product Attention based model for the RC task. Character-level embeddings are incorporated into the word embeddings, which helps deal with Out-of-Vocabulary (OOV) tokens. The attention distribution is obtained by a scaled dot product, which captures the interaction between the question and the passage effectively. Further, a self-matching attention mechanism is adopted to resolve the problem of long-distance dependencies. These components provide more information for predicting the starting and ending positions of the answer. We evaluate our method on the Stanford Question Answering Dataset (SQuAD), and the results show that the different components of the model each boost performance.
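
    The abstract does not spell out the gating mechanism, so the sketch below pairs standard scaled dot-product attention with one plausible sigmoid gate over the fused passage/question representation; the names gated_attention and W_g, and the shapes, are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

def gated_attention(passage, question, W_g):
    """Hypothetical gating: a sigmoid gate, computed from each passage token and
    its attended question context, rescales the fused representation."""
    ctx = scaled_dot_product_attention(passage, question, question)  # (n_p, d)
    fused = np.concatenate([passage, ctx], axis=-1)                  # (n_p, 2d)
    gate = 1.0 / (1.0 + np.exp(-fused @ W_g))                        # (n_p, 2d)
    return gate * fused

# Toy usage with random encodings (d = 8)
rng = np.random.default_rng(0)
passage, question = rng.normal(size=(30, 8)), rng.normal(size=(12, 8))
W_g = rng.normal(size=(16, 16)) * 0.1
out = gated_attention(passage, question, W_g)   # (30, 16)
```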

    Bayesian predictive modeling for genomic based personalized treatment selection

    Efforts to personalize medicine in oncology have been limited by reductive characterizations of the intrinsically complex underlying biological phenomena. Future advances in personalized medicine will rely on molecular signatures that derive from the synthesis of multifarious interdependent molecular quantities, requiring robust quantitative methods. However, highly parameterized statistical models, when applied in these settings, often require a prohibitively large database and are sensitive to proper characterization of the treatment-by-covariate interactions, which in practice are difficult to specify and may be limited by generalized linear models. In this paper, we present a Bayesian predictive framework that enables the integration of a high-dimensional set of genomic features with clinical responses and treatment histories of historical patients, providing a probabilistic basis for using the clinical and molecular information to personalize therapy for future patients. Our work represents one of the first attempts to define personalized treatment assignment rules based on large-scale genomic data. We use actual gene expression data acquired from The Cancer Genome Atlas in the settings of leukemia and glioma to explore the statistical properties of our proposed Bayesian approach for personalizing treatment selection. The method is shown to yield considerable improvements in predictive accuracy when compared to penalized regression approaches.
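
    As a concrete illustration of how such a framework is used at decision time, the sketch below assigns a new patient the treatment with the highest predicted probability of a favorable response; the toy logistic model is only a stand-in for the paper's Bayesian posterior predictive, and all names (select_treatment, toy_model) are hypothetical.

```python
import numpy as np

def select_treatment(predict_response, x_new, treatments):
    """Generic personalized-assignment rule: pick the treatment whose predicted
    probability of a favorable response is highest for this patient's profile.

    predict_response(x, t) is a stand-in for a posterior predictive probability;
    here it can be any fitted probabilistic model."""
    probs = {t: predict_response(x_new, t) for t in treatments}
    return max(probs, key=probs.get), probs

# Toy stand-in model: logistic score over genomic features with
# treatment-specific (hypothetical) coefficients.
rng = np.random.default_rng(1)
coef = {"A": rng.normal(size=50), "B": rng.normal(size=50)}

def toy_model(x, t):
    return 1.0 / (1.0 + np.exp(-x @ coef[t]))

x_patient = rng.normal(size=50)
best, probs = select_treatment(toy_model, x_patient, ["A", "B"])
```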

    How to Coordinate Supply Chain Under O2O Business Model When Demand Deviation Happens

    In this paper, a supply chain consisting of one supplier and multiple O2O retailers is studied. The supply chain is coordinated under a revenue-sharing contract in the static case. Disruptions cause the price sensitivity coefficient to change after the production plan has been formulated. In the centralized supply chain, the supplier only needs to adjust the retail price when the disruption falls within a certain range; when the disruption is large enough, the supplier must adjust both the retail price and the production quantities. Under decentralized decision-making, the original revenue-sharing contract can no longer coordinate the disrupted supply chain. An improved revenue-sharing contract is therefore used to coordinate it. The research shows that the improved contract coordinates both the original and the disrupted supply chain, which means the contract is robust to demand deviation.
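
    The abstract does not state its demand model, so as a point of reference here is the textbook revenue-sharing setup with deterministic, linear, price-sensitive demand; all symbols (a, b, c, w, \varphi) are assumptions for illustration, not the paper's notation.

```latex
% Linear demand with price-sensitivity coefficient b (the quantity the
% disruption perturbs) and unit production cost c:
D(p) = a - b p, \qquad
\Pi_C(p) = (p - c)\, D(p) \;\Rightarrow\; p^{*} = \frac{a + b c}{2 b}.

% Revenue sharing: the retailer keeps a fraction \varphi of revenue and pays
% a per-unit wholesale price w. Setting w = \varphi c gives
% \Pi_R(p) = (\varphi p - w) D(p) = \varphi (p - c) D(p) = \varphi\, \Pi_C(p),
% so the retailer's optimal price is the centralized p^{*} and the chain is
% coordinated; a disrupted b shifts p^{*}, which is why the contract must be
% adjusted after the disruption.
```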

    Learning a More Continuous Zero Level Set in Unsigned Distance Fields through Level Set Projection

    Latest methods represent shapes with open surfaces using unsigned distance functions (UDFs). They train neural networks to learn UDFs and reconstruct surfaces from the gradients around the zero level set of the UDF. However, these networks struggle to learn the zero level set, where the UDF is not differentiable, which leads to large errors in the unsigned distances and gradients around the zero level set and results in highly fragmented and discontinuous surfaces. To resolve this problem, we propose to learn a more continuous zero level set in UDFs with level set projections. Our insight is to guide the learning of the zero level set using the remaining non-zero level sets via a projection procedure, inspired by the observation that the non-zero level sets are much smoother and more continuous than the zero level set. We pull the non-zero level sets onto the zero level set with gradient constraints that align gradients across different level sets and correct unsigned distance errors on the zero level set, leading to a smoother and more continuous unsigned distance field. We conduct comprehensive experiments in surface reconstruction for point clouds, real scans and depth maps, and further explore the performance in unsupervised point cloud upsampling and unsupervised point normal estimation with the learned UDF, demonstrating non-trivial improvements over the state-of-the-art methods. Code is available at https://github.com/junshengzhou/LevelSetUDF.
    Comment: To appear at ICCV 2023. Code is available at https://github.com/junshengzhou/LevelSetUDF
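
    A minimal sketch of the level-set projection idea in PyTorch: a query on a non-zero level set is pulled onto the zero level set along the normalized field gradient, and a consistency term asks the gradients at the query and at its projection to agree. The function names, the cosine form of the loss, and the assumption that udf(q) returns a tensor of shape (N,) are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def udf_gradient(udf, q):
    """Unsigned distances and field gradients at query points q of shape (N, 3)."""
    q = q.detach().requires_grad_(True)
    d = udf(q)                                        # (N,) unsigned distances
    (g,) = torch.autograd.grad(d.sum(), q, create_graph=True)
    return d, g, q

def project_to_zero_level_set(udf, queries):
    """Pull queries from non-zero level sets onto the zero level set:
       q' = q - f(q) * grad f(q) / ||grad f(q)||."""
    d, g, q = udf_gradient(udf, queries)
    n = g / (g.norm(dim=-1, keepdim=True) + 1e-8)
    return q - d.unsqueeze(-1) * n

def gradient_alignment_loss(udf, queries):
    """Consistency term: gradients at a query and at its projection onto the
    zero level set should be parallel (cosine similarity close to 1)."""
    _, g_src, _ = udf_gradient(udf, queries)
    q_proj = project_to_zero_level_set(udf, queries)
    _, g_proj, _ = udf_gradient(udf, q_proj)
    cos = F.cosine_similarity(g_src, g_proj, dim=-1)
    return (1.0 - cos).mean()
```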

    Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds

    Surface reconstruction for point clouds is an important task in 3D computer vision. Most of the latest methods resolve this problem by learning signed distance functions (SDFs) from point clouds, which limits them to reconstructing shapes or scenes with closed surfaces. Other methods try to represent shapes or scenes with open surfaces using unsigned distance functions (UDFs) learned from large-scale ground-truth unsigned distances. However, the learned UDF struggles to provide smooth distance fields near the surface due to the discontinuous nature of point clouds. In this paper, we propose a novel method to learn consistency-aware unsigned distance functions directly from raw point clouds. We achieve this by learning to move 3D queries to reach the surface under a field consistency constraint, which also enables us to progressively estimate a more accurate surface. Specifically, we train a neural network to gradually infer the relationship between 3D queries and the approximated surface by dynamically searching for the moving target of each query, which results in a consistent field around the surface. Meanwhile, we introduce a polygonization algorithm to extract surfaces directly from the gradient field of the learned UDF. Experimental results in surface reconstruction for synthetic and real scan data show significant improvements over the state-of-the-art under widely used benchmarks.
    Comment: Accepted by NeurIPS 2022. Project page: https://junshengzhou.github.io/CAP-UDF. Code: https://github.com/junshengzhou/CAP-UDF
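
    A minimal sketch of the query-moving idea in PyTorch: each 3D query is moved along the negative field gradient by its predicted unsigned distance, and a one-sided Chamfer-style term asks the moved queries to land on the raw point cloud. This is an illustrative stand-in for the training objective described in the abstract, with assumed shapes and names.

```python
import torch

def move_queries(udf, queries):
    """Move each 3D query toward the approximated surface along the negative
    field gradient by its predicted unsigned distance (assumes udf(q) -> (N,))."""
    q = queries.detach().requires_grad_(True)
    d = udf(q)
    (g,) = torch.autograd.grad(d.sum(), q, create_graph=True)
    n = g / (g.norm(dim=-1, keepdim=True) + 1e-8)
    return q - d.unsqueeze(-1) * n

def surface_reach_loss(udf, queries, raw_points):
    """One-sided Chamfer-style term: moved queries should land on the raw
    point cloud (raw_points: (M, 3))."""
    moved = move_queries(udf, queries)              # (N, 3)
    dists = torch.cdist(moved, raw_points)          # (N, M) pairwise distances
    return dists.min(dim=-1).values.mean()
```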

    Analysis of Industrial Structure Transformation on Economic Growth in Liaoning Province

    Industrial structure and economic growth are interdependent. Based on the latest statistical figures for Liaoning, this paper analyzes the contribution of industrial structure to the economic growth of Liaoning Province using econometric methods, and then puts forward some suggestions.
    Key words: Industrial structure; Theory of grey system; Economic growth

    Uni3D: Exploring Unified 3D Representation at Scale

    Scaling up representations for images or text has been extensively investigated in the past few years and has led to revolutions in learning vision and language. However, scalable representation of 3D objects and scenes is relatively unexplored. In this work, we present Uni3D, a 3D foundation model that explores unified 3D representation at scale. Uni3D uses a 2D-initialized ViT, pretrained end-to-end, to align 3D point cloud features with image-text aligned features. Via this simple architecture and pretext task, Uni3D can leverage abundant 2D pretrained models as initialization and image-text aligned models as the target, unlocking the great potential of 2D models and scaling-up strategies for the 3D world. We efficiently scale up Uni3D to one billion parameters, and set new records on a broad range of 3D tasks, such as zero-shot classification, few-shot classification, open-world understanding and part segmentation. We show that the strong Uni3D representation also enables applications such as 3D painting and retrieval in the wild. We believe that Uni3D provides a new direction for exploring both the scaling up and the efficiency of representations in the 3D domain.
    Comment: Code and Demo: https://github.com/baaivision/Uni3D
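
    The pretext task described above is a CLIP-style alignment of 3D features with image-text embeddings; the sketch below shows one plausible form of that contrastive objective in PyTorch. The encoders themselves, the temperature tau, and the equal weighting of the two terms are assumptions.

```python
import torch
import torch.nn.functional as F

def alignment_loss(point_feat, image_feat, text_feat, tau=0.07):
    """Contrastive alignment of per-object 3D embeddings with frozen image/text
    embeddings. All inputs are (B, D) batches; matching pairs share a row index."""
    p = F.normalize(point_feat, dim=-1)
    i = F.normalize(image_feat, dim=-1)
    t = F.normalize(text_feat, dim=-1)
    labels = torch.arange(p.shape[0], device=p.device)
    loss_pi = F.cross_entropy(p @ i.T / tau, labels)   # point <-> image
    loss_pt = F.cross_entropy(p @ t.T / tau, labels)   # point <-> text
    return 0.5 * (loss_pi + loss_pt)
```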

    Differentiable Registration of Images and LiDAR Point Clouds with VoxelPoint-to-Pixel Matching

    Cross-modality registration between 2D images from cameras and 3D point clouds from LiDARs is a crucial task in computer vision and robotics. Previous methods estimate 2D-3D correspondences by matching point and pixel patterns learned by neural networks, and use Perspective-n-Point (PnP) to estimate the rigid transformation during post-processing. However, these methods struggle to map points and pixels to a shared latent space robustly, since points and pixels have very different characteristics and their patterns are learned in different manners (MLP and CNN); they also fail to construct supervision directly on the transformation, since PnP is non-differentiable, which leads to unstable registration results. To address these problems, we propose to learn a structured cross-modality latent space to represent pixel features and 3D features via a differentiable probabilistic PnP solver. Specifically, we design a triplet network to learn VoxelPoint-to-Pixel matching, where we represent 3D elements using both voxels and points to learn the cross-modality latent space with pixels. We design both the voxel and pixel branches based on CNNs to operate convolutions on voxels/pixels represented in grids, and integrate an additional point branch to regain the information lost during voxelization. We train our framework end-to-end by imposing supervision directly on the predicted pose distribution with a probabilistic PnP solver. To explore distinctive patterns of cross-modality features, we design a novel loss with adaptive-weighted optimization for cross-modality feature description. Experimental results on the KITTI and nuScenes datasets show significant improvements over the state-of-the-art methods. The code and models are available at https://github.com/junshengzhou/VP2P-Match.
    Comment: To appear at NeurIPS 2023 (Spotlight). Code is available at https://github.com/junshengzhou/VP2P-Match
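
    The sketch below illustrates the kind of soft 2D-3D matching in a shared latent space that the abstract describes: a similarity matrix between pixel and point features is turned into per-point expected pixel locations and confidences, which could then feed a differentiable PnP solver. The function name, the softmax temperature, and the omission of the voxel branch and of the probabilistic PnP itself are simplifications and assumptions.

```python
import torch
import torch.nn.functional as F

def soft_2d3d_matching(pixel_feat, point_feat, pixel_xy, point_xyz, tau=0.1):
    """Soft cross-modality matching in a shared latent space.

    pixel_feat: (Np, D) pixel features,  pixel_xy:  (Np, 2) pixel coordinates
    point_feat: (Nq, D) point features,  point_xyz: (Nq, 3) 3D coordinates
    Returns, for each 3D point, its expected 2D match and a confidence."""
    sim = F.normalize(point_feat, dim=-1) @ F.normalize(pixel_feat, dim=-1).T
    w = F.softmax(sim / tau, dim=-1)        # (Nq, Np) matching weights
    matched_xy = w @ pixel_xy               # expected pixel location per point
    conf = w.max(dim=-1).values             # peakiness of the match as confidence
    return point_xyz, matched_xy, conf      # 2D-3D pairs for a differentiable PnP
```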

    GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation

    Text-to-3D generation by distilling pretrained large-scale text-to-image diffusion models has shown great promise but still suffers from inconsistent 3D geometric structures (Janus problems) and severe artifacts. These problems mainly stem from 2D diffusion models lacking 3D awareness during the lifting. In this work, we present GeoDream, a novel method that incorporates explicit generalized 3D priors with 2D diffusion priors to enhance the capability of obtaining unambiguous, 3D-consistent geometric structures without sacrificing diversity or fidelity. Specifically, we first utilize a multi-view diffusion model to generate posed images and then construct a cost volume from the predicted images, which serves as a native 3D geometric prior and ensures spatial consistency in 3D space. Subsequently, we further propose to harness the 3D geometric prior to unlock the great potential of 3D awareness in 2D diffusion priors via a disentangled design. Notably, disentangling 2D and 3D priors allows us to refine the 3D geometric prior further. We justify that the refined 3D geometric prior aids the 3D-aware capability of 2D diffusion priors, which in turn provides superior guidance for the refinement of the 3D geometric prior. Our numerical and visual comparisons demonstrate that GeoDream generates more 3D-consistent textured meshes with high-resolution realistic renderings (i.e., 1024 × 1024) and adheres more closely to semantic coherence.
    Comment: Code and Demo: https://github.com/baaivision/GeoDream
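
    For context on the 3D prior, the sketch below builds a generic multi-view cost volume: each 3D sample point is projected into every posed feature map, and the variance of the gathered features across views is used as the matching cost (low variance suggests consistent geometry). This is a standard construction written for illustration, not GeoDream's exact one; the shapes, nearest-pixel sampling, and the absence of visibility handling are simplifying assumptions.

```python
import torch

def build_cost_volume(feats, K, w2c, grid_xyz):
    """Variance-based multi-view cost volume.

    feats:    (V, C, H, W) per-view feature maps
    K:        (V, 3, 3) intrinsics;  w2c: (V, 4, 4) world-to-camera extrinsics
    grid_xyz: (N, 3) 3D sample points in world coordinates
    returns:  (N, C) feature variance across views for each 3D point."""
    V, C, H, W = feats.shape
    ones = torch.ones(grid_xyz.shape[0], 1, device=grid_xyz.device)
    xyz_h = torch.cat([grid_xyz, ones], dim=-1)                  # (N, 4) homogeneous
    gathered = []
    for v in range(V):
        cam = (w2c[v] @ xyz_h.T).T[:, :3]                        # (N, 3) camera coords
        uvw = (K[v] @ cam.T).T                                   # (N, 3) projected
        z = uvw[:, 2].clamp(min=1e-6)
        u = (uvw[:, 0] / z).round().long().clamp(0, W - 1)
        v_pix = (uvw[:, 1] / z).round().long().clamp(0, H - 1)
        fmap = feats[v]                                          # (C, H, W)
        gathered.append(fmap[:, v_pix, u].T)                     # (N, C) per-view features
    stack = torch.stack(gathered, dim=0)                         # (V, N, C)
    return stack.var(dim=0)                                      # low variance = consistent geometry
```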