
    Self-normalized Cramér type moderate deviations for the maximum of sums

    Let $X_1, X_2, \ldots$ be independent random variables with zero means and finite variances, and let $S_n=\sum_{i=1}^n X_i$ and $V_n^2=\sum_{i=1}^n X_i^2$. A Cramér-type moderate deviation for the maximum of the self-normalized sums $\max_{1\leq k\leq n} S_k/V_n$ is obtained. In particular, for identically distributed $X_1, X_2, \ldots$, it is proved that $P(\max_{1\leq k\leq n} S_k \geq x V_n)/(1-\Phi(x)) \rightarrow 2$ uniformly for $0 < x \leq o(n^{1/6})$ under the optimal assumption that $X_1$ has a finite third moment.
    Comment: Published in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm) at http://dx.doi.org/10.3150/12-BEJ415
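
    The limiting tail ratio above can be checked by simulation. The following is a minimal Monte Carlo sketch (not from the paper; the increment distribution, sample size, and threshold are illustrative) that estimates $P(\max_{1\leq k\leq n} S_k \geq x V_n)$ for centered exponential increments and compares it with $2(1-\Phi(x))$.

        # Monte Carlo check of the self-normalized tail ratio (illustrative).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n, reps, x = 500, 10000, 1.5            # sample size, replications, threshold

        # Centered exponential increments: zero mean, finite third moment.
        X = rng.exponential(size=(reps, n)) - 1.0
        S = np.cumsum(X, axis=1)                # partial sums S_1, ..., S_n
        V = np.sqrt(np.sum(X**2, axis=1))       # self-normalizer V_n
        tail = (S.max(axis=1) >= x * V).mean()  # empirical P(max_k S_k >= x V_n)

        print(f"empirical tail   : {tail:.4f}")
        print(f"2 * (1 - Phi(x)) : {2 * (1 - norm.cdf(x)):.4f}")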

    Multimodal Federated Learning via Contrastive Representation Ensemble

    With the increasing amount of multimedia data on modern mobile systems and IoT infrastructures, harnessing these rich multimodal data without breaching user privacy has become a critical issue. Federated learning (FL) serves as a privacy-conscious alternative to centralized machine learning. However, existing FL methods extended to multimodal data all rely on model aggregation at the single-modality level, which constrains the server and clients to use identical model architectures for each modality. This limits the global model in terms of both model complexity and data capacity, not to mention task diversity. In this work, we propose Contrastive Representation Ensemble and Aggregation for Multimodal FL (CreamFL), a multimodal federated learning framework that enables training larger server models from clients with heterogeneous model architectures and data modalities, while communicating only knowledge on a public dataset. To achieve better multimodal representation fusion, we design a global-local cross-modal ensemble strategy to aggregate client representations. To mitigate local model drift caused by two unprecedented heterogeneous factors stemming from multimodal discrepancy (modality gap and task gap), we further propose inter-modal and intra-modal contrasts to regularize local training, which supply information about the absent modality to uni-modal clients and steer local clients toward the global consensus. Thorough evaluations and ablation studies on image-text retrieval and visual question answering tasks showcase the superiority of CreamFL over state-of-the-art FL methods and its practical value.
    Comment: ICLR 2023. Code is available at https://github.com/FLAIR-THU/CreamF
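
    As a concrete illustration of the inter-modal contrast described above, here is a hedged PyTorch sketch (illustrative only, not the authors' implementation; the function name and shapes are assumptions) of an InfoNCE-style loss that pulls a client's local image representations toward the aggregated global text representations of the same public samples.

        # Inter-modal contrastive regularizer in the spirit of CreamFL (sketch).
        import torch
        import torch.nn.functional as F

        def inter_modal_contrast(local_img, global_txt, tau=0.07):
            """InfoNCE-style loss: matching image/text pairs (same public sample)
            are pulled together, non-matching pairs are pushed apart."""
            z_i = F.normalize(local_img, dim=-1)   # (B, d) local image representations
            z_t = F.normalize(global_txt, dim=-1)  # (B, d) aggregated global text representations
            logits = z_i @ z_t.t() / tau           # (B, B) similarity matrix
            targets = torch.arange(z_i.size(0))    # matching pairs lie on the diagonal
            return F.cross_entropy(logits, targets)

        # Toy usage: 8 public samples with 256-dimensional representations.
        loss = inter_modal_contrast(torch.randn(8, 256), torch.randn(8, 256))
        print(loss.item())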

    Simultaneous Confidence Bands in Nonlinear Regression Models with Nonstationarity

    We consider nonparametric estimation of the regression function g(·) in a nonlinear regression model Yt = g(Xt) + σ(Xt)et, where the regressor (Xt) is a nonstationary unit root process and the error (et) is a sequence of independent and identically distributed (i.i.d.) random variables. With proper centering and scaling, the maximum deviation of the local linear estimator of the regression function g is shown to be asymptotically Gumbel. Based on the latter result, we construct simultaneous confidence bands for g, which can be used to test patterns of the regression function. Our results substantially extend existing ones, which typically require independent or stationary weakly dependent regressors. Furthermore, we examine the finite-sample behavior of the proposed approach via simulated and real data examples.
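
    To make the estimator concrete, here is a minimal sketch of a local linear fit on a unit-root regressor (illustrative; the kernel, bandwidth, and data are assumptions, and the simultaneous critical value from the paper's Gumbel limit is left as a placeholder).

        # Local linear estimator of g(x) on a random-walk regressor (sketch).
        import numpy as np

        def local_linear(x0, X, Y, h):
            """Local linear fit at x0 with a Gaussian kernel and bandwidth h."""
            w = np.exp(-0.5 * ((X - x0) / h) ** 2)          # kernel weights
            Z = np.column_stack([np.ones_like(X), X - x0])  # local design matrix
            WZ = Z * w[:, None]
            beta = np.linalg.solve(Z.T @ WZ, WZ.T @ Y)      # weighted least squares
            return beta[0]                                  # intercept = estimate of g(x0)

        rng = np.random.default_rng(1)
        X = np.cumsum(rng.normal(size=500))                  # unit-root regressor X_t
        Y = np.sin(X / 5) + rng.normal(scale=0.3, size=500)  # Y_t = g(X_t) + error

        grid = np.linspace(X.min(), X.max(), 50)
        g_hat = np.array([local_linear(x0, X, Y, h=2.0) for x0 in grid])
        # A simultaneous band would be g_hat +/- c_alpha * se(x0), with c_alpha
        # taken from the Gumbel approximation derived in the paper.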

    Multimodal Molecular Pretraining via Modality Blending

    Self-supervised learning has recently gained growing interest in molecular modeling for scientific tasks such as AI-assisted drug discovery. Current studies consider leveraging both 2D and 3D molecular structures for representation learning. However, relying on straightforward alignment strategies that treat each modality separately, these methods fail to exploit the intrinsic correlation between 2D and 3D representations that reflects the underlying structural characteristics of molecules, and they only perform coarse-grained molecule-level alignment. To derive fine-grained alignment and promote structural molecule understanding, we introduce an atomic-relation-level "blend-then-predict" self-supervised learning approach, MoleBLEND, which first blends atom relations represented by different modalities into one unified relation matrix for joint encoding, then recovers modality-specific information for 2D and 3D structures individually. By treating atom relationships as anchors, MoleBLEND organically aligns and integrates visually dissimilar 2D and 3D modalities of the same molecule at the fine-grained atomic level, painting a more comprehensive depiction of each molecule. Extensive experiments show that MoleBLEND achieves state-of-the-art performance across major 2D/3D molecular benchmarks. We further provide theoretical insights from the perspective of mutual-information maximization, demonstrating that our method unifies contrastive, generative (cross-modality prediction) and mask-then-predict (single-modality prediction) objectives into a single cohesive framework.
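
    The blending step can be pictured with a small sketch. This is a hedged illustration of the "blend-then-predict" idea, not the MoleBLEND implementation; the relation definitions and mask are assumptions.

        # Blend 2D and 3D atom-relation matrices, then predict the held-out entries.
        import torch

        def blend_relations(rel_2d, rel_3d, p=0.5):
            """Element-wise blend: each atom pair keeps its relation from either the
            2D graph (e.g., topological distance) or the 3D geometry (e.g., Euclidean
            distance). The mask records which modality was kept per entry."""
            mask = torch.rand_like(rel_2d) < p           # True -> keep the 2D entry
            blended = torch.where(mask, rel_2d, rel_3d)  # unified relation matrix
            return blended, mask

        # Toy molecule with 5 atoms.
        rel_2d = torch.randint(1, 5, (5, 5)).float()               # e.g., graph distances
        xyz = torch.randn(5, 3)                                    # one 3D conformer
        rel_3d = (xyz[:, None, :] - xyz[None, :, :]).norm(dim=-1)  # pairwise 3D distances
        blended, mask = blend_relations(rel_2d, rel_3d)
        # Pretraining objective (schematically): encode `blended` jointly, then predict
        # the 2D entries where mask is False and the 3D entries where mask is True.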

    CapsFusion: Rethinking Image-Text Data at Scale

    Large multimodal models demonstrate remarkable generalist ability to perform diverse multimodal tasks in a zero-shot manner. Large-scale web-based image-text pairs contribute fundamentally to this success, but suffer from excessive noise. Recent studies use alternative captions synthesized by captioning models and have achieved notable benchmark performance. However, our experiments reveal significant Scalability Deficiency and World Knowledge Loss issues in models trained with synthetic captions, which have been largely obscured by their initial benchmark success. Upon closer examination, we identify the root cause as the overly simplified language structure and lack of knowledge details in existing synthetic captions. To provide higher-quality and more scalable multimodal pretraining data, we propose CapsFusion, an advanced framework that leverages large language models to consolidate and refine information from both web-based image-text pairs and synthetic captions. Extensive experiments show that CapsFusion captions exhibit remarkable all-round superiority over existing captions in terms of model performance (e.g., improvements of 18.8 and 18.3 CIDEr points on COCO and NoCaps, respectively), sample efficiency (requiring 11-16 times less computation than baselines), world knowledge depth, and scalability. These effectiveness, efficiency, and scalability advantages position CapsFusion as a promising candidate for future scaling of LMM training.
    Comment: CVPR 2024. Code & Dataset: https://github.com/baaivision/CapsFusio
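
    The consolidation step can be sketched as a single LLM call per image. The prompt wording and the `llm` callable below are hypothetical stand-ins, not CapsFusion's actual prompt or model; they only illustrate merging a noisy web caption with a synthetic caption.

        # Caption fusion via an LLM prompt (hypothetical sketch).
        def fuse_captions(web_caption: str, synthetic_caption: str, llm) -> str:
            prompt = (
                "Merge the two image captions into one fluent caption. Keep the "
                "specific entities and world knowledge from the raw web caption and "
                "the descriptive structure of the synthetic caption; drop the noise.\n"
                f"Raw web caption: {web_caption}\n"
                f"Synthetic caption: {synthetic_caption}\n"
                "Fused caption:"
            )
            return llm(prompt)  # llm: any text-completion callable

        # Toy usage with a stand-in "model" that just echoes the last prompt line.
        print(fuse_captions("messi lifts the trophy doha 2022 stock photo",
                            "A man in a blue shirt holding a golden cup.",
                            llm=lambda p: p.splitlines()[-1]))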

    Exploring the Impact of the Digital Economy on Carbon Emission Efficiency Under Factor Misallocation Constraints: New Insights From China

    The digital economy has introduced far-reaching innovations in government governance, enterprise production, and social operation. How to steer economic development toward a low-carbon, green transformation through the digital economy is a major issue for the Chinese government. However, there is scarce evidence on the mechanism through which the digital economy affects carbon emission efficiency under factor misallocation. Using a panel of 30 provincial-level administrative regions in China over 2011-2019, this paper examines the effect of the digital economy on carbon emission efficiency and explores its mechanism in terms of factor misallocation (capital misallocation and labor misallocation). The results suggest that the digital economy significantly improves carbon emission efficiency, and this finding remains valid after addressing endogeneity and conducting a series of robustness checks. The digital economy also significantly improves carbon emission efficiency in both the southern and northern regions, with a stronger effect in the north. Moreover, the digital economy reduces the level of factor misallocation (labor misallocation and capital misallocation), which ultimately improves carbon emission efficiency. Finally, the digital economy can improve carbon emission efficiency in the long run by mitigating factor misallocation (labor misallocation and capital misallocation).
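
    The baseline design can be sketched as a two-way fixed-effects panel regression. The specification, variable names, and data below are illustrative assumptions (synthetic placeholders rather than the paper's provincial panel); the sketch only shows the kind of estimation described.

        # Two-way fixed-effects sketch of the baseline specification (illustrative).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        panel = pd.DataFrame({
            "province": np.repeat(np.arange(30), 9),     # 30 provinces
            "year": np.tile(np.arange(2011, 2020), 30),  # 2011-2019
        })
        panel["digital_economy"] = rng.normal(size=len(panel))
        panel["factor_misallocation"] = rng.normal(size=len(panel))
        panel["carbon_efficiency"] = (0.3 * panel["digital_economy"]
                                      - 0.2 * panel["factor_misallocation"]
                                      + rng.normal(size=len(panel)))

        # Carbon emission efficiency on the digital economy with province and year effects.
        fit = smf.ols("carbon_efficiency ~ digital_economy + C(province) + C(year)",
                      data=panel).fit(cov_type="cluster",
                                      cov_kwds={"groups": panel["province"]})
        print(fit.params["digital_economy"])
        # Mechanism step (schematically): regress factor_misallocation on digital_economy
        # the same way to test the misallocation channel.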

    Objective identification and forecast method of PM2.5 pollution based on medium- and long-term ensemble forecasts in Beijing-Tianjin-Hebei region and its surrounding areas

    Accurate long-term forecasts of PM2.5 pollution are essential for mitigating health risks and formulating pollutant control strategies for decision-makers in China. In this study, an objective identification and forecast method for PM2.5 pollution (OIF-PM2.5) is developed based on medium- and long-term ensemble forecasts of PM2.5 in the Beijing-Tianjin-Hebei region and its surrounding areas. The results show that the observed PM2.5 pollution ratio increases as pollution intensifies; for example, the ratio of meteorological stations with heavy pollution is 4.4 times that with light pollution and 3.9 times that with moderate pollution. In addition, the correlation coefficients between observations and forecasts are above 0.60 for all forecast lead times. Statistical results show that the average accuracies for forecasts with lead times of 1-3 days, 4-7 days, and 8-15 days are 74.1%, 81.3%, and 72.9%, respectively, indicating that the OIF-PM2.5 method is highly reliable for lead times of 1-15 days. The OIF-PM2.5 method is further applied to a severe PM2.5 pollution episode in December 2021, and the average forecast precision for lead times of 6-8 days reaches 100%, showing a clear reference value for PM2.5 forecasts.
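
    As a rough illustration of scoring such categorical forecasts by lead time, here is a hedged sketch that classifies observed and ensemble-mean forecast PM2.5 into pollution levels and computes accuracy per lead time. The category thresholds, synthetic data, and ensemble setup are assumptions, not the OIF-PM2.5 implementation.

        # Categorical PM2.5 forecast accuracy by lead time (illustrative sketch).
        import numpy as np

        def pollution_level(pm25):
            """Map daily PM2.5 (ug/m3) to a category index using illustrative cut points."""
            return np.digitize(pm25, [75, 115, 150, 250])

        rng = np.random.default_rng(0)
        n_days, n_lead, n_members = 90, 15, 20
        obs = rng.gamma(shape=2.0, scale=40.0, size=n_days)  # "observed" daily PM2.5
        spread = 5 + 3 * np.arange(n_lead)[None, :, None]    # error grows with lead time
        ens = obs[:, None, None] + rng.normal(0.0, spread, size=(n_days, n_lead, n_members))

        fcst = ens.mean(axis=2)                              # ensemble-mean forecast
        hit = pollution_level(fcst) == pollution_level(obs)[:, None]
        for lead, acc in enumerate(hit.mean(axis=0), start=1):
            print(f"lead {lead:2d} d accuracy: {acc:.2f}")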

    Generative Pretraining in Multimodality

    We present Emu, a Transformer-based multimodal foundation model that can seamlessly generate images and texts in a multimodal context. This omnivore model can take in any single-modality or multimodal data input indiscriminately (e.g., interleaved image, text, and video) through a one-model-for-all autoregressive training process. First, visual signals are encoded into embeddings, which together with text tokens form an interleaved input sequence. Emu is then trained end-to-end with a unified objective of classifying the next text token or regressing the next visual embedding in the multimodal sequence. This versatile formulation enables exploration of diverse pretraining data sources at scale, such as videos with interleaved frames and text, webpages with interleaved images and text, as well as web-scale image-text pairs and video-text pairs. Emu can serve as a generalist multimodal interface for both image-to-text and text-to-image tasks, and supports in-context image and text generation. Across a broad range of zero-shot/few-shot tasks, including image captioning, visual question answering, video question answering, and text-to-image generation, Emu demonstrates superb performance compared to state-of-the-art large multimodal models. Extended capabilities such as multimodal assistants via instruction tuning are also demonstrated with impressive performance.
    Comment: Code and Demo: https://github.com/baaivision/Em
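
    The unified objective can be sketched as the sum of a next-token classification loss at text positions and a regression loss at visual-embedding positions. The shapes, heads, and mask below are illustrative assumptions, not the Emu implementation.

        # Unified autoregressive loss: classify next text token, regress next visual embedding.
        import torch
        import torch.nn.functional as F

        def unified_loss(hidden, is_text, text_targets, visual_targets, lm_head, regress_head):
            """hidden: (B, T, d) transformer outputs used to predict position t+1.
            is_text: (B, T) bool mask, True where the next element is a text token."""
            loss_text = F.cross_entropy(lm_head(hidden[is_text]), text_targets)
            loss_vis = F.mse_loss(regress_head(hidden[~is_text]), visual_targets)
            return loss_text + loss_vis

        # Toy usage: batch 2, sequence 6, hidden 16, vocab 100, visual dim 8.
        B, T, d, vocab, d_vis = 2, 6, 16, 100, 8
        hidden = torch.randn(B, T, d)
        is_text = torch.tensor([[True, False, True, True, False, True],
                                [False, True, True, False, True, False]])
        lm_head, regress_head = torch.nn.Linear(d, vocab), torch.nn.Linear(d, d_vis)
        n_text, n_vis = int(is_text.sum()), int((~is_text).sum())
        loss = unified_loss(hidden, is_text, torch.randint(vocab, (n_text,)),
                            torch.randn(n_vis, d_vis), lm_head, regress_head)
        print(loss.item())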