
    Proximity Operators of Discrete Information Divergences

    Information divergences allow one to assess how close two distributions are to each other. Among the large panel of available measures, special attention has been paid to convex φ-divergences, such as the Kullback-Leibler, Jeffreys-Kullback, Hellinger, Chi-Square, Rényi, and Iα divergences. While φ-divergences have been extensively studied in convex analysis, their use in optimization problems often remains challenging. In this regard, one of the main shortcomings of existing methods is that the minimization of φ-divergences is usually performed with respect to only one of their arguments, possibly within alternating optimization techniques. In this paper, we overcome this limitation by deriving new closed-form expressions for the proximity operator of such two-variable functions. This makes it possible to employ standard proximal methods for efficiently solving a wide range of convex optimization problems involving φ-divergences. In addition, we show that these proximity operators are useful for computing the epigraphical projection of several functions of practical interest. The proposed proximal tools are numerically validated in the context of optimal query execution within database management systems, where the problem of selectivity estimation plays a central role. Experiments are carried out on small- to large-scale scenarios.
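
    As a one-variable illustration of the kind of computation involved (not the two-variable closed forms derived in the paper), the proximity operator of the generalized Kullback-Leibler term u ↦ u log(u/y) − u + y can be written with the Lambert W function. The sketch below assumes SciPy is available; the function name and the step size γ are illustrative.

```python
import numpy as np
from scipy.special import lambertw

def prox_kl_first_arg(x, y, gamma=1.0):
    """Proximity operator of u -> gamma * (u*log(u/y) - u + y) in its first argument.

    Solves argmin_u gamma*(u*log(u/y) - u + y) + 0.5*(u - x)**2.
    Setting the derivative to zero gives gamma*log(u/y) + u - x = 0,
    whose solution is u = gamma * W((y/gamma) * exp(x/gamma)),
    with W the principal branch of the Lambert W function.
    Note: exp(x/gamma) may overflow for large x/gamma; this sketch
    does not address that numerical issue.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    z = (y / gamma) * np.exp(x / gamma)
    return gamma * lambertw(z).real

# Tiny usage example with hypothetical values
p = prox_kl_first_arg(np.array([0.2, 1.0, 3.0]), y=np.array([0.5, 0.5, 0.5]), gamma=0.7)
print(p)
```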

    A Non-Local Structure Tensor Based Approach for Multicomponent Image Recovery Problems

    Non-Local Total Variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend NLTV-based regularization to multicomponent images by taking advantage of the Structure Tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the non-local variations, jointly for the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyperparameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out for multispectral and hyperspectral images. The results demonstrate the benefit of introducing a non-local structure tensor regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods.
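
    A minimal NumPy sketch of one possible ℓ1,p structure-tensor penalty is given below; it uses local finite differences rather than the paper's non-local weights, and the function name, array shapes, and boundary handling are assumptions made for illustration only.

```python
import numpy as np

def st_l1p_penalty(img, p=2):
    """ell_{1,p} structure-tensor penalty of a multicomponent image.

    img: array of shape (H, W, C) with C components.
    For each pixel, stack the horizontal/vertical finite differences of all
    components into a 2 x C Jacobian, take the ell_p norm of its singular
    values (nuclear norm for p=1, Frobenius norm for p=2), and sum over pixels.
    This is a local simplification of the non-local ST-NLTV regularization.
    """
    dx = np.diff(img, axis=1, append=img[:, -1:, :])   # horizontal differences
    dy = np.diff(img, axis=0, append=img[-1:, :, :])   # vertical differences
    J = np.stack([dx, dy], axis=2)                     # shape (H, W, 2, C)
    s = np.linalg.svd(J, compute_uv=False)             # singular values per pixel
    return np.sum(np.linalg.norm(s, ord=p, axis=-1))

# Usage on a random 3-component image (illustrative data)
rng = np.random.default_rng(0)
print(st_l1p_penalty(rng.random((16, 16, 3)), p=1))
```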

    Contrastive Learning for Online Semi-Supervised General Continual Learning

    We study Online Continual Learning with missing labels and propose SemiCon, a new contrastive loss designed for partly labeled data. We demonstrate its efficiency by devising a memory-based method trained on an unlabeled data stream, where every sample added to memory is labeled using an oracle. Our approach outperforms existing semi-supervised methods when few labels are available, and obtains results similar to state-of-the-art supervised methods while using only 2.6% of the labels on Split-CIFAR10 and 10% of the labels on Split-CIFAR100.
    Comment: Accepted at ICIP'2
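
    The sketch below is a generic contrastive loss for partly labeled batches, written in PyTorch under assumed conventions (two augmented views per sample, label −1 marking unlabeled data, same-label pairs treated as extra positives); it illustrates the idea rather than reproducing the exact SemiCon objective.

```python
import torch
import torch.nn.functional as F

def semi_supervised_contrastive_loss(z1, z2, labels, tau=0.1):
    """Contrastive loss for partly labeled batches (generic sketch).

    z1, z2: (N, d) embeddings of two augmented views of the same N samples.
    labels: (N,) tensor, with -1 marking unlabeled samples.
    Labeled samples treat every other sample sharing their label as a positive;
    unlabeled samples fall back to their own second view only (SimCLR-style).
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d)
    y = torch.cat([labels, labels], dim=0)                    # (2N,)
    n = z.shape[0]
    sim = z @ z.t() / tau                                     # scaled cosine similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))                # exclude self-comparisons

    # view positives: each sample is paired with its other augmented view
    idx = torch.arange(n, device=z.device)
    view_pos = torch.zeros(n, n, dtype=torch.bool, device=z.device)
    view_pos[idx, (idx + n // 2) % n] = True

    # label positives: only between labeled samples sharing a label
    labeled = y >= 0
    label_pos = (y.unsqueeze(0) == y.unsqueeze(1)) & labeled.unsqueeze(0) & labeled.unsqueeze(1)
    pos = (view_pos | label_pos) & ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos, 0.0)                # keep positive pairs only
    return -(log_prob.sum(dim=1) / pos.sum(dim=1).clamp(min=1)).mean()
```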

    1000–32 Spontaneous Evolution of Nonocclusive Coronary Dissection After PTCA: A 6 Month Angiographic Follow-up Study

    We have previously shown that, when good distal flow is maintained, dissection after PTCA has a favourable short-term (24 hrs) evolution and does not require bail-out interventions or CABG. To evaluate the long-term (6 months) clinical and angiographic evolution of non-occlusive dissection, we submitted 129 consecutive patients (103 male, mean age 53±11 yrs) undergoing elective PTCA (147 lesions: 66 LAD, 49 CX, 32 DX) to repeat angiography 24 hrs and 6 months after the procedure. Lesions were measured by QCA and coronary dissection was graded using the NHLBI classification (types A-E; Huber, Am J Cardiol 1991;68:467). Mean stenosis was 85±11% before and 25±7% immediately after PTCA (p<0.001). Residual stenosis was not significantly different at the 24 hrs restudy (24±9%). Non-occlusive coronary dissection (TIMI grade 3 flow in all patients) was seen in 49/147 lesions (33%) and evolved as follows:

        Dissection     Immediate   24 hrs     6 months
        Total          49 (33%)    41 (28%)   18 (12%)
        Type A         33          27         10
        Type B         10          8          5
        Type C         4           4          2
        Type D         2           2          1

    At the 6-month follow-up study, restenosis was seen in 51/147 lesions (34%): 5/49 (10%) of lesions with dissection and 46/106 (43%) of lesions without. No cardiovascular events or recurrence of symptoms were recorded in the absence of restenosis. Therefore: 1) non-occlusive dissection after PTCA usually improves by 6 months; 2) in the absence of flow impairment and ischemia, this complication does not require any further intervention; 3) non-occlusive dissection is not associated with an increased incidence of restenosis.

    Domain-Aware Augmentations for Unsupervised Online General Continual Learning

    Continual Learning has been challenging, especially when dealing with unsupervised scenarios such as Unsupervised Online General Continual Learning (UOGCL), where the learning agent has no prior knowledge of class boundaries or task change information. While previous research has focused on reducing forgetting in supervised setups, recent studies have shown that self-supervised learners are more resilient to forgetting. This paper proposes a novel approach that enhances memory usage for contrastive learning in UOGCL by defining and using stream-dependent data augmentations together with some implementation tricks. Our proposed method is simple yet effective, achieves state-of-the-art results compared to other unsupervised approaches in all considered setups, and reduces the gap between supervised and unsupervised continual learning. Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning.
    Comment: Accepted to BMVC'2
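
    As a hedged illustration of the replay-based setting such a procedure plugs into, the following sketch shows a plain reservoir-sampling memory buffer; the class name and interface are hypothetical, and it does not implement the paper's domain-aware augmentations themselves.

```python
import random

class ReservoirBuffer:
    """Replay memory with reservoir sampling (a generic component of
    replay-based continual learning, not the paper's specific method)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0

    def add(self, x):
        # Reservoir sampling keeps each sample seen so far with equal probability.
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(x)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = x

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

# A training step would typically mix a stream batch with a memory batch and
# apply (possibly stream-dependent) augmentations to both before the update.
```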

    Learning Representations on the Unit Sphere: Application to Online Continual Learning

    We use the maximum a posteriori estimation principle for learning representations distributed on the unit sphere. We derive loss functions for the von Mises-Fisher distribution and the angular Gaussian distribution, both designed for modeling symmetric directional data. A noteworthy feature of our approach is that the learned representations are pushed toward fixed directions, allowing for a learning strategy that is resilient to data drift. This makes it suitable for online continual learning, which is the problem of training neural networks on a continuous data stream, where multiple classification tasks are presented sequentially so that data from past tasks are no longer accessible and data from the current task can be seen only once. To address this challenging scenario, we propose a memory-based representation learning technique equipped with our new loss functions. Our approach does not require negative data or knowledge of task boundaries and performs well with smaller batch sizes while being computationally efficient. We demonstrate with extensive experiments that the proposed method outperforms the current state-of-the-art methods on both standard evaluation scenarios and realistic scenarios with blurry task boundaries. For reproducibility, we use the same training pipeline for every compared method and share the code at https://t.ly/SQTj.
    Comment: 16 pages, 4 figures, under review
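
    With a fixed, shared concentration, a von Mises-Fisher likelihood over fixed unit-norm class prototypes reduces, up to constants, to a softmax over scaled cosine similarities. The PyTorch sketch below illustrates that simplified case only; the function name, the value of κ, and the random prototypes are assumptions, not the paper's exact pair of loss functions.

```python
import torch
import torch.nn.functional as F

def vmf_prototype_loss(z, targets, prototypes, kappa=10.0):
    """Cross-entropy over von Mises-Fisher log-likelihoods with a shared
    concentration kappa and fixed unit-norm class prototypes.

    With a common kappa, the vMF normalizing constant is identical for every
    class and cancels in the softmax, so the loss reduces to a softmax over
    scaled cosine similarities (an illustrative simplification).

    z: (B, d) raw embeddings; targets: (B,) class indices;
    prototypes: (C, d) fixed directions.
    """
    z = F.normalize(z, dim=1)              # project embeddings onto the unit sphere
    mu = F.normalize(prototypes, dim=1)    # ensure prototypes are unit-norm
    logits = kappa * (z @ mu.t())          # (B, C) scaled cosine similarities
    return F.cross_entropy(logits, targets)

# Illustrative usage with random data and random fixed prototypes
torch.manual_seed(0)
protos = torch.randn(10, 64)               # 10 classes, 64-dimensional sphere
loss = vmf_prototype_loss(torch.randn(32, 64), torch.randint(0, 10, (32,)), protos)
print(loss.item())
```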

    A proximal approach for constrained cosparse modelling

    The concept of cosparsity has recently been introduced in the arena of compressed sensing. In cosparse modelling, the ℓ0 (or ℓ1) cost of an analysis-based representation of the target signal is minimized under a data fidelity constraint. By taking advantage of recent advances in proximal algorithms, we show that it is possible to efficiently address a more general framework in which a convex block-sparsity measure is minimized under various convex constraints. The main contribution of this work is the introduction of a new epigraphical projection technique, which allows us to consider more flexible data fidelity constraints than the standard linear or quadratic ones. The validity of our approach is illustrated through an application to an image reconstruction problem in the presence of Poisson noise.
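
    As a concrete example of an epigraphical projection (not the specific projections derived in the paper), the sketch below computes the classical projection onto the epigraph of the Euclidean norm, i.e. the second-order cone; the function name is illustrative.

```python
import numpy as np

def project_epi_l2(v, zeta):
    """Projection of (v, zeta) onto the epigraph {(u, t): ||u||_2 <= t}.

    This is the classical projection onto the second-order (Lorentz) cone,
    given here as a generic example of an epigraphical projection.
    """
    nv = np.linalg.norm(v)
    if nv <= zeta:                      # already inside the epigraph
        return v.copy(), zeta
    if nv <= -zeta:                     # projects onto the apex
        return np.zeros_like(v), 0.0
    alpha = (nv + zeta) / (2.0 * nv)    # scaling factor of the projection
    return alpha * v, alpha * nv

# Illustrative usage: the projected pair satisfies ||u|| == t
u, t = project_epi_l2(np.array([3.0, 4.0]), 2.0)
print(u, t)
```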