
    Compressive Sensing with Tensorized Autoencoder

    Deep networks can be trained to map images into a low-dimensional latent space. In many cases, different images in a collection are articulated versions of one another; for example, the same object with different lighting, background, or pose. Furthermore, parts of images can often be corrupted by noise or missing entries. In this paper, our goal is to recover images without access to the ground-truth (clean) images, using the articulations as a structural prior on the data. Such recovery problems fall under the domain of compressive sensing. We propose to learn an autoencoder with tensor ring factorization on the embedding space to impose structural constraints on the data. In particular, we use a tensor ring structure in the bottleneck layer of the autoencoder that utilizes the soft labels of the structured dataset. We empirically demonstrate the effectiveness of the proposed approach for inpainting and denoising applications. The resulting method achieves better reconstruction quality compared to other generative prior-based self-supervised recovery approaches for compressive sensing.
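
    A minimal sketch of the idea described above, assuming a small fully connected autoencoder and a two-core tensor ring; the layer sizes, ranks, and masking setup are illustrative placeholders, not the authors' implementation:

```python
# Hedged sketch: an autoencoder whose bottleneck code is the contraction of a
# tensor ring (TR) formed by one image-dependent core and one shared core,
# trained self-supervised on the observed pixels of a corrupted image only.
import torch
import torch.nn as nn

class TRAutoencoder(nn.Module):
    def __init__(self, dim=784, rank=4, mode=8):
        super().__init__()
        self.rank, self.mode = rank, mode
        # encoder predicts one TR core per image; the other core is shared
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, rank * mode * rank))
        self.shared_core = nn.Parameter(0.1 * torch.randn(rank, mode, rank))
        self.decoder = nn.Sequential(nn.Linear(mode * mode, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        img_core = self.encoder(x).view(-1, self.rank, self.mode, self.rank)
        # cyclic (ring) contraction over both bond indices r and s
        z = torch.einsum('bras,scr->bac', img_core, self.shared_core).reshape(x.shape[0], -1)
        return self.decoder(z)

# Self-supervised recovery: fit only the observed entries of a corrupted image
model, x_obs = TRAutoencoder(), torch.rand(1, 784)
mask = (torch.rand(1, 784) > 0.5).float()              # 1 = observed pixel, 0 = missing
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    loss = ((model(x_obs * mask) - x_obs) * mask).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```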

    Generative Models for Low-Rank Video Representation and Reconstruction

    Finding a compact representation of videos is an essential component of almost every problem related to video processing or understanding. In this paper, we propose a generative model to learn compact latent codes that can efficiently represent and reconstruct a video sequence from its missing or under-sampled measurements. We use a generative network that is trained to map a compact code into an image. We first demonstrate that if a video sequence belongs to the range of the pretrained generative network, then we can recover it by estimating the underlying compact latent codes. Then we demonstrate that even if the video sequence does not belong to the range of a pretrained network, we can still recover the true video sequence by jointly updating the latent codes and the weights of the generative network. To avoid overfitting in our model, we regularize the recovery problem by imposing low-rank and similarity constraints on the latent codes of the neighboring frames in the video sequence. We use our methods to recover a variety of videos from compressive measurements at different compression rates. We also demonstrate that we can generate missing frames in a video sequence by interpolating the latent codes of the observed frames in the low-dimensional space.
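
    The recovery procedure described above can be sketched roughly as follows; the generator architecture, sensing matrix, and regularization weight are placeholders rather than the paper's actual setup:

```python
# Hedged sketch: recover video frames from compressive measurements y_t = A x_t
# by jointly optimizing per-frame latent codes and the generator weights, with a
# nuclear-norm penalty that keeps the matrix of latent codes low rank.
import torch
import torch.nn as nn

T, latent_dim, img_dim, m = 8, 32, 1024, 256            # frames, code size, pixels, measurements
generator = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, img_dim))
A = torch.randn(m, img_dim) / m ** 0.5                   # random sensing matrix (assumption)
x_true = torch.rand(T, img_dim)                          # stand-in for the unknown video
y = x_true @ A.T                                         # compressive measurements

Z = nn.Parameter(0.01 * torch.randn(T, latent_dim))      # one latent code per frame
opt = torch.optim.Adam([Z] + list(generator.parameters()), lr=1e-3)  # joint update
lam = 0.1
for _ in range(500):
    x_hat = generator(Z)
    data_fit = (x_hat @ A.T - y).pow(2).mean()
    low_rank = torch.linalg.matrix_norm(Z, ord='nuc')    # low-rank constraint on neighboring codes
    loss = data_fit + lam * low_rank
    opt.zero_grad(); loss.backward(); opt.step()
```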

    The molecular mechanisms of steroid hormone action

    (a) Clinical Aspects: The relationship between the molecular forms of oestrogen receptor (4S and 8S forms) in human breast cancer and subsequent response to hormone therapy is controversial. The data presented in this thesis show that several factors can affect the final sucrose density gradient profile of soluble oestrogen receptor under low-salt conditions. These include incubation time with steroid, temperature, ionic strength, extent of aggregation, and intratumoural variation. It is further shown that buffer made 50% in glycerol can be used to preserve the molecular form of oestrogen receptor in human breast tumour biopsies prior to and subsequent to transportation. The 8S form of the receptor was preserved for up to 3 months under these conditions. Most tumour biopsies analyzed exhibited the presence of the 8S form of the receptor, either alone or in conjunction with the 4S form; relatively few tumours exhibited predominantly the 4S form. Analysis of intratumoural sections revealed a loss of receptor concentration towards the centre of the tumour. The molecular forms found across a tumour usually remained constant. However, when both the 8S and 4S forms of the receptor were detected, the relative concentration of each form changed across the tumour. These results indicate that strict criteria, with respect to analysis of the molecular forms of oestrogen receptor, must be observed if these are to be related to the potential response of individual patients to endocrine therapy. (b) Receptor activation/transformation: The mechanism of receptor activation/transformation was studied in immature rat uterus, human breast carcinoma, and endometrial tissue. DNA-cellulose binding was characterized as an in vitro acceptor of activated receptor (3

    Factorized Tensor Networks for Multi-Task and Multi-Domain Learning

    Multi-task and multi-domain learning methods seek to learn multiple tasks/domains, jointly or one after another, using a single unified network. The key challenge and opportunity is to exploit shared information across tasks and domains to improve the efficiency of the unified network. The efficiency can be in terms of accuracy, storage cost, computation, or sample complexity. In this paper, we propose a factorized tensor network (FTN) that can achieve accuracy comparable to independent single-task/domain networks with a small number of additional parameters. FTN uses a frozen backbone network from a source model and incrementally adds task/domain-specific low-rank tensor factors to the shared frozen network. This approach can adapt to a large number of target domains and tasks without catastrophic forgetting. Furthermore, FTN requires a significantly smaller number of task-specific parameters compared to existing methods. We performed experiments on widely used multi-domain and multi-task datasets, using convolution-based architectures with different backbones as well as transformer-based architectures. We observed that FTN achieves accuracy similar to single-task/domain methods while using only a fraction of additional parameters per task.
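
    A rough sketch of the frozen-backbone-plus-low-rank-factors idea for a single linear layer; the class name, shapes, and rank are assumptions, not the FTN implementation:

```python
# Hedged sketch: a linear layer whose frozen shared weight is augmented with a
# task-specific low-rank factor U_t @ V_t, so each new task/domain adds only
# 2 * rank * dim extra parameters while the backbone stays untouched.
import torch
import torch.nn as nn

class LowRankAdaptedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, num_tasks, rank=4):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)
        self.shared.weight.requires_grad_(False)         # frozen backbone weight
        self.shared.bias.requires_grad_(False)
        self.U = nn.ParameterList([nn.Parameter(0.01 * torch.randn(out_dim, rank)) for _ in range(num_tasks)])
        self.V = nn.ParameterList([nn.Parameter(0.01 * torch.randn(rank, in_dim)) for _ in range(num_tasks)])

    def forward(self, x, task_id):
        delta = self.U[task_id] @ self.V[task_id]        # task-specific low-rank update
        return nn.functional.linear(x, self.shared.weight + delta, self.shared.bias)

layer = LowRankAdaptedLinear(128, 64, num_tasks=3)
out = layer(torch.randn(8, 128), task_id=1)              # only task 1's factors receive gradients
```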

    A Rare Presentation of Cardiac Aspergilloma in an Immunocompetent Host: Case Report and Literature Review

    Cardiac aspergilloma is exceptionally rare, with only a handful of cases reported, the majority of them in immunocompromised patients. Here, we present a case of cardiac aspergilloma involving the right and left ventricles in an immunocompetent patient who initially presented with acute limb ischemia. He was later found to have a cardiac mass, with histopathological diagnosis confirming Aspergillus species. Despite aggressive medical and surgical interventions, the patient had an unfavorable outcome due to low suspicion of invasive fungal endocarditis given his immunocompetent status. Cardiac aspergilloma should remain in the differential diagnosis of immunocompetent patients, as early clinical suspicion will result in earlier treatment and decreased mortality. Novel therapies are required to decrease mortality from this fatal disease in the future.

    Incremental Task Learning with Incremental Rank Updates

    Incremental task learning (ITL) is a category of continual learning that seeks to train a single network for multiple tasks (one after another), where training data for each task is only available during the training of that task. Neural networks tend to forget older tasks when they are trained for newer tasks; this property is often known as catastrophic forgetting. To address this issue, ITL methods use episodic memory, parameter regularization, masking and pruning, or extensible network structures. In this paper, we propose a new incremental task learning framework based on low-rank factorization. In particular, we represent the network weights for each layer as a linear combination of several rank-1 matrices. To update the network for a new task, we learn a rank-1 (or low-rank) matrix and add that to the weights of every layer. We also introduce an additional selector vector that assigns different weights to the low-rank matrices learned for the previous tasks. We show that our approach performs better than the current state-of-the-art methods in terms of accuracy and forgetting. Our method also offers better memory efficiency compared to episodic memory- and mask-based approaches. Our code will be available at https://github.com/CSIPlab/task-increment-rank-update.git
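
    A hedged sketch of the rank-1 update idea for one linear layer; the class and method names, initialization, and selector handling are illustrative guesses rather than the released code:

```python
# Hedged sketch: a layer whose weight is a combination of rank-1 matrices; each
# new task freezes the old factors, adds one new rank-1 factor, and learns a
# selector vector that re-weights the factors learned for earlier tasks.
import torch
import torch.nn as nn

class RankOneIncrementalLinear(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.us, self.vs, self.selectors = nn.ParameterList(), nn.ParameterList(), nn.ParameterList()

    def add_task(self):
        for p in list(self.us) + list(self.vs):
            p.requires_grad_(False)                      # keep factors of previous tasks fixed
        self.us.append(nn.Parameter(0.01 * torch.randn(self.out_dim, 1)))
        self.vs.append(nn.Parameter(0.01 * torch.randn(1, self.in_dim)))
        self.selectors.append(nn.Parameter(torch.ones(len(self.us))))  # weights over all factors so far

    def forward(self, x, task_id):
        s = self.selectors[task_id]
        W = sum(s[k] * (self.us[k] @ self.vs[k]) for k in range(task_id + 1))
        return x @ W.T

layer = RankOneIncrementalLinear(64, 32)
layer.add_task(); layer.add_task()                       # two tasks seen so far
out = layer(torch.randn(4, 64), task_id=1)
```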

    Non-Adversarial Video Synthesis with Learned Priors

    Most existing works in video synthesis focus on generating videos using adversarial learning. Despite their success, these methods often require an input reference frame or fail to generate diverse videos from the given data distribution, with little to no uniformity in the quality of the videos that can be generated. Different from these methods, we focus on the problem of generating videos from latent noise vectors, without any reference input frames. To this end, we develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning. Optimizing the input latent space along with the network weights allows us to generate videos in a controlled environment, i.e., we can faithfully generate all videos the model has seen during the learning process as well as new unseen videos. Extensive experiments on three challenging and diverse datasets demonstrate that our approach generates superior-quality videos compared to the existing state-of-the-art methods. Comment: Accepted to CVPR 2020.
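
    The joint optimization described above can be sketched as follows; the GRU, generator, and data shapes are stand-ins, not the authors' model:

```python
# Hedged sketch: jointly optimize per-video latent vectors, a recurrent network
# that unrolls them over time, and a frame generator, using a plain
# reconstruction loss instead of an adversarial objective.
import torch
import torch.nn as nn

num_videos, T, z_dim, h_dim, img_dim = 16, 8, 32, 64, 1024
latents = nn.Parameter(0.01 * torch.randn(num_videos, z_dim))   # learned input latent space
rnn = nn.GRU(z_dim, h_dim, batch_first=True)
generator = nn.Sequential(nn.Linear(h_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))
videos = torch.rand(num_videos, T, img_dim)                      # placeholder training videos

opt = torch.optim.Adam([latents] + list(rnn.parameters()) + list(generator.parameters()), lr=1e-3)
for _ in range(300):
    z_seq = latents.unsqueeze(1).expand(-1, T, -1)               # feed each video's code at every step
    h, _ = rnn(z_seq)                                            # temporal dynamics from the GRU
    frames = generator(h)                                        # (num_videos, T, img_dim)
    loss = (frames - videos).pow(2).mean()                       # non-adversarial objective
    opt.zero_grad(); loss.backward(); opt.step()
```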