Accuracy Booster: Performance Boosting using Feature Map Re-calibration
Convolutional Neural Networks (CNNs) have been extremely successful in solving
intensive computer vision tasks. The convolutional filters used in CNNs have
played a major role in this success, by extracting useful features from the
inputs. Recently, researchers have tried to boost the performance of CNNs by
re-calibrating the feature maps produced by these filters, e.g.,
Squeeze-and-Excitation Networks (SENets). These approaches have achieved better
performance by exciting the important channels or feature maps while
diminishing the rest. However, in the process, architectural complexity has
increased. We propose an architectural block that introduces much lower
complexity than the existing methods of CNN performance boosting while
performing significantly better than them. We carry out experiments on the
CIFAR, ImageNet and MS-COCO datasets, and show that the proposed block can
challenge the state-of-the-art results. On classification, our method boosts
the ResNet-50 architecture to perform comparably to ResNet-152, a network
three times as deep. We also show experimentally that
our method is not limited to classification but also generalizes well to other
tasks such as object detection.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV), 202
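The SENet-style re-calibration the abstract refers to can be sketched in a few lines. This is an illustrative NumPy sketch of the general squeeze-and-excitation idea, not the lower-complexity block the paper proposes; `recalibrate`, `w1`, and `w2` are hypothetical names:

```python
import numpy as np

def recalibrate(feature_maps, w1, w2):
    """SENet-style channel re-calibration (illustrative sketch, not the
    proposed block): squeeze each channel to a scalar, pass the vector
    through a small two-layer gate, and re-scale the channels by the
    resulting weights. feature_maps: (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = feature_maps.mean(axis=(1, 2))        # squeeze: global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)          # bottleneck with ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid: per-channel weight in (0, 1)
    return feature_maps * gate[:, None, None]       # excite/diminish the feature maps

rng = np.random.default_rng(0)
C, r = 8, 4
fmaps = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C, C // r))
w2 = rng.standard_normal((C // r, C))
out = recalibrate(fmaps, w1, w2)
```

The bottleneck ratio `r` trades gate capacity against parameter count, which is the complexity knob such blocks tune.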
Minimizing Supervision in Multi-label Categorization
Most images contain objects of multiple categories, so treating recognition
as single-label multi-class classification is not justified; we instead treat
it as a multi-label classification problem. In this paper, we further aim to
minimize the supervision required for multi-label classification.
Specifically, we investigate an effective class of approaches that associate a
weak localization with each category either in terms of the bounding box or
segmentation mask. Doing so improves the accuracy of multi-label
categorization. The approach we adopt is one of active learning, i.e.,
incrementally selecting a set of samples that need supervision based on the
current model, obtaining supervision for these samples, retraining the model
with the additional set of supervised samples and proceeding again to select
the next set of samples. A crucial concern is the choice of the set of samples.
In doing so, we provide a novel insight: no single measure yields a
consistently better selection criterion. We therefore propose a selection
criterion that consistently improves over the baseline by choosing the top-k
samples under a varied set of criteria. Using this
criterion, we are able to show that we can retain more than 98% of the fully
supervised performance with just 20% of the dataset (and more than 96% with
10%) on PASCAL VOC 2007 and 2012. Also, our proposed approach
consistently outperforms all other baseline metrics for all benchmark datasets
and model combinations.
Comment: Accepted in CVPR-W 202
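The selection step described above can be sketched as follows. The aggregation rule here (min-max normalize each criterion, sum, take the top k) is a hypothetical stand-in for the paper's criterion, and the scores are toy values:

```python
import numpy as np

def select_top_k(scores_by_criterion, k):
    """Active-learning selection sketch (a hypothetical aggregation rule,
    not the paper's exact criterion): min-max normalize each uncertainty
    criterion so no single scale dominates, sum them, and pick the top-k
    unlabeled samples to send for annotation."""
    n = len(next(iter(scores_by_criterion.values())))
    combined = np.zeros(n)
    for scores in scores_by_criterion.values():
        s = np.asarray(scores, dtype=float)
        span = s.max() - s.min()
        if span > 0:
            combined += (s - s.min()) / span
    return np.argsort(-combined)[:k]   # indices of the k highest combined scores

# Toy scores for 4 unlabeled samples under two criteria.
scores = {
    "entropy": [0.9, 0.1, 0.5, 0.8],
    "margin":  [0.7, 0.2, 0.6, 0.9],
}
picked = select_top_k(scores, k=2)
```

In the full loop, the selected samples receive bounding-box or mask supervision, the model is retrained, and selection repeats on the updated model.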
PEAR: Primitive enabled Adaptive Relabeling for boosting Hierarchical Reinforcement Learning
Hierarchical reinforcement learning (HRL) has the potential to solve complex
long-horizon tasks using temporal abstraction and increased exploration.
However, hierarchical agents are difficult to train due to inherent
non-stationarity. We present primitive enabled adaptive relabeling (PEAR), a
two-phase approach where we first perform adaptive relabeling on a few expert
demonstrations to generate efficient subgoal supervision, and then jointly
optimize HRL agents by employing reinforcement learning (RL) and imitation
learning (IL). We perform theoretical analysis to bound the
sub-optimality of our approach, and derive a generalized plug-and-play
framework for joint optimization using RL and IL. PEAR uses a handful of expert
demonstrations and makes minimal limiting assumptions on the task structure.
Additionally, it can be easily integrated with typical model free RL algorithms
to produce a practical HRL algorithm. We perform experiments on challenging
robotic environments and show that PEAR is able to solve tasks that require
long-term decision making. We empirically show that PEAR exhibits improved
performance and sample efficiency over previous hierarchical and
non-hierarchical approaches. We also perform real world robotic experiments on
complex tasks and demonstrate that PEAR consistently outperforms the baselines.
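The adaptive relabeling phase can be sketched in a few lines. Everything here is hypothetical scaffolding: `can_reach` stands in for whatever reachability estimate (e.g. a lower-level value-function check) the actual method uses, and the demo is a toy 1-D trajectory:

```python
def adaptive_relabel(demo_states, can_reach, horizon):
    """PEAR-style adaptive relabeling sketch (hypothetical interfaces):
    walk along an expert demonstration and emit, at each step, the furthest
    state the current lower-level primitive is judged able to reach within
    a window of `horizon` steps; the emitted states become subgoal
    supervision for the higher-level policy."""
    subgoals, start = [], 0
    while start < len(demo_states) - 1:
        end = start
        last = min(start + horizon, len(demo_states) - 1)
        for j in range(start + 1, last + 1):
            if can_reach(demo_states[start], demo_states[j]):
                end = j                  # furthest reachable state so far
        if end == start:                 # primitive cannot progress at all
            end = start + 1              # fall back to a one-step subgoal
        subgoals.append(demo_states[end])
        start = end
    return subgoals

# Toy 1-D demo: the primitive can cover at most 2 units per subgoal.
demo = [0, 1, 2, 3, 4, 5, 6]
goals = adaptive_relabel(demo, can_reach=lambda s, g: abs(g - s) <= 2, horizon=3)
```

Because the relabeling adapts to the primitive's current competence, the subgoal supervision stays achievable as training progresses, which is what the joint RL + IL objective then exploits.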
CRISP: Curriculum inducing Primitive Informed Subgoal Prediction
Hierarchical reinforcement learning is a promising approach that uses
temporal abstraction to solve complex long-horizon problems. However,
simultaneously learning a hierarchy of policies is unstable as it is
challenging to train the higher-level policy when the lower-level primitive is
non-stationary. In this paper, we propose a novel hierarchical algorithm,
CRISP, that generates a curriculum of achievable subgoals for evolving
lower-level primitives using reinforcement learning and imitation learning.
The lower-level primitive periodically performs data relabeling on a handful
of expert demonstrations using our primitive-informed parsing approach to
handle
non-stationarity. Since our approach uses a handful of expert demonstrations,
it is suitable for most robotic control tasks. Experimental evaluations on
complex robotic maze navigation and robotic manipulation environments show that
inducing hierarchical curriculum learning significantly improves sample
efficiency, and results in efficient goal conditioned policies for solving
temporally extended tasks. We perform real world robotic experiments on complex
manipulation tasks and demonstrate that CRISP consistently outperforms the
baselines.
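A toy sketch of the curriculum idea: as the primitive improves, the same demonstration is re-parsed into fewer, more ambitious subgoals. `reach_radius` is a hypothetical scalar proxy for the primitive's current competence; the real method parses with the learned primitive itself:

```python
def crisp_curriculum(demo_states, reach_radius):
    """CRISP-flavoured sketch (hypothetical interface): re-parse an expert
    demonstration into subgoals the current lower-level primitive can reach.
    As the primitive improves (larger reach_radius), the parse yields fewer,
    more ambitious subgoals, giving a curriculum that tracks the evolving
    primitive and mitigates non-stationarity."""
    subgoals, anchor, prev = [], demo_states[0], demo_states[0]
    for state in demo_states[1:]:
        if abs(state - anchor) > reach_radius:   # next state exceeds competence
            subgoals.append(prev)                # cut subgoal at last reachable state
            anchor = prev
        prev = state
    if not subgoals or subgoals[-1] != demo_states[-1]:
        subgoals.append(demo_states[-1])         # always end at the final goal
    return subgoals

demo = [0, 1, 2, 3, 4, 5, 6, 7, 8]
early = crisp_curriculum(demo, reach_radius=2)   # weak primitive: dense subgoals
late = crisp_curriculum(demo, reach_radius=5)    # stronger primitive: sparser
```

Re-running the parse periodically is what turns a static set of demonstrations into a curriculum rather than a fixed labeling.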
FALF ConvNets: Fatuous auxiliary loss based filter-pruning for efficient deep CNNs
Obtaining efficient Convolutional Neural Networks (CNNs) is imperative to enable their application to a wide variety of tasks (classification, detection, etc.). While several methods have been proposed to solve this problem, we propose a novel strategy that is orthogonal to those proposed so far. We hypothesize that if we add a fatuous auxiliary task to a network that aims to solve a semantic task such as classification or detection, the filters devoted to solving this frivolous task would not be relevant for solving the main task of concern. These filters can be pruned, and pruning them does not reduce performance on the original task. We demonstrate that this strategy is not only successful but in fact allows improved performance for a variety of tasks such as object classification, detection and action recognition. An interesting observation is that the task needs to be fatuous so that no semantically meaningful filters are relevant for solving it. We thoroughly evaluate our proposed approach on different architectures (LeNet, VGG-16, ResNet, Faster RCNN, SSD-512, C3D, and MobileNet V2) and datasets (MNIST, CIFAR, ImageNet, GTSDB, COCO, and UCF101) and demonstrate its generalizability through extensive experiments. Moreover, our compressed models can be used at run-time without requiring any special libraries or hardware. Our model compression method reduces the number of FLOPS by an impressive factor of 6.03X and GPU memory footprint by more than 17X for VGG-16, significantly outperforming other state-of-the-art filter pruning methods. We demonstrate the usability of our approach for 3D convolutions and various vision tasks such as object classification, object detection, and action recognition.
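The selection rule implied by the hypothesis above can be sketched directly. `aux_saliency` is a hypothetical per-filter score of how devoted each filter is to the fatuous task (e.g. a gradient magnitude of the auxiliary loss); the sketch shows only the pruning decision, not the training:

```python
import numpy as np

def prune_by_auxiliary_saliency(aux_saliency, prune_fraction):
    """FALF-style selection sketch: filters most devoted to the fatuous
    auxiliary task are, by the paper's hypothesis, least relevant to the
    main semantic task, so they are the ones marked for removal.
    Returns a boolean keep-mask over the filters."""
    sal = np.asarray(aux_saliency, dtype=float)
    n_prune = int(len(sal) * prune_fraction)
    order = np.argsort(-sal)                 # most aux-devoted filters first
    keep = np.ones(len(sal), dtype=bool)
    keep[order[:n_prune]] = False            # prune the top aux-devoted filters
    return keep

# Toy: 8 filters; filters 1 and 5 are strongly devoted to the fatuous task.
sal = [0.1, 0.9, 0.2, 0.1, 0.3, 0.8, 0.2, 0.1]
mask = prune_by_auxiliary_saliency(sal, prune_fraction=0.25)
```

Because whole filters are removed, the compressed network is a smaller dense network, which is why no special libraries or hardware are needed at run-time.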
Prevalence of subclinical hypothyroidism in patients of chronic liver disease
Background: Chronic liver disease (CLD) is a continuous process of inflammation, destruction, and regeneration of liver parenchyma, which leads to fibrosis and cirrhosis. Liver plays an essential physiological role in thyroid hormone activation and inactivation, transport, and metabolism, as well as the synthesis of thyroid binding globulin. A complex relationship exists between thyroid and liver in health and disease.
Methods: 103 patients of CLD were included in this study from December 2020 to September 2022. They were classified as per Child-Pugh (CTP) scoring after clinical assessment and investigations. Thyroid function profile was measured for all patients.
Results: Among 103 patients, 8 (7.76%) had overt hypothyroidism and 28 (27.18%) had subclinical hypothyroidism, while 67 (65.04%) had a normal thyroid profile. There was a significant correlation between CTP class and hypothyroidism status (p value <0.001): 25 (56.81%) patients of CTP class C had subclinical hypothyroidism, while 3 (7.5%) patients of CTP class B had subclinical hypothyroidism and no patient of CTP class A had subclinical hypothyroidism.
Conclusions: Our study found an increased prevalence of subclinical hypothyroidism in CLD patients, which increased with the severity of CLD as assessed by CTP class.
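As an outside sanity check (not part of the study), the reported p < 0.001 association can be approximately reproduced from the abstract's own numbers. The per-class totals below are inferred from the reported percentages (25/44 = 56.81%, 3/40 = 7.5%, class A = 103 - 44 - 40 = 19) and are therefore an assumption:

```python
# Contingency table (subclinical, not subclinical) per CTP class, with
# class sizes inferred from the abstract's percentages (an assumption).
observed = {"A": (0, 19), "B": (3, 37), "C": (25, 19)}

row_totals = {k: sum(cells) for k, cells in observed.items()}
col_totals = [sum(v[i] for v in observed.values()) for i in (0, 1)]
n = sum(row_totals.values())                     # 103 patients in total

# Pearson chi-square statistic for the 3x2 table.
chi2 = 0.0
for k, cells in observed.items():
    for i, obs in enumerate(cells):
        expected = row_totals[k] * col_totals[i] / n
        chi2 += (obs - expected) ** 2 / expected

# With df = (3-1)*(2-1) = 2, the p = 0.001 critical value is 13.82; chi2
# here lands well above it, consistent with the reported p < 0.001.
```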