DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs
We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF was the lack of sufficient training data and of an oracle to provide the ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training, and to circumvent the need for ground-truth images we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as the loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
Comment: ICCV 201
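To make the unsupervised setup concrete, the following is a minimal sketch of training a fusion CNN against a no-reference loss in PyTorch. The network shape, the toy loss, and the names FusionCNN and no_reference_loss are assumptions for illustration; the paper's actual architecture and quality metric are different.

```python
# Illustrative sketch only: a tiny unsupervised exposure-fusion training loop.
# FusionCNN and no_reference_loss are placeholders, not the paper's implementation.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Tiny CNN mapping an under/over-exposed pair (luminance channels) to one fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2), nn.Sigmoid(),
        )

    def forward(self, under, over):
        return self.net(torch.cat([under, over], dim=1))

def no_reference_loss(fused, under, over):
    # Placeholder stand-in for a no-reference quality metric: keep the fused image
    # close to a blend of the inputs. Not the metric used in the paper.
    blend = 0.5 * (under + over)
    return ((fused - blend) ** 2).mean()

model = FusionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for step in range(10):                      # toy loop on random patches
    under = torch.rand(4, 1, 64, 64)        # underexposed inputs
    over = torch.rand(4, 1, 64, 64)         # overexposed inputs
    fused = model(under, over)
    loss = no_reference_loss(fused, under, over)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point the sketch shows is that no ground-truth fused image appears anywhere; the loss is computed only from the network output and the input exposures.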
LowDINO -- A Low Parameter Self Supervised Learning Model
This research explores the possibility of designing a neural network architecture that allows small networks to adopt the properties of huge networks, which have shown success in self-supervised learning (SSL), for downstream tasks such as image classification and segmentation. Previous studies have shown that convolutional neural networks (ConvNets) provide an inherent inductive bias, which is crucial for learning representations in deep learning models. To reduce the number of parameters, attention mechanisms are introduced through MobileViT blocks, resulting in a model with fewer than 5 million parameters. The model is trained using self-distillation with a momentum encoder in a student-teacher architecture, where the teacher weights come from vision transformers (ViTs) of recent state-of-the-art SSL models. The model is trained on the ImageNet1k dataset. This research provides an approach for designing smaller, more efficient neural network architectures that can perform SSL tasks comparably to heavy models.
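As a point of reference for the "self-distillation with momentum encoder" ingredient, here is a minimal generic sketch of an EMA-teacher self-distillation step in PyTorch. The model classes, temperatures, and momentum value are assumptions; in LowDINO itself the teacher is derived from pretrained ViT SSL models rather than being a pure EMA copy of the student.

```python
# Illustrative sketch only: generic momentum-encoder (EMA teacher) self-distillation.
# All sizes and hyperparameters are assumed, not taken from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))  # stand-in for a small network
teacher = copy.deepcopy(student)                                     # teacher starts as a copy
for p in teacher.parameters():
    p.requires_grad = False                                          # teacher gets no gradients

opt = torch.optim.SGD(student.parameters(), lr=0.1)
momentum = 0.996                                                     # assumed EMA coefficient

for step in range(10):                                               # toy loop on random "two views"
    view1 = torch.rand(8, 3, 32, 32)
    view2 = torch.rand(8, 3, 32, 32)
    s_out = F.log_softmax(student(view1) / 0.1, dim=-1)              # student, sharper temperature
    with torch.no_grad():
        t_out = F.softmax(teacher(view2) / 0.04, dim=-1)             # teacher on the other view
    loss = -(t_out * s_out).sum(dim=-1).mean()                       # cross-entropy between outputs
    opt.zero_grad(); loss.backward(); opt.step()

    with torch.no_grad():                                            # EMA update of the teacher
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(momentum).add_(sp, alpha=1 - momentum)
```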
Continual Learning with Dependency Preserving Hypernetworks
Humans learn continually throughout their lifespan by accumulating diverse knowledge and fine-tuning it for future tasks. When presented with a similar goal, neural networks suffer from catastrophic forgetting if data distributions across sequential tasks are not stationary over the course of learning. An effective approach to address such continual learning (CL) problems is to use hypernetworks, which generate task-dependent weights for a target network. However, the continual learning performance of existing hypernetwork-based approaches is limited by the assumption, made to maintain parameter efficiency, that weights are independent across layers. To address this limitation, we propose a novel approach that uses a dependency-preserving hypernetwork to generate weights for the target network while also maintaining parameter efficiency. We use a recurrent neural network (RNN) based hypernetwork that can generate layer weights efficiently while allowing for dependencies across them. In addition, we propose novel regularisation and network-growth techniques for the RNN-based hypernetwork to further improve continual learning performance. To demonstrate the effectiveness of the proposed methods, we conducted experiments on several image classification continual learning tasks and settings. We found that the proposed methods based on the RNN hypernetworks outperformed the baselines in all these CL settings and tasks.
Comment: Accepted at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 202
Fabrication and Evaluation of Low Density Glass-Epoxy Composites for Microwave Absorption Applications
In the present work, the fabrication and evaluation of low-density glass-epoxy (LDGE) composites suitable for absorbing a minimum of 80 per cent of incident microwave energy in the 8 GHz to 12 GHz (X-band) range is reported. LDGE composites having different densities were fabricated using a novel method of partially replacing conventional S-glass fabric with low-density glass (LDG) layers as the reinforcement material. Flexural strength, interlaminar shear strength and impact strength of the prepared LDGE composites were evaluated and compared with conventional high-density glass-epoxy (HDGE) composites to understand the changes in these properties due to the replacement of S-glass fabrics with LDG layers. To convert the LDGE structures into radar-absorbing structures, controlled quantities of milled carbon fibers were impregnated, as these conducting fibers can act as dielectric lossy materials that absorb the incident microwave energy by interfacial polarisation. Electromagnetic properties, namely loss tangent and reflection loss, of the carbon-fiber-impregnated LDGE composites were evaluated in the 8 GHz to 12 GHz frequency region and compared with HDGE composites. Both LDGE and HDGE composites showed loss tangent values of more than 1.1 and a minimum of 80 per cent absorption of incident microwave energy. The results thus indicate that LDGE composites can show EM properties on par with HDGE composites. Furthermore, these LDGE composites successfully withstood low-velocity impacts (4.5 m/s) with 50 J incident energy. Owing to their good mechanical properties and light weight, LDGE composites can be used as a replacement for conventional HDGE composites to realise radar-absorbing structures.
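As a quick back-of-the-envelope check (not from the paper), the "80 per cent absorption" target can be related to a reflection-loss threshold, assuming a metal-backed absorber with negligible transmission so that whatever is not absorbed is reflected.

```python
# Hedged sketch: convert an absorbed power fraction to reflection loss in dB,
# assuming negligible transmission through the structure.
import math

def reflection_loss_db(absorbed_fraction):
    reflected = 1.0 - absorbed_fraction      # power fraction reflected back
    return 10.0 * math.log10(reflected)      # RL in dB (more negative = less reflection)

print(reflection_loss_db(0.80))              # about -7.0 dB, the 80 per cent threshold
print(reflection_loss_db(0.90))              # about -10.0 dB
```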
Pull-out Behaviour of Hooked End Steel Fibres Embedded in Ultra-high Performance Mortar with Various W/B Ratios
This paper presents the fibre-matrix interfacial properties of hooked-end steel fibres embedded in ultra-high performance mortars with various water/binder (W/B) ratios. The principal objective was to improve the bond behaviour, in terms of bond strength, by reducing the W/B ratio to a minimum. Results show that a decrease in W/B ratio has a significant effect on the bond-slip behaviour of both types of 3D fibres, especially when the W/B ratio was reduced from 0.25 to 0.15. Furthermore, the improvement in maximum pullout load and total pullout work is found to be more prominent for 3D fibres with a larger diameter than for fibres with a smaller diameter. In contrast, increasing the embedded length of the 3D fibres did not improve the maximum pullout load, but it did increase the total pullout work.
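For readers unfamiliar with the quantities compared above: "total pullout work" is commonly taken as the area under the measured pullout load-slip curve, which can be estimated by trapezoidal integration. The sketch below uses made-up data purely to show the calculation; it is not from the paper.

```python
# Hedged sketch: estimate maximum pullout load and total pullout work from a
# (fabricated) load-slip record using trapezoidal integration.
import numpy as np

slip_mm = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])          # end slip (mm), made-up values
load_n = np.array([0.0, 250.0, 420.0, 380.0, 200.0, 50.0])  # pullout load (N), made-up values

# trapezoidal rule: sum of 0.5 * (P_i + P_{i+1}) * (s_{i+1} - s_i)
total_pullout_work = np.sum(0.5 * (load_n[1:] + load_n[:-1]) * np.diff(slip_mm))
max_pullout_load = load_n.max()

print(f"max pullout load: {max_pullout_load:.0f} N")
print(f"total pullout work: {total_pullout_work:.0f} N*mm")
```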
Collision Avoidance Device For Visually Impaired (C.A.D.V.I)
The white cane is the most successful and widely used travel aid for the blind. This purely mechanical device is used to detect obstacles on the ground, uneven surfaces, holes, steps, and other hazards. The main problem with the white cane is that users must be trained. In addition, the device requires the user to actively scan the small area ahead of him/her, and it cannot detect obstacles beyond its reach of 1-2 m. Another drawback is that obstacles can be detected only by contact, which can be inconvenient for the user and the people around the user. Guide dogs are very capable guides for the blind, but they require extensive training as well and are extremely expensive. The Collision Avoidance Device for the Visually Impaired is a hands-free, hassle-free pedestrian navigation system. It integrates several technologies, including wearable computers, image processing, audio processing, and sound navigation and ranging. The device aims to enable a visually impaired person to walk along busy roads and identify obstacles without trouble. It uses a digital camera to capture image frames directly in front of the user; the processor applies image processing to locate obstacles, and a set of vibration motors warns the user. The system also provides audio responses. The sonar sensors detect obstacles in the user's immediate vicinity, and upon detection the vibration motors alert him/her to their presence. Image processing provides the lateral distance between the obstacle and the user, giving distance perception. Being a real-time system, it accounts for real-time changes by processing current frames and reacts with instant responses.
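One possible way to realise the sonar-to-vibration warning described above is a simple mapping from measured distance to motor intensity. The thresholds, range, and function name below are assumptions for illustration, not the device's actual firmware.

```python
# Hedged sketch: map a sonar distance reading to a vibration-motor intensity in [0, 1].
def vibration_intensity(distance_m, max_range_m=2.0):
    """Closer obstacle -> stronger vibration; nothing within range -> no vibration."""
    if distance_m is None or distance_m >= max_range_m:
        return 0.0
    # linear ramp from 0 at max_range_m to 1 at contact
    return min(1.0, max(0.0, 1.0 - distance_m / max_range_m))

for d in [2.5, 1.5, 0.8, 0.2]:                      # example sonar readings in metres
    print(f"{d:>4} m -> intensity {vibration_intensity(d):.2f}")
```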
IoT - based Sustained Belt for Visually Impaired People
This paper presents an IoT-based sustained belt for visually impaired people. It uses a Raspberry Pi together with image-processing techniques to help a blind person identify and recognise people and objects in front of them, navigate everyday life, and enjoy a better lifestyle like any other person. The key techniques are real-time image capture, object recognition, and image processing: an image is first captured, then rectified through image processing, and the content recognised in the rectified image is converted into audio output. The scope of this paper is to scale the system up to a level where a blind person can lead a normal lifestyle, just like any other person, without hassle. The belt can also act as a companion: it has built-in GPS and, being a small computer, can answer general questions, which is useful because blind people who cannot see things often have many questions and are curious about them. What differentiates this work from the usual white cane is that the cane only helps the user avoid crashing into things; the focus here instead is on supporting a lifestyle as normal as any other person's.
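To make the capture-recognise-speak pipeline concrete, here is a minimal sketch of the kind of loop such a device could run on a Raspberry Pi. The detector (detect_objects) is a hypothetical placeholder; a real device would plug in an actual recognition model. It assumes OpenCV (cv2) and the pyttsx3 text-to-speech package are installed and a camera is attached.

```python
# Hedged sketch: capture a frame, run a placeholder recogniser, speak the result.
import cv2
import pyttsx3

def detect_objects(frame):
    """Hypothetical stand-in for an object-recognition model; returns label strings."""
    return ["person"] if frame.mean() > 100 else []   # toy rule, not real recognition

camera = cv2.VideoCapture(0)          # default camera device
speaker = pyttsx3.init()              # offline text-to-speech engine

for _ in range(5):                    # a few iterations instead of an endless loop
    ok, frame = camera.read()
    if not ok:
        break                         # no camera or read failure
    labels = detect_objects(frame)
    if labels:
        speaker.say("Ahead: " + ", ".join(labels))
        speaker.runAndWait()

camera.release()
```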
A carbon market sensitive optimization model for integrated forward–reverse logistics
Globalized supply chains, volatile energy and material prices, increased carbon regulations and competitive marketing pressure for environmental sustainability are driving supply chain decision makers to reduce carbon emissions. Enterprises face the necessity and the challenge of implementing strategies to reduce the environmental impact of their supply chains in order to remain competitive. One of the most important strategic issues in this context is the configuration of the logistics network. The design of an optimal supply chain network plays a vital role in determining the total carbon footprint across the supply chain as well as the total cost. Therefore, the logistics network should be designed so that it reduces both the cost and the carbon footprint across the supply chain. In this context, this research proposes a quantitative optimization model for integrated forward–reverse logistics with carbon-footprint considerations, integrating carbon emissions into a quantitative operational decision-making model with regard to facility layout decisions. The proposed research incorporates carbon emission parameters into various decision variables and modifies the traditional integrated forward/reverse logistics model into a quantitative operational decision-making model that minimizes both the total cost and the carbon footprint. The proposed model investigates the extent to which carbon reduction requirements can be addressed under a particular set of parameters, such as customer demand and rate of return of products, by selecting a proper policy as an alternative to costly investment in carbon-reducing technologies. To solve the quantitative model, this research implements a modified and efficient forest data structure to derive the optimal network configuration, minimizing both the cost and the total carbon footprint of the network. A comparative analysis shows that the proposed approach outperforms a conventional Genetic Algorithm (GA) for large problem sizes.
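To illustrate the flavour of such a cost-plus-carbon network model, the following is a tiny weighted-sum sketch in PuLP, restricted to a forward-only slice for brevity. All data values, the weights, and the variable names are made-up assumptions; the paper's full model, its forest data structure, and its reverse-flow components are not reproduced here.

```python
# Hedged sketch: choose facilities and flows to minimise a weighted sum of cost and CO2.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

facilities = ["F1", "F2"]
customers = ["C1", "C2", "C3"]
demand = {"C1": 40, "C2": 30, "C3": 50}
capacity = {"F1": 80, "F2": 70}
open_cost = {"F1": 500, "F2": 400}
ship_cost = {("F1", "C1"): 4, ("F1", "C2"): 6, ("F1", "C3"): 5,
             ("F2", "C1"): 6, ("F2", "C2"): 3, ("F2", "C3"): 4}
ship_co2 = {k: 0.5 * v for k, v in ship_cost.items()}   # assumed emissions per unit shipped
w_cost, w_co2 = 1.0, 20.0                               # assumed weights (carbon price)

prob = LpProblem("forward_logistics_with_carbon", LpMinimize)
open_f = {f: LpVariable(f"open_{f}", cat=LpBinary) for f in facilities}
flow = {(f, c): LpVariable(f"flow_{f}_{c}", lowBound=0) for f in facilities for c in customers}

cost = (lpSum(open_cost[f] * open_f[f] for f in facilities)
        + lpSum(ship_cost[f, c] * flow[f, c] for f in facilities for c in customers))
co2 = lpSum(ship_co2[f, c] * flow[f, c] for f in facilities for c in customers)
prob += w_cost * cost + w_co2 * co2                     # weighted-sum objective

for c in customers:                                     # satisfy every customer's demand
    prob += lpSum(flow[f, c] for f in facilities) == demand[c]
for f in facilities:                                    # capacity only at open facilities
    prob += lpSum(flow[f, c] for c in customers) <= capacity[f] * open_f[f]

prob.solve()
print({f: open_f[f].value() for f in facilities})
```

Sweeping the carbon weight w_co2 traces out the cost-versus-emissions trade-off that the abstract's policy analysis is concerned with.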