56 research outputs found

    Master of Science

    Get PDF
    Coal has an inherent tendency to combust in the presence of oxygen, a phenomenon termed spontaneous combustion of coal. Pressure balancing is a tool that can be used to address the problem: it is the technique of equalizing pressure differentials between two areas, such as a mine gob and its surroundings, so that the flow of air, and hence the ingress of oxygen into the caved area, is reduced or eliminated. Critical factors affecting spontaneous combustion of coal are thoroughly evaluated. These factors include not only the quality of the coal but also geological features and the mining methods used to extract it. The propensity of coal to self-heating can be determined using software such as SPONCOM, developed for this purpose. There are two types of pressure balancing: passive and active. Passive balancing is desirable, as it can be achieved using passive means such as regulators and fans. Dynamic pressure balancing is another type of passive pressure balancing in which chambers are established and pressurized using airflow already existing in the mine. If a passive balancing technique is not adequate, active pressure balancing can be used; in the active method, inert gas is used to pressurize the chamber. The University of Utah ventilation laboratory model was upgraded to include an atmospheric monitoring system, through which all vital ventilation control parameters can be observed and recorded. The model was used to conduct several experiments to equalize the pressure differential across the simulated gob. Furthermore, a sub-routine was developed to accomplish pressure balancing automatically. Three underground coal mines were visited as part of this study: one room-and-pillar and two longwall mines. The objective was to conduct pressure-quantity surveys and to determine the pressure differentials across the stoppings used to isolate the worked-out areas.
The results of the surveys conducted in the room-and-pillar mine are presented and discussed in this study. The study concludes with an inventory of hazards related to spontaneous combustion, control measures, and risk analyses to identify the critical factors.
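The automated balancing sub-routine described above can be illustrated with a minimal control-step sketch. The function name, units, tolerance, and action labels below are hypothetical illustrations, not taken from the thesis:

```python
def balance_step(p_gob, p_surround, tolerance=2.0):
    """One control step of a hypothetical pressure-balancing loop.

    p_gob, p_surround: pressures (Pa) on either side of a stopping.
    tolerance: differential (Pa) considered 'balanced'.
    Returns the signed differential and a suggested action.
    """
    dp = p_surround - p_gob
    if abs(dp) <= tolerance:
        action = "hold"  # within tolerance: no driver for airflow into the gob
    elif dp > 0:
        action = "pressurize_gob_chamber"  # air would otherwise leak into the gob
    else:
        action = "vent_gob_chamber"  # gob atmosphere would leak toward the workings
    return dp, action
```

In a passive scheme the "action" would be realized by adjusting regulators; in an active scheme, by injecting or releasing inert gas in the balancing chamber.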

    GOPro: Generate and Optimize Prompts in CLIP using Self-Supervised Learning

    Full text link
    Large-scale foundation models, such as CLIP, have demonstrated remarkable success in visual recognition tasks by embedding images in a semantically rich space. Self-supervised learning (SSL) has also shown promise in improving visual recognition by learning invariant features. However, combining CLIP with SSL faces challenges due to the multi-task framework that blends CLIP's contrastive loss and SSL's loss, including difficulties with loss weighting and inconsistency among different views of images in CLIP's output space. To overcome these challenges, we propose a prompt learning-based model called GOPro, a unified framework that ensures similarity between various augmented views of input images in a shared image-text embedding space, using a pair of learnable image and text projectors atop CLIP, to promote invariance and generalizability. To automatically learn such prompts, we leverage the visual content and style primitives extracted from pre-trained CLIP and adapt them to the target task. In addition to CLIP's cross-domain contrastive loss, we introduce a visual contrastive loss and a novel prompt consistency loss, considering the different views of the images. GOPro is trained end-to-end on all three loss objectives, combining the strengths of CLIP and SSL in a principled manner. Empirical evaluations demonstrate that GOPro outperforms state-of-the-art prompting techniques on three challenging domain generalization tasks across multiple benchmarks by a significant margin. Our code is available at https://github.com/mainaksingha01/GOPro. Comment: Accepted at BMVC 202
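As a rough illustration of how the three objectives combine, here is a minimal pure-Python sketch; the function names, the cosine-based form of the consistency term, and the unit weights are assumptions for illustration, not the paper's actual implementation:

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def prompt_consistency_loss(prompt_view1, prompt_view2):
    """Penalize disagreement between prompt embeddings of two augmented views."""
    return 1.0 - cosine(prompt_view1, prompt_view2)


def gopro_objective(l_clip, l_visual, l_consistency, weights=(1.0, 1.0, 1.0)):
    """End-to-end objective: weighted sum of the three losses (weights illustrative)."""
    return sum(w * l for w, l in zip(weights, (l_clip, l_visual, l_consistency)))
```

Identical views incur zero consistency loss, so only the contrastive terms drive the update in that case.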

    Economics of Rice Production in Pyuthan District of Nepal

    Full text link
    A study was conducted in Pyuthan district to assess the profitability of rice production during the summer season of 2018-2019. Altogether, 70 respondents were selected randomly and surveyed with a semi-structured interview schedule. The results revealed that the average land holding was 0.45 hectare and the average rice cultivation area was 0.34 hectare. On the basis of average rice cultivation area, farmers were categorized as small (39) and large (31). Costs and returns were calculated for both categories, and a t-test was used to compare the mean input costs between small and large farmers. The cost of agronomic operations was found to be far higher (more than 70%) in both categories compared to the cost of inputs. Rice grain and straw contributed 72.65% and 27.35% of the overall return, respectively. The benefit-cost (B:C) ratio was greater among large farmers. The average B:C ratio was 1.51, fairly higher than the 1.14 reported in Dang district, indicating that investment in rice production is expected to deliver a positive net return to farmers in the study area. In a nutshell, rice cultivation is an important enterprise that should be encouraged, given that it is a major staple crop.
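The benefit-cost arithmetic reported above can be sketched as a small helper; the figures in the usage example are chosen only to reproduce the reported average B:C ratio of 1.51 and are not the study's raw data:

```python
def benefit_cost_ratio(grain_return, straw_return, total_cost):
    """Gross return (grain plus straw) divided by total production cost."""
    return (grain_return + straw_return) / total_cost


def return_shares(grain_return, straw_return):
    """Fraction of the gross return contributed by grain and by straw."""
    total = grain_return + straw_return
    return grain_return / total, straw_return / total
```

For example, a gross return of 151 against a total cost of 100 (in any common currency unit) gives a B:C ratio of 1.51, matching the study-area average.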

    C-SAW: Self-Supervised Prompt Learning for Image Generalization in Remote Sensing

    Full text link
    We focus on domain and class generalization problems in analyzing optical remote sensing images, using the large-scale pre-trained vision-language model (VLM), CLIP. While contrastively trained VLMs show impressive zero-shot generalization performance, their effectiveness is limited when dealing with diverse domains during training and testing. Existing prompt learning techniques overlook the importance of incorporating domain and content information into the prompts, which results in a drop in performance while dealing with such multi-domain data. To address these challenges, we propose a solution that ensures domain-invariant prompt learning while enhancing the expressiveness of visual features. We observe that CLIP's vision encoder struggles to identify contextual image information, particularly when image patches are jumbled up. This issue is especially severe in optical remote sensing images, where land-cover classes exhibit well-defined contextual appearances. To this end, we introduce C-SAW, a method that complements CLIP with a self-supervised loss in the visual space and a novel prompt learning technique that emphasizes both visual domain and content-specific features. We keep the CLIP backbone frozen and introduce a small set of projectors for both the CLIP encoders to train C-SAW contrastively. Experimental results demonstrate the superiority of C-SAW across multiple remote sensing benchmarks and different generalization tasks. Comment: Accepted in ACM ICVGIP 202
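The jumbled-patch observation lends itself to a simple pretext task: shuffle the patch order and keep the permutation as a self-supervised target. The sketch below is an illustration of that idea with hypothetical names and a fixed seed, not the paper's code:

```python
import random


def jumble_patches(patches, seed=0):
    """Shuffle patch order; the permutation serves as a self-supervised target.

    Returns (jumbled_patches, order), where jumbled_patches[k] == patches[order[k]].
    A model trained to recover `order` must attend to contextual layout.
    """
    rng = random.Random(seed)
    order = list(range(len(patches)))
    rng.shuffle(order)
    return [patches[i] for i in order], order
```

Recovering the original sequence from `(jumbled, order)` is exact, which makes the permutation a clean supervisory signal.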

    HAVE-Net: Hallucinated Audio-Visual Embeddings for Few-Shot Classification with Unimodal Cues

    Full text link
    Recognition of remote sensing (RS) or aerial images is currently of great interest, and deep learning algorithms have driven rapid advances in recent years. Challenges such as occlusion, intra-class variance, and lighting arise when training neural networks on unimodal RS visual input. Even though joint training of audio-visual modalities improves classification performance in a low-data regime, it has yet to be thoroughly investigated in the RS domain. Here, we aim to solve a novel problem where both the audio and visual modalities are present during the meta-training of a few-shot learning (FSL) classifier; however, one of the modalities might be missing during the meta-testing stage. This problem formulation is pertinent in the RS domain, given the difficulties in data acquisition and the possibility of sensor malfunction. To mitigate this, we propose a novel few-shot generative framework, Hallucinated Audio-Visual Embeddings-Network (HAVE-Net), to meta-train cross-modal features from limited unimodal data. Precisely, these hallucinated features are meta-learned from base classes and used for few-shot classification on novel classes during the inference phase. Experimental results on the benchmark ADVANCE and AudioSetZSL datasets show that our hallucinated modality augmentation strategy for few-shot classification outperforms a classifier trained with the real multimodal information by at least 0.8-2%. Comment: 8 Pages, 2 Figures, 2 Tables, Accepted in Adapting to Change: Reliable Multimodal Learning Across Domains Workshop, ECML PKDD 202
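A minimal sketch of the hallucination idea, assuming a simple linear visual-to-audio mapping; the actual HAVE-Net generator is learned and considerably more elaborate, and all names here are hypothetical:

```python
def hallucinate_audio(visual_emb, weight, bias):
    """Map a visual embedding to a hallucinated audio embedding (linear sketch).

    weight: matrix as a list of rows; bias: per-output offsets.
    Stands in for the learned generator that fills a missing modality.
    """
    return [sum(w * x for w, x in zip(row, visual_emb)) + b
            for row, b in zip(weight, bias)]


def fused_embedding(visual_emb, audio_emb):
    """Concatenate the two modalities for the few-shot classifier."""
    return list(visual_emb) + list(audio_emb)
```

At meta-test time, if the audio modality is missing, the classifier would receive `fused_embedding(v, hallucinate_audio(v, W, b))` instead of the real pair.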

    StyLIP: Multi-Scale Style-Conditioned Prompt Learning for CLIP-based Domain Generalization

    Full text link
    Large-scale foundation models (e.g., CLIP) have shown promising zero-shot generalization performance on downstream tasks by leveraging carefully designed language prompts. However, despite their success, most prompt learning techniques tend to underperform in the presence of domain shift. Our study addresses this problem and, to improve CLIP's generalization ability across domains, proposes StyLIP, a novel approach for Domain Generalization (DG) based on a domain-agnostic prompt learning strategy. In the absence of explicit domain knowledge, we aim to disentangle the visual style and the content information extracted from the pre-trained CLIP in the prompts so they can be effortlessly adapted to novel domains during inference. Furthermore, we consider a set of style projectors to learn the prompt tokens directly from these multi-scale style features, and the generated prompt embeddings are later fused with the multi-scale visual features learned through a content projector. The projectors are contrastively trained, given CLIP's frozen vision and text encoders. We present extensive experiments in five different DG settings on multiple benchmarks, demonstrating that StyLIP consistently outperforms the relevant state-of-the-art methods. Comment: 23 pages, 7 figures, 9 table
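Style features of the kind referred to above are commonly taken to be channel-wise feature statistics (as in AdaIN-style methods); the sketch below works under that assumption and is not StyLIP's actual implementation:

```python
import math


def style_stats(feature_channel):
    """Mean and standard deviation of one feature channel.

    These first- and second-order statistics are a common proxy for
    'style', computed independently at each scale of the encoder.
    """
    mu = sum(feature_channel) / len(feature_channel)
    var = sum((x - mu) ** 2 for x in feature_channel) / len(feature_channel)
    return mu, math.sqrt(var)
```

Concatenating such (mean, std) pairs across scales would yield the multi-scale style vector from which prompt tokens are then learned.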