
    Dimensions and Challenges of the 10th Regional Saras Fair, Agartala: A Geographical Analysis

    In the post-independence period, one of the major emphases of the Government of India was the eradication of rural poverty through poverty alleviation programmes such as the Swarnjayanti Gram Swarozgar Yojana (SGSY). Under the SGSY scheme, rural people of a similar economic background with a common desire to work together were organised into small informal groups, known as Self Help Groups (SHGs), to start employment-generating activities. The Ministry of Rural Development, Govt. of India, organises an annual sale-cum-exhibition fair in different regions to provide a platform to sell and promote the products made by SHG members. The Regional Saras Fair, Agartala, held annually since 2006, plays an important role for the folk artisans of Tripura in selling the products of rural cottage industries in the urban market. The present study highlights the nature and dimensions of the Saras Fair and the socio-economic development of the SHG members, and also outlines future strategies for the development of the fair and of the participating SHGs. The methodology comprised an extensive literature review on the aspects and dimensions of poverty alleviation programmes and the collection of primary data through a structured schedule using stratified random sampling; 86 participating SHGs were interviewed. Secondary data were collected from the District Rural Development Office and government websites. The Saras Fair has opened a new dimension in the promotion of rural products and has encouraged the participants and other stakeholders in developing rural products, a step towards a poverty-free nation. Thus, the formation of a stable market for SHG products may help solve the problem of rural poverty. Keywords: Self Help Group, SGSY, Saras Fair, Rural Development
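    The stratified random sampling design mentioned in the abstract can be sketched as follows; the strata names, group sizes and sampling fraction are illustrative assumptions, not details taken from the study:

```python
import random

def stratified_sample(groups, fraction, seed=0):
    """Draw a proportional random sample from each stratum.

    groups: dict mapping stratum name -> list of units (e.g. SHGs by craft).
    fraction: share of each stratum to sample (at least one unit per stratum).
    """
    rng = random.Random(seed)
    sample = []
    for stratum, units in groups.items():
        k = max(1, round(len(units) * fraction))
        sample.extend((stratum, u) for u in rng.sample(units, k))
    return sample

# Hypothetical strata: 86 SHGs spread over three craft types, sampled at 50%.
strata = {
    "handloom":   [f"shg_{i}" for i in range(40)],
    "food":       [f"shg_{i}" for i in range(30)],
    "handicraft": [f"shg_{i}" for i in range(16)],
}
picked = stratified_sample(strata, 0.5)
```

Proportional allocation keeps each stratum represented in the same ratio as the population, which simple random sampling does not guarantee.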

    Unveiling optimal operating conditions for an epoxy polymerization process using multi-objective evolutionary computation

    The optimization of the epoxy polymerization process involves a number of conflicting objectives and more than twenty decision parameters. In this paper, the problem is treated truly as a multi-objective optimization problem, and near-Pareto-optimal solutions corresponding to two and three objectives are found using the elitist non-dominated sorting GA, NSGA-II. Objectives such as the number-average molecular weight, polydispersity index and reaction time are considered. The first two objectives relate to the properties of the polymer, whereas the third relates to the productivity of the polymerization process. The decision variables are discrete addition quantities of various reactants, e.g. the amounts of bisphenol-A (a monomer), sodium hydroxide and epichlorohydrin added at different time steps, whereas the satisfaction of all species balance equations is treated as constraints. This study brings out a salient aspect of using an evolutionary approach to multi-objective problem solving: important and useful patterns of reactant addition are unveiled for different optimal trade-off solutions. The systematic multi-stage optimization approach adopted here for finding optimal operating conditions for the epoxy polymerization process should further such studies on other chemical processes and real-world optimization problems.
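    The core ranking step of NSGA-II, non-dominated sorting, can be illustrated with a minimal sketch; the objective values below are hypothetical, not results from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the non-dominated subset: the first front ranked by NSGA-II."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (polydispersity index, reaction time) pairs for candidate recipes.
candidates = [(1.8, 120.0), (2.1, 90.0), (1.8, 150.0), (2.5, 95.0)]
front = first_front(candidates)  # (1.8, 150.0) and (2.5, 95.0) are dominated
```

NSGA-II repeats this ranking to peel off successive fronts, then uses crowding distance within a front to preserve spread along the trade-off surface.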

    Quality > Quantity: Synthetic Corpora from Foundation Models for Closed-Domain Extractive Question Answering

    Domain adaptation, the process of training a model in one domain and applying it to another, has been extensively explored in machine learning. While training a domain-specific foundation model (FM) from scratch is an option, recent methods have focused on adapting pre-trained FMs for domain-specific tasks. However, our experiments reveal that neither approach consistently achieves state-of-the-art (SOTA) results in the target domain. In this work, we study extractive question answering within closed domains and introduce the concept of targeted pre-training. This involves determining and generating relevant data to further pre-train our models, as opposed to the conventional philosophy of utilizing domain-specific FMs trained on a wide range of data. Our proposed framework uses Galactica to generate synthetic, "targeted" corpora that align with specific writing styles and topics, such as research papers and radiology reports. This process can be viewed as a form of knowledge distillation. We apply our method to two biomedical extractive question answering datasets, COVID-QA and RadQA, achieving a new benchmark on the former and demonstrating overall improvements on the latter. Code available at https://github.com/saptarshi059/CDQA-v1-Targetted-PreTraining/tree/main
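    The targeted-corpus idea can be sketched as a prompt-and-generate loop; the `generate` callable below is a stand-in for the actual Galactica interface, and the topics and prompt template are illustrative assumptions:

```python
def build_targeted_corpus(generate, topics, style, per_topic=2):
    """Assemble a synthetic pre-training corpus by prompting a foundation model.

    generate: any callable prompt -> text (e.g. a wrapper around Galactica);
    here it is a placeholder, not the API used in the paper.
    """
    corpus = []
    for topic in topics:
        prompt = f"Write a {style} about {topic}."
        corpus.extend(generate(prompt) for _ in range(per_topic))
    return corpus

# Stub model, for illustration only.
def stub(prompt):
    return f"[synthetic text for: {prompt}]"

docs = build_targeted_corpus(stub, ["pulmonary edema", "COVID-19 pneumonia"],
                             "radiology report")
```

The resulting documents would then be fed to a standard further-pre-training step before fine-tuning on the extractive QA dataset.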

    DeepliteRT: Computer Vision at the Edge

    The proliferation of edge devices has unlocked unprecedented opportunities for deep learning model deployment in computer vision applications. However, these complex models require considerable power, memory and compute resources that are typically not available on edge platforms. Ultra low-bit quantization presents an attractive solution to this problem by scaling down the model weights and activations from 32-bit to less than 8-bit. We implement highly optimized ultra low-bit convolution operators for ARM-based targets that outperform existing methods by up to 4.34x. Our operator is implemented within Deeplite Runtime (DeepliteRT), an end-to-end solution for the compilation, tuning, and inference of ultra low-bit models on ARM devices. Compiler passes in DeepliteRT automatically convert a fake-quantized model in full precision to a compact ultra low-bit representation, easing the process of quantized model deployment on commodity hardware. We analyze the performance of DeepliteRT on classification and detection models against optimized 32-bit floating-point, 8-bit integer, and 2-bit baselines, achieving significant speedups of up to 2.20x, 2.33x and 2.17x, respectively. Comment: Accepted at British Machine Vision Conference (BMVC) 202
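    One ingredient of such a compiler pass is packing the quantized codes into a compact bit-level layout. Below is a minimal sketch of 2-bit packing; the actual DeepliteRT weight layout is not described here, so this is an assumption-level illustration:

```python
def pack_2bit(codes):
    """Pack a list of 2-bit codes (0..3) into bytes, four codes per byte."""
    assert all(0 <= c <= 3 for c in codes)
    packed = bytearray()
    for i in range(0, len(codes), 4):
        byte = 0
        for j, c in enumerate(codes[i:i + 4]):
            byte |= c << (2 * j)  # little-endian placement within the byte
        packed.append(byte)
    return bytes(packed)

def unpack_2bit(packed, n):
    """Recover n 2-bit codes from the packed representation."""
    return [(packed[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]

codes = [0, 3, 1, 2, 2]
assert unpack_2bit(pack_2bit(codes), len(codes)) == codes
```

Packing shrinks the weight tensor 16x relative to float32, which is where the memory-footprint savings of ultra low-bit models come from.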

    YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems

    We present YOLOBench, a benchmark comprising 550+ YOLO-based object detection models on 4 different datasets and 4 different embedded hardware platforms (x86 CPU, ARM CPU, Nvidia GPU, NPU). We collect accuracy and latency numbers for a variety of YOLO-based one-stage detectors at different model scales by performing a fair, controlled comparison of these detectors with a fixed training environment (code and training hyperparameters). Pareto-optimality analysis of the collected data reveals that, if modern detection heads and training techniques are incorporated into the learning process, multiple architectures of the YOLO series achieve a good accuracy-latency trade-off, including older models like YOLOv3 and YOLOv4. We also evaluate training-free accuracy estimators used in neural architecture search on YOLOBench and demonstrate that, while most state-of-the-art zero-cost accuracy estimators are outperformed by a simple baseline like MAC count, some of them can be effectively used to predict Pareto-optimal detection models. We showcase this by using a zero-cost proxy to identify a YOLO architecture competitive with a state-of-the-art YOLOv8 model on a Raspberry Pi 4 CPU. The code and data are available at https://github.com/Deeplite/deeplite-torch-zo
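    Pareto-optimality analysis over collected (latency, accuracy) pairs reduces to extracting the non-dominated set; a minimal sketch with hypothetical model numbers, not figures from the benchmark:

```python
def pareto_front(models):
    """Keep models that are Pareto-optimal in (latency: lower better,
    accuracy: higher better).

    models: list of (name, latency_ms, accuracy) tuples.
    """
    front = []
    best_acc = float("-inf")
    for name, lat, acc in sorted(models, key=lambda m: (m[1], -m[2])):
        if acc > best_acc:  # strictly more accurate than every faster model
            front.append((name, lat, acc))
            best_acc = acc
    return front

# Hypothetical model zoo, for illustration only.
zoo = [("yolov3", 45.0, 0.52), ("yolov4", 38.0, 0.53),
       ("yolov8n", 12.0, 0.50), ("big-slow", 60.0, 0.51)]
front = pareto_front(zoo)  # yolov3 and big-slow are dominated
```

Sorting by latency first makes the scan linear after the sort: a model joins the front only if no faster model matches its accuracy.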

    Optimization using ANN Surrogates with Optimal Topology and Sample Size

    Industrial-scale process modelling and optimization of long-chain branched polymer reaction networks is currently an area of extensive research owing to the advantages and growing popularity of branched polymers. The highly complex nature of these reaction networks requires a large set of stiff ordinary differential equations to model them mathematically with adequate precision and accuracy. In such a scenario, where model execution time is expensive, online optimization and control of these processes seems a near-impossible task. Addressing these problems in the ongoing research, the authors present a novel approach in which the kinetic model of long-chain branched poly(vinyl acetate) is used to find the optimum processing conditions, with Sobol-sequence-based ANNs serving as fast and highly efficient meta-models. The article presents a novel generic algorithm which not only removes the heuristic element from designing the ANN architecture but also lets the computationally expensive first-principles model determine both the ANN configuration that emulates it with maximum accuracy and the size of the training sample required. The fast and efficient Sobol-based ANN surrogate obtained by the proposed algorithm makes the optimization process 10 times faster than optimization carried out with the expensive first-principles model.
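    The paper relies on multi-dimensional Sobol sequences; the one-dimensional base-2 case (the van der Corput sequence) is enough to illustrate how such low-discrepancy points spread surrogate training samples more evenly than random draws:

```python
def van_der_corput(n, base=2):
    """First n points of the base-`base` van der Corput sequence.

    The base-2 case is the one-dimensional Sobol sequence: each new point
    falls in the largest remaining gap, giving even coverage of [0, 1)."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            x += (k % base) / denom  # reflect the digits of i about the radix point
            k //= base
        points.append(x)
    return points

samples = van_der_corput(7)  # 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875
```

A surrogate trained on such points sees the whole input range at every sample size, which is why quasi-random designs are popular for ANN meta-models.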

    Delineating Regional Differentiation on the Development of the Railway Infrastructure in Northeast India through an Efficient Synthetic Indicator

    The north-eastern region of India presents intra-regional disparity, which is reflected in every aspect of development. The transport sector, especially railway transportation, is one of the important aspects, and the development of railway infrastructure differs greatly from region to region. The research question addressed in this study was "Which factors, geo-physical or socio-economic, influenced the variation in the level of railway development in Northeast India?" The aim of the study was to delineate regional differentiation in railway development in Northeast India and to analyse the reasons for the different development patterns of railway lines among the north-eastern states. The research was based on secondary data collected from multiple sources, and the existing synthetic indicator was applied to classify the eight states by their railway infrastructural status. An alternative approach, termed the alternative synthetic indicator, was proposed and found to be more efficient than the existing synthetic indicator. The degree of inequality among the north-eastern states with respect to railway infrastructural variables was measured by plotting a Lorenz curve; the corresponding Gini coefficient quantifies the unequal distribution of railway infrastructure among the states. The causality of such unequal development was analysed through a correlation test by defining a composite dimension index. The analysis revealed that all the externalities of regional inequality significantly influence the development of railway lines in the north-eastern states. Environmental determinism plays a crucial role in railway development in Northeast India, but political willingness is also crucial for creating an actual state of differentiation and will play a special role in the future.
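    The Gini coefficient derived from the Lorenz curve can be computed directly from state-level values; the route-km figures below are hypothetical, used only to illustrate the calculation:

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality).

    Computed from the mean absolute difference, which equals twice the area
    between the Lorenz curve and the line of equality."""
    n = len(values)
    total = sum(values)
    if total == 0:
        return 0.0
    mean = total / n
    mad = sum(abs(a - b) for a in values for b in values)  # all ordered pairs
    return mad / (2 * n * n * mean)

# Hypothetical railway route-km for eight states, for illustration only.
route_km = [2500, 300, 250, 200, 150, 120, 100, 60]
g = gini(route_km)  # a value near 1 signals concentration in one state
```

For a finite sample the maximum attainable value is (n-1)/n, so with eight states complete concentration yields 0.875 rather than 1.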

    Sodium Alginate and Gelatin Hydrogels: Viscosity Effect on Hydrophobic Drug Release

    Blends of biodegradable hydrogels such as sodium alginate/gelatin (SA/G) usually require chemical cross-linkers to remain stable in aqueous media for drug-delivery applications. This study examines the feasibility of achieving the entire spectrum of release of a model hydrophobic drug (piperine), from burst to controlled release, by varying polymer viscosity and plasticizer molecular weight with minimal use of cross-linkers. Swelling studies, drug-polymer interaction analysis and morphology analysis reveal the impact of viscosity variation on the polymer matrix.
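    A common way to distinguish burst from controlled release is fitting the Korsmeyer-Peppas power law Mt/Minf = k*t^n; the abstract does not state which model the authors used, so the sketch below is a generic illustration on synthetic data:

```python
import math

def peppas_fit(times, fractions):
    """Fit Mt/Minf = k * t**n by least squares in log-log space.

    Returns (k, n). For thin films, n near 0.5 suggests Fickian diffusion,
    while larger n indicates more sustained, relaxation-controlled release."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    k = math.exp(ybar - n * xbar)
    return k, n

# Synthetic, noise-free data generated with k=0.1, n=0.5, for illustration.
t = [1, 2, 4, 8, 16]
f = [0.1 * ti ** 0.5 for ti in t]
k, n = peppas_fit(t, f)  # recovers k ~ 0.1, n ~ 0.5
```

On real dissolution data the fit is conventionally restricted to the first ~60% of release, where the power law is valid.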

    DeepGEMM: Accelerated Ultra Low-Precision Inference on CPU Architectures using Lookup Tables

    Considerable recent progress has been made in ultra low-bit quantization, promising significant improvements in latency, memory footprint and energy consumption on edge devices. Quantization methods such as Learned Step Size Quantization can achieve model accuracy that is comparable to full-precision floating-point baselines even with sub-byte quantization. However, it is extremely challenging to deploy these ultra low-bit quantized models on mainstream CPU devices because commodity SIMD (Single Instruction, Multiple Data) hardware typically supports no less than 8-bit precision. To overcome this limitation, we propose DeepGEMM, a lookup table based approach for the execution of ultra low-precision convolutional neural networks on SIMD hardware. The proposed method precomputes all possible products of weights and activations, stores them in a lookup table, and efficiently accesses them at inference time to avoid costly multiply-accumulate operations. Our 2-bit implementation outperforms corresponding 8-bit integer kernels in the QNNPACK framework by up to 1.74x on x86 platforms.
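    The precompute-and-lookup idea can be sketched in a few lines; real DeepGEMM kernels pack the table for SIMD byte-shuffle access, which this plain-Python illustration does not model:

```python
from itertools import product

def build_lut(weight_codes, act_codes):
    """Precompute every (weight, activation) product once.

    With 2-bit weights and a small activation alphabet the table is tiny,
    so each inference-time multiply becomes a single table lookup."""
    return {(w, a): w * a for w, a in product(weight_codes, act_codes)}

def lut_dot(weights, acts, lut):
    """Dot product that replaces multiplies with table lookups."""
    return sum(lut[(w, a)] for w, a in zip(weights, acts))

# 2-bit weight codes (0..3) against activations quantized to 16 levels.
lut = build_lut(range(4), range(16))
w = [3, 0, 1, 2]
x = [5, 7, 2, 4]
assert lut_dot(w, x, lut) == sum(a * b for a, b in zip(w, x))
```

The win on real hardware comes from the lookups mapping onto SIMD shuffle instructions, trading multiply-accumulate throughput for cheap table indexing.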