
    TabSynDex: A Universal Metric for Robust Evaluation of Synthetic Tabular Data

    Synthetic tabular data generation becomes crucial when real data is limited, expensive to collect, or simply cannot be used due to privacy concerns. However, producing good-quality synthetic data is challenging. Several probabilistic, statistical, and generative adversarial network (GAN) based approaches have been presented for synthetic tabular data generation. Once generated, evaluating the quality of the synthetic data is itself quite challenging. Some traditional metrics have been used in the literature, but a common, robust, single metric is lacking. This makes it difficult to properly compare the effectiveness of different synthetic tabular data generation methods. In this paper we propose a new universal metric, TabSynDex, for robust evaluation of synthetic data. TabSynDex assesses the similarity of synthetic data with real data through different component scores, which evaluate the characteristics that are desirable for "high quality" synthetic data. Being a single-score metric, TabSynDex can also be used to observe and evaluate the training of neural network based approaches, helping to obtain insights that were not possible earlier. Further, we present several baseline models for comparative analysis of the proposed evaluation metric with existing generative models.
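
    For illustration, here is a minimal Python sketch of a single-score evaluation in the spirit of TabSynDex: several component scores, each in [0, 1], are averaged into one index. The two components below (basic-statistics similarity and correlation similarity) are illustrative assumptions, not the paper's exact definitions.

```python
# Minimal single-score sketch in the spirit of TabSynDex (component
# choices are assumptions for illustration, not the paper's definitions).
import numpy as np
import pandas as pd

def basic_stats_score(real: pd.DataFrame, synth: pd.DataFrame) -> float:
    """Closeness of per-column means, normalized by the real column's std."""
    diffs = []
    for col in real.select_dtypes(include=np.number).columns:
        scale = real[col].std() + 1e-8
        diffs.append(min(abs(real[col].mean() - synth[col].mean()) / scale, 1.0))
    return 1.0 - float(np.mean(diffs))

def correlation_score(real: pd.DataFrame, synth: pd.DataFrame) -> float:
    """Closeness of the pairwise correlation matrices (entries differ by <= 2)."""
    num = real.select_dtypes(include=np.number).columns
    diff = (real[num].corr() - synth[num].corr()).abs().to_numpy()
    return 1.0 - float(np.nanmean(diff)) / 2.0

def composite_score(real: pd.DataFrame, synth: pd.DataFrame) -> float:
    """Average the component scores into one index in [0, 1]."""
    return float(np.mean([basic_stats_score(real, synth),
                          correlation_score(real, synth)]))
```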

    AI-enabled remote monitoring of vital signs for COVID-19: methods, prospects and challenges

    The COVID-19 pandemic has overwhelmed the existing healthcare infrastructure in many parts of the world. Healthcare professionals are not only over-burdened but also at high risk of nosocomial transmission from COVID-19 patients. Screening and monitoring the health of a large number of susceptible or infected individuals is a challenging task. Although professional medical attention and hospitalization are necessary for high-risk COVID-19 patients, home isolation is an effective strategy for low- and medium-risk patients, as well as for those who are at risk of infection and have been quarantined. However, this necessitates effective techniques for remotely monitoring patients' symptoms. Recent advances in Machine Learning (ML) and Deep Learning (DL) have strengthened the power of imaging techniques and can be used to remotely perform several tasks that previously required the physical presence of a medical professional. In this work, we study the prospects of vital-signs monitoring for COVID-19-infected as well as quarantined individuals using DL and image/signal-processing techniques, many of which can be deployed using simple cameras and sensors available on a smartphone or a personal computer, without the need for specialized equipment. We demonstrate the potential of ML-enabled workflows for several vital signs such as heart and respiratory rates, cough, blood pressure, and oxygen saturation. We also discuss the challenges involved in implementing ML-enabled techniques.
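
    As one concrete example of such an ML-enabled workflow, the sketch below estimates heart rate from a short video of a skin region via remote photoplethysmography (rPPG): the mean green-channel intensity over time carries a pulse signal whose dominant frequency corresponds to the heart rate. This is a standard technique offered for illustration, not the exact pipeline studied in the work.

```python
# Illustrative rPPG heart-rate estimate: dominant frequency of the mean
# green-channel signal within the plausible human heart-rate band.
import numpy as np

def heart_rate_bpm(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W, 3) RGB video of a skin region; returns beats per minute."""
    signal = frames[..., 1].mean(axis=(1, 2))   # mean green value per frame
    signal = signal - signal.mean()             # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)      # ~42 to 240 beats per minute
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return float(peak_hz * 60.0)
```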

    COVID-19 Associated Mucormycosis: A Review of an Emergent Epidemic Fungal Infection in the Era of the COVID-19 Pandemic

    At a time when COVID-19's second wave is still picking up in countries like India, a number of reports describe a potential association with a rise in the number of cases of mucormycosis, commonly known as the black fungus. This fungal infection has been around for centuries and affects people whose immunity has been compromised due to severe health conditions. In this article, we provide a detailed overview of mucormycosis and discuss how COVID-19 could have caused a sudden spike in an otherwise rare disease in countries like India. The article discusses the various symptoms of the disease, the class of people most vulnerable to this infection, preventive measures to avoid the disease, and the various treatments that exist in clinical practice and research to manage the disease.

    LVRNet: Lightweight Image Restoration for Aerial Images under Low Visibility (Student Abstract)

    Learning to recover clear images from images affected by a combination of degrading factors is a challenging task. At the same time, autonomous surveillance in low-visibility conditions caused by high pollution/smoke, a poor air-quality index, low light, atmospheric scattering, haze during a blizzard, etc., becomes even more important to prevent accidents. It is thus crucial to develop a solution that not only produces a high-quality image but is also efficient enough to be deployed for everyday use. However, the lack of suitable datasets for this task limits the performance of previously proposed methods. To this end, we generate the LowVis-AFO dataset, containing 3647 paired dark-hazy and clear images. We also introduce a new lightweight deep learning model called Low-Visibility Restoration Network (LVRNet). It outperforms previous image restoration methods with low latency, achieving a PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and ready for practical use.
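
    For reference, the reported PSNR and SSIM figures are typically computed as below. This sketch uses scikit-image's stock metrics, which may differ in minor details (e.g., data-range handling) from the paper's evaluation code.

```python
# Standard PSNR/SSIM evaluation for a restored/ground-truth image pair,
# using scikit-image; details may differ from the paper's evaluation code.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored: np.ndarray, clear: np.ndarray):
    """Both inputs are uint8 RGB arrays of identical shape."""
    psnr = peak_signal_noise_ratio(clear, restored, data_range=255)
    ssim = structural_similarity(clear, restored, channel_axis=-1, data_range=255)
    return psnr, ssim
```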

    Improving Aerial Instance Segmentation in the Dark with Self-Supervised Low Light Enhancement (Student Abstract)

    Low-light conditions in aerial images adversely affect the performance of several vision-based applications. There is a need for methods that can efficiently remove low-light attributes and assist the performance of key vision tasks. In this work, we propose a new method that enhances low-light images in a self-supervised fashion and then applies detection and segmentation in an end-to-end manner. The proposed method adds very little memory and computational overhead to the original algorithm and delivers superior results. Additionally, we propose the generation of a new low-light aerial dataset using GANs, which can be used to evaluate vision-based networks under similar adverse conditions.
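
    A minimal PyTorch sketch of the enhance-then-segment idea follows: a small self-supervised enhancement module is prepended to an off-the-shelf instance-segmentation model so the two run end to end. The module names are placeholders, not the paper's actual components.

```python
# Placeholder composition of an enhancement net and a segmentation model;
# neither module below is the paper's actual component.
import torch.nn as nn

class EnhanceThenSegment(nn.Module):
    def __init__(self, enhancer: nn.Module, segmenter: nn.Module):
        super().__init__()
        self.enhancer = enhancer    # lightweight low-light enhancement net
        self.segmenter = segmenter  # e.g., a pretrained instance-segmentation model

    def forward(self, low_light_images):
        enhanced = self.enhancer(low_light_images)  # brighten/denoise first
        return self.segmenter(enhanced)             # then detect and segment
```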

    Background Invariant Faster Motion Modeling for Drone Action Recognition

    Visual data collected from drones has opened a new direction for surveillance applications and has recently attracted considerable attention among computer vision researchers. Given the availability and increasing use of drones in both the public and private sectors, they are a critical future technology for solving surveillance problems in remote areas. One of the fundamental challenges in recognizing human actions in crowd-monitoring videos is the precise modeling of an individual's motion features. Most state-of-the-art methods rely heavily on optical flow for motion modeling and representation, and motion modeling through optical flow is time-consuming. This article underlines this issue and provides a novel architecture that eliminates the dependency on optical flow. The proposed architecture uses two sub-modules, FMFM (faster motion feature modeling) and AAR (accurate action recognition), to accurately classify aerial surveillance actions. Another critical issue in aerial surveillance is a shortage of datasets. Of the few datasets proposed recently, most feature multiple humans performing different actions in the same scene, as in a crowd-monitoring video, and are hence unsuitable for directly training action recognition models. Given this, we propose a novel dataset captured from a top-view aerial surveillance perspective that has good variety in terms of actors, time of day, and environment. The proposed architecture can be applied across different terrains, as it removes the background before using the action recognition model. The architecture is validated through experiments at varying investigation levels and achieves a remarkable performance of 0.90 validation accuracy in aerial action recognition.
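
    To illustrate the core idea of motion modeling without optical flow, the sketch below uses temporal frame differencing, one cheap substitute; the abstract does not specify how FMFM is implemented, so this is purely an assumption.

```python
# Frame differencing as a cheap stand-in for optical-flow motion features;
# FMFM's actual design is not described in the abstract.
import numpy as np

def motion_features(frames: np.ndarray, threshold: float = 10.0) -> np.ndarray:
    """frames: (T, H, W) grayscale clip; returns (T-1, H, W) motion maps."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    diffs[diffs < threshold] = 0.0  # suppress static background pixels
    return diffs
```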

    ISDNet: AI-enabled Instance Segmentation of Aerial Scenes for Smart Cities

    Aerial scenes captured by UAVs have immense potential in IoT applications related to urban surveillance, road and building segmentation, land cover classification, and so on, which are necessary for the evolution of smart cities. The advancements in deep learning have greatly enhanced visual understanding, but the domain of aerial vision remains largely unexplored. Aerial images pose many unique challenges for proper scene parsing, such as high-resolution data, small-scale objects, a large number of objects in the camera view, dense clustering of objects, and background clutter, which greatly hinder the performance of existing deep learning methods. In this work, we propose ISDNet (Instance Segmentation and Detection Network), a novel network to perform instance segmentation and object detection on visual data captured by UAVs. This work enables aerial image analytics for various needs in a smart city. In particular, we use dilated convolutions to generate improved spatial context, leading to better discrimination between foreground and background features. The proposed network efficiently reuses the segment-mask features by propagating them from early stages using residual connections. Furthermore, ISDNet makes use of effective anchors to accommodate varying object scales and sizes. The proposed method obtains state-of-the-art results in the aerial context.
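
    Below is a hedged PyTorch sketch of two ingredients the abstract names: dilated convolutions for larger spatial context and a residual connection that propagates early features forward. Channel counts and the dilation rate are illustrative, not ISDNet's actual configuration.

```python
# Dilated convolution + residual connection, two ingredients the abstract
# names; channel counts and dilation rate here are illustrative.
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # padding = dilation keeps the spatial size for a 3x3 kernel
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))  # dilated conv widens the receptive field
        out = self.conv2(out)
        return self.relu(out + x)       # residual path reuses earlier features
```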