2,110 research outputs found

    Project RISE: Recognizing Industrial Smoke Emissions

    Industrial smoke emissions pose a significant concern to human health. Prior work has shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are of neither sufficient quality nor sufficient quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach, collaborating with local community members to annotate whether a video clip contains smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, covering all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study reports community feedback, and our data analysis reveals opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for social good. Comment: Technical report
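    The citizen-science annotation described above implies aggregating several volunteers' labels for each clip. As a minimal sketch, assuming a simple majority vote over binary smoke/no-smoke labels (the aggregation rule and threshold are illustrative, not taken from the paper):

```python
def aggregate_votes(votes, threshold=0.5):
    """Aggregate citizen-science labels for one video clip.

    votes: list of 0/1 labels (1 = smoke present).
    Returns 1 if the fraction of positive votes exceeds `threshold`,
    0 otherwise, and None when no votes were collected.
    The voting scheme is an assumption for illustration.
    """
    if not votes:
        return None
    positive_fraction = sum(votes) / len(votes)
    return 1 if positive_fraction > threshold else 0

print(aggregate_votes([1, 1, 0, 1]))  # 3/4 positive → 1
print(aggregate_votes([0, 1, 0, 0]))  # 1/4 positive → 0
```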

    Impact of Heavy Metals in Ambient Air in Insulin Resistance of Shipyard Welders in Northern Taiwan

    Excessive exposure to metals poses potential health risks, including insulin resistance (IR). Few studies have examined such risks in occupational workers such as welders, and those that have yielded inconsistent results. We therefore examined the associations between exposure to welding metals and IR in welders. We recruited 78 welders and 75 administrative staff from a shipyard located in northern Taiwan. Personal exposure to heavy metals, including chromium (Cr), manganese (Mn), iron (Fe), nickel (Ni), copper (Cu), zinc (Zn), and cadmium (Cd), was monitored through particulate matter with an aerodynamic diameter of less than 2.5 μm (PM2.5) and urine analysis by inductively coupled plasma mass spectrometry (ICP–MS). After each participant fasted overnight, blood samples were collected and analyzed for IR assessment through updated homeostasis model assessment (HOMA2) modeling. Air sampling in the personal breathing zone was performed during a Monday shift prior to the blood and urine sample collection the following morning. The welders’ median airborne PM2.5 levels of Cr, Mn, Fe, Ni, Cu, and Zn and urinary Cd levels were significantly higher than those of the administrative staff. After adjustment for covariates, logarithmic PM2.5-Mn, PM2.5-Fe, PM2.5-Cu, and PM2.5-Zn levels were positively correlated with logarithmic fasting plasma glucose (P-FGAC) levels (PM2.5-Mn: β = 0.0105, 95% C.I.: 0.0027–0.0183; PM2.5-Fe: β = 0.0127, 95% C.I.: 0.0027–0.0227; PM2.5-Cu: β = 0.0193, 95% C.I.: 0.0032–0.0355; PM2.5-Zn: β = 0.0132, 95% C.I.: 0.0005–0.0260). Logarithmic urinary Zn was positively correlated with logarithmic serum insulin and HOMA2-IR levels and negatively correlated with logarithmic HOMA2-insulin sensitivity (%S; βinsulin = 0.2171, 95% C.I.: 0.0025–0.4318; βIR = 0.2179, 95% C.I.: 0.0027–0.4330; β%S = −0.2180, 95% C.I.: −0.4334 to −0.0026). We observed that Mn, Fe, Cu, and Zn exposure disrupted glucose homeostasis in shipyard welders by increasing P-FGAC and IR levels.
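    The β coefficients reported above are slopes from regressions of a log-transformed outcome on a log-transformed exposure. A minimal sketch of that estimation with ordinary least squares on synthetic data (the function, the covariate handling, and the data are illustrative, not the study's actual adjusted model):

```python
import numpy as np

def loglog_beta(exposure, outcome, covariates=None):
    """Estimate the slope (beta) of log10(outcome) on log10(exposure)
    via ordinary least squares, optionally adjusting for covariates.
    A simplified stand-in for the study's covariate-adjusted model."""
    y = np.log10(outcome)
    columns = [np.ones_like(y), np.log10(exposure)]
    if covariates is not None:
        columns.extend(np.asarray(c, dtype=float) for c in covariates)
    X = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]  # coefficient on log10(exposure)

# Synthetic check: log10(glucose) = 2.0 + 0.0127 * log10(Fe) by construction,
# so the fit should recover the PM2.5-Fe-sized slope exactly.
fe = np.array([10.0, 50.0, 120.0, 300.0, 800.0])
glucose = 10 ** (2.0 + 0.0127 * np.log10(fe))
print(round(loglog_beta(fe, glucose), 4))  # → 0.0127
```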

    GPT-4 as an Effective Zero-Shot Evaluator for Scientific Figure Captions

    There is growing interest in systems that generate captions for scientific figures. However, assessing these systems' output poses a significant challenge. Human evaluation requires academic expertise and is costly, while automatic evaluation depends on often low-quality author-written captions. This paper investigates using large language models (LLMs) as a cost-effective, reference-free method for evaluating figure captions. We first constructed SCICAP-EVAL, a human evaluation dataset that contains human judgments for 3,600 scientific figure captions, both original and machine-made, for 600 arXiv figures. We then prompted LLMs like GPT-4 and GPT-3 to score (1-6) each caption based on its potential to aid reader understanding, given relevant context such as figure-mentioning paragraphs. Results show that GPT-4, used as a zero-shot evaluator, outperformed all other models and even surpassed assessments made by Computer Science and Informatics undergraduates, achieving a Kendall correlation of 0.401 with Ph.D. students' rankings. Comment: To appear in EMNLP 2023 Findings
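    The Kendall correlation cited above measures rank agreement between two sets of scores. A self-contained sketch of the tie-free tau-a variant, run on made-up caption ratings (the scores below are hypothetical, not from SCICAP-EVAL):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) / (n*(n-1)/2).
    Assumes no tied scores, as a minimal illustration."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical 1-6 ratings: GPT-4's scores vs. Ph.D.-student scores
gpt4_scores = [6, 4, 5, 2, 3, 1]
phd_scores  = [5, 4, 6, 1, 3, 2]
print(round(kendall_tau(gpt4_scores, phd_scores), 3))  # → 0.733
```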

    Tailoring excitonic states of van der Waals bilayers through stacking configuration, band alignment and valley-spin

    Excitons in monolayer semiconductors have a large optical transition dipole for strong coupling with light. Interlayer excitons in heterobilayers, with layer separation of the electron and hole components, feature a large electric dipole that enables strong coupling with electric fields and exciton-exciton interaction, at the cost that the optical dipole is substantially quenched (by several orders of magnitude). In this letter, we demonstrate the ability to create a new class of excitons in transition metal dichalcogenide (TMD) hetero- and homo-bilayers that combines the advantages of monolayer and interlayer excitons, i.e., featuring both a large optical dipole and a large electric dipole. These excitons consist of an electron that is well confined in an individual layer and a hole that extends across both layers, realized here through carrier-species-specific layer hybridization controlled through the interplay of rotational, translational, band-offset, and valley-spin degrees of freedom. We observe different species of such layer-hybridized valley excitons in different heterobilayer and homobilayer systems, which can be utilized for realizing strongly interacting excitonic/polaritonic gases, as well as optical quantum coherent control of bidirectional interlayer carrier transfer with either up-conversion or down-conversion in energy.

    Summaries as Captions: Generating Figure Captions for Scientific Documents with Automated Text Summarization

    Good figure captions help paper readers understand complex scientific figures. Unfortunately, even published papers often have poorly written captions. Automatic caption generation could aid paper writers by providing good starting captions that can be refined for better quality. Prior work often treated figure caption generation as a vision-to-language task. In this paper, we show that it can be more effectively tackled as a text summarization task in scientific documents. We fine-tuned PEGASUS, a pre-trained abstractive summarization model, to specifically summarize figure-referencing paragraphs (e.g., "Figure 3 shows...") into figure captions. Experiments on large-scale arXiv figures show that our method outperforms prior vision methods in both automatic and human evaluations. We further conducted an in-depth investigation focused on two key challenges: (i) the common presence of low-quality author-written captions and (ii) the lack of clear standards for good captions. Our code and data are available at: https://github.com/Crowd-AI-Lab/Generating-Figure-Captions-as-a-Text-Summarization-Task. Comment: Accepted by INLG-2023
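    The summarization framing above depends on first collecting the figure-referencing paragraphs that the model condenses into a caption. A rough sketch of that selection step (the helper name and the regex are assumptions for illustration, not the authors' extraction code):

```python
import re

def figure_mentioning_paragraphs(paragraphs, figure_number):
    """Return paragraphs that explicitly reference a given figure
    (e.g. "Figure 3 shows..." or "Fig. 3"), i.e. the text a
    summarizer would condense into that figure's caption."""
    pattern = re.compile(
        rf"\b(?:Figure|Fig\.?)\s*{figure_number}\b", re.IGNORECASE
    )
    return [p for p in paragraphs if pattern.search(p)]

paper = [
    "We propose a new summarization model.",
    "Figure 3 shows the accuracy of each baseline.",
    "As Fig. 3 indicates, PEGASUS performs best.",
    "Figure 13 covers the ablation study.",
]
# Matches "Figure 3" and "Fig. 3", but not "Figure 13"
print(figure_mentioning_paragraphs(paper, 3))
```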