197 research outputs found

    Exploring Fairness in Pre-trained Visual Transformer based Natural and GAN Generated Image Detection Systems and Understanding the Impact of Image Compression in Fairness

    Full text link
    It is not sufficient to construct computational models that can accurately classify or detect fake images among real images taken with a camera; it is also important to ensure that these models are fair and do not produce biased outcomes that can harm certain social groups or cause serious security threats. Exploring fairness in forensic algorithms is an initial step towards correcting these biases. Since vision transformers are now widely used in image classification tasks due to their capability to produce high accuracies, this study explores bias in transformer-based image forensic algorithms that classify natural and GAN generated images. Using a procured bias evaluation corpus, the study analyzes bias in the gender, racial, affective, and intersectional domains with a wide set of individual and pairwise bias evaluation measures. As the generalizability of the algorithms against image compression is an important factor in forensic tasks, the study also analyzes the role of image compression in model bias. To study this impact, a two-phase evaluation setting is followed: one set of experiments is carried out in the uncompressed evaluation setting and the other in the compressed evaluation setting.
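    A minimal sketch of the kind of pairwise bias measure the abstract refers to (this is illustrative, not the paper's code; the subgroup labels and toy data are invented): compute a classifier's accuracy separately per demographic subgroup, and report the gap between two subgroups. Running the same check on compressed and uncompressed copies of the evaluation set would reveal how compression shifts the gap.

    ```python
    # Illustrative pairwise bias measure: the absolute accuracy gap between
    # two demographic subgroups of the evaluation corpus.

    def subgroup_accuracy(y_true, y_pred, groups, group):
        """Accuracy restricted to samples belonging to `group`."""
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
        return sum(t == p for t, p in pairs) / len(pairs)

    def accuracy_gap(y_true, y_pred, groups, a, b):
        """Pairwise bias measure: accuracy difference between subgroups a and b."""
        return abs(subgroup_accuracy(y_true, y_pred, groups, a)
                   - subgroup_accuracy(y_true, y_pred, groups, b))

    # Toy labels: 1 = GAN-generated, 0 = natural camera image
    y_true = [1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 0, 1, 0]
    groups = ["male", "male", "male", "female", "female", "female"]
    gap = accuracy_gap(y_true, y_pred, groups, "male", "female")
    ```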

    A Robust Approach Towards Distinguishing Natural and Computer Generated Images using Multi-Colorspace fused and Enriched Vision Transformer

    Full text link
    Works in the literature classifying natural and computer-generated images are mostly designed as binary tasks, considering either natural images versus computer graphics images only or natural images versus GAN generated images only, but not natural images versus both classes of generated images. Moreover, even though this forensic classification task benefits from modern convolutional neural networks and transformer-based architectures that achieve remarkable classification accuracies, these models are seen to fail on images that have undergone post-processing operations commonly performed to deceive forensic algorithms, such as JPEG compression and Gaussian noise. This work proposes a robust approach to distinguishing natural and computer-generated images, covering both computer graphics and GAN generated images, using a fusion of two vision transformers in which each network operates in a different color space, one in RGB and the other in YCbCr. The proposed approach achieves a high performance gain over a set of baselines, along with higher robustness and generalizability. When visualized, the features of the proposed model show higher class separability than the input image features and the baseline features. This work also studies the attention map visualizations of the networks in the fused model and observes that the proposed methodology captures more image information relevant to the forensic task of classifying natural and generated images.
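    The two ingredients of the approach, as described, are a color-space conversion feeding the second branch and a fusion of the two branches' outputs. A minimal sketch of both, assuming a BT.601 RGB-to-YCbCr conversion and simple score averaging as the fusion rule (the paper's actual fusion may differ):

    ```python
    import math

    # One branch sees RGB; the other sees the same pixels converted to YCbCr
    # (ITU-R BT.601 coefficients, full range, unclipped). Class scores from
    # the two branches are fused here by averaging their probabilities.

    def rgb_to_ycbcr(r, g, b):
        """Per-pixel RGB -> YCbCr conversion (BT.601, unclipped)."""
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    def softmax(scores):
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def fuse(rgb_logits, ycbcr_logits):
        """Late fusion: average the per-class probabilities of both branches."""
        p1, p2 = softmax(rgb_logits), softmax(ycbcr_logits)
        return [(a + b) / 2 for a, b in zip(p1, p2)]
    ```

    The design intuition is that JPEG compression operates in YCbCr and subsamples chroma, so a branch trained in that space can stay sensitive to artifacts the RGB branch misses.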

    Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models

    Full text link
    Groundbreaking inventions and highly significant performance improvements in deep learning based Natural Language Processing have been witnessed through the development of transformer-based large Pre-trained Language Models (PLMs). The wide availability of unlabeled data within the human-generated data deluge, along with self-supervised learning strategies, has accelerated the success of large PLMs in language generation, language understanding, etc. At the same time, latent historical bias/unfairness in human minds towards a particular gender, race, etc., encoded unintentionally or intentionally into the corpora, harms and questions the utility and efficacy of large PLMs in many real-world applications, particularly for protected groups. In this paper, we present an extensive investigation towards understanding the existence of "Affective Bias" in large PLMs, to unveil any biased association of emotions such as anger, fear, and joy towards a particular gender, race, or religion with respect to the downstream task of textual emotion detection. We conduct our exploration of affective bias starting from corpus-level analysis, searching for imbalanced distributions of affective words within a domain in the large-scale corpora used to pre-train and fine-tune PLMs. We then quantify affective bias in model predictions through an extensive set of class-based and intensity-based evaluations using various bias evaluation corpora. Our results show the existence of statistically significant affective bias in PLM-based emotion detection systems, indicating biased association of certain emotions towards a particular gender, race, and religion.
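    As a toy illustration of the corpus-level check described above (not the paper's pipeline; the word lists are invented stand-ins for a real affect lexicon), counting how often emotion words co-occur with gendered terms can surface an imbalanced distribution:

    ```python
    from collections import Counter

    # Invented miniature lexicons; real work would use a resource such as an
    # emotion lexicon and much larger gendered-term lists.
    ANGER = {"angry", "furious", "rage"}
    JOY = {"happy", "joyful", "delighted"}
    MALE, FEMALE = {"he", "him", "man"}, {"she", "her", "woman"}

    def cooccurrence(sentences):
        """Count emotion words appearing alongside gendered terms."""
        counts = Counter()
        for s in sentences:
            words = set(s.lower().split())
            for gender, gset in (("male", MALE), ("female", FEMALE)):
                if words & gset:
                    counts[(gender, "anger")] += len(words & ANGER)
                    counts[(gender, "joy")] += len(words & JOY)
        return counts

    corpus = ["He was angry", "He was furious", "She was happy"]
    counts = cooccurrence(corpus)
    ```

    A skew such as anger clustering around one group's terms in the pre-training corpus is exactly the kind of imbalance that can later surface as biased model predictions.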

    REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection

    Full text link
    Technological advancements in web platforms allow people to express and share emotions towards textual write-ups written and shared by others. This gives rise to two interesting domains for analysis: the emotion expressed by the writer and the emotion elicited from the readers. In this paper, we propose a novel approach for readers' emotion detection from short-text documents using a deep learning model called REDAffectiveLM. In state-of-the-art NLP tasks, it is well understood that context-specific representations from transformer-based pre-trained language models help achieve improved performance. In this affective computing task, we explore how incorporating affective information can further enhance performance. To this end, we leverage context-specific and affect-enriched representations by using a transformer-based pre-trained language model in tandem with an affect-enriched Bi-LSTM+Attention network. For empirical evaluation, we procure a new dataset, REN-20k, besides using RENh-4k and SemEval-2007. We evaluate REDAffectiveLM rigorously across these datasets against a vast set of state-of-the-art baselines, where our model consistently outperforms the baselines with statistically significant results. Our results establish that utilizing affect-enriched representations along with context-specific representations within a neural architecture can considerably enhance readers' emotion detection. Since the impact of affect enrichment specifically on readers' emotion detection is not well explored, we conduct a detailed analysis of the affect-enriched Bi-LSTM+Attention component using qualitative and quantitative model-behavior evaluation techniques. We observe that, compared to conventional semantic embeddings, affect-enriched embeddings increase the network's ability to effectively identify and assign weight to the key terms responsible for readers' emotion detection.
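    A hypothetical sketch of what "affect enrichment" of an embedding can look like (the lexicon entries and vector sizes here are invented, not the paper's): append a lexicon-derived emotion score vector to each word's semantic embedding before the sequence enters the Bi-LSTM+Attention arm.

    ```python
    # Invented toy lexicon: per-word scores for (anger, fear, joy, sadness).
    AFFECT_LEX = {
        "funeral": [0.0, 0.2, 0.0, 0.9],
        "party":   [0.0, 0.0, 0.9, 0.0],
    }

    def enrich(word, semantic_vec):
        """Concatenate a semantic embedding with the word's affect scores;
        words missing from the lexicon get a zero affect vector."""
        affect = AFFECT_LEX.get(word, [0.0] * 4)
        return semantic_vec + affect

    v = enrich("party", [0.1, 0.3])
    ```

    The enriched vector carries explicit emotion signal alongside the learned semantics, which is one plausible reason the attention layer can weight emotion-bearing terms more effectively than with semantic embeddings alone.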

    Azimuthal and polar anchoring energies of aligning layers structured by nonlinear laser lithography

    Get PDF
    Although various techniques exist for producing high-quality liquid crystal (LC) alignment on different surfaces, the azimuthal and polar anchoring energies, as well as the pre-tilt angle, are important parameters for all of them. Here we study aligning layers, previously formed by nonlinear laser lithography (NLL) and subsequently modified in a specific manner, that have high-quality nano-periodic grooves on Ti surfaces and were recently proposed for LC alignment. Varying the NLL scanning speed during nano-structuring of the Ti surfaces, and further modifying them by ITO coating and deposition of a polyimide film, yielded different aligning layers whose main characteristics, namely the azimuthal and polar anchoring energies, were measured. For the modified aligning layers, the dependencies of the twist and pre-tilt angles were obtained for LC cells filled with the nematics E7 (Δε > 0) and MLC-6609 (Δε < 0). The contact angle of droplets of an isotropic liquid (glycerol) and of nematic LCs was also measured for various values of the scanning speed during laser processing.
    Comment: 49 pages, 18 figures

    Femtosecond laser written waveguides deep inside silicon

    Get PDF
    Photonic devices that can guide, transfer, or modulate light are highly desired in electronics and integrated silicon (Si) photonics. Here, we demonstrate for the first time, to the best of our knowledge, the creation of optical waveguides deep inside Si using femtosecond pulses at a central wavelength of 1.5 μm. To this end, we use 350 fs long, 2 μJ pulses with a repetition rate of 250 kHz from an Er-doped fiber laser, which we focused inside Si to create permanent modifications of the crystal. The position of the beam is accurately controlled with pump-probe imaging during fabrication. Waveguides that were 5.5 mm in length and 20 μm in diameter were created by scanning the focal position along the beam propagation axis. The fabricated waveguides were characterized with a continuous-wave laser operating at 1.5 μm. The refractive index change inside the waveguide was measured with optical shadowgraphy, yielding a value of 6 × 10⁻⁴, and by direct light coupling and far-field imaging, yielding a value of 3.5 × 10⁻⁴. The formation mechanism of the modification is discussed. © 2017 Optical Society of America
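    A back-of-the-envelope sketch (not from the paper) of what the reported numbers imply for guiding: assuming a step-index profile, n_Si ≈ 3.48 at 1.5 μm, and the lower measured index change of 3.5 × 10⁻⁴, one can estimate the waveguide's numerical aperture and V-number.

    ```python
    import math

    def numerical_aperture(n_core, delta_n):
        """NA of a step-index guide: sqrt(n_core^2 - n_clad^2)."""
        n_clad = n_core - delta_n
        return math.sqrt(n_core**2 - n_clad**2)

    def v_number(radius_um, wavelength_um, na):
        """Normalized frequency V = (2*pi*a/lambda) * NA."""
        return 2 * math.pi * radius_um / wavelength_um * na

    na = numerical_aperture(3.48, 3.5e-4)  # lower of the two measured index changes
    v = v_number(10.0, 1.5, na)            # 20 um diameter -> 10 um radius
    ```

    Under these assumptions V comes out near the single-mode cutoff of about 2.405, i.e. such a waveguide would support only one or a few modes; with the larger measured index change of 6 × 10⁻⁴, V rises somewhat above cutoff.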

    Income in Adult Survivors of Childhood Cancer.

    Get PDF
    INTRODUCTION: Little is known about the impact of childhood cancer on the personal income of survivors. We compared income between survivors and siblings, and determined factors associated with income. METHODS: As part of the Swiss Childhood Cancer Survivor Study (SCCSS), a questionnaire was sent to survivors, aged ≥18 years, registered in the Swiss Childhood Cancer Registry (SCCR), diagnosed at age [...]. Survivors were less likely than siblings to have a high income (> 4'500 CHF), even after we adjusted for socio-demographic and educational factors (OR = 0.46, p<0.001). Older age, male sex, personal and parental education, and number of working hours were associated with high income. Survivors of leukemia (OR = 0.40, p<0.001), lymphoma (OR = 0.63, p = 0.040), CNS tumors (OR = 0.22, p<0.001), and bone tumors (OR = 0.24, p = 0.003) had a lower income than siblings. Survivors who had cranial irradiation had a lower income than survivors who had no cranial irradiation (OR = 0.48, p = 0.006). DISCUSSION: Even after adjusting for socio-demographic characteristics, education, and working hours, survivors of various diagnostic groups have lower incomes than siblings. Further research needs to identify the underlying causes.
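    To make the reported odds ratios concrete, here is the standard arithmetic for translating an OR into probabilities, using an assumed (not reported) sibling baseline purely for illustration:

    ```python
    def apply_odds_ratio(p_baseline, odds_ratio):
        """Convert a baseline probability through an odds ratio:
        odds are scaled by OR, then mapped back to a probability."""
        odds = p_baseline / (1 - p_baseline) * odds_ratio
        return odds / (1 + odds)

    # If, hypothetically, 50% of siblings earned > 4'500 CHF, the adjusted
    # OR = 0.46 would correspond to roughly a 31.5% chance for survivors.
    p_survivor = apply_odds_ratio(0.50, 0.46)
    ```

    Note that an odds ratio only coincides with a risk ratio when the outcome is rare, which a high-income threshold need not be; hence the explicit conversion.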