183 research outputs found

    The Effects of Online Advertisements and News Images on News Reception

    The news and advertising industries have a symbiotic relationship: news media bring audiences to advertisers, and advertisers provide the funding necessary for the survival of the news media. This inseparable relationship between the news and advertising industries continues in the Internet era, in which various newly developed techniques are used to attract online newsreaders' attention. This raises the questions of whether simultaneous exposure to online news and advertisements has a negative impact on acquiring information from the news and whether any such negative impact can be mitigated by motivating newsreaders to engage in news reading through the inclusion of news images that attract their attention. To answer these questions, an online experiment was conducted with a 3 (Online Advertisements: None vs. Static Banners vs. Animated Banners) x 2 (News Images: None vs. Human Suffering) between-subjects design. The findings indicate that online advertisements may reduce readers' attention to news. Moreover, they suggest that news images depicting human suffering may mitigate the negative effect of online advertisements on news processing under some circumstances. Simultaneously processing news images and online advertisements may also cause cognitive overload that suppresses news processing. This implies that including news images increases knowledge acquisition only to the extent that newsreaders have enough cognitive resources available to process the information in the news. From a practical perspective, the findings shed light on what news reporters and editors may consider when designing online news websites.

    Ruthenium-Poly(Vinyl Pyridine) (RuPVP) Metallopolymers for Catalyzing Self-Oscillating Gels

    Stimuli-responsive polymer gels, in which a single stimulus (e.g., temperature or pH) causes a change in volume, have been the subject of intense interest for applications such as drug delivery and biological sensors. In these gels, a periodic external change in stimulus is required for a periodic oscillation to be observed within the gel. However, many biological systems maintain periodic oscillations under constant environmental conditions, transforming chemical energy into mechanical work. Materials capable of mimicking this biological behavior represent exciting opportunities for extending responsive behavior through chemical energy harvesting and autonomous function. Autonomous oscillations can be achieved by running the oscillating Belousov-Zhabotinsky (BZ) reaction within gels containing the BZ catalyst. When a gel containing a metal catalyst, such as ruthenium, is placed in a solution containing the BZ reactants (minus the Ru), the catalyst within the gel undergoes oscillation in its redox state. Because the hydrophilicity of the polymer network differs between the Ru2+ and Ru3+ states, the gel displays swell-deswell oscillations. One of the challenges in producing self-oscillating gels is the lack of options for BZ catalysts: currently used catalysts are either cost-prohibitive or overly difficult to synthesize. To alleviate this problem, a facile, relatively inexpensive synthesis of a ruthenium catalyst complex was attempted, following previously reported procedures in the coordination polymer literature. Using readily available precursors, cis-dichlorobis(2,2’-bipyridine)ruthenium(II) and poly(4-vinylpyridine), a ruthenium-poly(vinylpyridine) (RuPVP) metallopolymer was prepared. BZ reactions were successfully triggered by this ruthenium catalyst. With the catalytic ability of RuPVP established, this research can proceed to grafting the catalyst into a polymer gel, creating a versatile and accessible self-oscillating gel.

    Baseline CNN structure analysis for facial expression recognition

    We present a baseline convolutional neural network (CNN) structure and an image preprocessing methodology for improving CNN-based facial expression recognition. To identify the most efficient network structure, we investigated four network structures known to perform well in facial expression recognition. We also investigated the effect of input image preprocessing: five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussians) were tested and their accuracies compared. We trained 20 different CNN models (4 networks x 5 data input types) and verified the performance of each network with test images from five different databases. The experimental results showed that a three-layer structure of simple convolutional and max-pooling layers, with histogram-equalized image input, was the most efficient. We describe the detailed training procedure and analyze the test accuracy results based on extensive observation.
    Comment: 6 pages, RO-MAN 2016 conference
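    As a rough illustration of the kind of shallow baseline described above, the sketch below stacks three convolution/max-pooling blocks on histogram-equalized grayscale face crops. It is a minimal sketch, not the paper's exact model: the 48x48 input size, filter counts, kernel sizes, and seven-class output are assumptions made for the example.

```python
# Minimal sketch of a shallow CNN baseline for facial expression recognition.
# Assumptions (not from the paper): 48x48 grayscale inputs, 7 expression classes,
# and the specific filter counts / kernel sizes used below.
import cv2
import numpy as np
import torch
import torch.nn as nn

def preprocess(gray_face: np.ndarray) -> torch.Tensor:
    """Histogram-equalize an 8-bit grayscale face crop and return a 1x1xHxW tensor."""
    eq = cv2.equalizeHist(gray_face)              # the input variant found most effective
    x = torch.from_numpy(eq).float().div_(255.0)  # scale pixel values to [0, 1]
    return x.unsqueeze(0).unsqueeze(0)            # add batch and channel dimensions

class BaselineCNN(nn.Module):
    """Three stacked convolution + max-pooling blocks followed by a linear classifier."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 6 * 6, num_classes)  # sized for 48x48 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))
```

    The per-crop histogram equalization mirrors the preprocessing variant the abstract identifies as most effective; the training schedule and the five test databases are not reproduced here.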

    AdaFace: Quality Adaptive Margin for Face Recognition

    Recognition in low-quality face datasets is challenging because facial attributes are obscured and degraded. Advances in margin-based loss functions have enhanced the discriminability of faces in the embedding space. Further, previous studies have examined adaptive losses that assign more importance to misclassified (hard) examples. In this work, we introduce another aspect of adaptiveness in the loss function, namely the image quality. We argue that the strategy of emphasizing misclassified samples should be adjusted according to their image quality: the relative importance of easy or hard samples should depend on the sample's image quality. We propose a new loss function that emphasizes samples of different difficulties based on their image quality. Our method achieves this through an adaptive margin function that approximates image quality with feature norms. Extensive experiments show that our method, AdaFace, improves face recognition performance over the state of the art (SoTA) on four datasets (IJB-B, IJB-C, IJB-S and TinyFace). Code and models are released at https://github.com/mk-minchul/AdaFace.
    Comment: to be published in CVPR 2022 (Oral)
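    The sketch below illustrates the general idea of a quality-adaptive margin: the feature norm serves as a proxy for image quality and shifts the margin applied to the target-class logit. It is a simplified stand-in, not the released AdaFace implementation (see the repository linked above); the exact margin form, the per-batch norm statistics, and the hyperparameters m, h, and s are assumptions.

```python
# Sketch of a quality-adaptive margin softmax loss in the spirit of AdaFace.
# Assumptions: the margin form, batch-level norm statistics, and the default
# hyperparameters below are illustrative, not the official implementation.
import torch
import torch.nn.functional as F

def quality_adaptive_margin_loss(embeddings: torch.Tensor,   # (B, D) un-normalized features
                                 prototypes: torch.Tensor,   # (C, D) class weight vectors
                                 labels: torch.Tensor,       # (B,) integer class labels
                                 m: float = 0.4, h: float = 0.33, s: float = 64.0) -> torch.Tensor:
    # Cosine similarity between normalized embeddings and class prototypes.
    cosine = F.normalize(embeddings) @ F.normalize(prototypes).t()            # (B, C)

    # Feature norm as an image-quality proxy, standardized within the batch
    # and clipped to [-1, 1].
    norms = embeddings.norm(dim=1)
    q = ((norms - norms.mean()) / (norms.std() + 1e-6) / h).clamp(-1.0, 1.0)  # (B,)

    # Quality-dependent angular and additive margins applied to the target logit.
    theta = cosine.clamp(-1 + 1e-7, 1 - 1e-7).acos()
    g_angle = (-m * q).unsqueeze(1)    # rotates the decision boundary
    g_add = (m * q + m).unsqueeze(1)   # shifts the target logit

    target = F.one_hot(labels, cosine.size(1)).float()                        # (B, C)
    logits = (theta + g_angle * target).cos() - g_add * target
    return F.cross_entropy(s * logits, labels)
```

    Under this sketch, high-norm (presumably higher-quality) samples receive a margin that emphasizes hard examples, while low-norm samples receive a gentler margin, matching the abstract's idea of adjusting the emphasis on misclassified samples by image quality.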

    Cluster and Aggregate: Face Recognition with Large Probe Set

    Feature fusion plays a crucial role in unconstrained face recognition, where inputs (probes) comprise a set of N low-quality images whose individual qualities vary. Advances in attention and recurrent modules have led to feature fusion that can model the relationship among the images in the input set. However, attention mechanisms cannot scale to large N due to their quadratic complexity, and recurrent modules suffer from input order sensitivity. We propose a two-stage feature fusion paradigm, Cluster and Aggregate, that can both scale to large N and maintain the ability to perform sequential inference with order invariance. Specifically, the Cluster stage is a linear assignment of the N inputs to M global cluster centers, and the Aggregation stage is a fusion over the M clustered features. The clustered features play an integral role when the inputs are sequential, as they serve as a summarization of past features. By leveraging the order invariance of the incremental averaging operation, we design an update rule that achieves batch-order invariance, which guarantees that the contributions of early images in the sequence do not diminish as time steps increase. Experiments on the IJB-B and IJB-S benchmark datasets show the superiority of the proposed two-stage paradigm in unconstrained face recognition. Code and pretrained models are available at https://github.com/mk-minchul/caface.
    Comment: To appear in NeurIPS 2022
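    The sketch below illustrates the order-invariance idea behind the Cluster stage: batches of probe features are softly assigned to M global cluster centers, and per-cluster weighted sums are accumulated, so the fused cluster features do not depend on how the probe set is split into batches. It is a minimal sketch, not the released CAFace code (linked above); the soft-assignment rule, the temperature tau, and the cluster centers themselves are assumptions.

```python
# Sketch of order-invariant incremental clustering for set-based face recognition.
# Assumptions: softmax soft assignment with temperature tau, fixed (or externally
# learned) cluster centers, and simple weighted averaging as the fusion rule.
import torch
import torch.nn.functional as F

class IncrementalClusterState:
    def __init__(self, centers: torch.Tensor, tau: float = 0.1):
        self.centers = F.normalize(centers)    # (M, D) global cluster centers
        self.tau = tau
        m, d = centers.shape
        self.weighted_sum = torch.zeros(m, d)  # running sum of softly assigned features
        self.weight = torch.zeros(m)           # running total assignment weight per cluster

    def update(self, feats: torch.Tensor) -> None:
        """Fold a batch of N features (N, D) into the running cluster statistics."""
        a = F.softmax(F.normalize(feats) @ self.centers.t() / self.tau, dim=1)  # (N, M)
        self.weighted_sum += a.t() @ feats     # running sums commute across batches,
        self.weight += a.sum(dim=0)            # so early images never lose their contribution

    def clustered_features(self) -> torch.Tensor:
        """Return the M fused cluster features to be aggregated downstream."""
        return self.weighted_sum / self.weight.clamp_min(1e-6).unsqueeze(1)

# Example usage (shapes illustrative):
#   state = IncrementalClusterState(centers=torch.randn(4, 512))
#   for batch in batches_of_probe_features:   # any split or ordering of the probe set
#       state.update(batch)
#   fused = state.clustered_features()        # (4, 512), identical for any batch order
```

    Because the accumulated sums commute across batches, re-batching or reordering the same probe set yields identical clustered features, which is the batch-order invariance the abstract refers to; the Aggregation stage then fuses the M clustered features into a single representation.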