56 research outputs found

    Quantifying discrepancies in opinion spectra from online and offline networks

    Online social media such as Twitter are widely used for mining public opinions and sentiments on various issues and topics. The sheer volume of the data generated and the eager adoption by the online-savvy public are helping to raise the profile of online media as a convenient source of news and public opinion on social and political issues as well. Due to the uncontrollable biases in the population that heavily uses the media, however, it is often difficult to measure how accurately the online sphere reflects the offline world at large, undermining the usefulness of online media. One way of identifying and overcoming the online-offline discrepancies is to apply a common analytical and modeling framework to comparable data sets from online and offline sources and to cross-analyze the patterns found therein. In this paper we study the political spectra constructed from Twitter and from legislators' voting records as an example to demonstrate the potential limits of online media as a source for accurate public opinion mining.
    Comment: 10 pages, 4 figures
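    As a toy illustration of quantifying such an online-offline discrepancy, the sketch below compares two hypothetical one-dimensional opinion spectra with a two-sample Kolmogorov-Smirnov statistic. The sampled distributions, the SciPy-based comparison, and all parameter values are assumptions for illustration only, not the framework or data used in the paper.

        # Toy comparison of a hypothetical "offline" and "online" opinion spectrum.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Stand-in offline spectrum (e.g., scores derived from roll-call votes):
        # two moderate partisan blocs of equal weight.
        offline = np.concatenate([rng.normal(-0.5, 0.3, 500), rng.normal(0.5, 0.3, 500)])

        # Stand-in online spectrum (e.g., scores inferred from Twitter activity):
        # the same blocs, but with the vocal extremes over-represented.
        online = np.concatenate([rng.normal(-0.9, 0.25, 700), rng.normal(0.9, 0.25, 300)])

        # Two-sample Kolmogorov-Smirnov statistic as one simple discrepancy measure.
        ks_stat, p_value = stats.ks_2samp(online, offline)
        print(f"KS statistic: {ks_stat:.3f} (p = {p_value:.2e})")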

    Branching process approach for Boolean bipartite networks of metabolic reactions

    The branching process (BP) approach has been successful in explaining avalanche dynamics in complex networks. However, its applications have mainly focused on unipartite networks, in which all nodes are of the same type. Here, motivated by the need to understand avalanche dynamics in metabolic networks, we extend the BP approach to a particular bipartite network composed of Boolean AND and OR logic gates. We reduce the bipartite network to a unipartite network by integrating out the OR gates and obtain the effective branching ratio for the remaining AND gates. The standard BP approach is then applied to the reduced network, yielding the avalanche size distribution. We test the BP results against simulations on model networks and on two microbial metabolic networks, demonstrating the usefulness of the BP approach.
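    The sketch below illustrates the standard (unipartite) branching-process step that the abstract builds on: avalanches are grown as a Galton-Watson process with a given effective branching ratio, and the resulting avalanche sizes are tabulated. The Poisson offspring distribution and all parameter values are illustrative assumptions, not the reduction or the networks studied in the paper.

        # Galton-Watson avalanche simulation for a given effective branching ratio b.
        import numpy as np

        def avalanche_size(b, rng, max_size=10**6):
            """Total number of nodes activated in one avalanche started by a single seed."""
            size, active = 1, 1
            while active > 0 and size < max_size:
                offspring = rng.poisson(b, size=active).sum()  # children of the current shell
                size += offspring
                active = offspring
            return size

        rng = np.random.default_rng(1)
        b = 0.95  # slightly subcritical branching ratio
        sizes = np.array([avalanche_size(b, rng) for _ in range(20000)])

        # Empirical avalanche-size distribution on logarithmic bins.
        bins = np.unique(np.logspace(0, np.log10(sizes.max() + 1), 20).astype(int))
        hist, _ = np.histogram(sizes, bins=bins, density=True)
        for lo, hi, p in zip(bins[:-1], bins[1:], hist):
            print(f"s in [{lo:4d}, {hi:4d}): density ~ {p:.2e}")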

    Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming

    Recent works on neural network pruning advocate that reducing the depth of a network is more effective at reducing run-time memory usage and accelerating inference latency than reducing its width through channel pruning. In this regard, some recent works propose depth compression algorithms that merge convolution layers. However, the existing algorithms have a constricted search space and rely on human-engineered heuristics. In this paper, we propose a novel depth compression algorithm that targets general convolution operations. We cast the task as a subset selection problem that replaces inefficient activation layers with identity functions and optimally merges consecutive convolution operations into shallower equivalent convolutions for efficient end-to-end inference latency. Since the proposed subset selection problem is NP-hard, we formulate a surrogate optimization problem that can be solved exactly via two-stage dynamic programming within a few seconds. We evaluate our methods and baselines with TensorRT for a fair inference latency comparison. Our method outperforms the baseline methods with higher accuracy and faster inference speed on MobileNetV2 on the ImageNet dataset. Specifically, we achieve a 1.41× speed-up with a 0.11%p accuracy gain on MobileNetV2-1.0 on ImageNet.
    Comment: ICML 2023; Codes at https://github.com/snu-mllab/Efficient-CNN-Depth-Compressio
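    The sketch below checks, in one dimension with NumPy, the identity that underlies merging consecutive convolution layers once the activation between them has been replaced by an identity function: applying two correlations in sequence equals a single correlation with a combined kernel. It illustrates only the merge step under these toy assumptions, not the paper's latency-aware two-stage dynamic programming.

        # Merging two consecutive convolutions (no nonlinearity in between) into one.
        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.standard_normal(64)   # input signal (stand-in for a feature map)
        a = rng.standard_normal(3)    # kernel of the first "conv layer"
        b = rng.standard_normal(5)    # kernel of the second "conv layer"

        # Two layers applied in sequence, with the activation replaced by identity.
        two_step = np.correlate(np.correlate(x, a, mode="valid"), b, mode="valid")

        # One merged layer with the equivalent kernel of size 3 + 5 - 1 = 7.
        merged_kernel = np.convolve(a, b)
        one_step = np.correlate(x, merged_kernel, mode="valid")

        print(np.allclose(two_step, one_step))  # True: the two computations agree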

    Query-Efficient Black-Box Red Teaming via Bayesian Optimization

    The deployment of large-scale generative models is often restricted by their potential risk of causing harm to users in unpredictable ways. We focus on the problem of black-box red teaming, where a red team generates test cases and interacts with the victim model to discover a diverse set of failures with limited query access. Existing red teaming methods construct test cases from human supervision or a language model (LM) and query all test cases in a brute-force manner, without incorporating any information from past evaluations, resulting in a prohibitively large number of queries. To address this, we propose Bayesian red teaming (BRT), novel query-efficient black-box red teaming methods based on Bayesian optimization, which iteratively identify diverse positive test cases leading to model failures by utilizing a pre-defined user input pool and the past evaluations. Experimental results on various user input pools demonstrate that our method consistently finds a significantly larger number of diverse positive test cases under a limited query budget than the baseline methods. The source code is available at https://github.com/snu-mllab/Bayesian-Red-Teaming.
    Comment: ACL 2023 Long Paper, Main Conference
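    The sketch below shows the general pool-based Bayesian-optimization loop the abstract alludes to: fit a surrogate on the test cases evaluated so far and pick the next query from a fixed pool by an acquisition score. The random "embedding" features, the synthetic failure score, the Gaussian-process surrogate, and the UCB acquisition are stand-in assumptions, not the actual components of BRT.

        # Pool-based Bayesian optimization with a GP surrogate and a UCB acquisition.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(3)

        pool = rng.standard_normal((500, 8))           # stand-in embeddings of candidate test cases

        def blackbox_score(x):                         # stand-in for the victim model's failure score
            return float(np.sin(x[:3].sum()) + 0.1 * rng.standard_normal())

        evaluated = [int(i) for i in rng.choice(len(pool), size=5, replace=False)]  # warm-up queries
        scores = [blackbox_score(pool[i]) for i in evaluated]

        for _ in range(20):                            # limited query budget
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
            gp.fit(pool[evaluated], scores)
            mean, std = gp.predict(pool, return_std=True)
            ucb = mean + 1.0 * std                     # upper-confidence-bound acquisition
            ucb[evaluated] = -np.inf                   # never re-query an already evaluated case
            nxt = int(np.argmax(ucb))
            evaluated.append(nxt)
            scores.append(blackbox_score(pool[nxt]))

        best = int(np.argmax(scores))
        print(f"best failure score {scores[best]:.3f} after {len(scores)} queries")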