
    Implementing BERT and fine-tuned RoBERTa to detect AI-generated news by ChatGPT

    The abundance of information on social media has increased the necessity of accurate real-time rumour detection. Manual techniques for identifying and verifying fake news generated by AI tools are impracticable and time-consuming given the enormous volume of information produced every day. This has sparked growing interest in building automated systems to find fake news on the Internet. The experiments in this study demonstrate that the fine-tuned BERT and RoBERTa models were the most successful at detecting AI-generated news. Fine-tuned RoBERTa in particular showed excellent precision, with a score of 98%. In conclusion, this study has shown that neural networks can be used to identify fake news created by ChatGPT. The excellent performance of the RoBERTa and BERT models indicates that they can play a critical role in the fight against misinformation.
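    The abstract reports detection quality as precision on the AI-generated class. As a minimal illustration of that metric (with toy labels, not the paper's data), precision over a batch of predictions can be computed as:

    ```python
    def precision(y_true, y_pred, positive=1):
        """Precision = TP / (TP + FP) for the positive (AI-generated) class."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
        return tp / (tp + fp) if (tp + fp) else 0.0

    # Toy example: 1 = AI-generated, 0 = human-written (illustrative labels only).
    y_true = [1, 1, 1, 0, 0, 1, 0, 1]
    y_pred = [1, 1, 1, 1, 0, 1, 0, 1]
    print(precision(y_true, y_pred))  # 5 true positives, 1 false positive -> ~0.833
    ```

    A precision of 98%, as reported for fine-tuned RoBERTa, means almost every article the model flags as AI-generated actually is.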

    ChatLaw: Open-Source Legal Large Language Model with Integrated External Knowledge Bases

    Large Language Models (LLMs) have shown the potential to revolutionize natural language processing tasks in various domains, sparking great interest in vertical-specific large models. However, unlike proprietary models such as BloombergGPT and FinGPT, which have leveraged their unique data accumulations to make strides in the finance domain, there have been few similar large language models in the Chinese legal domain to facilitate its digital transformation. In this paper, we propose an open-source legal large language model named ChatLaw. Due to the importance of data quality, we carefully designed a legal domain fine-tuning dataset. Additionally, to overcome the problem of model hallucinations in legal data screening during reference data retrieval, we introduce a method that combines vector database retrieval with keyword retrieval to effectively reduce the inaccuracy of relying solely on vector database retrieval. Furthermore, we propose a self-attention method to enhance the ability of large models to overcome errors present in reference data, further mitigating model hallucinations at the model level and improving the problem-solving capabilities of large models. We also open-sourced our model and part of the data at https://github.com/PKU-YuanGroup/ChatLaw.
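    The hybrid retrieval idea described above (fusing vector similarity with keyword matching so neither signal dominates) can be sketched with toy data. The embeddings, corpus, and fusion weight below are illustrative assumptions, not ChatLaw's actual implementation:

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two dense vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def keyword_score(query, doc):
        """Fraction of query terms that appear verbatim in the document."""
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / len(q) if q else 0.0

    def hybrid_search(query, query_vec, docs, alpha=0.5):
        """Rank docs by a weighted sum of vector and keyword scores."""
        scored = []
        for text, vec in docs:
            score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
            scored.append((score, text))
        return [text for _, text in sorted(scored, reverse=True)]

    # Toy corpus with hand-made 3-d "embeddings" (purely illustrative).
    docs = [
        ("statute on contract termination", [0.9, 0.1, 0.0]),
        ("criminal procedure appeal rules", [0.1, 0.9, 0.2]),
    ]
    print(hybrid_search("contract termination notice", [0.8, 0.2, 0.1], docs)[0])
    ```

    The keyword term guards against the failure mode the abstract mentions: when an embedding lookup retrieves a semantically adjacent but legally wrong passage, exact statute terms in the query can still pull the correct document up the ranking.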

    ChatFace: Chat-Guided Real Face Editing via Diffusion Latent Space Manipulation

    Editing real facial images is a crucial task in computer vision with significant demand in various real-world applications. While GAN-based methods have shown potential in manipulating images, especially when combined with CLIP, they are limited in their ability to reconstruct real images due to the difficulty of GAN inversion. Despite the successful image reconstruction achieved by diffusion-based methods, challenges remain in effectively manipulating fine-grained facial attributes with textual instructions. To address these issues and facilitate convenient manipulation of real facial images, we propose a novel approach that conducts text-driven image editing in the semantic latent space of a diffusion model. By aligning the temporal features of the diffusion model with the semantic condition during the generative process, we introduce a stable manipulation strategy that performs precise zero-shot manipulation effectively. Furthermore, we develop an interactive system named ChatFace, which leverages the zero-shot reasoning ability of large language models to perform efficient manipulations in the diffusion semantic latent space. This system enables users to perform complex multi-attribute manipulations through dialogue, opening up new possibilities for interactive image editing. Extensive experiments confirm that our approach outperforms previous methods and enables precise editing of real facial images, making it a promising candidate for real-world applications. Project page: https://dongxuyue.github.io/chatface
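    At its core, manipulation in a semantic latent space amounts to shifting a latent code along a learned attribute direction and scaling the shift by an edit strength. A minimal sketch of that operation (toy vectors and a hypothetical "smile" direction, not the paper's actual latents):

    ```python
    def edit_latent(h, direction, strength):
        """Shift a semantic latent code h along an attribute direction."""
        return [hv + strength * dv for hv, dv in zip(h, direction)]

    h = [0.2, -0.5, 1.0]          # toy semantic latent code
    smile_dir = [0.1, 0.3, -0.2]  # hypothetical "smile" attribute direction
    edited = edit_latent(h, smile_dir, strength=2.0)
    print(edited)
    ```

    In a real diffusion editor the edited code conditions the denoising process; the strength parameter gives users the continuous control over attribute intensity that dialogue-driven editing relies on.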

    Printing surface charge as a new paradigm to program droplet transport

    Directed, long-range and self-propelled transport of droplets on solid surfaces, especially on water-repellent surfaces, is crucial for many applications from water harvesting to bio-analytical devices. One appealing strategy to achieve preferential transport is to passively control surface wetting gradients, topological or chemical, to break the asymmetric contact line and overcome the resistance force. Despite extensive progress, directional droplet transport has been limited to small transport velocities and short transport distances due to a fundamental trade-off: rapid transport of a droplet demands a large wetting gradient, whereas long-range transport necessitates a relatively small one. Here, we report a radically new strategy that resolves this bottleneck through the creation of a previously unexplored gradient in surface charge density (SCD). By leveraging facile droplet printing on superamphiphobic surfaces, together with a fundamental understanding of the mechanisms underpinning the creation of the preferential SCD, we demonstrate self-propulsion of droplets with a record-high velocity over an ultra-long distance without the need for additional energy input. Such Leidenfrost-like droplet transport, manifested under ambient conditions, is also generic: it can occur on a variety of substrates, including flexible and vertically placed surfaces. Moreover, distinct from conventional physical and chemical gradients, the new dimension of gradient in SCD can be programmed in a rewritable fashion. We envision that our work enriches and extends our capability to manipulate droplet transport and will find numerous potential applications otherwise impossible. Comment: 11 pages, 4 figures

    DeepC2: AI-powered Covert Botnet Command and Control on OSNs

    Botnets are one of the major threats to computer security. In previous botnet command and control (C&C) scenarios using online social networks (OSNs), methods for addressing (e.g., IDs, links, or DGAs) are hardcoded into bots. Once a bot is reverse engineered, the botmaster and C&C infrastructure will be exposed. Additionally, abnormal content from explicit commands may expose botmasters and raise anomalies on OSNs. To overcome these deficiencies, we propose DeepC2, an AI-powered covert C&C method on OSNs. By leveraging neural networks, bots can find botmasters by avatars, which are converted into feature vectors and embedded into bots. Adversaries cannot infer botmasters' accounts from the vectors. Commands are embedded into normal content (e.g., tweets and comments) using text data augmentation and hash collision. Experiments on Twitter show that command-embedded content can be generated efficiently, and bots can find botmasters and obtain commands accurately. Security analysis of different scenarios shows that DeepC2 is robust and hard to shut down. By demonstrating how AI may help promote covert communication on OSNs, this work provides a new perspective on botnet detection and confrontation. Comment: 13 pages, 15 figures, 7 tables. Discussion on possible countermeasures updated

    Video-Bench: A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models

    Video-based large language models (Video-LLMs) have been recently introduced, targeting both fundamental improvements in perception and comprehension, and a diverse range of user inquiries. In pursuit of the ultimate goal of achieving artificial general intelligence, a truly intelligent Video-LLM should not only see and understand its surroundings, but also possess human-level commonsense and make well-informed decisions for users. To guide the development of such a model, the establishment of a robust and comprehensive evaluation system becomes crucial. To this end, this paper proposes \textit{Video-Bench}, a new comprehensive benchmark along with a toolkit specifically designed for evaluating Video-LLMs. The benchmark comprises 10 meticulously crafted tasks, evaluating the capabilities of Video-LLMs across three distinct levels: Video-exclusive Understanding, Prior Knowledge-based Question-Answering, and Comprehension and Decision-making. In addition, we introduce an automatic toolkit tailored to process model outputs for the various tasks, facilitating the calculation of metrics and generating convenient final scores. We evaluate 8 representative Video-LLMs using \textit{Video-Bench}. The findings reveal that current Video-LLMs still fall considerably short of achieving human-like comprehension and analysis of real-world videos, offering valuable insights for future research directions. The benchmark and toolkit are available at: \url{https://github.com/PKU-YuanGroup/Video-Bench}. Comment: Benchmark is available at https://github.com/PKU-YuanGroup/Video-Bench
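    A scoring toolkit of the kind the abstract describes essentially compares model outputs with gold answers per task, aggregates accuracy per evaluation level, and averages the levels into a final score. A hedged stdlib sketch with invented task and level names (not Video-Bench's actual file format or task list):

    ```python
    def score_benchmark(results, gold, levels):
        """Per-level accuracy plus a final score averaged over levels.

        results/gold: {task: {question_id: answer}}
        levels: {level_name: [task, ...]}
        """
        level_scores = {}
        for level, tasks in levels.items():
            correct = total = 0
            for task in tasks:
                for qid, answer in gold[task].items():
                    total += 1
                    correct += results[task].get(qid) == answer
            level_scores[level] = correct / total if total else 0.0
        final = sum(level_scores.values()) / len(level_scores)
        return level_scores, final

    # Invented toy tasks and answers, for illustration only.
    gold = {"summarize": {"q1": "A", "q2": "B"}, "qa": {"q1": "C"}}
    results = {"summarize": {"q1": "A", "q2": "C"}, "qa": {"q1": "C"}}
    levels = {"video_exclusive": ["summarize"], "prior_knowledge": ["qa"]}
    scores, final = score_benchmark(results, gold, levels)
    print(scores, final)  # 0.5 and 1.0 per level -> final 0.75
    ```

    Averaging over levels rather than over raw questions keeps a level with many questions from dominating the final score, which matters when the three capability levels differ in size.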

    Bioinspired Underwater Bonding and Debonding on Demand

    Mussel glue: Bioinspired underwater chemical bonding with the possibility of phototriggered debonding is reported. A four-arm star-poly(ethylene glycol) end-functionalized with nitrodopamine was synthesized. The nitrodopamine offers the reactivity of catechol combined with the chemistry of the photocleavable o-nitrophenyl ethyl group.