5 research outputs found

    Scalable colored sub-ambient radiative coolers based on a polymer-Tamm photonic structure

    Daytime radiative coolers cool objects below the air temperature without any electricity input, but most are limited by a silvery or whitish appearance. Colored daytime radiative coolers (CDRCs) with diverse colors, scalable manufacture, and sub-ambient cooling have not been achieved. We introduce a polymer-Tamm photonic structure that enables high infrared emittance and an engineered absorbed solar irradiance, governed by the quality factor (Q-factor). We theoretically determine the thresholds for sub-ambient cooling through yellow, magenta, and cyan CDRCs. We experimentally fabricate these coolers and observe average temperature drops of 2.6-8.8 degrees Celsius during daytime and 4.0-4.4 degrees Celsius during nighttime. Furthermore, we demonstrate a scalably manufactured magenta CDRC with a width of 60 cm and a length of 500 cm, produced by a roll-to-roll deposition technique. This work provides guidelines for large-scale CDRCs and offers unprecedented opportunities for potential applications with energy-saving, aesthetic, and visual comfort demands.
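
    For context, sub-ambient daytime cooling of this kind is usually reasoned about with a steady-state power balance; the expression below is standard radiative-cooling background, not an equation quoted from the paper:

        P_net(T) = P_rad(T) - P_atm(T_amb) - P_solar - P_nonrad

    Here P_rad(T) is the thermal emission of the cooler (set by its infrared emittance), P_atm(T_amb) is the absorbed downward atmospheric radiation, P_solar is the absorbed solar irradiance (the quantity the Q-factor engineering tunes in this work), and P_nonrad collects conductive and convective heat gains. Sub-ambient operation requires P_net(T_amb) > 0, i.e., the emitted thermal radiation must exceed all absorbed and parasitic gains at the ambient air temperature.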

    Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond

    Logical reasoning consistently plays a fundamental and significant role in the domains of knowledge engineering and artificial intelligence. Recently, Large Language Models (LLMs) have emerged as a noteworthy innovation in natural language processing (NLP), exhibiting impressive achievements across various classic NLP tasks. However, whether LLMs can effectively address the task of logical reasoning, which requires gradual cognitive inference similar to human intelligence, remains an open question. To this end, we aim to bridge this gap and provide comprehensive evaluations in this paper. Firstly, to offer systematic evaluations, we select fifteen typical logical reasoning datasets and organize them into deductive, inductive, abductive, and mixed-form reasoning settings. For comprehensiveness, we include three representative LLMs (i.e., text-davinci-003, ChatGPT, and BARD) and evaluate them on all selected datasets under zero-shot, one-shot, and three-shot settings. Secondly, unlike previous evaluations that rely only on simple metrics (e.g., accuracy), we propose fine-grained evaluations from both objective and subjective perspectives, covering both answers and explanations. Additionally, to uncover the logical flaws of LLMs, problematic cases are attributed to five error types along two dimensions, i.e., the evidence selection process and the reasoning process. Thirdly, to avoid the influence of knowledge bias and focus purely on benchmarking the logical reasoning capability of LLMs, we propose a new dataset with neutral content. It contains 3,000 samples and covers deductive, inductive, and abductive settings. Based on the in-depth evaluations, this paper finally forms a general evaluation scheme of logical reasoning capability from six dimensions, reflecting the pros and cons of LLMs and giving guiding directions for future work.
    Comment: 14 pages, 11 figures
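
    As a rough illustration of the zero-/one-/three-shot evaluation protocol described above, the minimal Python sketch below shows how k-shot accuracy could be measured; the names build_prompt, evaluate, model_fn, and the toy data are hypothetical and are not taken from the paper or its benchmark:

    # Minimal sketch (not the paper's code): score an LLM on reasoning questions
    # under zero-, one-, and three-shot prompting by exact-match answer accuracy.
    from typing import Callable, Dict, List

    def build_prompt(question: str, exemplars: List[Dict[str, str]]) -> str:
        """Prepend k worked exemplars (k = 0, 1, or 3) before the test question."""
        demos = "".join(f"Q: {ex['question']}\nA: {ex['answer']}\n\n" for ex in exemplars)
        return f"{demos}Q: {question}\nA:"

    def evaluate(model_fn: Callable[[str], str],
                 dataset: List[Dict[str, str]],
                 exemplars: List[Dict[str, str]],
                 k: int) -> float:
        """Return answer accuracy of model_fn on dataset using k-shot prompts."""
        correct = 0
        for item in dataset:
            prompt = build_prompt(item["question"], exemplars[:k])
            prediction = model_fn(prompt).strip().lower()
            correct += prediction == item["answer"].strip().lower()
        return correct / len(dataset)

    if __name__ == "__main__":
        # Toy stand-ins; a real run would wrap an LLM API call in model_fn.
        toy_data = [{"question": "All birds fly. A robin is a bird. Does a robin fly?",
                     "answer": "yes"}]
        toy_exemplars = [{"question": "All cats purr. Tom is a cat. Does Tom purr?",
                          "answer": "yes"}]
        dummy_model = lambda prompt: "yes"
        for k in (0, 1, 3):
            print(f"{k}-shot accuracy:", evaluate(dummy_model, toy_data, toy_exemplars, k))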