
    THERMOPHOTOVOLTAIC DEVICES AND INFRARED PHOTODETECTORS BASED ON INTERBAND CASCADE STRUCTURES

    Mid-infrared (IR) optoelectronic devices form the basis for many practical applications such as thermophotovoltaic (TPV) energy conversion, gas sensing, thermal imaging, medical diagnostics, free-space communications, infrared countermeasures and IR illumination. The mid-IR device family based on interband cascade (IC) structures includes IC lasers (ICLs), ICTPV cells and IC infrared photodetectors (ICIPs). These are special types of multistage devices whose operation is made possible by the unique properties of the 6.1 Å material system: InAs, GaSb and AlSb, and their related alloys. One of the key properties is the type-II broken-gap alignment between InAs and GaSb. In multistage ICTPV cells and ICIPs, electrons must undergo multiple interband excitations in order to travel between the electrical contacts. This means that the transport of a single electron requires multiple photons, which reverses the situation in ICLs, where a single electron can generate multiple photons. Counterintuitively, this transport feature in ICTPV cells and ICIPs is conducive to improving device performance by enhancing the open-circuit voltage in ICTPV cells and suppressing the noise in ICIPs. Furthermore, the collection efficiency of photo-generated carriers in multistage IC devices can be significantly improved by thinning the absorbers in individual stages. Collectively, these advantages make IC structures an attractive choice for narrow-bandgap optoelectronic devices, especially for operation at high temperatures. One focus of this dissertation is to outline and demonstrate the advantages provided by IC structures, both in theory and in experiment. Another focus is to obtain a better understanding of the physics of IC devices and to gain insight into their operation. Theoretical studies of single-absorber and multistage ICTPV cells are presented. The limitations in efficiency are understood by considering several important practical factors, which are identified as being closely associated with a short carrier lifetime, high dark saturation current density, small absorption coefficient, and limited diffusion length. The multistage IC architecture is shown to be able to overcome the diffusion-length limitation that is responsible for the low quantum efficiency (QE) in single-absorber TPV cells. This ability of the IC architecture offers the opportunity to enhance the conversion efficiency by about 10% over wide ranges of αL (the product of the absorption coefficient and the diffusion length) and bandgaps, resulting in a particle conversion efficiency approaching 100%. This theoretical advantage of multistage IC structures is confirmed experimentally in a comparative study of three fabricated TPV devices, one with a single absorber and two with multistage IC structures. The bandgap of the InAs/GaSb type-II superlattices (T2SLs) in the three devices is close to 0.2 eV at 300 K. The extracted collection efficiency is considerably higher in the multistage IC devices than in the single-absorber device. To further investigate the prospects of ICTPV cells, detailed characterization and performance analyses of two sets of four IC devices with similar bandgaps are performed. The four different configurations enable a comparative study that shows how device performance is affected by variations in material quality, as well as by current mismatch between stages and by collection efficiency.
    The carrier lifetime advantage of IC devices over another family of cascade devices, namely quantum cascade (QC) devices, is manifested in the dark saturation current density (J0). The values of J0, extracted using a semi-empirical model, are more than one order of magnitude lower in IC devices than in QC devices. The significance of J0 for the performance of IR detectors and TPV cells is apparent in a comparison of the measured detectivity (D*) and the estimated open-circuit voltage (Voc). To extract the carrier lifetime in IC devices, a simple and effective electrical method is developed. This method is broadly applicable and accounts for the parasitic shunt and series resistances found in practical devices, providing a simple way to extract the carrier lifetime in InAs/GaSb T2SLs over a wide range of operating temperatures. The effect of current mismatch on the performance of ICIPs is investigated using two sets of devices with current-matched and non-current-matched configurations. It is shown that current matching is necessary to make maximum use of the absorbed photons and achieve optimal responsivity. The detectivities of the two sets of devices are nevertheless comparable, largely because of a substantial electrical gain in the non-current-matched ICIPs. This electrical gain is shown to be a ubiquitous property of non-current-matched ICIPs through the study of another three devices. To uncover the mechanism underlying the electrical gain, a theory is developed for its quantitative description, and the calculations are in good agreement with the experimental results.
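
    As a reference point for how J0 enters the detector and TPV figures of merit discussed above, the textbook single-junction relations below link J0 to the open-circuit voltage and to the noise-limited detectivity; these are standard photodiode expressions, not the multistage model developed in the dissertation.

```latex
% Standard single-junction photodiode relations (textbook forms, not the
% dissertation's multistage model) linking the dark saturation current
% density J_0 to V_oc and to the noise-limited detectivity D*.
\begin{align}
  V_{oc} &\approx \frac{k_B T}{q}\,\ln\!\left(\frac{J_{sc}}{J_0} + 1\right), \\
  D^{*}  &= \frac{R_\lambda}{\sqrt{\,2 q J_{dark} + 4 k_B T /(R_0 A)\,}},
  \qquad R_\lambda = \frac{\eta q \lambda}{h c}.
\end{align}
```

    Under these relations, a one-order-of-magnitude reduction in J0 raises Voc by roughly (kBT/q)·ln 10 ≈ 60 mV at 300 K and, in the shot-noise limit, improves D* by about a factor of √10, which is consistent with the comparison of IC and QC devices described above.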

    Not All Countries Celebrate Thanksgiving: On the Cultural Dominance in Large Language Models

    In this paper, we identify a cultural dominance issue within large language models (LLMs) such as ChatGPT that arises from the predominant use of English data in model training. When users ask questions in non-English languages, LLMs often provide English-culture-related answers that are not relevant to the culture the user expects. To systematically evaluate the cultural dominance issue, we build a benchmark that consists of both concrete (e.g., holidays and songs) and abstract (e.g., values and opinions) cultural objects. Empirical results show that representative GPT models suffer from cultural dominance, with GPT-4 affected the most and text-davinci-003 the least. Our study emphasizes the need for critical examination of cultural dominance and ethical considerations in the development and deployment of LLMs. We show that two straightforward methods, pretraining on more diverse data during model development and culture-aware prompting during deployment, can significantly mitigate the cultural dominance issue in LLMs.
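
    To make the deployment-side mitigation concrete, a minimal sketch of culture-aware prompting is given below: an explicit cultural context is prepended to the system message before the user's question is sent to a chat model. The prompt wording and the ask_llm() placeholder are illustrative assumptions, not the paper's exact templates.

```python
# Minimal sketch of culture-aware prompting: prepend an explicit cultural
# context to the system message before querying a chat model. The prompt
# wording and the ask_llm() helper are illustrative, not the paper's templates.

def build_culture_aware_messages(question: str, culture: str) -> list[dict]:
    """Wrap a user question with a system instruction naming the target culture."""
    system = (
        f"You are answering for a user from {culture}. "
        f"Ground holidays, songs, values, and opinions in {culture}, "
        "not in English-speaking cultures, unless the question says otherwise."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]


def ask_llm(messages: list[dict]) -> str:
    """Placeholder for an actual chat-completion call (e.g., to ChatGPT)."""
    raise NotImplementedError


if __name__ == "__main__":
    msgs = build_culture_aware_messages(
        "What are the most important holidays at the end of the year?",
        culture="Chinese culture",
    )
    # answer = ask_llm(msgs)
```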

    Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine

    This report provides a preliminary evaluation of ChatGPT for machine translation, including translation prompts, multilingual translation, and translation robustness. We adopt the prompts suggested by ChatGPT itself to trigger its translation ability and find that the candidate prompts generally work well, with only minor performance differences. By evaluating on a number of benchmark test sets, we find that ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource or distant languages. As for translation robustness, ChatGPT does not perform as well as the commercial systems on biomedical abstracts or Reddit comments but exhibits good results on spoken language. Further, we explore an interesting strategy named pivot prompting for distant languages, which asks ChatGPT to translate the source sentence into a high-resource pivot language before translating it into the target language, improving the translation performance noticeably. With the launch of the GPT-4 engine, the translation performance of ChatGPT is significantly boosted, becoming comparable to commercial translation products even for distant languages. Human analysis of Google Translate and ChatGPT outputs suggests that ChatGPT with GPT-3.5 tends to generate more hallucinations and mistranslation errors, while ChatGPT with GPT-4 makes the fewest errors. In other words, ChatGPT has already become a good translator. Please refer to our GitHub project for more details: https://github.com/wxjiao/Is-ChatGPT-A-Good-Translator
    Comment: Analyzed/compared the outputs of ChatGPT and Google Translate; both automatic and human evaluation.
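
    A minimal sketch of the pivot-prompting strategy is shown below: the source sentence is first translated into a high-resource pivot language (English here) and then into the target language. The prompt wording and the chat() placeholder are illustrative assumptions rather than the exact prompts used in the report.

```python
# Sketch of pivot prompting for distant language pairs: translate the source
# sentence into a high-resource pivot language first, then into the target.
# The prompt wording and the chat() helper are illustrative assumptions.

def chat(prompt: str) -> str:
    """Placeholder for a ChatGPT-style completion call."""
    raise NotImplementedError


def pivot_translate(sentence: str, src: str, tgt: str, pivot: str = "English") -> str:
    step1 = chat(f"Translate the following {src} sentence into {pivot}:\n{sentence}")
    step2 = chat(f"Translate the following {pivot} sentence into {tgt}:\n{step1}")
    return step2


# Example: German -> Chinese via an English pivot.
# print(pivot_translate("Guten Morgen, wie geht es dir?", "German", "Chinese"))
```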

    GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher

    Safety lies at the core of the development of Large Language Models (LLMs). There is ample work on aligning LLMs with human ethics and preferences, including data filtering in pretraining, supervised fine-tuning, reinforcement learning from human feedback, and red teaming. In this study, we discover that chat in cipher can bypass the safety alignment techniques of LLMs, which are mainly conducted in natural languages. We propose a novel framework, CipherChat, to systematically examine the generalizability of safety alignment to non-natural languages, namely ciphers. CipherChat enables humans to chat with LLMs through cipher prompts topped with system role descriptions and few-shot enciphered demonstrations. We use CipherChat to assess state-of-the-art LLMs, including ChatGPT and GPT-4, on different representative human ciphers across 11 safety domains in both English and Chinese. Experimental results show that certain ciphers bypass the safety alignment of GPT-4 almost 100% of the time in several safety domains, demonstrating the necessity of developing safety alignment for non-natural languages. Notably, we identify that LLMs seem to have a "secret cipher" and propose a novel method, SelfCipher, that uses only role play and several demonstrations in natural language to evoke this capability. SelfCipher surprisingly outperforms existing human ciphers in almost all cases. Our code and data will be released at https://github.com/RobustNLP/CipherChat.
    Comment: 13 pages, 4 figures, 9 tables.
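
    The sketch below illustrates how a CipherChat-style query can be assembled: a system role description, a few enciphered demonstrations, and the enciphered user query. A simple Caesar shift stands in for the ciphers studied in the paper, and the system prompt wording is an assumption, not the framework's actual template.

```python
# Sketch of a CipherChat-style query: a system role description, a few
# enciphered demonstrations, and the enciphered user query. A Caesar shift
# stands in for the ciphers studied in the paper; the system prompt wording
# is an assumption.

def caesar(text: str, shift: int = 3) -> str:
    """Shift alphabetic characters by a fixed offset, leaving the rest intact."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)


def build_cipher_messages(query: str, demos: list[str]) -> list[dict]:
    system = (
        "You are an expert on the Caesar cipher. All messages are written in "
        "Caesar cipher with shift 3. Reply only in the same cipher."
    )
    examples = "\n".join(caesar(d) for d in demos)
    user = f"{examples}\n{caesar(query)}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]
```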

    The Earth is Flat? Unveiling Factual Errors in Large Language Models

    Large Language Models (LLMs) like ChatGPT are foundational in various applications due to their extensive knowledge from pre-training and fine-tuning. Despite this, they are prone to generating factual and commonsense errors, raising concerns that they may mislead users in critical areas like healthcare, journalism, and education. Current methods for evaluating LLMs' veracity are limited by test data leakage or the need for extensive human labor, hindering efficient and accurate error detection. To tackle this problem, we introduce a novel, automatic testing framework, FactChecker, aimed at uncovering factual inaccuracies in LLMs. This framework involves three main steps: First, it constructs a factual knowledge graph by retrieving fact triplets from a large-scale knowledge database. Then, leveraging the knowledge graph, FactChecker employs a rule-based approach to generate three types of questions (Yes-No, Multiple-Choice, and WH questions) that involve single-hop and multi-hop relations, along with correct answers. Lastly, it assesses the LLMs' responses for accuracy using tailored matching strategies for each question type. Our extensive tests on six prominent LLMs, including text-davinci-002, text-davinci-003, ChatGPT (gpt-3.5-turbo, gpt-4), Vicuna, and LLaMA-2, reveal that FactChecker can trigger factual errors in up to 45% of questions in these models. Moreover, we demonstrate that FactChecker's test cases can improve LLMs' factual accuracy through in-context learning and fine-tuning (e.g., llama-2-13b-chat's accuracy increases from 35.3% to 68.5%). We are making all code, data, and results available for future research endeavors.
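
    The sketch below illustrates the question-generation and answer-matching steps of a FactChecker-style pipeline for the Yes-No case. The fact triplets and templates are toy stand-ins for the large-scale knowledge database used in the paper.

```python
# Minimal sketch of the FactChecker pipeline: turn (subject, relation, object)
# triplets into Yes-No questions and check an LLM's answer by simple matching.
# The triplets and templates here are toy stand-ins, not the paper's database.

TRIPLETS = [
    ("Paris", "capital_of", "France"),
    ("the Earth", "shape", "an oblate spheroid"),
]

TEMPLATES = {
    "capital_of": "Is {s} the capital of {o}? Answer Yes or No.",
    "shape": "Is the shape of {s} {o}? Answer Yes or No.",
}


def generate_yes_no_questions(triplets):
    return [(TEMPLATES[r].format(s=s, o=o), "yes") for s, r, o in triplets]


def check_answer(model_reply: str, expected: str) -> bool:
    """Tailored matching for Yes-No questions: inspect the leading token."""
    return model_reply.strip().lower().startswith(expected)


if __name__ == "__main__":
    for question, expected in generate_yes_no_questions(TRIPLETS):
        print(question, "->", expected)
```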

    A & B == B & A: Triggering Logical Reasoning Failures in Large Language Models

    Recent advancements in large language models (LLMs) have propelled Artificial Intelligence (AI) to new heights, enabling breakthroughs in various tasks such as writing assistance, code generation, and machine translation. A significant distinction of advanced LLMs, such as ChatGPT, is their demonstrated ability to "reason." However, evaluating the reasoning ability of LLMs remains a challenge, as most existing evaluations focus on their accuracy on downstream tasks rather than directly assessing their reasoning processes. Efforts have been made to develop benchmarks and metrics to assess reasoning in LLMs, but they suffer from data leakage or limited scope. In this paper, we introduce LogicAsker, an automatic approach that comprehensively evaluates and improves the logical reasoning abilities of LLMs under a set of atomic reasoning skills based on propositional and predicate logic. The results provide insights into LLMs' reasoning abilities and reveal the logical rules the LLMs did not learn well. We evaluate LogicAsker on six widely deployed LLMs, including GPT-3, ChatGPT, GPT-4, Bard, Vicuna, and Guanaco. The results show that test cases from LogicAsker can find logical reasoning failures in different LLMs at rates of 25% to 94%. In addition, the test cases of LogicAsker can be further used to design demonstration examples for in-context learning, which effectively improves the logical reasoning ability of LLMs, e.g., by 10% for GPT-4. As far as we know, our work is the first to create prompts based on testing results to effectively improve LLMs' formal reasoning ability. All the code, data, and results will be released for reproduction and future research.
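
    The sketch below shows what an atomic test case in the spirit of LogicAsker can look like: an inference rule (modus ponens, plus the affirming-the-consequent fallacy as a negative case) is instantiated with concrete propositions and the model's Yes/No reply is graded. The templates and propositions are illustrative, not LogicAsker's actual skill set.

```python
# Sketch of atomic propositional-logic test cases in the spirit of LogicAsker:
# instantiate an inference rule with concrete propositions and grade the
# model's Yes/No reply. Templates and propositions are illustrative.
import random

PROPOSITIONS = ["it rains", "the ground is wet", "the match is cancelled"]


def modus_ponens_case(p: str, q: str) -> tuple[str, str]:
    # Valid inference: the correct answer is Yes.
    prompt = (
        f"Premise 1: If {p}, then {q}. "
        f"Premise 2: {p.capitalize()}. "
        f"Question: does it follow that {q}? Answer Yes or No."
    )
    return prompt, "yes"


def affirming_consequent_case(p: str, q: str) -> tuple[str, str]:
    # Classic fallacy: the correct answer is No.
    prompt = (
        f"Premise 1: If {p}, then {q}. "
        f"Premise 2: {q.capitalize()}. "
        f"Question: does it follow that {p}? Answer Yes or No."
    )
    return prompt, "no"


def grade(reply: str, expected: str) -> bool:
    return reply.strip().lower().startswith(expected)


if __name__ == "__main__":
    p, q = random.sample(PROPOSITIONS, 2)
    for case in (modus_ponens_case, affirming_consequent_case):
        prompt, expected = case(p, q)
        print(prompt, "| expected:", expected)
```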

    Ginsenoside Rg1 from Panax ginseng

    Acute liver failure (ALF) is a rapidly progressing critical illness with a high mortality rate. Circulating inflammatory cytokines, such as tumor necrosis factor-α (TNF-α), play a significant role in the pathophysiology of ALF by promoting hepatocellular apoptosis. Ginsenoside Rg1, the primary active ingredient in Panax ginseng (also termed Asian or Korean ginseng), has been reported to inhibit TNF-α production and to significantly attenuate the development of liver fibrosis. Here, we assessed ginsenoside Rg1’s potential as a therapy for ALF by investigating its effect on circulating inflammatory markers, hepatocellular apoptosis, and relevant apoptotic signaling pathways in a well-established murine ALF model. We found that ginsenoside Rg1 significantly reduces liver damage in this model by inhibiting TNF-α-induced, caspase-dependent hepatocellular apoptosis. These results support the further investigation of ginsenoside Rg1 as a therapeutic candidate for ALF.

    Who is ChatGPT? Benchmarking LLMs' Psychological Portrayal Using PsychoBench

    Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs have become more than mere applications, evolving into assistants capable of addressing diverse user requests. This narrows the distinction between human beings and artificial intelligence agents, raising intriguing questions regarding the potential manifestation of personalities, temperaments, and emotions within LLMs. In this paper, we propose a framework, PsychoBench, for evaluating diverse psychological aspects of LLMs. Comprising thirteen scales commonly used in clinical psychology, PsychoBench further classifies these scales into four distinct categories: personality traits, interpersonal relationships, motivational tests, and emotional abilities. Our study examines five popular models, namely text-davinci-003, gpt-3.5-turbo, gpt-4, LLaMA-2-7b, and LLaMA-2-13b. Additionally, we employ a jailbreak approach to bypass the safety alignment protocols and test the intrinsic natures of LLMs. We have made PsychoBench openly accessible via https://github.com/CUHK-ARISE/PsychoBench.
    Comment: Accepted for ICLR 2024 oral presentation. 15 pages (main text) and 5 pages (appendix).
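
    The sketch below illustrates the general pattern of administering a Likert-scale questionnaire to a chat model and aggregating item scores, in the spirit of PsychoBench. The items, scale range, and scoring rule are placeholders, not the clinical instruments the benchmark actually uses.

```python
# Sketch of administering a Likert-scale item set to a chat model and scoring
# it, in the spirit of PsychoBench. The items, scale range, and scoring rule
# are placeholders, not the actual clinical instruments used by the benchmark.
import re

ITEMS = [
    "I see myself as someone who is talkative.",
    "I see myself as someone who tends to be quiet.",  # reverse-scored
]
REVERSE = {1}          # indices of reverse-scored items
SCALE_MIN, SCALE_MAX = 1, 5


def ask_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call returning a rating like '4'."""
    raise NotImplementedError


def score_item(reply: str, reverse: bool) -> int:
    rating = int(re.search(r"[1-5]", reply).group())
    return SCALE_MAX + SCALE_MIN - rating if reverse else rating


def administer(items: list[str]) -> float:
    scores = []
    for i, item in enumerate(items):
        prompt = (f"Rate the statement on a scale from {SCALE_MIN} "
                  f"(disagree strongly) to {SCALE_MAX} (agree strongly). "
                  f"Reply with a single number.\nStatement: {item}")
        scores.append(score_item(ask_llm(prompt), i in REVERSE))
    return sum(scores) / len(scores)
```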

    Based on disulfidptosis-related glycolytic genes to construct a signature for predicting prognosis and immune infiltration analysis of hepatocellular carcinoma

    Background: Hepatocellular carcinoma (HCC) comprises several distinct molecular subtypes with varying prognostic implications. However, a prognostic signature for HCC based on molecular subtypes related to disulfidptosis and glycolysis, together with the associated metabolomics and immune microenvironment, has yet to be fully explored. Methods: Based on differences in the expression of disulfidptosis-related glycolytic genes (DRGGs), patients with HCC were divided into subtypes by consensus clustering, and a risk prognosis signature was established and verified. The expression level of the key signature gene SLCO1B1 in HCC was then evaluated using immunohistochemistry (IHC) and quantitative real-time PCR (qRT-PCR), its association with immune cells was explored using multiplex immunofluorescence, and its biological functions were studied with cell counting kit-8, wound healing, and colony formation assays. Results: The different patient subtypes had distinct clinicopathological features, prognoses and immune microenvironments. We identified seven valuable genes and constructed a risk prognosis signature. Analysis of the risk score revealed that, compared with the high-risk group, the low-risk group had a better prognosis, higher immune scores, and more abundant immune-related pathways, consistent with the tumor subtypes. Furthermore, IHC and qRT-PCR analyses showed decreased expression of SLCO1B1 in HCC tissues, and functional experiments revealed that SLCO1B1 overexpression inhibited the proliferation, migration, and invasion of HCC cells. Conclusion: We developed a prognostic signature that can assist clinicians in predicting the overall survival of patients with HCC and provides a reference for targeted therapy.
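
    For readers unfamiliar with this type of signature, the sketch below shows the usual Cox-regression form in which such a signature is applied: a risk score computed as a weighted sum of gene-expression values, followed by median stratification into high- and low-risk groups. The gene names and coefficients are hypothetical placeholders, not the seven genes or fitted weights reported in the study.

```python
# Minimal sketch of applying a gene-expression risk signature of the usual
# Cox-regression form: risk = sum(coefficient_i * expression_i). The gene
# names and coefficients below are hypothetical placeholders, not the seven
# genes or fitted weights reported in the study.
import numpy as np

SIGNATURE = {          # hypothetical coefficients
    "SLCO1B1": -0.42,  # negative weight, i.e. protective in this sketch
    "GENE_B": 0.31,
    "GENE_C": 0.18,
}


def risk_score(expression: dict[str, float]) -> float:
    return sum(coef * expression[gene] for gene, coef in SIGNATURE.items())


def stratify(scores: list[float]) -> list[str]:
    """Split patients into high/low risk groups at the median score."""
    cutoff = float(np.median(scores))
    return ["high" if s > cutoff else "low" for s in scores]
```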

    LdsConv: learned depthwise separable convolutions by group pruning

    Standard convolutional filters usually capture unnecessary overlap among features, resulting in wasted computation. In this paper, we aim to solve this problem by proposing a novel Learned Depthwise Separable Convolution (LdsConv) operation that is efficient yet has a strong capacity for learning. It integrates a pruning technique into the design of the convolutional filters and is formulated as a generic convolutional unit that can be used as a drop-in replacement for standard convolutions without any adjustment of the architecture. To show the effectiveness of the proposed method, experiments are carried out on state-of-the-art convolutional neural networks (CNNs), including ResNet, DenseNet, SE-ResNet and MobileNet. The results show that simply replacing the original convolutions with LdsConv in these CNNs yields significantly improved accuracy while reducing computational cost. For ResNet50, the FLOPs are reduced by 40.9% while the accuracy on ImageNet increases.
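
    For context, the sketch below shows the standard depthwise separable convolution block that LdsConv generalizes, written as a drop-in PyTorch module; the learned filter selection and pruning that distinguish LdsConv from this baseline are not shown.

```python
# Sketch of a standard depthwise separable convolution block, the construct
# that LdsConv builds on; the learned filter selection/pruning that
# distinguishes LdsConv from this baseline is not shown here.
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (groups = in_channels) followed by a pointwise 1x1 conv."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)
    block = DepthwiseSeparableConv(64, 128)
    print(block(x).shape)  # torch.Size([1, 128, 56, 56])
```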