
    The maximum sum of sizes of non-empty cross tt-intersecting families

    Let $[n]:=\lbrace 1,2,\ldots,n \rbrace$, and let $M$ be a set of positive integers. Denote by $\binom{[n]}{M}$ the family of all subsets of $[n]$ with sizes in $M$. The non-empty families $\mathcal{A}\subseteq\binom{[n]}{R}$ and $\mathcal{B}\subseteq\binom{[n]}{S}$ are said to be cross $t$-intersecting if $|A\cap B|\geq t$ for all $A\in\mathcal{A}$ and $B\in\mathcal{B}$. In this paper, we determine the maximum sum of sizes of non-empty cross $t$-intersecting families, and characterize the extremal families. A similar result for finite vector spaces is also proved.
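    The cross $t$-intersecting condition above can be checked directly for small families by brute force. A minimal sketch (the example families are illustrative, not from the paper):

```python
from itertools import combinations

def cross_t_intersecting(A, B, t):
    """Return True if |a ∩ b| >= t for every a in A and b in B."""
    return all(len(a & b) >= t for a in A for b in B)

# For n = 4, any two 3-subsets of {1,2,3,4} intersect in at least
# 3 + 3 - 4 = 2 elements, so these families are cross 2-intersecting.
n, t = 4, 2
A = [set(c) for c in combinations(range(1, n + 1), 3)]  # all 3-subsets
B = [{1, 2, 3}]
print(cross_t_intersecting(A, B, t))  # True
```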

    Research on Target Detection Algorithm of Radar and Visible Image Fusion Based on Wavelet Transform

    The target detection rate of unmanned surface vehicles is low because of interference from waves, fog, background clutter, and other environmental factors. Therefore, this paper studies a target detection algorithm based on wavelet-transform fusion of radar and visible images. The visible image is preprocessed to ensure the detection effect. A multi-scale fractal model is used to extract target features, and the difference between the fractal features of the target and the background is used to detect the target. The radar image is denoised by a combination of median filtering and wavelet transform. The processed visible-light and radar images are then fused with a wavelet-transform strategy: the coefficients of the low-frequency sub-band are fused by averaging, while the coefficients of the high-frequency sub-band are fused by selecting the coefficient with the larger absolute value. The standard deviation, spatial frequency, and contrast resolution of the fusion results are compared. The simulation results show that fusion of the processed images outperforms fusion of the unprocessed images.
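    The two fusion rules described above are simple to state on precomputed wavelet coefficients. A minimal sketch in pure Python, assuming 1-D coefficient lists (the values are toy examples, not from the paper):

```python
def fuse_low(c1, c2):
    # Low-frequency (approximation) sub-band: average the two coefficient sets.
    return [(a + b) / 2 for a, b in zip(c1, c2)]

def fuse_high(c1, c2):
    # High-frequency (detail) sub-band: keep the coefficient of larger absolute value.
    return [a if abs(a) >= abs(b) else b for a, b in zip(c1, c2)]

# Hypothetical wavelet coefficients for a visible and a radar image.
low_vis, low_rad = [4.0, 2.0], [2.0, 6.0]
high_vis, high_rad = [0.5, -3.0], [-1.0, 1.5]
print(fuse_low(low_vis, low_rad))     # [3.0, 4.0]
print(fuse_high(high_vis, high_rad))  # [-1.0, -3.0]
```

In practice the decomposition itself would come from a 2-D DWT (e.g. PyWavelets' `wavedec2`), with these rules applied band by band before the inverse transform.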

    Evaluation of ChatGPT as a Question Answering System for Answering Complex Questions

    ChatGPT is a powerful large language model (LLM) that has made remarkable progress in natural language understanding. Nevertheless, the performance and limitations of the model still need to be extensively evaluated. As ChatGPT covers resources such as Wikipedia and supports natural language question answering, it has garnered attention as a potential replacement for traditional knowledge-based question answering (KBQA) models. Complex question answering is a challenging KBQA task that comprehensively tests a model's abilities in semantic parsing and reasoning. To assess the performance of ChatGPT as a question answering system (QAS) using its own knowledge, we present a framework that evaluates its ability to answer complex questions. Our approach involves categorizing the potential features of complex questions and describing each test question with multiple labels to identify combinatorial reasoning. Following the black-box testing specifications of CheckList proposed by Ribeiro et al., we develop an evaluation method to measure the functionality and reliability of ChatGPT in reasoning for answering complex questions. We use the proposed framework to evaluate the performance of ChatGPT in question answering on 8 real-world KB-based CQA datasets, including 6 English and 2 multilingual datasets, with a total of approximately 190,000 test cases. We compare the evaluation results of ChatGPT, GPT-3.5, GPT-3, and FLAN-T5 to identify common long-term problems in LLMs. The dataset and code are available at https://github.com/tan92hl/Complex-Question-Answering-Evaluation-of-ChatGPT
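    The multi-label scheme described above supports breaking overall accuracy down by question feature, which is how failures can be localized to specific reasoning types. A minimal sketch, with hypothetical label names:

```python
from collections import defaultdict

def per_label_accuracy(results):
    """results: list of (labels, correct) pairs, where labels is a set
    of feature tags attached to a test question."""
    hits, totals = defaultdict(int), defaultdict(int)
    for labels, correct in results:
        for lab in labels:
            totals[lab] += 1
            hits[lab] += int(correct)
    return {lab: hits[lab] / totals[lab] for lab in totals}

# Hypothetical test cases, each tagged with the reasoning features it exercises.
results = [
    ({"multi-hop", "comparison"}, True),
    ({"multi-hop"}, False),
    ({"comparison"}, True),
    ({"numerical"}, False),
]
print(per_label_accuracy(results))
```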

    Strong, conductive carbon nanotube fibers as efficient hole collectors

    We present the photovoltaic properties of heterojunctions made from single-walled carbon nanotube (SWNT) fibers and n-type silicon wafers. The use of the opaque SWNT fiber allows photo-generated holes to transport along the axial direction of the fiber. The heterojunction solar cells show conversion efficiencies of up to 3.1% (actual) and 10.6% (nominal) under AM1.5 conditions. In addition, the use of strong, environmentally benign carbon nanotube fibers provides excellent structural stability for the photovoltaic devices.

    Choroidal thickness and vascular microstructure parameters in Chinese school-age children with high hyperopia using optical coherence tomography

    Background: The current study aimed to evaluate the choroidal thickness (CT) and vascular microstructure parameters in Chinese children with high hyperopia through enhanced depth imaging optical coherence tomography (EDI-OCT).
    Methods: Cross-sectional study. A total of 23 children with high hyperopia and 29 children with normal refractive status were retrospectively enrolled. Macular CT was measured at 7 points: the subfoveal point and the temporal and nasal points at radii of 0.5 mm, 1.5 mm, and 3 mm. After binarization of the OCT images, the total choroidal area (TCA), stromal area (SA), and luminal area (LA) were identified and measured. The choroidal vascularity index (CVI) was defined as the ratio of LA to TCA. The independent t-test (for normal distributions) and Kruskal-Wallis tests (for non-normal distributions) were used to compare parameters between groups. Tamhane's T2 test was performed to adjust for multiple comparisons between groups within each analysis.
    Results: The subfoveal CT (SFCT) in the high hyperopia group was significantly thicker than that in normal controls (309.22 ± 53.14 μm vs. 291.27 ± 38.27 μm; P = 0.019). At the 0.5 mm, 1.5 mm, and 3.0 mm diameters, the nasal choroidal sectors of the high hyperopia eyes were significantly thicker than those of the controls (P < 0.05). There were significant differences in the choroidal vascular parameters: TCA and LA in the high hyperopia eyes were significantly larger than in the normal control eyes (3078129.54 ± 448271.18 μm² vs. 2765218.17 ± 317827.19 μm², and 1926819.54 ± 229817.56 μm² vs. 1748817.18 ± 191827.98 μm²; P = 0.009 and P = 0.011; Table 2). SA values were 1086287.55 ± 212712.11 μm² in the high hyperopia eyes and 999712.71 ± 209838.12 μm² in the control eyes. The CVI and LA/SA ratio differed significantly between the two groups (P = 0.019 and P = 0.030, respectively). Axial length (AL) was significantly correlated with SFCT (r = −0.325, P = 0.047), but not with the other parameters. Spherical equivalent (SE) was significantly correlated with AL and SFCT (r = −0.711, r = 0.311; P = 0.001, P = 0.016), whereas sphere showed no significant association with the other parameters.
    Conclusion: The choroidal structure of high hyperopia eyes differed from that of normal control eyes. Thicker SFCT and higher LA and TCA were characteristic of high hyperopia eyes. Choroidal blood flow may be decreased in amblyopic eyes. SFCT in children with high hyperopia was abnormally increased and correlated with shorter AL and higher SE. AL and SE affect choroidal structure and vascular density.
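    The derived indices above are simple ratios of the binarized areas. A minimal sketch computing them (the sample values are the high-hyperopia group means from the abstract; note that group means of LA and SA need not sum exactly to the mean TCA):

```python
def choroidal_indices(luminal_area, stromal_area):
    """Compute TCA, CVI (= LA / TCA), and the LA/SA ratio
    from binarized choroidal areas (in μm²)."""
    tca = luminal_area + stromal_area
    return {"TCA": tca, "CVI": luminal_area / tca, "LA/SA": luminal_area / stromal_area}

print(choroidal_indices(1926819.54, 1086287.55))
```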

    HGT: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding

    Table understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) to tackle few-shot TU tasks. It leverages the LLM by aligning table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and it handles complex tables via a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.

    Tackling unemployment in China: state capacity and governance issues

    This paper considers China's state capacity and changing governance as revealed through its policies to tackle unemployment. Despite high levels of growth, economic restructuring has resulted in rising unemployment over the last decade. The Chinese state has been able to manage job losses from state enterprises, demonstrating some state capacity in relation to this sector and some persistent command-economy governance mechanisms. However, both the design and the implementation of policies to compensate and assist particular groups among the unemployed have been shaped by weak state capacity in several other areas. First, capacity to gather accurate employment data is limited, meaning local and central governments do not have a good understanding of the extent and nature of unemployment. Second, the sustainability of supposedly mandatory unemployment insurance schemes is threatened by poor capacity to enforce participation. Third, poor central state capacity to ensure local governments implement policies effectively leads to poor unemployment insurance fund capacity, resulting in provision for only a narrow segment of the unemployed and low-quality employment services. Although the adoption of unemployment insurance (and its extension to employers and employees in the private sector), the introduction of a Labour Contract Law in 2007, and the delivery of employment services by private businesses indicate a shift towards the use of new governance mechanisms based on entitlement, contract, and private-sector delivery of public-sector goods, that shift is undermined by poor state capacity in relation to some of these new mechanisms.

    MATEval: A Multi-Agent Discussion Framework for Advancing Open-Ended Text Evaluation

    Recent advancements in generative Large Language Models (LLMs) have been remarkable; however, the quality of the text generated by these models often reveals persistent issues. Evaluating the quality of generated text, especially open-ended text, has consistently presented a significant challenge. Addressing this, recent work has explored the possibility of using LLMs as evaluators. While using a single LLM as an evaluation agent shows potential, it suffers from significant uncertainty and instability. To address these issues, we propose MATEval, a multi-agent text evaluation framework in which all agents are played by LLMs such as GPT-4. The MATEval framework emulates human collaborative discussion, integrating multiple agents' interactions to evaluate open-ended text. Our framework incorporates self-reflection and Chain-of-Thought (CoT) strategies, along with feedback mechanisms, enhancing the depth and breadth of the evaluation process and guiding discussions towards consensus, while generating comprehensive evaluation reports that include error localization, error types, and scores. Experimental results show that our framework outperforms existing open-ended text evaluation methods and achieves the highest correlation with human evaluation, which confirms its effectiveness in addressing the uncertainty and instability of evaluating LLM-generated text. Furthermore, our framework significantly improves the efficiency of text evaluation and model iteration in industrial scenarios.
    Comment: This paper has been accepted as a long paper presentation in the DASFAA 2024 Industrial Track.
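    The discuss-until-consensus loop at the heart of such a framework can be sketched without any LLM at all, using stub agent functions in place of model calls. A minimal sketch; the agent behavior and scoring scale are hypothetical, not the paper's method:

```python
def discuss(agents, text, max_rounds=3):
    """Run rounds of evaluation until all agent scores agree
    (a simple consensus rule standing in for a moderated discussion)."""
    history = []
    for _ in range(max_rounds):
        scores = [agent(text, history) for agent in agents]
        history.append(scores)
        if len(set(scores)) == 1:  # consensus reached
            break
    return scores, history

def make_agent(initial):
    # Stub evaluator: starts from its own score, then moves toward
    # the previous round's average (mimicking discussion-driven revision).
    def agent(text, history):
        if not history:
            return initial
        prev = history[-1]
        return round(sum(prev) / len(prev))
    return agent

agents = [make_agent(2), make_agent(4)]
scores, history = discuss(agents, "some generated text")
print(scores)  # [3, 3] after two rounds
```

A real implementation would replace the stubs with prompted LLM calls and attach the self-reflection and CoT steps inside each agent.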