    CHANGES IN INCOME AND WELFARE DISTRIBUTION IN URBAN CHINA AND IMPLICATIONS FOR FOOD CONSUMPTION AND TRADE

    While China's economic reform has brought about significant economic growth, there is considerable debate about the impact of such market-oriented reform on income and welfare distributions. This paper examines the changes in income and welfare distributions in urban China from 1981 to 1998 and discusses implications for China's food consumption patterns and trade behavior. While the Lorenz curves estimated using Kakwani's interpolation method indicate that the level of income inequality in urban China has increased significantly since 1981, welfare comparisons based on generalized Lorenz curves suggest that the rise in real average income has more than compensated for the increase in inequality and has therefore brought about continuous improvement in welfare since 1981, except in 1988 and 1989 due to high inflation rates. Nevertheless, it becomes critical for China to develop welfare programs and a social security system to provide a guaranteed living standard for low-income households. China's increasing income will continue to shift its food consumption from grains to animal products, and, at the same time, the increasing income inequality will make food demand significantly different across regions and income groups.
    Keywords: Community/Rural/Urban Development; Food Consumption/Nutrition/Food Safety; International Relations/Trade
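    The welfare comparison described above rests on generalized Lorenz dominance: scaling each Lorenz ordinate by mean income lets distributions with different means be ranked in welfare terms. A minimal sketch (toy data, not the paper's survey data or Kakwani's interpolation):

    ```python
    import numpy as np

    def lorenz_curve(incomes):
        """Cumulative income share vs. cumulative population share."""
        x = np.sort(np.asarray(incomes, dtype=float))
        cum = np.cumsum(x) / x.sum()
        pop = np.arange(1, len(x) + 1) / len(x)
        return pop, cum

    def generalized_lorenz(incomes):
        """Generalized Lorenz ordinates: Lorenz ordinates scaled by mean income,
        so distributions with different means can be ranked in welfare terms."""
        pop, cum = lorenz_curve(incomes)
        return pop, cum * np.mean(incomes)

    # Toy data: the later distribution is more unequal but much richer on average.
    early = [3, 4, 5, 6, 7]
    late = [4, 6, 9, 14, 22]

    _, gl_early = generalized_lorenz(early)
    _, gl_late = generalized_lorenz(late)

    # The later generalized Lorenz curve dominates at every point: higher welfare
    # despite higher inequality, mirroring the paper's finding for urban China.
    print(all(b >= a for a, b in zip(gl_early, gl_late)))  # → True
    ```

    If neither curve dominates everywhere, the ranking is ambiguous without stronger assumptions on the social welfare function.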

    Photo-irradiation of Base Forms of Polyaniline with Photo Acid Generators to Form Increased Conductivity Composites

    A method of forming electrically conductive polyaniline (PANI)-based composites includes mixing a base form of PANI, a photo acid generator (PAG), and, when the PAG does not hydrogen bond to the base form of PANI, an additive which can form hydrogen bonds with the base form of PANI or the PAG, together with at least one solvent to form a mixture. The solvent is removed from the mixture. After the removing, the mixture is photo-irradiated at a wavelength within an absorption band of the PAG to convert the base form of PANI to a salt form of PANI, forming a polymer composite that includes the salt form of PANI. The polymer composite has a 25 °C electrical conductivity that is at least 3 orders of magnitude higher than the 25 °C electrical conductivity of the base form of PANI, such as a 25 °C electrical conductivity of ≥ 0.01 S/cm.

    On the Nature of X(4260)

    We study the properties of the $X(4260)$ resonance by re-analyzing all available experimental data, especially the $e^+e^- \rightarrow J/\psi\,\pi^+\pi^-,\ \omega\chi_{c0}$ cross-section data. The final-state interactions of the $\pi\pi$, $K\bar{K}$ coupled-channel system are also taken into account. A sizable coupling between the $X(4260)$ and $\omega\chi_{c0}$ is found. The inclusion of the $\omega\chi_{c0}$ data indicates a small value of $\Gamma_{e^+e^-} = 23.30 \pm 3.55$ eV.
    Comment: Refined analysis with new experimental data included. 13 pages
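    For context, a resonance contribution to an $e^+e^-$ cross section into a final state $f$ is commonly parameterized by a Breit-Wigner form in which $\Gamma_{e^+e^-}$ enters directly; this is the standard textbook form, not necessarily the exact parameterization used in the analysis:

    ```latex
    \sigma_f(s) \;=\; \frac{12\pi\, \Gamma_{e^+e^-}\, \Gamma_f}
                           {\left(s - M^2\right)^2 + M^2\,\Gamma_{\mathrm{tot}}^2}
    ```

    Here $M$ and $\Gamma_{\mathrm{tot}}$ are the resonance mass and total width, and $\Gamma_f$ is the partial width into $f$; a small fitted $\Gamma_{e^+e^-}$ therefore directly suppresses the predicted peak cross section.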

    Agents meet OKR: An Object and Key Results Driven Agent System with Hierarchical Self-Collaboration and Self-Evaluation

    In this study, we introduce the concept of OKR-Agent, designed to enhance the capabilities of Large Language Models (LLMs) in task-solving. Our approach utilizes both self-collaboration and self-correction mechanisms, facilitated by hierarchical agents, to address the inherent complexities in task-solving. Our key observations are two-fold: first, effective task-solving demands in-depth domain knowledge and intricate reasoning, for which deploying specialized agents for individual sub-tasks can markedly enhance LLM performance. Second, task-solving intrinsically adheres to a hierarchical execution structure, comprising both high-level strategic planning and detailed task execution. Towards this end, our OKR-Agent paradigm aligns closely with this hierarchical structure, promising enhanced efficacy and adaptability across a range of scenarios. Specifically, our framework includes two novel modules: hierarchical Objects and Key Results generation, and multi-level evaluation, each contributing to more efficient and robust task-solving. In practice, hierarchical OKR generation decomposes Objects into multiple sub-Objects and assigns new agents based on key results and agent responsibilities. These agents subsequently elaborate on their designated tasks and may further decompose them as necessary. Such generation operates recursively and hierarchically, culminating in a comprehensive set of detailed solutions. The multi-level evaluation module of OKR-Agent refines solutions by leveraging feedback from all associated agents, optimizing each step of the process. This ensures solutions are accurate, practical, and effectively address intricate task requirements, enhancing the overall reliability and quality of the outcome. Experimental results also show our method outperforms previous methods on several tasks. Code and demo are available at https://okr-agent.github.io
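    The recursive decomposition and bottom-up evaluation described in the abstract can be sketched as follows. This is a hypothetical illustration: `ask_llm` stands in for any LLM call (stubbed deterministically here), and all names are illustrative, not the authors' implementation.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Objective:
        description: str
        key_results: list = field(default_factory=list)
        children: list = field(default_factory=list)

    def ask_llm(prompt):
        # Stub: a real system would query a language model for key results here.
        return [f"{prompt} :: key result {i}" for i in range(2)]

    def decompose(objective, depth, max_depth=2):
        """Recursively split an Objective into sub-Objectives, one agent per key result."""
        if depth >= max_depth:
            return objective
        objective.key_results = ask_llm(objective.description)
        for kr in objective.key_results:
            child = Objective(description=kr)
            objective.children.append(decompose(child, depth + 1, max_depth))
        return objective

    def evaluate(objective):
        """Multi-level evaluation: aggregate feedback bottom-up (stubbed as a leaf count)."""
        if not objective.children:
            return 1
        return sum(evaluate(c) for c in objective.children)

    root = decompose(Objective("write a project report"), depth=0)
    print(evaluate(root))  # → 4 (2 sub-Objectives, each with 2 leaf tasks)
    ```

    In a real system each leaf agent would return a candidate solution and critique, and `evaluate` would merge that feedback upward instead of counting leaves.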

    When Prompt-based Incremental Learning Does Not Meet Strong Pretraining

    Incremental learning aims to overcome catastrophic forgetting when learning deep networks from sequential tasks. With impressive learning efficiency and performance, prompt-based methods adapt a fixed backbone to sequential tasks by learning task-specific prompts. However, existing prompt-based methods heavily rely on strong pretraining (typically trained on ImageNet-21k), and we find that their models could be trapped if the potential gap between the pretraining task and unknown future tasks is large. In this work, we develop a learnable Adaptive Prompt Generator (APG). The key is to unify the prompt retrieval and prompt learning processes into a learnable prompt generator. Hence, the whole prompting process can be optimized to effectively reduce the negative effects of the gap between tasks. To keep our APG from learning ineffective knowledge, we maintain a knowledge pool to regularize the APG with the feature distribution of each class. Extensive experiments show that our method significantly outperforms advanced methods in exemplar-free incremental learning without (strong) pretraining. Besides, under strong pretraining, our method also has comparable performance to existing prompt-based models, showing that our method can still benefit from pretraining. Codes can be found at https://github.com/TOM-tym/APG
    Comment: Accepted to ICCV 2023
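    The central idea, a single learnable map replacing hand-designed key-based prompt retrieval, can be sketched in NumPy. All names, shapes, and the regularizer are illustrative assumptions, not the paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D, P = 8, 4                       # feature dimension, pool size

    pool = rng.normal(size=(P, D))    # learnable knowledge/prompt pool
    W_q = rng.normal(size=(D, D))     # learnable query projection

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def generate_prompt(feat):
        """Attend over the pool with the image feature as query. The whole map is
        differentiable, so retrieval and prompt learning are optimized jointly
        instead of selecting prompts by hard key matching."""
        q = feat @ W_q
        attn = softmax(pool @ q / np.sqrt(D))
        return attn @ pool            # generated prompt, shape (D,)

    def pool_regularizer(class_means):
        """Knowledge-pool regularization (sketch): pull each pool entry toward its
        nearest stored class-feature mean so the generator does not drift into
        ineffective knowledge."""
        dists = ((pool[:, None, :] - class_means[None, :, :]) ** 2).sum(-1)
        return dists.min(axis=1).mean()

    feat = rng.normal(size=D)
    prompt = generate_prompt(feat)
    print(prompt.shape)  # → (8,)
    ```

    In training, the regularizer would be added to the task loss and both `pool` and `W_q` updated by gradient descent; here they are frozen random arrays for illustration.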