88 research outputs found

    Robust Event-Triggered Energy-to-Peak Filtering for Polytopic Uncertain Systems over Lossy Network with Quantized Measurements

    The event-triggered energy-to-peak filtering problem for polytopic discrete-time linear systems is studied with consideration of a lossy network and quantization error. Because of the communication imperfections caused by packet dropouts over the lossy link, the event-triggered condition used to determine the data release instants at the event generator (EG) cannot be directly applied to update the filter input at the zero-order holder (ZOH) when performing filter performance analysis and synthesis. To reconcile the nonuniform time series between the triggering instants of the EG and the updating instants of the ZOH, two event-triggered conditions are defined, after which a worst-case bound on the number of consecutive losses of packets transmitted from the EG is given; this bound marginally guarantees the effectiveness of the filter designed from the event-triggered updating condition of the ZOH. Filter performance analysis conditions are then obtained under the assumption that the number of packet losses does not exceed this worst-case bound. Next, a two-stage LMI-based alternating optimization approach is proposed to design the filter separately, which reduces the conservatism of the traditional linearization method applied to the filter analysis conditions. Subsequently, a co-design algorithm is developed to determine the communication and filter parameters simultaneously. Finally, an illustrative example is provided to verify the validity of the obtained results.
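    A quadratic event-triggering rule of the kind underlying such schemes can be sketched as follows. This is a generic illustration only: the threshold sigma, the weighting matrix Omega, and the paper's exact pair of EG/ZOH conditions are not given in the abstract, so these names and defaults are assumptions.

    ```python
    import numpy as np

    def should_transmit(y_k, y_last_sent, sigma=0.1, Omega=None):
        """Generic quadratic event-trigger rule (illustrative, not the
        paper's exact condition): release the current measurement y_k
        when its deviation from the last transmitted sample, weighted
        by Omega, exceeds sigma times the weighted norm of y_k."""
        y_k = np.asarray(y_k, dtype=float)
        y_last_sent = np.asarray(y_last_sent, dtype=float)
        if Omega is None:
            Omega = np.eye(y_k.size)
        e = y_k - y_last_sent                      # measurement error since last release
        return e @ Omega @ e > sigma * (y_k @ Omega @ y_k)
    ```

    With sigma = 0 every sample is released (time-triggered sampling); larger sigma trades estimation accuracy for fewer transmissions over the lossy link.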

    Robust Quantized Generalized H2 Filtering for Uncertain Discrete-Time Fuzzy Systems

    This paper deals with the problem of robust generalized H2 filter design for uncertain discrete-time fuzzy systems with output quantization. First, the outputs of the system are quantized by a memoryless logarithmic quantizer before being transmitted to a filter. Attention is then focused on the design of a generalized H2 filter that mitigates the quantization effects, such that the filtering error system is robustly stable with a prescribed generalized H2 noise attenuation level. By applying Finsler's lemma to introduce slack variables and using a fuzzy Lyapunov function, sufficient conditions for the existence of a robust generalized H2 filter are expressed in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to demonstrate the effectiveness of the proposed approach.
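    The memoryless logarithmic quantizer referenced here has a standard form; a minimal sketch, assuming the usual level set ±u0·rho^i with quantization density rho and sector bound delta = (1−rho)/(1+rho):

    ```python
    import math

    def log_quantize(v, u0=1.0, rho=0.5):
        """Memoryless logarithmic quantizer with density rho in (0,1).
        Levels are +/- u0 * rho**i; a value v maps to level u_i when
        u_i/(1+delta) < |v| <= u_i/(1-delta), which bounds the relative
        error by delta = (1-rho)/(1+rho): |q(v) - v| <= delta * |v|."""
        if v == 0:
            return 0.0
        delta = (1 - rho) / (1 + rho)
        s = 1.0 if v > 0 else -1.0
        a = abs(v)
        # initial guess for the level index, then correct rounding drift
        i = round(math.log(a / u0) / math.log(rho))
        while u0 * rho**i / (1 - delta) < a:
            i -= 1
        while u0 * rho**i / (1 + delta) >= a:
            i += 1
        return s * u0 * rho**i
    ```

    The sector-bounded error is what lets the quantization effect be absorbed into the robust H2 analysis as a norm-bounded uncertainty.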

    Short- and long-run competition of retailer pricing strategies

    Retailers' pricing strategies are one of the most important determinants of retail dynamics and the competitive structure of the retail market. Retailers use both short-term and long-term pricing strategies to optimize their market share. This study addresses several critical questions: (1) To what extent do retailers react to competitive price specials? (2) Do retailers alternate price specials of competing brands? (3) Can one identify stores or brands that are price leaders, or do retailers and brands set prices independently? We use cointegration analysis to estimate a model that allows us to study both the short- and the long-run dynamics of competitive prices within a single framework.
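    The short-/long-run decomposition described here is the standard error-correction setup behind cointegration analysis. A minimal two-step Engle-Granger sketch with ordinary least squares, using hypothetical price series (this is not the paper's exact specification):

    ```python
    import numpy as np

    def engle_granger_ecm(p_own, p_rival):
        """Two-step Engle-Granger sketch for two (log) price series.
        Step 1: long-run regression p_own = a + b*p_rival + u.
        Step 2: error-correction model on first differences, where the
        lagged residual u[t-1] measures adjustment back toward the
        long-run equilibrium (a negative coefficient suggests the
        retailer corrects deviations from the rival's price)."""
        p_own = np.asarray(p_own, dtype=float)
        p_rival = np.asarray(p_rival, dtype=float)
        X1 = np.column_stack([np.ones_like(p_rival), p_rival])
        (a, b), *_ = np.linalg.lstsq(X1, p_own, rcond=None)
        u = p_own - (a + b * p_rival)            # equilibrium errors
        dy, dx = np.diff(p_own), np.diff(p_rival)
        X2 = np.column_stack([np.ones_like(dx), dx, u[:-1]])
        (c, short_run, adjustment), *_ = np.linalg.lstsq(X2, dy, rcond=None)
        # long-run slope, short-run pass-through, ECM adjustment speed
        return b, short_run, adjustment
    ```

    In practice one would first test the residuals u for stationarity (e.g. an ADF test) before interpreting the error-correction term; that step is omitted here for brevity.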

    Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing

    The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task. Despite their promising performance, previous PLM-based approaches often suffer from hallucination because they neglect the structural information contained in the sentence, which essentially constitutes the key semantics of the logical forms. Furthermore, most works treat the PLM as a black box in which the generation process of the target logical form is hidden beneath the decoder modules, which greatly hinders the model's intrinsic interpretability. To address these two issues, we propose to equip current PLMs with a hierarchical decoder network. Taking first-principle structures as semantic anchors, we propose two novel intermediate supervision tasks, namely Semantic Anchor Extraction and Semantic Anchor Alignment, for training the hierarchical decoders and probing the model's intermediate representations in a self-adaptive manner alongside the fine-tuning process. We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach consistently outperforms the baselines. More importantly, by analyzing the intermediate representations of the hierarchical decoders, our approach also takes a significant step toward the intrinsic interpretability of PLMs in the domain of semantic parsing.

    LawBench: Benchmarking Legal Knowledge of Large Language Models

    Large language models (LLMs) have demonstrated strong capabilities in various aspects. However, when applying them to the highly specialized, safety-critical legal domain, it is unclear how much legal knowledge they possess and whether they can reliably perform legal-related tasks. To address this gap, we propose LawBench, a comprehensive evaluation benchmark. LawBench has been meticulously crafted to provide a precise assessment of the LLMs' legal capabilities at three cognitive levels: (1) Legal knowledge memorization: whether LLMs can memorize needed legal concepts, articles and facts; (2) Legal knowledge understanding: whether LLMs can comprehend entities, events and relationships within legal text; (3) Legal knowledge application: whether LLMs can properly utilize their legal knowledge and perform the reasoning steps necessary to solve realistic legal tasks. LawBench contains 20 diverse tasks covering 5 task types: single-label classification (SLC), multi-label classification (MLC), regression, extraction and generation. We perform extensive evaluations of 51 LLMs on LawBench, including 20 multilingual LLMs, 22 Chinese-oriented LLMs and 9 legal-specific LLMs. The results show that GPT-4 remains the best-performing LLM in the legal domain, surpassing the others by a significant margin. While fine-tuning LLMs on legal-specific text brings certain improvements, we are still a long way from obtaining usable and reliable LLMs for legal tasks. All data, model predictions and evaluation code are released at https://github.com/open-compass/LawBench/. We hope this benchmark provides an in-depth understanding of the LLMs' domain-specific capabilities and speeds up the development of LLMs in the legal domain.

    Modeling seismic wave propagation in the Loess Plateau using a viscoacoustic wave equation with explicitly expressed quality factor

    The thick Quaternary loess on the Loess Plateau of China produces strong seismic attenuation, resulting in weak reflections from subsurface exploration targets. Accurately simulating the seismic wavefield in the Loess Plateau is important for guiding subsequent data processing and interpretation. We present a 2D/3D wavefield simulation method for the Loess Plateau using a viscoacoustic wave equation with an explicitly expressed quality factor. To account for the effect of the irregular surface, we utilize a vertically deformed grid to represent the topography and solve the viscoacoustic wave equation in a regular computational domain that conforms to the topographic surface. Grid deformation introduces partial derivatives such as ∂vx/∂z and ∂vy/∂z into the wave equation, which are difficult to compute accurately with the traditional staggered-grid finite-difference method. To mitigate this issue, a finite-difference scheme based on a fully staggered grid is adopted to solve the viscoacoustic wave equation. Numerical experiments on a simple layer model and on 2D/3D realistic Loess Plateau models demonstrate the feasibility and adaptability of the proposed method. The 3D modeling results show amplitude and waveform characteristics comparable to field data acquired from the Chinese Loess Plateau, suggesting a good performance of the proposed modeling method.
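    A staggered-grid finite-difference update of the kind discussed can be sketched in one dimension. This is a minimal constant-velocity acoustic illustration without attenuation, topography, or deformed grids; the paper's scheme is viscoacoustic, 2D/3D, and fully staggered, so all parameters below are illustrative assumptions.

    ```python
    import numpy as np

    def acoustic_1d_staggered(nx=300, nt=600, dx=10.0, dt=1e-3, c=2000.0):
        """Minimal 1D acoustic staggered-grid finite-difference sketch:
        pressure p lives on integer grid points, particle velocity v on
        half points, and the two fields leapfrog in time. CFL number is
        c*dt/dx = 0.2 for the defaults, so the scheme is stable."""
        rho = 1000.0                  # density, kg/m^3 (assumed)
        K = rho * c**2                # bulk modulus
        p = np.zeros(nx)
        v = np.zeros(nx - 1)          # staggered half-grid
        src = nx // 2
        for it in range(nt):
            t = it * dt
            p[src] += np.exp(-((t - 0.05) / 0.01) ** 2)  # Gaussian source
            v -= (dt / (rho * dx)) * np.diff(p)   # dv/dt = -(1/rho) dp/dx
            p[1:-1] -= (dt * K / dx) * np.diff(v) # dp/dt = -K dv/dx
        return p
    ```

    Staggering the velocity between pressure nodes is what makes the centered first-derivative stencils second-order accurate; the fully staggered grid adopted in the paper extends this idea so that cross-derivatives from the deformed mesh can also be centered.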

    Preparation of a nano emodin transfersome and study on its anti-obesity mechanism in adipose tissue of diet-induced obese rats

    OBJECTIVE: To describe the preparation of a nano emodin transfersome (NET) and investigate its effect on the mRNA expression of adipose triglyceride lipase (ATGL) and G0/G1 switch gene 2 (G0S2) in adipose tissue of diet-induced obese rats. METHODS: NET was prepared by the film-ultrasonic dispersion method. The effects of emodin components at different ratios on encapsulation efficiency were investigated. The NET encapsulation rate was determined by ultraviolet spectrophotometry. The particle size and zeta potential of NET were evaluated with a Zetasizer analyzer. Sixty male SD rats were randomly assigned to groups. After 8 weeks of treatment, body weight, wet weight of visceral fat and the percentage of body fat (PBF) were measured. Fasting blood glucose and serum lipid levels were determined. Adipose tissue sections were HE stained, and the diameter and number of adipocytes were evaluated by light microscopy. The mRNA expression of ATGL and G0S2 in peri-renal fat tissue was assayed by RT-PCR. RESULTS: The appropriate formulation was deoxycholic acid sodium salt vs. phospholipids 1:8, cholesterol vs. phospholipids 1:3, vitamin E vs. phospholipids 1:20, and emodin vs. phospholipids 1:6. The zeta potential was −15.11 mV, and the particle size was 292.2 nm. The mean encapsulation efficiency was (69.35 ± 0.25)%. Compared with the obese model group, body weight, wet weight of visceral fat, PBF and the mRNA expression of G0S2 in peri-renal fat tissue were decreased significantly after NET treatment (all P < 0.05), while high-density lipoprotein cholesterol (HDL-C), the diameter of adipocytes and the mRNA expression of ATGL in peri-renal fat tissue were increased significantly (all P < 0.05). CONCLUSION: The preparation method is simple and reasonable. The negatively charged NET was small and uniform in particle size, with high encapsulation efficiency and stability. NET could reduce body weight and adipocyte size, and this effect was associated with up-regulation of ATGL and down-regulation of G0S2 expression in adipose tissue, and with improved insulin sensitivity.
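    The encapsulation efficiency reported above is conventionally computed as the fraction of drug that is not left free after preparation. A minimal sketch of the standard formula; the study's exact UV-spectrophotometric workflow may differ, and the 10 mg / 3.065 mg figures in the usage note are hypothetical inputs chosen only to reproduce the reported mean.

    ```python
    def encapsulation_efficiency(total_drug_mg, free_drug_mg):
        """Standard encapsulation efficiency (EE%) formula:
        EE% = (total drug - free drug in supernatant) / total drug * 100."""
        if total_drug_mg <= 0:
            raise ValueError("total drug amount must be positive")
        return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0
    ```

    For example, a hypothetical batch with 10 mg of emodin of which 3.065 mg remains free gives encapsulation_efficiency(10.0, 3.065) = 69.35%, matching the reported mean.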

    Concurrent smoking and alcohol consumers had higher triglyceride glucose indices than either only smokers or alcohol consumers: a cross-sectional study in Korea

    Background The triglyceride glucose (TyG) index is a noninsulin-based marker of insulin resistance (IR) in general practice. Although smoking and heavy drinking are regarded as major risk factors for various chronic diseases, there is limited evidence regarding the combined effects of smoking and alcohol consumption on IR. This study aimed to investigate the relationship between the TyG index and smoking and alcohol consumption using two Korean population-based datasets. Methods This study included 10,568 adults from the Korean National Health and Nutrition Examination Survey (KNHANES) and 9,586 adults from the Korean Initiatives on Coronary Artery Calcification (KOICA) registry. Multivariate logistic analysis was conducted to explore the relationship between smoking and alcohol consumption and the TyG index. To assess the predictive value of smoking and alcohol consumption for a high TyG index, areas under the curve (AUC) were compared, and net reclassification improvement (NRI) and integrated discrimination improvement (IDI) analyses were performed. Results The combination of smoking and alcohol consumption was an independent risk factor for a higher TyG index in the KNHANES (adjusted odds ratio: 4.33, P < .001) and KOICA (adjusted odds ratio: 1.94, P < .001) datasets. Adding smoking and alcohol consumption to the multivariate logistic models improved model performance for the TyG index in the KNHANES (AUC: from 0.817 to 0.829, P < .001; NRI: 0.040, P < .001; IDI: 0.017, P < .001) and KOICA (AUC: from 0.822 to 0.826, P < .001; NRI: 0.025, P = .006; IDI: 0.005, P < .001) datasets. Conclusions Smoking and alcohol consumption were independently associated with the TyG index.
    Concurrent smokers and alcohol consumers were more likely to have a TyG index ≥8.8, higher than the TyG indices of non-users and of those who exclusively consumed alcohol or smoked tobacco. This work was supported by the Technology Innovation Program (20002781, A Platform for Prediction and Management of Health Risk Based on Personal Big Data and Lifelogging) funded by the Ministry of Trade, Industry and Energy (MOTIE, South Korea) to JW Lee, and by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (NRF-2019R1A2C1010043) to H Lee. Additionally, this work was supported by an Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korean government (MSIT) (2019-31-1293), for an autonomous digital companion framework and application, to HJ Chan.
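    The TyG index itself has a standard closed form, ln(fasting triglycerides [mg/dL] × fasting glucose [mg/dL] / 2); a minimal sketch:

    ```python
    import math

    def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
        """Triglyceride-glucose (TyG) index:
        ln(fasting TG [mg/dL] * fasting glucose [mg/dL] / 2)."""
        return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)
    ```

    For instance, a hypothetical subject with fasting triglycerides of 150 mg/dL and glucose of 100 mg/dL has tyg_index(150, 100) = ln(7500) ≈ 8.92, which falls above the study's 8.8 cutoff.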

    The effect of non-optimal lipids on the progression of coronary artery calcification in statin-naïve young adults: results from KOICA registry

    Background Despite the importance of attaining optimal lipid levels from a young age to secure long-term cardiovascular health, the detailed impact of non-optimal lipid levels in young adults on coronary artery calcification (CAC) is not fully explored. We sought to investigate the risk of CAC progression according to lipid profiles and to demonstrate lipid optimality in young adults. Methods From the KOrea Initiative on Coronary Artery calcification (KOICA) registry, established in six large-volume healthcare centers in Korea, 2,940 statin-naïve participants aged 20–45 years who underwent serial coronary calcium scans for routine health check-ups between 2002 and 2017 were included. The study outcome was CAC progression, assessed by the square root method. The risk of CAC progression was analyzed according to lipid optimality and each lipid parameter. Results In this retrospective cohort (mean age, 41.3 years; men, 82.4%), 477 participants (16.2%) had an optimal lipid profile, defined as triglycerides <150 mg/dl, LDL cholesterol <100 mg/dl, and HDL cholesterol >60 mg/dl. During follow-up (median, 39.7 months), CAC progression was observed in 434 participants (14.8%) and was more frequent in the non-optimal lipid group (16.5% vs. 5.7%; p < 0.001). Non-optimal lipids independently increased the risk of CAC progression [adjusted hazard ratio (aHR), 1.97; p = 0.025] in a dose-dependent manner. Similar results were demonstrable even in relatively low-risk participants with an initial calcium score of zero (aHR, 2.13; p = 0.014), in their 20s or 30s (aHR, 2.15; p = 0.041), and without other risk factors (aHR, 1.45; p = 0.038). High triglycerides had the greatest impact on CAC progression in this young adult population. Conclusion Non-optimal lipid levels were significantly associated with the risk of CAC progression in young adults, even those at low risk. Screening and intervention for non-optimal lipid levels, particularly triglycerides, from an early age might be of clinical value.
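    The square root method mentioned in the Methods defines progression via the difference of square-rooted calcium scores; a sketch using the commonly cited 2.5 cutoff (the study's exact threshold is not stated in the abstract, so the default here is an assumption):

    ```python
    import math

    def cac_progressed(baseline_score, followup_score, threshold=2.5):
        """Square-root method for CAC progression: progression is a
        difference in the square roots of the Agatston scores of at
        least `threshold` (2.5 is the commonly used value). Working on
        square roots damps the strong dependence of score variability
        on the baseline score."""
        return math.sqrt(followup_score) - math.sqrt(baseline_score) >= threshold
    ```

    For example, a scan pair going from 0 to 10 (difference of square roots ≈ 3.16) counts as progression, while 100 to 121 (difference of 1.0) does not, despite the larger absolute change.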