
    Geospatial Analysis of Opioid Dispensing Patterns in California: A 2021 Real-World Study

    The misuse and abuse of opioids have become a serious public health threat in the United States. The state of California has been hit particularly hard by the opioid epidemic, with a noticeable increase in opioid-related fatalities and hospitalizations. This brief report aims to contribute to the growing literature by conducting a geospatial analysis of opioid dispensing patterns in California in 2021. The primary objective was to identify areas characterized by high-risk opioid dispensing patterns and to explore possible contributing factors. This retrospective study analyzed over 7 million records of opioid and benzodiazepine prescriptions dispensed by outpatient pharmacies in California in 2021. A series of generalized linear regression models was employed to assess the impact of neighborhood characteristics on opioid recipients and high-risk opioid dispensing. The study defined high-risk opioid dispensing behavior as: (1) multiple provider episodes, (2) overlapping opioid prescriptions for seven or more days, (3) overlapping opioid and benzodiazepine prescriptions for seven or more days, and (4) a high standardized dosage of opioid prescriptions per month. The study identified variables associated with high-risk opioid dispensing behaviors, including age, population density, income, and housing-related variables, as well as marital status and family-related variables. The study also uncovered noticeable disparities in opioid dispensing among different racial and ethnic groups within California. The findings indicated a correlation of high-risk dispensing indicators with certain demographic and socioeconomic factors. There was substantial regional variation in opioid dispensing practices, with certain rural areas having higher rates of opioid prescriptions than urban areas.
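One of the four high-risk criteria above, overlapping opioid prescriptions for seven or more days, can be sketched in a few lines. The data layout and function names below are hypothetical illustrations, not the study's actual pipeline:

```python
from datetime import date

def overlap_days(start_a, end_a, start_b, end_b):
    """Number of days two prescription windows overlap (0 if disjoint)."""
    latest_start = max(start_a, start_b)
    earliest_end = min(end_a, end_b)
    return max(0, (earliest_end - latest_start).days + 1)

def has_high_risk_overlap(prescriptions, threshold_days=7):
    """Flag a patient any pair of whose prescriptions overlap for at least
    `threshold_days`. `prescriptions` is a list of (start, end) date tuples."""
    for i in range(len(prescriptions)):
        for j in range(i + 1, len(prescriptions)):
            (sa, ea), (sb, eb) = prescriptions[i], prescriptions[j]
            if overlap_days(sa, ea, sb, eb) >= threshold_days:
                return True
    return False

rx = [(date(2021, 1, 1), date(2021, 1, 30)),
      (date(2021, 1, 20), date(2021, 2, 10))]
print(has_high_risk_overlap(rx))  # 11 overlapping days -> True
```

The same pairwise-window check extends directly to criterion (3) by comparing opioid windows against benzodiazepine windows instead of against each other.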

    The Molecular Mechanism Of Alpha-Synuclein Dependent Regulation Of Protein Phosphatase 2A Activity

    Background/Aims: Alpha-synuclein (α-Syn) is a neuronal protein that is highly implicated in Parkinson's disease (PD), and protein phosphatase 2A (PP2A) is an important serine/threonine phosphatase that is associated with neurodegenerative diseases such as PD. α-Syn can directly upregulate PP2A activity, but the underlying mechanism remains unclear. We therefore investigated the molecular mechanism by which α-Syn regulates PP2A activity. Methods: α-Syn and its truncations were expressed in E. coli and purified by affinity chromatography. PP2A Cα and its mutants were expressed in recombinant baculovirus and purified by affinity chromatography combined with gel filtration chromatography. The interaction between α-Syn and PP2A Cα was detected by GST pull-down assay. PP2A activity was measured by a colorimetric assay. Results: The hydrophobic non-amyloid component (NAC) domain of α-Syn interacted with PP2A Cα and upregulated its activity. α-Syn aggregates had a reduced ability to upregulate PP2A activity, since the hydrophobic domain of α-Syn was blocked during aggregation. Furthermore, in the hydrophobic center of PP2A Cα, residue I123 was responsible for the interaction of PP2A with α-Syn, and its hydrophilic mutation blocked both the interaction with α-Syn and the upregulation of activity by α-Syn. Conclusions: α-Syn bound to PP2A Cα through hydrophobic interaction and upregulated its activity. Blocking the hydrophobic domain of α-Syn or hydrophilically mutating residue I123 in PP2A Cα reduced the upregulation of PP2A activity by α-Syn. Overall, we explored the mechanism by which α-Syn regulates PP2A activity, which may offer insight into the basis underlying PD pathogenesis.

    A Self-enhancement Approach for Domain-specific Chatbot Training via Knowledge Mining and Digest

    Large Language Models (LLMs), despite their great power in language generation, often encounter challenges when dealing with intricate and knowledge-demanding queries in specific domains. This paper introduces a novel approach to enhance LLMs by effectively extracting relevant knowledge from domain-specific textual sources and adaptively training a chatbot on domain-specific inquiries. Our two-step approach starts by training a knowledge miner, LLMiner, which autonomously extracts question-answer pairs from relevant documents through a chain-of-thought reasoning process. Subsequently, we blend the mined QA pairs with a conversational dataset to fine-tune the LLM as a chatbot, thereby enriching its domain-specific expertise and conversational capabilities. We also developed a new evaluation benchmark comprising four domain-specific text corpora and associated human-crafted QA pairs for testing. Our model shows remarkable performance improvement over generally aligned LLMs and surpasses domain-adapted models directly fine-tuned on domain corpora. Notably, LLMiner achieves this with minimal human intervention, requiring only 600 seed instances, thereby providing a pathway toward self-improvement of LLMs through model-synthesized training data.

    Predicting Suicidal and Self-Injurious Events in a Correctional Setting Using AI Algorithms on Unstructured Medical Notes and Structured Data

    Suicidal and self-injurious incidents in correctional settings deplete institutional and healthcare resources and create disorder and stress for staff and other inmates. Traditional statistical analyses provide some guidance, but they can only be applied to structured data that are often difficult to collect, and their recommendations are often expensive to act upon. This study aims to extract information from medical and mental health progress notes using AI algorithms to make actionable predictions of suicidal and self-injurious events, to improve the efficiency of triage for health care services, and to prevent suicidal and injurious events at California's Orange County Jails. The results showed that the notes data contain more information with respect to suicidal or injurious behaviors than the structured data available in the EHR database at the Orange County Jails. Using the notes data alone (under-sampled to 50%) in a Transformer Encoder model produced an AUC-ROC of 0.862, a sensitivity of 0.816, and a specificity of 0.738. Incorporating the information extracted from the notes data into traditional machine learning models as a feature alongside structured data (under-sampled to 50%) yielded better performance in terms of sensitivity (AUC-ROC: 0.77, sensitivity: 0.89, specificity: 0.65). In addition, under-sampling is an effective approach to mitigating the impact of extremely imbalanced classes.
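The under-sampling step and the reported sensitivity/specificity metrics can be illustrated with a minimal sketch on toy data; the function names and example labels are hypothetical, not the study's actual code or records:

```python
import random

def undersample_to_balance(pos, neg, seed=0):
    """Randomly drop majority-class records until the classes are 50/50,
    mitigating extreme class imbalance before training."""
    rng = random.Random(seed)
    if len(neg) > len(pos):
        neg = rng.sample(neg, len(pos))
    else:
        pos = rng.sample(pos, len(neg))
    return pos, neg

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = recall on the positive (self-injury) class;
    specificity = recall on the negative class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy labels: 3 positive events, 5 negatives, with hypothetical predictions.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 2), round(spec, 2))  # 0.67 0.8
```

As in the abstract's comparison, a model can trade specificity for sensitivity; which trade-off is preferable depends on the relative cost of missed events versus unnecessary triage.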

    Instruction-following Evaluation through Verbalizer Manipulation

    While instruction-tuned models have shown remarkable success in various natural language processing tasks, accurately evaluating their ability to follow instructions remains challenging. Existing benchmarks primarily focus on common instructions that align well with what the model learned during training. However, proficiency in responding to these instructions does not necessarily imply strong instruction-following ability. In this paper, we propose a novel instruction-following evaluation protocol called verbalizer manipulation. It instructs the model to verbalize the task label with words aligning with model priors to different extents, adopting verbalizers from highly aligned (e.g., outputting ``positive'' for positive sentiment) to minimally aligned (e.g., outputting ``negative'' for positive sentiment). Verbalizer manipulation can be seamlessly integrated with any classification benchmark to examine the model's reliance on priors and its ability to override them to accurately follow the instructions. We conduct a comprehensive evaluation of four major model families across nine datasets, employing twelve sets of verbalizers for each. We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers. Even the strongest GPT-4 model struggles to perform better than random guessing on the most challenging verbalizer, emphasizing the need for continued advancements to improve instruction-following abilities.
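The core mechanic, remapping task labels to verbalizers of varying alignment with model priors, can be sketched as follows; the prompt template, the ``neutral'' verbalizer words, and the scoring helper are hypothetical illustrations, not the paper's exact setup:

```python
# A verbalizer maps each task label to the word the model must output.
VERBALIZERS = {
    "natural": {"positive": "positive", "negative": "negative"},  # highly aligned
    "neutral": {"positive": "foo",      "negative": "bar"},       # unaligned
    "flipped": {"positive": "negative", "negative": "positive"},  # minimally aligned
}

def build_prompt(review, level):
    """Instruct the model to answer with the chosen verbalizer's words."""
    v = VERBALIZERS[level]
    return (f"Review: {review}\n"
            f"If the sentiment is positive, answer '{v['positive']}'; "
            f"if negative, answer '{v['negative']}'.\nAnswer:")

def score(gold_labels, model_outputs, level):
    """Accuracy after mapping gold labels through the chosen verbalizer."""
    v = VERBALIZERS[level]
    hits = sum(v[g] == o for g, o in zip(gold_labels, model_outputs))
    return hits / len(gold_labels)
```

Under the ``flipped'' verbalizer, a model that faithfully follows the instruction must output ``negative'' for a positive review, directly overriding its prior, which is exactly where the abstract reports models being most distinguishable.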

    ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation

    The ability to accurately locate and navigate to a specific object is a crucial capability for embodied agents that operate in the real world and interact with objects to complete tasks. Such object navigation tasks usually require large-scale training in visual environments with labeled objects, which generalizes poorly to novel objects in unknown environments. In this work, we present a novel zero-shot object navigation method, Exploration with Soft Commonsense constraints (ESC), that transfers commonsense knowledge in pre-trained models to open-world object navigation without any navigation experience or any other training on the visual environments. First, ESC leverages a pre-trained vision-and-language model for open-world prompt-based grounding and a pre-trained commonsense language model for room and object reasoning. Then ESC converts commonsense knowledge into navigation actions by modeling it as soft logic predicates for efficient exploration. Extensive experiments on the MP3D, HM3D, and RoboTHOR benchmarks show that our ESC method improves significantly over baselines and achieves new state-of-the-art results for zero-shot object navigation (e.g., a 158% relative Success Rate improvement over CoW on MP3D).
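As a toy illustration of the "soft" (rather than hard) commonsense constraint idea, a candidate exploration frontier can be scored by weighting each possible room by its predicted probability and a commonsense object–room co-occurrence score; all names, numbers, and the scoring rule below are hypothetical simplifications, not the paper's actual formulation:

```python
def frontier_score(room_probs, cooccur, target):
    """Soft version of the hard rule 'explore the room containing the target':
    instead of committing to one room, weight every candidate room by its
    predicted probability times a commonsense co-occurrence score."""
    return sum(p * cooccur.get((target, room), 0.0)
               for room, p in room_probs.items())

# Hypothetical outputs of a room classifier and a commonsense language model.
room_probs = {"kitchen": 0.7, "bathroom": 0.3}
cooccur = {("mug", "kitchen"): 0.9, ("mug", "bathroom"): 0.1}
print(frontier_score(room_probs, cooccur, "mug"))  # ~0.66
```

The soft formulation degrades gracefully under uncertain room predictions, whereas a hard constraint would discard a frontier entirely on a single wrong room label.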

    Association of glycemic variability and the presence and severity of coronary artery disease in patients with type 2 diabetes

    Background: Glucose variability is one component of dysglycemia in diabetes and may play an important role in the development of diabetic vascular complications. The objective of this study was to assess the relationship between glycemic variability, determined by a continuous glucose monitoring (CGM) system, and the presence and severity of coronary artery disease (CAD) in patients with type 2 diabetes mellitus (T2DM). Methods: Among 344 T2DM patients with chest pain, coronary angiography revealed CAD (coronary stenosis ≥ 50% luminal diameter narrowing) in 252 patients; the remaining 92 patients had no CAD. The Gensini score was used to assess the severity of CAD. All participants' CGM parameters and biochemical characteristics were measured at baseline. Results: Diabetic patients with CAD were older, and more were male and cigarette smokers, compared with the controls. Levels of the mean amplitude of glycemic excursions (MAGE) (3.7 ± 1.4 mmol/L vs. 3.2 ± 1.2 mmol/L, p < 0.001), postprandial glucose excursion (PPGE) (3.9 ± 1.6 mmol/L vs. 3.6 ± 1.4 mmol/L, p = 0.036), serum high-sensitivity C-reactive protein (hs-CRP) (10.7 ± 12.4 mg/L vs. 5.8 ± 6.7 mg/L, p < 0.001) and creatinine (Cr) (87 ± 23 μmol/L vs. 77 ± 14 μmol/L, p < 0.001) were significantly higher in patients with CAD than in patients without CAD. The Gensini score correlated closely with age, MAGE, PPGE, hemoglobin A1c (HbA1c), hs-CRP and total cholesterol (TC). Multivariate analysis indicated that age (p < 0.001), MAGE (p < 0.001), and serum levels of HbA1c (p = 0.022) and hs-CRP (p = 0.005) were independent determinants of the Gensini score. Logistic regression analysis revealed that MAGE ≥ 3.4 mmol/L was an independent predictor of CAD. The area under the receiver-operating characteristic curve for MAGE (0.618, p = 0.001) was superior to that for HbA1c (0.554, p = 0.129). Conclusions: Intraday glycemic variability is associated with the presence and severity of CAD in patients with T2DM. The effects of glycemic excursions on vascular complications should not be neglected in diabetes.
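The MAGE index used above is commonly computed as the average of the glucose swings between consecutive local extrema that exceed one standard deviation of the day's readings. A deliberately simplified sketch (toy CGM values; a clinical implementation has additional rules about excursion direction and window handling):

```python
import statistics

def mage(glucose):
    """Mean Amplitude of Glycemic Excursions, simplified: average the
    peak-to-nadir swings between consecutive local extrema of the trace
    that exceed one standard deviation of all readings."""
    sd = statistics.pstdev(glucose)
    # Collect turning points of the glucose trace, keeping both endpoints.
    extrema = [glucose[0]]
    for prev, cur, nxt in zip(glucose, glucose[1:], glucose[2:]):
        if (cur - prev) * (nxt - cur) < 0:  # slope changes sign
            extrema.append(cur)
    extrema.append(glucose[-1])
    swings = [abs(b - a) for a, b in zip(extrema, extrema[1:])]
    valid = [s for s in swings if s > sd]
    return sum(valid) / len(valid) if valid else 0.0

cgm = [5.2, 6.1, 9.8, 7.0, 4.9, 8.4, 6.3]  # hypothetical readings, mmol/L
print(round(mage(cgm), 2))
```

Against the abstract's cut-off, a patient whose computed MAGE is ≥ 3.4 mmol/L would fall in the group the study found to be at independently elevated risk of CAD.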

    Learning Stackable and Skippable LEGO Bricks for Efficient, Reconfigurable, and Variable-Resolution Diffusion Modeling

    Diffusion models excel at generating photo-realistic images but come with significant computational costs in both training and sampling. While various techniques address these computational challenges, a less-explored issue is designing an efficient and adaptable network backbone for iterative refinement. Current options like U-Net and Vision Transformer often rely on resource-intensive deep networks and lack the flexibility needed for generating images at variable resolutions or with a smaller network than that used in training. This study introduces LEGO bricks, which seamlessly integrate Local-feature Enrichment and Global-content Orchestration. These bricks can be stacked to create a test-time reconfigurable diffusion backbone, allowing selective skipping of bricks to reduce sampling costs and to generate higher-resolution images than the training data. LEGO bricks enrich local regions with an MLP and transform them using a Transformer block, while maintaining a consistent full-resolution image across all bricks. Experimental results demonstrate that LEGO bricks enhance training efficiency, expedite convergence, and facilitate variable-resolution image generation while maintaining strong generative performance. Moreover, LEGO significantly reduces sampling time compared to other methods, establishing it as a valuable enhancement for diffusion models.