16 research outputs found

    Hateful Messages: A Conversational Data Set of Hate Speech produced by Adolescents on Discord

    With the rise of social media, a rise in hateful content can be observed. Even though understandings and definitions of hate speech vary, platforms, communities, and legislators all acknowledge the problem. At the same time, adolescents are a new and active group of social media users, and the majority of them experience or witness online hate speech. Research in automated hate speech classification has been on the rise and focuses on aspects such as bias, generalizability, and performance. To increase generalizability and performance, it is important to understand biases within the data. This research addresses the bias of youth language within hate speech classification and contributes a modern, anonymized youth-language hate speech data set consisting of 88,395 annotated chat messages, drawn from publicly available online messages on the chat platform Discord. About 6.42% of the messages were classified as hate speech by a self-developed annotation schema. For 35,553 messages, the user profiles provided age annotations, placing the average author age under 20 years.
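Once released, such a data set is typically consumed as labeled records. A minimal sketch of computing the summary statistics reported above; the field names (`text`, `label`, `age`) are assumptions for illustration, not the data set's actual schema:

```python
# Toy records standing in for the annotated Discord messages.
messages = [
    {"text": "hello everyone", "label": "none", "age": 16},
    {"text": "<redacted slur>", "label": "hate", "age": 15},
    {"text": "gg well played", "label": "none", "age": 19},
]

# Share of messages annotated as hate speech.
hate_share = sum(m["label"] == "hate" for m in messages) / len(messages)

# Mean author age over messages with an age annotation.
avg_age = sum(m["age"] for m in messages) / len(messages)

print(f"hate speech share: {hate_share:.1%}, mean author age: {avg_age:.1f}")
```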

    Association of dapagliflozin vs placebo with individual Kansas City Cardiomyopathy Questionnaire components in patients with heart failure with mildly reduced or preserved ejection fraction

    Importance: Dapagliflozin has been shown to improve overall health status based on aggregate summary scores of the Kansas City Cardiomyopathy Questionnaire (KCCQ) in patients with heart failure (HF) with mildly reduced or preserved ejection fraction enrolled in the Dapagliflozin Evaluation to Improve the Lives of Patients With Preserved Ejection Fraction Heart Failure (DELIVER) trial. A comprehensive understanding of the responsiveness of individual KCCQ items would allow clinicians to better inform patients on expected changes in daily living with treatment. Objective: To examine the association of dapagliflozin treatment with changes in individual components of the KCCQ. Design, Setting, and Participants: This is a post hoc exploratory analysis of DELIVER, a randomized double-blind placebo-controlled trial conducted at 353 centers in 20 countries from August 2018 to March 2022. KCCQ was administered at randomization and 1, 4, and 8 months. Scores of individual KCCQ components were scaled from 0 to 100. Eligibility criteria included symptomatic HF with left ventricular ejection fraction greater than 40%, elevated natriuretic peptide levels, and evidence of structural heart disease. Data were analyzed from November 2022 to February 2023. Main Outcomes and Measures: Changes in the 23 individual KCCQ components at 8 months. Interventions: Dapagliflozin, 10 mg, once daily or placebo. Results: Baseline KCCQ data were available for 5795 of 6263 randomized patients (92.5%) (mean [SD] age, 71.5 [9.5] years; 3344 male [57.7%] and 2451 female [42.3%]). Dapagliflozin was associated with larger improvements in almost all KCCQ components at 8 months compared with placebo. 
The most significant improvements with dapagliflozin were observed in frequency of lower limb edema (difference, 3.2; 95% CI, 1.6-4.8; P < .001), sleep limitation by shortness of breath (difference, 3.0; 95% CI, 1.6-4.4; P < .001), and limitation in desired activities by shortness of breath (difference, 2.8; 95% CI, 1.3-4.3; P < .001). Similar treatment patterns were observed in longitudinal analyses integrating data from months 1, 4, and 8. Higher proportions of patients treated with dapagliflozin experienced improvements, and fewer had deteriorations across most individual components. Conclusions and Relevance: In this study of patients with HF with mildly reduced or preserved ejection fraction, dapagliflozin was associated with improvement in a broad range of individual KCCQ components, with the greatest benefits in domains related to symptom frequency and physical limitations. Potential improvements in specific symptoms and activities of daily living might be more readily recognizable and easily communicated to patients. Trial Registration: ClinicalTrials.gov Identifier: NCT03619213.
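Item-level analyses of this kind first rescale each KCCQ response to 0 to 100, then compare mean change between arms. A hedged sketch, assuming a 5-point item response range and invented month-8 change values (the KCCQ scoring manual defines the actual ranges):

```python
def scale_item(raw, lo=1, hi=5):
    """Rescale a raw Likert response to the 0-100 range used for
    individual KCCQ components (5-point range is an assumption)."""
    return (raw - lo) / (hi - lo) * 100.0

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical month-8 changes (already on the 0-100 scale) for one
# component in each arm; not trial data.
dapa_changes = [5.0, 0.0, 10.0, 5.0]
placebo_changes = [0.0, 5.0, 0.0, 2.5]

# Between-arm difference in mean change, the quantity reported above.
diff = mean(dapa_changes) - mean(placebo_changes)
print(f"between-arm difference: {diff:.1f} points")
```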

    Dapagliflozin in patients with heart failure and previous myocardial infarction: A participant‐level pooled analysis of DAPA-HF and DELIVER

    Aims: Patients with heart failure (HF) and a history of myocardial infarction (MI) face a higher risk of disease progression and clinical events. Whether sodium-glucose cotransporter 2 inhibitors may modify the clinical trajectory in such individuals remains incompletely understood. Methods and results: The DAPA-HF and DELIVER trials compared dapagliflozin with placebo in patients with symptomatic HF with left ventricular ejection fraction (LVEF) ≤40% and >40%, respectively. In this pooled participant-level analysis, we assessed efficacy and safety outcomes by history of MI. The primary outcome in both trials was the composite of cardiovascular death or worsening HF. Of the total of 11,007 patients, 3,731 (34%) had a previous MI and were at higher risk of the primary outcome across the spectrum of LVEF in covariate-adjusted models (hazard ratio [HR] 1.12, 95% confidence interval [CI] 1.02-1.24). Dapagliflozin reduced the risk of the primary outcome to a similar extent in patients with (HR 0.83, 95% CI 0.72-0.96) and without previous MI (HR 0.76, 95% CI 0.68-0.85; p for interaction = 0.36), with consistent benefits on key secondary outcomes as well. Serious adverse events did not occur more frequently with dapagliflozin, irrespective of previous MI. Conclusion: A history of MI confers increased risk of adverse cardiovascular outcomes in patients with HF across the LVEF spectrum, even among those with preserved ejection fraction. Dapagliflozin consistently and safely reduced the risk of cardiovascular death or worsening HF, regardless of previous MI.
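The trials' actual analysis used covariate-adjusted Cox models. As a much simpler illustration of how a subgroup treatment effect with a 95% CI can be approximated from aggregate counts, here is an incidence-rate-ratio sketch on the log scale; all numbers are hypothetical, not DAPA-HF/DELIVER data:

```python
import math

def rate_ratio(events_a, py_a, events_b, py_b):
    """Incidence-rate ratio (treatment vs. placebo) with a 95% CI
    computed on the log scale under a Poisson approximation."""
    rr = (events_a / py_a) / (events_b / py_b)
    se = math.sqrt(1 / events_a + 1 / events_b)  # SE of log(rate ratio)
    lo = rr * math.exp(-1.96 * se)
    hi = rr * math.exp(1.96 * se)
    return rr, lo, hi

# Hypothetical event counts and person-years for a previous-MI subgroup.
rr, lo, hi = rate_ratio(events_a=210, py_a=3500, events_b=250, py_b=3450)
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```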

    Cryptographic Error Correction

    It has been said that “cryptography is about concealing information, and coding theory is about revealing it.” Despite these apparently conflicting goals, the two fields have common origins and many interesting relationships. In this thesis, we establish new connections between cryptography and coding theory in two ways: first, by applying cryptographic tools to solve classical problems from the theory of error correction; and second, by studying special kinds of codes that are motivated by cryptographic applications. In the first part of this thesis, we consider a model of error correction in which the source of errors is adversarial, but limited to feasible computation. In this model, we construct appealingly simple, general, and efficient cryptographic coding schemes which can recover from much larger error rates than schemes for classical models of adversarial noise. In the second part, we study collusion-secure fingerprinting codes, which are of fundamental importance in cryptographic applications like data watermarking and traitor tracing.

    Pixel-Accurate Projector Images for the 360° Dome (Pixelgenaue Projektorbilder für die 360°-Kuppel)

    3D films have long been part of everyday cinema and increasingly draw large audiences. Now 3D technology is also conquering the fulldome, that is, 360° projection surfaces such as those used in planetariums, science centers, and amusement parks. To illuminate such surfaces, 16, 32, or more projectors are used, each projecting an individual image onto the curved surface. Especially in the regions where the projector images overlap, the images must be aligned with pixel accuracy to avoid double images or blurring.
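Pixel-accurate alignment goes hand in hand with edge blending: inside an overlap zone, each projector's intensity is ramped so the contributions always sum to one. A toy 1-D sketch; the linear ramp is an illustrative choice, as production systems use calibrated, gamma-corrected blend curves:

```python
def blend_weight(x, overlap_start, overlap_end):
    """Intensity weight for projector A at position x: 1 outside the
    overlap on A's side, 0 past it, and a linear ramp in between.
    Projector B uses 1 - weight, so the pair sums to 1 everywhere."""
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    return 1.0 - (x - overlap_start) / (overlap_end - overlap_start)

# Midpoint of an overlap region spanning [0.4, 0.6]: each projector
# contributes half the intensity, and the sum stays 1.
w_a = blend_weight(0.5, 0.4, 0.6)
w_b = 1.0 - w_a
print(w_a, w_b, w_a + w_b)
```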

    Ontology-based entity recognition and annotation

    The majority of transmitted information consists of written text, either printed or electronic. Extraction of this information from digital resources requires the identification of important entities. While Named Entity Recognition (NER) is an important task for the extraction of factual information and the construction of knowledge graphs, other information such as terminological concepts and relations between entities is of similar importance in the context of knowledge engineering, knowledge base enhancement, and semantic search. While the majority of approaches focuses on NER in the context of the World Wide Web and thus needs to cover the broad range of common knowledge, we focus in the present work on the recognition of entities in highly specialized domains and describe our approach to ontology-based entity recognition and annotation (OER). Our approach, implemented as a first prototype, outperforms existing approaches in the precision of extracted entities, especially in the recognition of compound terms such as “German Federal Ministry of Education and Research” and of inflected terms.
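The core of an ontology-based annotator can be sketched as a greedy longest-match lookup against the ontology's term dictionary. The terms below are invented stand-ins, and the real OER prototype is more sophisticated (it also handles inflected forms, for instance):

```python
# Tiny stand-in for a domain ontology: surface term -> entity type.
ontology = {
    "german federal ministry of education and research": "ORG",
    "knowledge graph": "CONCEPT",
    "semantic search": "CONCEPT",
}

def annotate(text):
    """Greedy longest-match annotation: at each token, try the longest
    candidate phrase first, so compound terms win over their parts."""
    tokens = text.lower().split()
    spans, i = [], 0
    while i < len(tokens):
        match = None
        for j in range(len(tokens), i, -1):  # longest phrase first
            phrase = " ".join(tokens[i:j])
            if phrase in ontology:
                match = (phrase, ontology[phrase])
                i = j  # skip past the matched span
                break
        if match:
            spans.append(match)
        else:
            i += 1
    return spans

print(annotate("Funding by the German Federal Ministry of Education "
               "and Research supports semantic search"))
```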

    Optimal error correction against computationally bounded noise

    For computationally bounded adversarial models of error, we construct appealingly simple, efficient, cryptographic encoding and unique decoding schemes whose error-correction capability is much greater than classically possible. In particular: 1. For binary alphabets, we construct positive-rate coding schemes which are uniquely decodable from a 1/2 − γ error rate for any constant γ > 0. 2. For large alphabets, we construct coding schemes which are uniquely decodable from a 1 − √R error rate for any information rate R > 0. Our results are qualitatively stronger than related work: the construction works in the public-key model (requiring no shared secret key or joint local state) and allows the channel to know everything that the receiver knows. In addition, our techniques can potentially be used to construct coding schemes that have information rates approaching the Shannon limit. Finally, our construction is qualitatively optimal: we show that unique decoding under high error rates is impossible in several natural relaxations of our model.
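The "sieving" idea behind such constructions can be illustrated in a few lines: sign the message before encoding, and after decoding keep the unique candidate whose signature verifies. In this toy sketch an HMAC stands in for the public-key signature of the actual scheme (which requires no shared key), and the channel and list decoder are deliberately trivial:

```python
import hmac
import hashlib

KEY = b"demo-key"  # shared-key stand-in; the real scheme is public-key

def sign(msg: bytes) -> bytes:
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def encode(msg: bytes) -> bytes:
    """Append a tag before (in the real scheme) applying a
    list-decodable code; the code itself is omitted here."""
    return msg + sign(msg)

def sieve(candidates):
    """Keep candidates whose appended 32-byte tag verifies. A bounded
    adversary cannot forge tags, so the true message survives alone."""
    good = []
    for c in candidates:
        msg, tag = c[:-32], c[-32:]
        if hmac.compare_digest(tag, sign(msg)):
            good.append(msg)
    return good[0] if len(good) == 1 else None

codeword = encode(b"hello")
# A list decoder would output several candidates; only one verifies.
candidates = [codeword, b"x" * len(codeword), b"y" * len(codeword)]
print(sieve(candidates))  # b'hello'
```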

    Optimal error correction for computationally bounded noise

    For adversarial but computationally bounded models of error, we construct appealingly simple and efficient cryptographic encoding and unique decoding schemes whose error-correction capability is much greater than classically possible. In particular: 1) For binary alphabets, we construct positive-rate coding schemes that are uniquely decodable under a 1/2 − γ error rate for any constant γ > 0. 2) For large alphabets, we construct coding schemes that are uniquely decodable under a 1 − R error rate for any information rate R > 0. Our results for large alphabets are actually optimal, since the "computationally bounded but adversarial channel" can simulate the behavior of the q-ary symmetric channel, where q denotes the size of the alphabet, the capacity of which is known to be upper-bounded by 1 − R. Our results hold under minimal assumptions on the communication infrastructure, namely: 1) we allow the channel to be more powerful than the receiver, and 2) we only assume that some information about the sender (a public key) is known. (In particular, we do not require any shared secret key or joint local state between sender and receiver.)

    Prior Secure Function Evaluation

    Secure function evaluation (SFE) enables a group of players, by themselves, to evaluate a function on private inputs as securely as if a trusted third party had done it for them. A completely fair SFE is a protocol in which, conceptually, the function values are learned atomically. We provide a completely fair SFE protocol which is secure for any number of malicious players, using a novel combination of computational and physical channel assumptions. We also show how completely fair SFE has striking applications to game theory. In particular, it enables “cheap-talk” protocols that (a) achieve correlated-equilibrium payoffs in any game, (b) are the first protocols which provably give no additional power to any coalition of players, and (c) are exponentially more efficient than prior counterparts.