241 research outputs found

    Judgment on Unfair Competition Dispute Between Baidu Online Network Technology (Beijing) Ltd. Co. and Beijing 3721 Technology Ltd. Co.

    Get PDF
    On October 20, 2003, Baidu Online Network Technology (Beijing) Ltd. Co. (“Baidu”), a Nasdaq-listed company known as the “Google of China,” filed suit against its competitor Beijing 3721 Technology Ltd. Co. (“3721”) in the Beijing Chaoyang District Court for copyright infringement and unfair competition. The case is regarded as China’s first copyright-infringement dispute involving website search-engine technology. Legal experts, the Chinese media, and the Supreme People’s Court of China have paid close attention to the case, especially as it relates to China’s ongoing legislative effort to improve the protection of intellectual property. The translation below is the appellate opinion in this case, issued by the Beijing No. 2 Intermediate People’s Court in April 2004.

    Improving Contrastive Learning of Sentence Embeddings with Focal-InfoNCE

    Full text link
    The recent success of SimCSE has greatly advanced state-of-the-art sentence representations. However, the original formulation of SimCSE does not fully exploit the potential of hard negative samples in contrastive learning. This study introduces an unsupervised contrastive learning framework that combines SimCSE with hard negative mining, aiming to enhance the quality of sentence embeddings. The proposed focal-InfoNCE function introduces self-paced modulation terms into the contrastive objective, downweighting the loss associated with easy negatives and encouraging the model to focus on hard negatives. Experimentation on various STS benchmarks shows that our method improves sentence embeddings in terms of Spearman's correlation and representation alignment and uniformity. Comment: Findings of EMNLP 202
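    The idea of a self-paced modulation term that downweights easy negatives can be sketched for a single anchor as below. The exact modulation form used in the paper is not given in the abstract; the weighting `((1 + sim) / 2) ** gamma` here is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def focal_infonce(sim_pos, sim_negs, tau=0.05, gamma=2.0):
    """Illustrative focal-style InfoNCE loss for one anchor.

    sim_pos:  cosine similarity to the positive example.
    sim_negs: cosine similarities to the in-batch negatives.
    gamma=0 recovers plain InfoNCE; gamma>0 downweights easy
    (low-similarity) negatives so hard negatives dominate the loss.
    """
    sims = np.asarray(sim_negs, dtype=float)
    # Assumed modulation: map cosine similarity in [-1, 1] to a weight
    # in [0, 1]; hard negatives (sim -> 1) keep near-full weight.
    w = ((1.0 + sims) / 2.0) ** gamma
    pos = np.exp(sim_pos / tau)
    negs = w * np.exp(sims / tau)
    return -np.log(pos / (pos + negs.sum()))
```

    With this weighting, a batch containing a hard negative (similarity close to the positive's) produces a larger loss than one with only easy negatives, which is the behavior the abstract describes.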

    A Machine Learning and Computer Vision Application to Robustly Extract Winnings from Multiple Lottery Tickets in One Shot

    Get PDF
    Mega Millions and Powerball are among the most popular American lottery games. This article provides a practical software application that can conveniently examine and evaluate several lottery tickets for prizes using just their images. The application accepts as input a directory containing the images of lottery tickets and utilizes machine learning and computer vision to extract lottery ticket data: the lottery name, the lottery draw date, the five lottery numbers, the two-digit lottery "ball" number, and the lottery multiplier. The application also retrieves the winning lottery data corresponding to the draw date using a public database API. This is compared with the data collected from each lottery ticket image to establish matches, and the corresponding prize amount is computed. The current version of the application supports GPU usage, and image orientation has no impact on its functionality. It is believed that a considerable portion of the U.S. public participating in the Powerball and Mega Millions lotteries will find such an application beneficial and handy.
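    The matching step described above — comparing extracted ticket numbers against the winning draw and looking up a prize — can be sketched as follows. The prize table here is hypothetical and for illustration only; real payouts vary by game and by draw, and the article's actual lookup logic is not given in the abstract.

```python
def count_matches(ticket_numbers, ticket_ball, winning_numbers, winning_ball):
    """Count matched main numbers and whether the 'ball' number matches
    (Powerball/Mega Millions style)."""
    main = len(set(ticket_numbers) & set(winning_numbers))
    return main, ticket_ball == winning_ball

# Hypothetical prize tiers keyed by (main matches, ball matched).
PRIZE_TABLE = {(5, True): "jackpot", (5, False): 1_000_000,
               (4, True): 10_000,   (4, False): 500,
               (3, True): 100,      (3, False): 10,
               (2, True): 10,       (1, True): 4, (0, True): 4}

def prize(ticket_numbers, ticket_ball, winning_numbers, winning_ball):
    """Return the prize for a ticket, or 0 for no win."""
    key = count_matches(ticket_numbers, ticket_ball,
                        winning_numbers, winning_ball)
    return PRIZE_TABLE.get(key, 0)
```

    In the described pipeline, the ticket-side arguments would come from the OCR/vision extraction stage and the winning-side arguments from the public draw-results API.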

    Study on carrier flotation of long flame coal

    Get PDF
    To improve the flotation performance of long flame coal and the utilization rate of low-rank coal, the mechanism of carrier flotation of long flame coal was investigated. In this study, carrier flotation tests were conducted using long flame coal obtained from the Shaanxi Yujialiang Coal Preparation Plant as flotation feed and −1.3 g/cm3 anthracite from the Shanghai Miao Coal Preparation Plant in Inner Mongolia as the carrier. XRD and particle size analysis of the long flame coal revealed high fines content and ash content, with gangue minerals composed mainly of quartz and kaolin. The presence of fine coal slime and clay minerals led to fine slime coating and mechanical entrainment, resulting in poor flotation performance. The effect of carrier size and proportion on the flotation of long flame coal was investigated. The results indicated that the cleaned coal yield decreased with decreasing carrier particle size, and that the cleaned coal yield increased by 3.89% while the cleaned coal ash content decreased by 0.17% when 0.5~0.25 mm anthracite was used as the carrier. Furthermore, the carrier recovery rate from the flotation cleaned coal reached 98.48%, with the carrier effectively recovered when the carrier proportion was 10:1. Additionally, the influence of long flame coal slime size on carrier flotation was explored. The results revealed that the 0.5~0.25 mm carrier could effectively improve the flotation of −0.045 mm long flame coal, with the cleaned coal yield increasing by 9.31% compared to single flotation of the −0.045 mm slime. In conclusion, carrier flotation mainly improves the flotation of fine slime. SEM, floc image analysis, and EDLVO theoretical calculation demonstrated that the −0.045 mm long flame coal adhered to the carrier surface through hydrophobic force and formed a large number of flocs, thus improving the flotation performance of long flame coal.

    Study designs and statistical methods for pharmacogenomics and drug interaction studies

    Get PDF
    Indiana University-Purdue University Indianapolis (IUPUI)
    Adverse drug events (ADEs) are injuries resulting from drug-related medical interventions. ADEs can be induced either by a single drug or by a drug-drug interaction (DDI). In order to prevent unnecessary ADEs, many public health regulatory agencies maintain pharmacovigilance databases for detecting novel drug-ADE associations. However, pharmacovigilance databases usually contain a significant portion of false associations due to their inherent structure (i.e., false drug-ADE associations caused by co-medications). Beyond pharmacovigilance studies, the risks of ADEs can be minimized by understanding their mechanisms, which include abnormal pharmacokinetics/pharmacodynamics due to genetic factors and synergistic effects between drugs. During the past decade, pharmacogenomics studies have successfully identified several predictive markers to reduce ADE risks. However, pharmacogenomics studies are often limited by sample size and budget. In this dissertation, we develop statistical methods for pharmacovigilance and pharmacogenomics studies. First, we propose an empirical Bayes mixture model to identify significant drug-ADE associations. The proposed approach can be used for both signal generation and ranking, and the portion of false associations among the detected signals can be well controlled. Second, we propose a mixture dose-response model to investigate the functional relationship between the increased dimensionality of drug combinations and ADE risks. Moreover, this approach can be used to identify high-dimensional drug combinations associated with escalated ADE risks at significantly low local false discovery rates. Finally, we propose a cost-efficient design for pharmacogenomics studies. To pursue further cost-efficiency, the proposed design involves both DNA pooling and a two-stage design approach.
    Compared to a traditional design, the cost under the proposed design is reduced dramatically with an acceptable compromise in statistical power. The proposed methods are examined by extensive simulation studies. Furthermore, the proposed methods for analyzing pharmacovigilance databases are applied to the FDA’s Adverse Event Reporting System database and a local electronic medical record (EMR) database. For different pharmacogenomics study scenarios, optimized designs for detecting a functioning rare allele are given as well.
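    The signal-generation idea — scoring a drug-ADE pair by how far its observed report count exceeds what co-occurrence rates would predict, then shrinking noisy scores — can be sketched as below. This is a crude disproportionality baseline with hypothetical shrinkage hyperparameters, not the dissertation's empirical Bayes mixture model.

```python
def relative_reporting_ratio(n11, n_drug, n_ade, n_total):
    """Observed/expected report count for a drug-ADE pair from a 2x2
    contingency table: n11 = reports mentioning both the drug and the
    ADE, n_drug = reports with the drug, n_ade = reports with the ADE,
    n_total = all reports in the database."""
    expected = n_drug * n_ade / n_total
    return n11 / expected

def shrunk_ratio(n11, expected, alpha=0.5, beta=0.5):
    """Illustrative gamma-Poisson-style shrinkage (hypothetical
    hyperparameters): pulls ratios based on small counts toward the
    prior, tempering false signals from sparse data."""
    return (n11 + alpha) / (expected + beta)
```

    An empirical Bayes mixture model would instead fit the prior to the whole database and rank pairs by their posterior scores, which is what allows the portion of false associations among detected signals to be controlled.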

    Designing An Instrument For Gauging Equity Literacy

    Get PDF
    Equity literacy refers to the skills and mindsets needed to recognize, respond to, and redress conditions that deny equitable access to education. It involves understanding how identities such as ethnicity, gender, sexual orientation, language, religion, immigration status, and disability intersect and contribute to class inequities. More than mere awareness, equity literacy demands a commitment to deepening individual and institutional understandings of the dynamics of equity and injustice within organizations and communities. Its goal is to pinpoint disparities, eradicate inequities, and actively foster a culture of equity. Evaluating equity literacy is essential to understanding how educational disparities impact access to equitable opportunities free from bias and discrimination. Given the existing deficiency in tools for assessing equity literacy, this study introduces a survey instrument designed to assess equity literacy in educational institutions. The survey was developed based on Gorski's equity literacy framework (2016). To establish its validity, the survey was reviewed by experts and refined using Lawshe’s Content Validity Ratio (CVR); items with CVR scores below the established threshold were removed. The revised 20-item survey was administered to 34 individuals to assess reliability using Cronbach’s alpha, and it demonstrated robust reliability with an alpha of 0.87. Additionally, the survey categorizes total scores into four rubric levels of equity literacy: exceptional, fair, developing, and little/none. This survey serves as a foundational tool for implementing the framework, thus empowering educators to challenge prevailing mindsets and cultural deficits.
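    The two statistics named in the validation procedure have standard closed forms and can be computed directly. Lawshe's CVR for an item is (n_e − N/2) / (N/2), where n_e panelists rate the item "essential" out of N; Cronbach's alpha comes from the item and total-score variances. The sketch below assumes a plain respondents-by-items score matrix; the study's actual data and threshold are not reproduced here.

```python
import numpy as np

def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR: ranges from -1 (no panelist says 'essential')
    to +1 (all panelists say 'essential'); 0 means exactly half."""
    half = n_panelists / 2
    return (n_essential - half) / half

def cronbach_alpha(scores):
    """Cronbach's alpha from a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                              # number of items
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1.0 - item_variances / total_variance)
```

    An item reviewed by 10 experts, 8 of whom rate it essential, scores CVR = 0.6; whether that clears the threshold depends on the panel size, per Lawshe's published table.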