99 research outputs found

    Benchmarking bias mitigation algorithms in representation learning through fairness metrics

The rapid use and success of deep learning models in various application domains have raised significant questions about the fairness of these models when used in the real world.
Recent research has exposed the biases encoded by representation learning algorithms, raising doubts about the dependability of such decision-making systems. As a result, there is growing interest in identifying the sources of bias in learning algorithms and in developing bias-mitigation techniques. Bias-mitigation algorithms aim to reduce the impact of sensitive data attributes on eligibility decisions. Sensitive features are private, protected attributes of a dataset, such as gender or race, that should not influence output eligibility decisions, i.e., the criteria that determine whether an individual is qualified for a particular activity, such as lending or hiring. Bias-mitigation models are designed to make eligibility decisions on dataset samples without bias toward sensitive input attributes. The difficulty of bias-mitigation tasks is determined by the dataset distribution, which is in turn a function of potential label and feature imbalance, the correlation of potentially sensitive features with other features in the data, the distribution shift from the training to the development phase, and other factors. Without evaluating bias-mitigation models in varied, challenging setups, the merits of deep learning approaches to these tasks remain unclear. A systematic analysis is therefore required that compares different bias-mitigation procedures under various fairness criteria and ensures the results can be replicated. To that end, this thesis offers a unified framework for comparing bias-mitigation methods. To better understand how these methods work, we compare alternative fairness algorithms trained with deep neural networks on a common synthetic dataset and a real-world dataset. We train around 3000 distinct models in various setups, including imbalanced and correlated data configurations, to probe the limits of current models and better understand which setups are prone to failure.
Our findings show that model bias increases as datasets become more imbalanced or dataset attributes become more correlated, that the dominance of correlated sensitive dataset features influences bias, and that sensitive information remains in the latent representation even after bias-mitigation algorithms are applied. In summary, we present a dataset, propose multiple challenging assessment scenarios, rigorously analyse recent promising bias-mitigation techniques in a common framework, and openly release this benchmark as an entry point for fair deep learning.
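Benchmarks like this score models against group fairness metrics. As a hedged illustration (not code from the thesis; the predictions and attribute values below are invented), one of the standard metrics, the demographic parity difference, can be computed like this:

```python
# Illustrative sketch: demographic parity difference for a binary
# sensitive attribute. 0 means both groups receive positive eligibility
# decisions at the same rate; larger values indicate more bias.

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap between the two groups' positive-prediction rates."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, s in zip(y_pred, sensitive) if s == g]
        rate[g] = sum(preds) / len(preds)  # assumes both groups are present
    return abs(rate[0] - rate[1])

# Toy eligibility predictions for six samples and their sensitive attribute.
y_pred    = [1, 1, 0, 1, 0, 0]
sensitive = [0, 0, 0, 1, 1, 1]
print(demographic_parity_difference(y_pred, sensitive))  # |2/3 - 1/3|
```

Analogous gap-style definitions exist for equalized odds and other criteria; a benchmark typically reports several of them side by side.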

    Supervised Learning for Multi-Domain Text Classification

Digital information available on the Internet is increasing day by day. As a result, the demand for tools that help people find and analyze all these resources is also growing. Text classification in particular has been very useful for managing this information. Text classification is the process of assigning natural language text to one or more categories based on its content. It has many important applications in the real world; for example, determining the sentiment of reviews that people post about restaurants, movies and similar things. This project focuses on sentiment analysis, which identifies the opinions expressed in a piece of text by categorizing them into classes such as 'positive' or 'negative'. Existing work in sentiment analysis has focused on determining the polarity (positive or negative) of a sentence. This is binary classification: assigning the given set of elements to one of two groups. The purpose of this research is to address a different approach, multi-class sentiment classification, in which sentences are classified under multiple sentiment classes such as positive, negative and neutral. Classifiers are built on a predictive model that consists of multiple phases. Different sets of features on the dataset, such as stemmers, n-grams and tf-idf, are analyzed and considered for classification. Classification models such as the Bayesian classifier, Random Forest and the SGD classifier are used to classify the data and their results are compared. Frameworks such as Weka, Apache Mahout and Scikit are used for building the classifiers.
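To make the "Bayesian classifier" route concrete, here is a hedged sketch of multi-class sentiment classification with a multinomial Naive Bayes model, written in plain Python so the idea is visible without Weka, Mahout or Scikit. The tiny training corpus and token lists are invented for illustration only:

```python
# Minimal multinomial Naive Bayes for multi-class sentiment, with Laplace
# smoothing. Not the project's actual implementation; a teaching sketch.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (token_list, label). Returns priors, word counts, vocab."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for tokens, label in docs:
        priors[label] += 1
        counts[label].update(tokens)
        vocab.update(tokens)
    return priors, counts, vocab

def predict_nb(tokens, priors, counts, vocab):
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)
        n = sum(counts[label].values())
        for t in tokens:
            # Laplace smoothing so unseen words do not zero the probability.
            lp += math.log((counts[label][t] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["great", "food"], "positive"),
        (["terrible", "service"], "negative"),
        (["it", "was", "okay"], "neutral")]
model = train_nb(docs)
print(predict_nb(["great", "food"], *model))  # -> positive
```

The same interface generalizes to any number of sentiment classes, which is exactly what distinguishes this setup from the binary-polarity work the abstract contrasts itself with.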

    Modeling Residual Stress Development in Hybrid Processing by Additive Manufacturing and Laser Shock Peening

    The term “hybrid” has been widely applied to many areas of manufacturing. Naturally, that term has found a home in additive manufacturing as well. Hybrid additive manufacturing or hybrid-AM has been used to describe multi-material printing, combined machines (e.g., deposition printing and milling machine center), and combined processes (e.g., printing and interlayer laser re-melting). The capabilities afforded by hybrid-AM are rewriting the design rules for materials and adding a new dimension in the design for additive manufacturing paradigm. This work focuses on hybrid-AM processes, which are defined as the use of additive manufacturing (AM) with one or more secondary processes or energy sources that are fully coupled and synergistically affect part quality, functionality, and/or process performance. Secondary processes and energy sources include subtractive and transformative manufacturing technologies, such as machining, re-melting, peening, rolling, and friction stir processing. Of particular interest to this research is combining additive manufacturing with laser shock peening (LSP) in a cyclic process chain to print 3D mechanical properties. Additive manufacturing of metals often results in parts with unfavorable mechanical properties. Laser shock peening is a high strain rate mechanical surface treatment that hammers a work piece and induces favorable mechanical properties. Peening strain hardens a surface and imparts compressive residual stresses improving the mechanical properties of the material. The overarching objective of this work is to investigate the role LSP has on layer-by-layer processing of 3D printed metals. As a first study in this field, this thesis primarily focuses on the following: (1) defining hybrid-AM in relation to hybrid manufacturing and classifying hybrid-AM processes and (2) modeling hybrid-AM by LSP to understand the role of hybrid process parameters on temporal and spatial residual stress development. 
A finite element model was developed to help understand thermal and mechanical cancellation of residual stress when cyclically coupling printing and peening. Results indicate that layer peening frequency is a critical process parameter and is highly interdependent with the heat generated by the printing laser source. Optimum hybrid process conditions were found to exist that favorably enhance mechanical properties. With this demonstration, hybrid-AM has ushered in the next evolutionary step in additive manufacturing and has the potential to profoundly change the way high-value metal goods are manufactured. Advisor: Michael P. Seal

    PROCESS OPTIMIZATION, FORMULATION AND EVALUATION OF HYDROGEL {GUARGUM-G-POLY (ACRYLAMIDE)} BASED DOXOFYLLINE MICROBEADS

Objective: The objective of the present study was to improve the physical and chemical properties of natural polymers and to reduce product cost through graft copolymerization techniques using a natural polymer (guar gum) and a synthetic polymer {poly(acrylamide)}. The optimized hydrogel formulation was formulated as microbeads, loaded with doxofylline and characterized against different parameters. Methods: The graft copolymer guar gum-g-poly(acrylamide) was prepared by a free radical polymerization technique in a specially designed jacketed reaction vessel under a constant flow of nitrogen. Ceric ammonium nitrate (CAN) was used as the reaction initiator. The graft copolymer was characterized using FTIR, TGA and SEM. Polymeric blend beads of the grafted copolymer with sodium alginate were prepared by cross-linking with calcium chloride in an ionic gelation method and used to deliver a model new-generation anti-asthmatic drug, doxofylline. The bead preparation conditions were optimized by considering the percentage entrapment efficiency, particle size, swelling capacity of the beads under different pH conditions, and their release data. Results: The formation of the grafted copolymer was confirmed by FTIR studies, and TGA studies showed a comparatively higher thermal stability for the grafted copolymer. The pAAm-g-GG/sodium alginate microbeads were almost spherical in shape, as indicated by the SEM studies. The swelling index was found to be maximum in phosphate buffer pH 7.4 and minimum in phosphate buffer pH 9.2. Release of doxofylline was found to be controlled, increasing with the polyacrylamide content in the copolymer and the sodium alginate content in the microbeads, and higher release was observed in the pH 7.4 medium than in pH 1.2.
The in vitro release kinetics of doxofylline from the polymeric beads followed the Higuchi kinetics model. Conclusion: Hydrogel-based doxofylline microbeads were successfully developed using optimized batches of guar gum-g-poly(acrylamide) and sodium alginate by free radical polymerization and ionic gelation techniques. All characterization parameters met the acceptance criteria. Key words: Hydrogel, Microbeads, Guar gum, Acrylamide, Sodium alginate
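The Higuchi model referred to above describes cumulative release as Q(t) = k_H·√t. As a hedged illustration (the release values below are synthetic, not the study's data), the rate constant k_H can be fitted to release measurements by least squares:

```python
# Fit cumulative drug release to the Higuchi model Q = k_H * sqrt(t).
# Zero-intercept least squares: k_H = sum(Q_i * sqrt(t_i)) / sum(t_i).
import math

def fit_higuchi(times, release):
    """Least-squares slope k_H for Q = k_H * sqrt(t) (no intercept)."""
    roots = [math.sqrt(t) for t in times]
    return sum(q * r for q, r in zip(release, roots)) / sum(r * r for r in roots)

times   = [1, 4, 9, 16]             # sampling times, hours (synthetic)
release = [10.2, 19.8, 30.5, 39.9]  # cumulative % drug released (synthetic)
print(round(fit_higuchi(times, release), 2))  # slope of this toy data
```

A good fit of this line (high R² against √t) is what "followed Higuchi kinetics" means operationally: release is diffusion-controlled through the swollen gel matrix.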

    USING MACHINE LEARNING TECHNIQUES FOR FINDING MEANINGFUL TRANSCRIPTS IN PROSTATE CANCER PROGRESSION

Prostate cancer is one of the most common types of cancer among Canadian men. Next-generation sequencing using RNA-Seq can be valuable in studying cancer, since it provides large amounts of data as a source of information about biomarkers. For these reasons, we have chosen RNA-Seq data for prostate cancer progression in our study. In this research, we propose a new method for finding transcripts that can be used as genomic features. To this end, we gathered a very large number of transcripts. Many of these transcripts are not particularly relevant, and we filter them out by applying a feature selection algorithm. The results are then processed by a machine learning classification technique, the support vector machine, which is used to classify the different stages of prostate cancer. Finally, we identify potential transcripts associated with prostate cancer progression. Ideally, these transcripts can be used to improve diagnosis, treatment and drug development.
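The described pipeline (univariate filter, then SVM over the surviving transcripts) can be sketched with scikit-learn. Everything below is a toy stand-in: the expression matrix, stage labels and choice of `f_classif`/`k=5` are assumptions for illustration, not the study's actual data or settings:

```python
# Hedged sketch: feature-filter a transcript matrix, then classify
# progression stages with a linear SVM. Synthetic data throughout.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))   # 40 samples x 200 transcript expression levels
y = np.repeat([0, 1], 20)        # two progression stages
X[y == 1, :5] += 2.0             # make the first 5 transcripts informative

model = make_pipeline(SelectKBest(f_classif, k=5), SVC(kernel="linear"))
model.fit(X, y)
print(model.score(X, y))         # training accuracy on the toy data

# Transcripts that survive the filter are the candidate biomarkers:
kept = model.named_steps["selectkbest"].get_support(indices=True)
print(kept)
```

In a real study the score would of course come from held-out data (cross-validation), and the kept indices would be mapped back to transcript identifiers.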

Clinical profile of heart failure in Beta-Thalassaemia Major (β-TM): Case studies with current consideration and future perspectives

Background: Cardiac involvement is a major cause of mortality in Beta-Thalassaemia Major (β-TM) patients. Despite many advances in the therapeutic management of β-TM, cardiac involvement remains the primary cause of mortality in ~70% of cases. Chronic iron overload results in thalassaemic cardiomyopathy, leading to diastolic dysfunction and overt heart failure (HF). Serial electrocardiography (ECG), 2D-echocardiography (2DECHO) and cardiovascular magnetic resonance (CMR) help in the early detection and risk stratification of β-TM patients, to prevent complications such as arrhythmias and sudden cardiac death. An established network of care between thalassaemia centres and local health providers is essential for optimal management. Case presentation: We report 2 cases of HF in β-TM of varied etiology, and the different approaches undertaken for early diagnosis and treatment. Conclusion: It is important to differentiate the various phenotypes of cardiomyopathy in β-TM, since the management of each varies accordingly. β-TM patients require a multi-disciplinary approach that includes HF specialists, haematologists, hepatologists, endocrinologists, psychologists, transfusion experts and nursing personnel to maximise the benefit of modern HF therapeutic strategies in evaluation, monitoring and treatment. SAHeart 2022;19:14-1

    Toxicity to immune checkpoint inhibitors presenting as pulmonary arterial vasculopathy and rapidly progressing right ventricular dysfunction

Introduction: Immune checkpoint inhibitors (ICIs) are antitumor drugs associated with a number of serious immune-related adverse events (IRAEs). ICIs enhance anti-tumor immunity, thereby energizing the patient's immune system to fight cancer. IRAEs may affect the function of various organs, including the heart, and may lead to morbidity and, to some extent, mortality. Left ventricular (LV) myocarditis with dysfunction is a known side effect of this class of drugs. However, right ventricular (RV) myocarditis and pulmonary vasculitis are unknown entities and have not been previously reported. Here, we present the first case described to date in the medical literature of IRAEs causing selective RV involvement with dysfunction, attributed to immune checkpoint inhibitors. Presentation of case: A 58-year-old male presented with a history of low-grade fever and weight loss. On palpation, he had diffuse cervical lymphadenopathy. Histopathological evaluation of a lymph node revealed metastatic lesions of renal cell carcinoma (RCC). Conclusion: Fatal cardiovascular adverse events can occur as a side effect of ICIs. The combination of RV myocarditis with progressive pulmonary hypertension is fatal. Treatment with high-dose corticosteroids and immunomodulators may help patient survival. Physicians treating patients with ICIs should be aware of these lethal cardiotoxic side effects to reduce adverse cardiac outcomes. Because the number of patients exposed to this new immune therapy is expected to increase remarkably in the near future, our study encourages further work to define guidelines for cardiovascular monitoring and management.

    Cloud Based Student Repository System

Learning through research brings better outcomes. The main motive of this project is to provide a flexible web-based OPAC (Online Public Access Catalogue) through which users can consult the projects that already exist in the catalogue. For a developer, learning from references helps in designing the desired outcome, so we provide complete knowledge of the organization's existing projects through the OPAC. Users are able to upload videos and documents related to their project and can also scrutinize the existing projects. Frameworks and services such as Python Flask, Azure cloud and collaborative filtering are used to store the material and provide a better methodology of learning. This paper therefore aims at providing a simple interface for gathering information regarding the design of projects.
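The collaborative-filtering component mentioned above can be illustrated in a few lines: recommend repository projects to a student based on what similar users have viewed. The user names, project ids and view sets below are a made-up toy example, not the system's actual data:

```python
# Hedged sketch: user-based collaborative filtering over project views.
import math

views = {                       # user -> set of project ids viewed (toy data)
    "asha": {"p1", "p2", "p3"},
    "ben":  {"p1", "p3"},
    "chen": {"p2", "p4"},
}

def similarity(a, b):
    """Cosine similarity between two users' view sets."""
    inter = len(views[a] & views[b])
    return inter / math.sqrt(len(views[a]) * len(views[b]))

def recommend(user):
    """Projects viewed by similar users that `user` has not seen, best first."""
    scores = {}
    for other in views:
        if other == user:
            continue
        sim = similarity(user, other)
        for p in views[other] - views[user]:
            scores[p] = scores.get(p, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ben"))  # projects ben has not viewed, ranked by peer overlap
```

In the actual system such a ranking would sit behind a Flask endpoint and draw the view matrix from cloud storage, but the scoring logic is the part the abstract's "Collaborative Filtering" refers to.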

    Evaluation of role of heart rate variability with holter monitoring in chronic kidney disease

Background: Chronic kidney disease (CKD) is prevalent, even in the absence of diabetes and hypertension, in 12% of adults over 65 years of age. Autonomic imbalance, which could be a risk factor for chronic kidney disease, has not been studied in detail. Methods: This was an observational study in a tertiary care hospital in Pune, India, conducted over a period of 1 year with a sample size of 52. All subjects were known cases of chronic kidney disease, from stage III to VD. All individuals of age >18 years with eGFR ≤60 ml/min/1.73 m2 according to the CKD-EPI equation were included in the study; those who did not give consent were excluded. 24-hour Holter monitoring was done for CKD stages III to V, and for CKD stage VD on both a hemodialysis day and a non-hemodialysis day. Analysis was done using the SPSS version 20 (IBM SPSS Statistics Inc., Chicago, Illinois, USA) Windows software program. The paired t test, analysis of variance (ANOVA) and Chi-square test were used. The level of significance was set at p≤0.05. Results: When heart rate variability (HRV) parameters were compared across CKD stages III to VD (on the hemodialysis day), SDNN and the SDNN index were found to be statistically significant, and on the non-hemodialysis day the SDNN index was found to be statistically significant. Within CKD stage V, when diabetic subjects were compared with non-diabetic subjects, the P/S ratio was found to be low and significant in the diabetic subjects. Conclusions: Chronic kidney disease itself can affect HRV parameters. The causal relationship between HRV and chronic kidney disease may run in either direction and needs larger, prospective studies.
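For readers unfamiliar with the two Holter-derived measures the study compares: SDNN is the standard deviation of all normal-to-normal (NN) beat intervals over the recording, and the SDNN index is the mean of the SDNNs computed over successive 5-minute segments. A hedged sketch (the interval values are synthetic, not patient data):

```python
# Illustrative computation of SDNN and the SDNN index from NN intervals
# given in milliseconds. Not the study's analysis code.
import statistics

def sdnn(nn_ms):
    """Standard deviation of NN intervals (population SD, in ms)."""
    return statistics.pstdev(nn_ms)

def sdnn_index(nn_ms, segment_len_ms=300_000):
    """Mean of per-segment SDNNs; segments are cut by cumulative time
    (default 5 minutes = 300,000 ms)."""
    segments, current, elapsed = [], [], 0
    for nn in nn_ms:
        current.append(nn)
        elapsed += nn
        if elapsed >= segment_len_ms:
            segments.append(current)
            current, elapsed = [], 0
    if current:
        segments.append(current)
    return statistics.mean(sdnn(s) for s in segments)

print(sdnn([790, 810, 790, 810]))  # -> 10.0 for this synthetic series
```

Lower values of either measure indicate reduced beat-to-beat variability, i.e. the autonomic imbalance the study associates with advancing CKD stage.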