
    Characteristics of predictor sets found using differential prioritization

    Background: Feature selection plays an undeniably important role in classification problems involving high-dimensional datasets such as microarray datasets. For filter-based feature selection, two well-known criteria used in forming predictor sets are relevance and redundancy. However, there is a third criterion which is at least as important as the other two in affecting the efficacy of the resulting predictor sets: the degree of differential prioritization (DDP), which varies the emphases on relevance and redundancy depending on its value. Previous empirical work on publicly available microarray datasets has confirmed the effectiveness of the DDP in molecular classification. We now propose to establish the fundamental strengths and merits of the DDP-based feature selection technique through a simulation study involving rigorous analyses of the characteristics of predictor sets found using different values of the DDP on toy datasets designed to mimic real-life microarray datasets.
    Results: A simulation study employing analytical measures, such as the distance between classes before and after transformation using principal component analysis, is implemented on toy datasets. From these analyses, the necessity of adjusting the differential prioritization to the dataset of interest is established. This conclusion is supported by comparisons against both simplistic rank-based selection and state-of-the-art equal-priorities scoring methods, which demonstrate the superiority of the DDP-based feature selection technique. Reapplying similar analyses to real-life multiclass microarray datasets provides further confirmation of our findings and of the significance of the DDP for practical applications.
    Conclusion: The findings are based on analytical evaluations rather than empirical evaluation involving classifiers, thus providing a further basis for the usefulness of the DDP and validating the need for unequal priorities on relevance and redundancy during feature selection for microarray datasets, especially highly multiclass datasets.
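    To make the DDP idea concrete, the following is a minimal, hypothetical sketch of a DDP-style greedy selector, not the paper's exact formulation: relevance is approximated here by the ANOVA F-score, redundancy by the mean absolute correlation with already-selected features, and the two are combined as relevance**alpha * antiredundancy**(1 - alpha), with alpha standing in for the DDP. Setting alpha = 1 reduces to plain relevance ranking, while alpha = 0.5 mimics equal-priorities scoring, which is the spectrum the study compares.

```python
# Hypothetical sketch of DDP-style greedy feature selection.
# Assumptions (not taken from the paper): relevance = ANOVA F-score,
# antiredundancy = 1 - mean |correlation| with already-selected features,
# combined as relevance**alpha * antiredundancy**(1 - alpha).
import numpy as np
from sklearn.feature_selection import f_classif

def ddp_select(X, y, n_features, alpha=0.5):
    """Greedily pick features, trading relevance vs. redundancy via alpha (the DDP)."""
    relevance, _ = f_classif(X, y)                # per-feature class relevance
    relevance = np.nan_to_num(relevance)
    relevance /= relevance.max() + 1e-12          # scale to [0, 1]
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature |correlation|
    selected = [int(np.argmax(relevance))]        # start with the most relevant feature
    while len(selected) < n_features:
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        # antiredundancy: 1 - mean correlation with the current predictor set
        antired = np.array([1.0 - corr[j, selected].mean() for j in remaining])
        score = relevance[remaining] ** alpha * antired ** (1.0 - alpha)
        selected.append(remaining[int(np.argmax(score))])
    return selected
```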

    Molecular Science for Drug Development and Biomedicine

    With the avalanche of biological sequences generated in the postgenomic age, molecular science is facing an unprecedented challenge: how to make timely use of this huge amount of data to benefit human beings. Stimulated by this challenge, rapid development has taken place in molecular science, both experimental and theoretical, particularly in the areas associated with drug development and biomedicine. The current thematic issue was launched with a focus on the topic of "Molecular Science for Drug Development and Biomedicine", in the hope of stimulating further useful techniques and findings from the various approaches of molecular science for drug development and biomedicine.

    Hybrid and Electric Vehicles Optimal Design and Real-time Control based on Artificial Intelligence

    The abstract is provided in the attachment.

    Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications

    There has been growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user. Some researchers have recently argued that for a machine to achieve a certain degree of human-level explainability, it needs to provide causally understandable explanations to humans, a property also known as causability. A specific class of algorithms with the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence. We performed an LDA topic modelling analysis under a PRISMA framework to find the most relevant literature articles. This analysis resulted in a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications to real-world data. This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Our findings suggest that the explanations derived from major algorithms in the literature provide spurious correlations rather than cause-effect relationships, leading to sub-optimal, erroneous, or even biased explanations. This paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches to explainable artificial intelligence.
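    As a purely illustrative sketch of the literature-screening step described above, and not the survey's actual pipeline, an LDA topic-modelling pass over candidate abstracts can be run with scikit-learn as follows; the corpus, the number of topics, and the vectorizer settings are assumptions.

```python
# Generic LDA topic-modelling sketch (illustrative; not the survey's actual pipeline).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "counterfactual explanations for black-box classifiers",
    "causal inference and structural causal models",
    "post-hoc feature attribution for deep networks",
]  # placeholder corpus of article abstracts

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(abstracts)       # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # 2 topics, illustrative
doc_topics = lda.fit_transform(doc_term)             # per-document topic weights

# Top words per topic, used to label candidate research themes
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```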

    Neural Correlates of Attention Bias in Posttraumatic Stress Disorder: A fMRI Study

    Attention biases to trauma-related information contribute to symptom maintenance in Posttraumatic Stress Disorder (PTSD); this phenomenon has been observed through various behavioral studies, although findings from studies using a precise, direct bias task, the dot probe, have been mixed. PTSD neuroimaging studies have indicated atypical function in specific brain regions involved with attention bias; when viewing emotionally salient cues or engaging in tasks that require attention, individuals with PTSD have demonstrated altered activity in brain regions implicated in cognitive control and attention allocation, including the medial prefrontal cortex (mPFC), dorsolateral prefrontal cortex (dlPFC), and amygdala. However, remarkably few PTSD neuroimaging studies have employed tasks that both measure the attentional strategies being engaged and include emotionally salient information. In the current study of attention biases in highly traumatized African-American adults, a version of the dot probe task that includes stimuli that are both salient (threatening facial expressions) and relevant (photographs of African-American faces) was administered to 19 participants with and without PTSD during functional magnetic resonance imaging (fMRI). I hypothesized that: 1) individuals with PTSD would show a significantly greater attention bias to threatening faces than traumatized controls; 2) PTSD symptoms would be associated with a significantly greater attentional bias toward threat expressed in African-American, but not Caucasian, faces; and 3) PTSD symptoms would be significantly associated with abnormal activity in the mPFC, dlPFC, and amygdala during presentation of threatening faces. Behavioral data did not provide evidence of attentional biases associated with PTSD. However, increased activation in the dlPFC and regions of the mPFC in response to threat cues was found in individuals with PTSD relative to traumatized controls without PTSD; this may reflect hyper-engaged cognitive control, attention, and conflict-monitoring resources in these individuals. Additionally, viewing threat in same-race, but not other-race, faces was associated with increased activation in the mPFC. These findings have important theoretical and treatment implications, suggesting that PTSD, particularly in individuals who have experienced chronic or multiple types of trauma, may be characterized less by top-down "deficits" or failures than by imbalanced neurobiological and cognitive systems that become over-engaged in order to "control" the emotional disruption caused by trauma-related triggers.
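    For context, the dot probe task scores attention bias as the reaction-time advantage for probes appearing at the location of the threatening face versus the neutral face. The sketch below shows only that computation; the trial structure and field names are assumptions, not the study's analysis code.

```python
# Minimal sketch of a dot-probe attention-bias score (illustrative only).
# Assumption: each trial records reaction time (ms) and whether the probe
# replaced the threatening face ("congruent") or the neutral face ("incongruent").
from statistics import mean

trials = [
    {"rt_ms": 512, "probe_at_threat": True},
    {"rt_ms": 498, "probe_at_threat": True},
    {"rt_ms": 547, "probe_at_threat": False},
    {"rt_ms": 539, "probe_at_threat": False},
]  # placeholder data

congruent = [t["rt_ms"] for t in trials if t["probe_at_threat"]]
incongruent = [t["rt_ms"] for t in trials if not t["probe_at_threat"]]

# Positive bias score = faster responses when the probe replaces the threat face,
# i.e. attention was already allocated toward the threatening stimulus.
bias_score = mean(incongruent) - mean(congruent)
print(f"attention bias toward threat: {bias_score:.1f} ms")
```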

    Emotions in Design-Based Learning


    Limitations of Fairness in Machine Learning

    The issue of socially responsible machine learning has never been more pressing. An entire field of machine learning is dedicated to investigating the societal aspects of automated decision-making systems and to providing technical solutions for algorithmic fairness. However, any attempt to improve the fairness of algorithms must be examined under the lens of potential societal harm. In this thesis, we study existing approaches to fair classification and shed light on their various limitations. First, we show that relaxations of fairness constraints used to simplify the learning process of fair models are too coarse, since the final classifier may be distinctly unfair even though the relaxed constraint is satisfied. In response, we propose a new and provably fair method that incorporates the fairness relaxations in a strongly convex formulation. Second, we observe an increased awareness of protected attributes such as race or gender in the last layer of deep neural networks when we regularize them for fair outcomes. Based on this observation, we construct a neural network that explicitly treats input points differently because of protected personal characteristics. With this explicit formulation, we can replicate the predictions of a fair neural network. We argue that both the fair neural network and the explicit formulation demonstrate disparate treatment, a form of discrimination under many anti-discrimination laws. Third, we consider the fairness properties of the majority vote, a popular ensemble method for aggregating multiple machine learning models to obtain more accurate and robust decisions. We algorithmically investigate worst-case fairness guarantees of the majority vote when it consists of multiple classifiers that are themselves already fair. Under strong independence assumptions on the classifiers, we can guarantee a fair majority vote. Without any assumptions on the classifiers, a fair majority vote cannot be guaranteed in general, but different fairness regimes are possible: on the one hand, using fair classifiers may improve the worst-case fairness guarantees; on the other hand, the majority vote may not be fair at all.
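    To make the majority-vote discussion concrete, the sketch below measures a demographic-parity gap for three individual classifiers and for their majority vote; the random data, the placeholder predictions, and the choice of demographic parity as the fairness criterion are assumptions for illustration, not the thesis's construction. Even when each individual classifier's gap is small, the vote's gap is not automatically bounded by them, which is the worst-case question studied.

```python
# Illustrative check: demographic parity of individual classifiers vs. their majority vote.
# The data, predictions, and fairness metric are assumptions for this sketch.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat = 1 | group = 0) - P(yhat = 1 | group = 1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)                  # protected attribute (binary, illustrative)
# three classifiers' binary predictions (placeholders for already-trained fair models)
preds = np.stack([rng.integers(0, 2, size=n) for _ in range(3)])

majority = (preds.sum(axis=0) >= 2).astype(int)     # majority vote of the three classifiers

for i, p in enumerate(preds):
    print(f"classifier {i}: parity gap = {demographic_parity_gap(p, group):.3f}")
print(f"majority vote: parity gap = {demographic_parity_gap(majority, group):.3f}")
```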