10 research outputs found

    Fair Meta-Learning: Learning How to Learn Fairly

    Data sets for fairness-relevant tasks can lack examples or be biased with respect to a specific label in a sensitive attribute. We demonstrate the usefulness of weight-based meta-learning approaches in such situations. For models trainable by gradient descent, we show that there are parameter configurations from which models can be optimized to be both fair and accurate in only a few gradient steps and with minimal data. To learn such weight sets, we adapt the popular MAML algorithm into Fair-MAML by including a fairness regularization term. In practice, Fair-MAML allows practitioners to train fair machine learning models from only a few examples when data from related tasks is available. We empirically exhibit the value of this technique by comparing against relevant baselines. (Comment: arXiv admin note: substantial text overlap with arXiv:1908.0909)
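As a rough illustration of the idea, a first-order Fair-MAML-style update can be sketched as follows, using a logistic model and a demographic-parity regularizer added to the task loss. The task generator, regularizer choice, and step sizes are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, X, y, g, lam):
    """Gradient of BCE + lam * (demographic-parity gap)^2 w.r.t. w."""
    p = sigmoid(X @ w)
    grad_bce = X.T @ (p - y) / len(y)
    # Fairness regularizer: squared gap in mean predictions across groups
    m0, m1 = p[g == 0].mean(), p[g == 1].mean()
    s = p * (1 - p)                          # sigmoid derivative
    d0 = (s[g == 0, None] * X[g == 0]).mean(axis=0)
    d1 = (s[g == 1, None] * X[g == 1]).mean(axis=0)
    grad_fair = 2 * (m0 - m1) * (d0 - d1)
    return grad_bce + lam * grad_fair

def make_task(n=40, d=3):
    """Toy task: labels depend on features; group is feature-correlated."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
    g = (X[:, 0] > 0).astype(int)            # hypothetical sensitive attribute
    return X, y, g

# First-order Fair-MAML sketch: adapt per task, then update meta-weights
w_meta = np.zeros(3)
alpha, beta, lam = 0.5, 0.1, 1.0
for step in range(200):
    meta_grad = np.zeros_like(w_meta)
    for _ in range(4):                       # small batch of related tasks
        X, y, g = make_task()
        w_task = w_meta - alpha * loss_grad(w_meta, X, y, g, lam)  # inner step
        meta_grad += loss_grad(w_task, X, y, g, lam)               # outer grad
    w_meta -= beta * meta_grad / 4
```

The first-order approximation (evaluating the outer gradient at the adapted weights rather than differentiating through the inner step) keeps the sketch short; the key point is that the fairness term enters the same loss that meta-learning optimizes.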

    Consistent Range Approximation for Fair Predictive Modeling

    This paper proposes a novel framework for certifying the fairness of predictive models trained on biased data. It draws from query answering for incomplete and inconsistent databases to formulate the problem of consistent range approximation (CRA) of fairness queries for a predictive model on a target population. The framework employs background knowledge of the data collection process and the biased data, working with or without limited statistics about the target population, to compute a range of answers for fairness queries. Using CRA, the framework builds predictive models that are certifiably fair on the target population, regardless of the availability of external data during training. The framework's efficacy is demonstrated through evaluations on real data, showing substantial improvement over existing state-of-the-art methods.
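A toy sketch of the range idea: when some outcomes on the target population are unknown, a fairness query such as the statistical-parity difference admits a tight interval rather than a point answer. The counting setup below is a hypothetical simplification for illustration, not the paper's CRA algorithm:

```python
from fractions import Fraction

def rate_range(pos, known, unknown):
    """Achievable range of a group's positive rate when `unknown` outcomes are open."""
    total = known + unknown
    # Lower bound: all unknowns negative; upper bound: all unknowns positive
    return Fraction(pos, total), Fraction(pos + unknown, total)

def parity_diff_range(g0, g1):
    """g0, g1: (positives, known, unknown) counts per group."""
    lo0, hi0 = rate_range(*g0)
    lo1, hi1 = rate_range(*g1)
    return lo0 - hi1, hi0 - lo1

lo, hi = parity_diff_range((30, 50, 10), (20, 50, 10))
# rate of group 0 lies in [1/2, 2/3], group 1 in [1/3, 1/2]
# so the parity difference lies in [0, 1/3]
```

If the whole interval falls inside an acceptable band, fairness is certified for every consistent completion of the data; this is the flavor of guarantee the abstract describes, independent of which completion is the true one.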

    Utilization of Artificial Intelligence in the Digital Marketing of SMEs

    Objectives: The main objectives of this study were to investigate and determine the most effective methods of utilizing artificial intelligence in the digital marketing of small and medium-sized enterprises (SMEs), as well as to analyze any potential benefits and drawbacks.

    Summary: The utilization of artificial intelligence in SMEs is becoming increasingly important, as it allows smaller companies to compete with large, prominent companies that have far greater resources. To find the most effective implementations of AI in digital marketing strategies, interviews with industry professionals were conducted to record real-world experiences in the field.

    Conclusions: As the digitalization of the modern world continues to grow, it is paramount that companies adapt and tailor their marketing strategies to follow suit. Utilizing AI allows for increased efficiency, lower costs, and stronger consumer targeting, which drastically improves business performance. By implementing AI solutions, SMEs can effectively compete with larger enterprises despite a large disadvantage in budget and resources. SMEs should therefore look toward adopting AI in their digital marketing strategies to maximize performance and sustainability, as well as to avoid falling behind competitors.

    Exploring Diversity and Fairness in Machine Learning

    With algorithms, artificial intelligence, and machine learning becoming ubiquitous in our society, we need to start thinking about the implications and ethical concerns of new machine learning models. Two types of bias that impact machine learning models are social injustice bias (bias created by society) and measurement bias (bias created by unbalanced sampling). Biases against groups of individuals found in machine learning models can be mitigated through the use of diversity and fairness constraints. This dissertation introduces models that help humans make decisions by enforcing diversity and fairness constraints. This work starts with a call to action: bias is rife in hiring, and since algorithms are used by many companies to filter applicants, we need to pay special attention to this application. Inspired by this hiring application, I introduce new multi-armed bandit frameworks that help assign human resources in the hiring process while enforcing diversity through a submodular utility function. These frameworks increase diversity while using fewer resources compared to the original admission decisions of the Computer Science graduate program at the University of Maryland. Moving outside of hiring, I present a contextual multi-armed bandit algorithm that enforces group fairness by learning a societal bias term and correcting for it. This algorithm is tested on two real-world datasets and shows marked improvement over other in-use algorithms. Additionally, I examine fairness in traditional machine learning domain adaptation, providing the first theoretical analysis of this setting and testing the resulting model on two real-world datasets. Finally, I explore extensions to my core work, delving into suicidality, comprehension of fairness definitions, and student evaluations.
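A minimal sketch of the bias-correction idea: assume observed scores carry an additive per-group societal bias, and keep a running-mean estimate of that bias so that selections are made on debiased scores. The setup and update rule are illustrative simplifications, not the dissertation's actual contextual-bandit algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one candidate per group each round; observed scores
# under-rate group 1 by a fixed societal bias that the learner must estimate.
n_groups = 2
true_bias = np.array([0.0, -0.5])
bias_est = np.zeros(n_groups)            # learned societal-bias term
counts = np.zeros(n_groups)

def observe(quality, group):
    """Observed score = true quality + group bias + noise."""
    return quality + true_bias[group] + 0.1 * rng.normal()

for t in range(2000):
    quality = rng.uniform(size=n_groups)
    scores = np.array([observe(quality[g], g) for g in range(n_groups)])
    debiased = scores - bias_est         # correct for the estimated bias
    pick = int(np.argmax(debiased))
    # Feedback reveals the picked candidate's true quality; update the
    # running mean of (observed score - true quality) for that group.
    counts[pick] += 1
    bias_est[pick] += (scores[pick] - quality[pick] - bias_est[pick]) / counts[pick]
```

After enough rounds the bias estimates converge toward the true per-group offsets, so the under-scored group is no longer systematically passed over by the selection rule.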