
    Smart Home and Artificial Intelligence as Environment for the Implementation of New Technologies

    The technologies of the smart home and artificial intelligence (AI) are now inextricably linked. Perceiving and treating these technologies as a single system would significantly simplify their study, design, and implementation. The introduction of AI into the management of smart-home infrastructure is an irreversible near-future development, on a par with personal assistants and autopilots. It is extremely important to standardize, create, and follow typical models of information gathering and device management in the smart home, which should in the future lead to data-analysis and decision-making models realized in software as a specialized AI. AI techniques such as multi-agent systems, neural networks, and fuzzy logic will form the basis for the functioning of the smart home of the future. The diversity of data and models, and the absence of widely adopted community decisions in this area, significantly slow further development. Another major problem is the low share of open data and open-source code in smart-home and AI research: results are mostly unpublished and are difficult to reproduce or implement independently. The proposed approaches to models and standards can significantly accelerate the development of specialized AIs for managing the smart home and create an environment for native innovative solutions based on analysis of sensor data collected by smart-home monitoring systems. Particular attention should be paid to resource savings and to profiting from surpluses, which will drive the development of these technologies and the transition from mere prospect to technology exchange and tangible benefit.
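The abstract above names fuzzy logic as one basis for smart-home automation. As a minimal illustrative sketch (the membership functions, temperature ranges, and rule weights below are assumptions for illustration, not taken from the paper), a fuzzy controller can map a temperature sensor reading to a heater power level:

```python
# Minimal fuzzy-logic controller sketch for a smart-home heating loop.
# All membership functions and rule weights are illustrative assumptions.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp_c: float) -> float:
    """Map a room-temperature reading (Celsius) to heater power in [0, 1]."""
    cold = tri(temp_c, -10, 5, 18)    # assumed "cold" fuzzy set
    comfy = tri(temp_c, 16, 21, 26)   # assumed "comfortable" fuzzy set
    hot = tri(temp_c, 24, 32, 45)     # assumed "hot" fuzzy set
    # Weighted (Sugeno-style) defuzzification:
    # cold -> full power, comfortable -> low power, hot -> off.
    num = cold * 1.0 + comfy * 0.2 + hot * 0.0
    den = cold + comfy + hot
    return num / den if den else 0.0

print(heater_power(4.0))   # full power when cold
print(heater_power(21.0))  # low power at a comfortable temperature
```

Fuzzy rules like these are attractive for smart-home control because the sets ("cold", "comfortable") can be tuned per household without retraining a model.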

    Fairness and Diversity in Information Access Systems

    Among the seven key requirements for trustworthy AI proposed by the High-Level Expert Group on Artificial Intelligence (AI-HLEG) established by the European Commission (EC), the fifth requirement ("Diversity, non-discrimination and fairness") declares: "In order to achieve Trustworthy AI, we must enable inclusion and diversity throughout the entire AI system's life cycle. [...] This requirement is closely linked with the principle of fairness". In this paper, we try to shed light on how closely these two distinct concepts, diversity and fairness, may be treated, focusing on information access systems and the ranking literature. These concepts should not be used interchangeably, because they represent two different values, but we argue that they also cannot be considered totally unrelated or divergent. Having diversity does not imply fairness, but fostering diversity can effectively lead to fair outcomes, an intuition behind several methods proposed to mitigate the disparate impact of information access systems, i.e., recommender systems and search engines. Comment: Presented at the European Workshop on Algorithmic Fairness (EWAF'23), Winterthur, Switzerland, June 7-9, 2023.

    Intersectionality in Artificial Intelligence: Framing Concerns and Recommendations for Action

    While artificial intelligence (AI) is often presented as a neutral tool, growing evidence suggests that it exacerbates gender, racial, and other biases, leading to discrimination and marginalization. This study analyzes the emerging agenda on intersectionality in AI. It examines four high-profile reports dedicated to this topic to interrogate how they frame problems and outline recommendations to address inequalities. These four reports play an important role in putting problematic intersectionality issues on the political agenda of AI, which is typically dominated by questions about AI's potential social and economic benefits. The documents highlight the systemic nature of the problems, which operate like a negative feedback loop or vicious cycle: the diversity crisis in the AI workforce leads to the development of biased AI tools, as a largely homogeneous group of white male developers and tech founders build their own biases into AI systems. Typical examples include gender and racial biases embedded in voice assistants, humanoid robots, and hiring tools. The reports frame the diversity situation in AI as alarming, highlight that previous diversity initiatives have not worked, emphasize urgency, and call for a holistic approach that focuses not just on numbers but on culture, power, and opportunities to exert influence. While dedicated reports on intersectionality in AI provide depth, detail, and nuance on the topic, in a patriarchal system they are in danger of being pigeonholed as issues of relevance mainly to women and minorities rather than part of the core agenda.

    Quality-Diversity through AI Feedback

    In many text-generation problems, users may prefer not just a single response but a diverse range of high-quality outputs from which to choose. Quality-diversity (QD) search algorithms aim at such outcomes by continually improving and diversifying a population of candidates. However, the applicability of QD to qualitative domains, like creative writing, has been limited by the difficulty of algorithmically specifying measures of quality and diversity. Interestingly, recent developments in language models (LMs) have enabled guiding search through AI feedback, wherein LMs are prompted in natural language to evaluate qualitative aspects of text. Leveraging this development, we introduce Quality-Diversity through AI Feedback (QDAIF), wherein an evolutionary algorithm applies LMs both to generate variation and to evaluate the quality and diversity of candidate text. When assessed on creative writing domains, QDAIF covers more of a specified search space with high-quality samples than do non-QD controls. Further, human evaluation of QDAIF-generated creative texts shows reasonable agreement between AI and human evaluation. Our results thus highlight the potential of AI feedback to guide open-ended search for creative and original solutions, providing a recipe that seemingly generalizes to many domains and modalities. In this way, QDAIF is a step towards AI systems that can independently search, diversify, evaluate, and improve, which are among the core skills underlying human society's capacity for innovation. Comment: minor additions to supplementary results.
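The QD loop the abstract describes can be sketched as a MAP-Elites-style archive keyed by a diversity attribute, with the best candidate kept per niche. In QDAIF the variation, quality, and diversity calls are LM prompts; in this hedged, self-contained sketch they are replaced by simple stand-in functions so the loop runs offline (the stubs are assumptions for illustration, not the paper's actual prompts):

```python
import random

# MAP-Elites-style quality-diversity loop in the spirit of QDAIF.
# quality_score / diversity_bin / mutate stand in for LM calls.

def quality_score(text: str) -> float:
    # Assumption: an LM would rate writing quality; we approximate
    # with length, capped at 1.0, purely so the sketch is runnable.
    return min(len(text) / 50.0, 1.0)

def diversity_bin(text: str, n_bins: int = 5) -> int:
    # Assumption: an LM would place the text on a qualitative axis
    # (e.g. tone); here we bucket by a crude surrogate (vowel ratio).
    vowels = sum(c in "aeiou" for c in text.lower())
    ratio = vowels / max(len(text), 1)
    return min(int(ratio * n_bins / 0.5), n_bins - 1)

def mutate(text: str) -> str:
    # Stand-in for LM-generated variation: duplicate a random word.
    words = text.split()
    i = random.randrange(len(words))
    return " ".join(words[:i + 1] + [words[i]] + words[i + 1:])

def qd_search(seed: str, iterations: int = 200, n_bins: int = 5):
    # archive maps diversity niche -> (quality, elite text)
    archive = {diversity_bin(seed, n_bins): (quality_score(seed), seed)}
    for _ in range(iterations):
        _, parent = random.choice(list(archive.values()))
        child = mutate(parent)
        b, q = diversity_bin(child, n_bins), quality_score(child)
        if b not in archive or q > archive[b][0]:
            archive[b] = (q, child)  # keep the best candidate per niche
    return archive

random.seed(0)
archive = qd_search("a short opening line for a story")
```

The key design choice is that selection pressure is per-niche rather than global, so low-quality but novel candidates survive long enough to be improved.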

    An Intelligent Path for Improving Diversity at Law Firms (Un)Artificially

    Most law firms are struggling when it comes to diversity and inclusion. There are fewer women in law firms than men. The majority of lawyers—81%—are White, despite White people making up only about 65% of the law school population. Lawyers of color remain underrepresented, with the historic high being only 28.32%. By comparison, 13.4% of the United States population is Black and 5.9% is Asian. The biases that perpetuate this lack of diversity in law firms begin during the hiring process and extend to associate retention. For example, an applicant's resume reveals a lot, including the prestige of the law school they attend (which can create inferences about their socioeconomic status); their class status, depending on extracurricular activities (i.e., playing polo v. interning with a dentist); or their gender, based on their name or other details. Continuing to depend on these biases is detrimental to law firms for various reasons. They lead to the same demographics of hired candidates, to the exclusion of other diverse candidates. Clients have also been demanding that their outside counsel be diverse, with firms risking the loss of their business otherwise. This paper recommends that law firms seeking to address diversity and inclusion issues adopt artificial intelligence ("AI") in the hiring and retention of lawyers. AI is a term that refers to computers accomplishing tasks that would ordinarily require human intelligence. While AI is already used successfully in other legal tasks to automate routine work and cut costs, there is an added benefit to using AI in hiring and recruiting: firms can remove human biases. This Note begins by identifying the current lack of diversity in law firms and discussing how bias is a major contributing factor. Second, it explains how clients are influencing outside counsel to build an increasingly diverse workforce. It then proposes AI as a beneficial solution that can help firms increase diversity and inclusion in both the hiring and retention of attorneys while mitigating human biases. Specifically, this paper discusses the advantages of AI as applied to resume screening, structured interviewing, fair performance management, and equal compensation systems. Finally, it outlines challenges to using AI and how firms can overcome them to use AI fairly and efficiently.
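A concrete first step toward the resume-screening idea above is blind screening: redacting bias-prone fields before any reviewer or model sees the resume. The sketch below is a minimal illustration under assumed field names (the schema, redaction list, and candidate record are all hypothetical, not from the Note):

```python
# Blind resume-screening sketch: redact fields the Note identifies as
# bias-inducing (name, school prestige, class-coded activities) before
# review. Field names and the candidate record are illustrative assumptions.

REDACTED_FIELDS = {"name", "school", "extracurriculars"}

def blind_resume(resume: dict) -> dict:
    """Return a copy with bias-prone fields replaced by placeholders."""
    return {
        k: "[REDACTED]" if k in REDACTED_FIELDS else v
        for k, v in resume.items()
    }

candidate = {
    "name": "Jordan Smith",            # hypothetical candidate
    "school": "Example Law School",    # hypothetical
    "extracurriculars": "polo team",
    "bar_admission": "NY",
    "practice_areas": "M&A, securities",
}
print(blind_resume(candidate))
```

Redaction alone does not remove proxies for protected attributes (e.g. zip codes or activity wording), which is why the Note pairs screening with structured interviewing and fair performance management.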

    Catalyzing Equity in STEM Teams: Harnessing Generative AI for Inclusion and Diversity

    Collaboration is key to STEM, where multidisciplinary team research can solve complex problems. However, inequality in STEM fields hinders their full potential, owing to persistent psychological barriers in underrepresented students' experience. This paper documents teamwork in STEM and explores the transformative potential of computational modeling and generative AI in promoting STEM-team diversity and inclusion. Leveraging generative AI, this paper outlines two primary areas for advancing diversity, equity, and inclusion. First, formalizing collaboration assessment with inclusive analytics can capture fine-grained learner behavior. Second, adaptive, personalized AI systems can support diversity and inclusion in STEM teams. Four policy recommendations highlight AI's capacity: formalized collaborative skill assessment, inclusive analytics, funding for socio-cognitive research, and human-AI teaming for inclusion training. Researchers, educators, and policymakers can together build an equitable STEM ecosystem. This roadmap advances AI-enhanced collaboration, offering a vision for a future of STEM in which diverse voices are actively encouraged and heard within collaborative scientific endeavors. Comment: 21 pages, 0 figures, to be published in Policy Insights from Behavioral and Brain Sciences.