
    Cyber risk at the edge: Current and future trends on cyber risk analytics and artificial intelligence in the industrial internet of things and industry 4.0 supply chains

    Digital technologies have changed the way supply chain operations are structured. In this article, we conduct systematic syntheses of the literature on the impact of new technologies on supply chains and the related cyber risks. A taxonomic/cladistic approach is used to evaluate progress in the area of supply chain integration in the Industrial Internet of Things and Industry 4.0, with a specific focus on the mitigation of cyber risks. An analytical framework is presented, based on a critical assessment of issues related to new types of cyber risk and the integration of supply chains with new technologies. The paper identifies a dynamic and self-adapting supply chain system supported by Artificial Intelligence and Machine Learning (AI/ML) and real-time intelligence for predictive cyber risk analytics. The system is integrated into a cognition engine that enables predictive cyber risk analytics with real-time intelligence from IoT networks at the edge. This enhances capacity and assists in creating a comprehensive understanding of the opportunities and threats that arise when edge computing nodes are deployed and when AI/ML technologies are migrated to the periphery of IoT networks.
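    To make the edge-analytics idea concrete: one minimal, hypothetical realisation of predictive cyber risk scoring at an IoT edge node is an anomaly detector trained on normal telemetry. The sketch below assumes scikit-learn; the telemetry fields and thresholds are invented for illustration and do not represent the paper's cognition engine.

```python
# Illustrative sketch only: the paper does not specify its cognition engine.
# Assumes scikit-learn; field names and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Train on a window of "normal" IoT telemetry collected at the edge node
# (e.g., packet rate, payload size, inter-arrival time per device).
rng = np.random.default_rng(0)
normal_telemetry = rng.normal(loc=[100.0, 512.0, 0.05],
                              scale=[10.0, 64.0, 0.01],
                              size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_telemetry)

def score_event(event: np.ndarray) -> bool:
    """Return True if a telemetry event looks like a cyber-risk anomaly."""
    return model.predict(event.reshape(1, -1))[0] == -1

# A burst of oversized payloads at an unusual rate is flagged as risky.
suspicious = np.array([400.0, 4096.0, 0.001])
print(score_event(suspicious))  # True (anomalous)
```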

    Fairness in AI and Its Long-Term Implications on Society

    Successful deployment of artificial intelligence (AI) in various settings has led to numerous positive outcomes for individuals and society. However, AI systems have also been shown to harm parts of the population through biased predictions. We take a closer look at AI fairness and analyse how a lack of AI fairness can deepen biases over time and act as a social stressor. If these issues persist, they could have undesirable long-term implications for society, reinforced by interactions with other risks. We examine current strategies for improving AI fairness, assess their limitations in terms of real-world deployment, and explore potential paths forward to ensure we reap AI's benefits without harming significant parts of society. Comment: Presented at the 3rd Annual Stanford Existential Risks Conference, 202
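    One way to make "biased predictions" measurable is a standard group-fairness diagnostic such as demographic parity difference; the choice of metric here is ours for illustration, not the paper's. A minimal sketch:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in positive-prediction rates between two groups (0 = parity).

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# A model that approves 80% of group 0 but only 40% of group 1 shows a
# 0.4 disparity, which can compound over repeated decisions.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.4
```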

    The Ethics of AI-Generated Maps: A Study of DALLE 2 and Implications for Cartography

    The rapid advancement of artificial intelligence (AI), such as the emergence of generative models including ChatGPT and DALLE 2, has brought opportunities for improving productivity but has also raised ethical concerns. This paper investigates the ethics of using AI in cartography, with a particular focus on the generation of maps using DALLE 2. To accomplish this, we first create an open-source dataset that includes synthetic (AI-generated) and real-world (human-designed) maps at multiple scales with a variety of settings. We subsequently examine four potential ethical concerns that may arise from the characteristics of DALLE 2-generated maps, namely inaccuracies, misleading information, unanticipated features, and reproducibility. We then develop a deep learning-based ethical examination system that identifies those AI-generated maps. Our research emphasizes the importance of ethical considerations in the development and use of AI techniques in cartography, contributing to the growing body of work on trustworthy maps. We aim to raise public awareness of the potential risks associated with AI-generated maps and support the development of ethical guidelines for their future use. Comment: 8 pages, 2 figures, GIScience 2023 conference
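    The abstract does not describe the architecture of the ethical examination system, so the sketch below stands in with a generic small CNN binary classifier in PyTorch; all shapes, labels, and the training setup are illustrative assumptions.

```python
# Illustrative sketch of a binary classifier separating AI-generated from
# human-designed maps; the paper's actual architecture is not given in the
# abstract, so a small CNN stands in here.
import torch
import torch.nn as nn

class MapAuthenticityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # 0 = human-designed, 1 = AI-generated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One training step on a (hypothetical) labelled batch of map tiles.
model = MapAuthenticityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 128, 128)   # stand-in for map images
labels = torch.randint(0, 2, (8,))     # stand-in for synthetic/real labels
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```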

    Responsible Design Patterns for Machine Learning Pipelines

    Integrating ethical practices into the development process for artificial intelligence (AI) is essential to ensure safe, fair, and responsible operation. AI ethics involves applying ethical principles to the entire life cycle of AI systems. This is essential to mitigate potential risks and harms associated with AI, such as algorithmic bias. To achieve this goal, responsible design patterns (RDPs) are critical for Machine Learning (ML) pipelines to guarantee ethical and fair outcomes. In this paper, we propose a comprehensive framework incorporating RDPs into ML pipelines to mitigate risks and ensure the ethical development of AI systems. Our framework comprises new responsible AI design patterns for ML pipelines, identified through a survey of AI ethics and data management experts and validated through real-world scenarios with expert feedback. The framework guides AI developers, data scientists, and policy-makers in implementing ethical practices in AI development and deploying responsible AI systems in production. Comment: 20 pages, 4 figures, 5 tables
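    The paper's concrete patterns come from its expert survey and are not listed in the abstract; as a hedged illustration of the general idea, one plausible responsible design pattern is an audit gate that blocks a model from being promoted unless responsible-AI checks pass. All check names and metrics below are hypothetical.

```python
# Hypothetical illustration of one responsible design pattern: an audit gate
# that refuses to promote a model unless ethics checks pass. The paper's
# actual patterns come from its expert survey; none are named here.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AuditCheck:
    name: str
    passed: Callable[[dict], bool]  # receives evaluation metrics

def audit_gate(metrics: dict, checks: List[AuditCheck]) -> bool:
    """Run every responsible-AI check; refuse deployment if any fails."""
    failures = [c.name for c in checks if not c.passed(metrics)]
    if failures:
        print(f"Deployment blocked, failed checks: {failures}")
        return False
    return True

checks = [
    AuditCheck("group_disparity_below_0.1", lambda m: m["disparity"] < 0.1),
    AuditCheck("documented_data_lineage", lambda m: m["lineage_recorded"]),
]
metrics = {"disparity": 0.23, "lineage_recorded": True}
audit_gate(metrics, checks)  # blocked: the disparity check fails
```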

    Automated Decision Making and Machine Learning: Regulatory Alternatives for Autonomous Settings

    Given growing investment capital in research and development, accompanied by extensive literature on the subject from researchers in nearly every domain from civil engineering to legal studies, automated decision-making (ADM) systems are likely to hold a place in the foreseeable future. Artificial intelligence (AI), as an automated system, can be defined as a broad range of computerized tasks designed to replicate human neural networks, store and organize large quantities of information, detect patterns, and make predictions with increasing accuracy and reliability. By itself, artificial intelligence is not quite the stuff of science-fiction tropes (i.e., an uncontrollable existential threat to humanity), yet it is not without real-world implications. The fears that come from machines operating autonomously are justified in many ways, given their ability to worsen existing inequalities, collapse financial markets (the 2010 “flash crash”), erode trust in societal institutions, and pose threats to physical safety. Still, even when applied in complex social environments, the political and legal mechanisms for dealing with the risks and harms that are likely to arise from artificial intelligence are not obsolete. As this paper seeks to demonstrate, other Information Age technologies have introduced comparable issues. However, the dominant market-based approach to regulation is insufficient for dealing with issues related to artificial intelligence because of the unique risks they pose to civil liberties and human rights. Assuming the government has a role in protecting values and ensuring societal well-being, in this paper I work toward an alternative regulatory approach that focuses on regulating the commercial side of automated decision-making and machine learning techniques.

    Inteligência Artificial e os Riscos Existenciais Reais: Uma Análise das Limitações Humanas de Controle

    Based on the hypothesis that artificial intelligence would not represent the end of human supremacy, since, in essence, AI would only simulate and augment aspects of human intelligence in non-biological artifacts, this paper asks what the real risk to be faced is. Beyond the clash between technophobes and technophiles, it argues that possible malfunctions of an artificial intelligence – resulting from information overload, faulty programming, or randomness in the system – could signal the real existential risks, especially when we consider that the biological brain, following automation bias, tends to accept uncritically whatever is put forward by systems anchored in artificial intelligence. Moreover, the argument defended here is that failures rendered undetectable by the probable limits of human control over the increasing complexity of AI systems represent the main real existential risk. Keywords: Artificial intelligence, existential risk, superintelligences, human control.

    PRACTICA. A Virtual Reality Platform for Specialized Training Oriented to Improve the Productivity

    With the proliferation of virtual reality headsets emerging into a consumer-oriented market for video games, new possibilities will open for exploiting virtual reality (VR). The PRACTICA project is therefore defined as a new service aimed at offering a system for creating courses based on a VR simulator for specialized training companies, giving students an experience close to reality. The general problem of creating these virtual courses derives from the need for programmers who can generate them; the PRACTICA project therefore allows the creation of courses without the need to write source code. In addition, elements of virtual interaction have been incorporated that cannot be used in a real environment because of risks to staff, such as the introduction of fictional characters or obstacles that interact with the environment. To do this, artificial intelligence techniques have been incorporated so that these elements can interact with the user, for example the movement of fictional characters through the scene with a certain behavior. This feature offers the opportunity to create situations and scenarios that are even more complex and realistic. The project aims to create a service that brings virtual reality and artificial intelligence technologies closer to non-technological companies, so that they can generate (or acquire) their own content and shape it as desired for their purposes.
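    As a rough illustration of the kind of character behaviour described (fictional characters moving through the scene and reacting to the trainee), here is a minimal state-machine sketch in Python; the project's actual AI techniques and engine are not specified in the abstract, so everything below is an assumption.

```python
# Minimal sketch of scripted character behaviour of the sort the project
# describes; the project's actual AI techniques are not detailed in the
# abstract, so this two-state machine is purely illustrative.
import math
from dataclasses import dataclass

@dataclass
class Character:
    x: float
    y: float
    state: str = "patrol"

    def update(self, trainee_x: float, trainee_y: float, speed: float = 0.5):
        """Approach the trainee when nearby, otherwise keep patrolling."""
        dist = math.hypot(trainee_x - self.x, trainee_y - self.y)
        self.state = "approach" if dist < 5.0 else "patrol"
        if self.state == "approach" and dist > 0:
            self.x += speed * (trainee_x - self.x) / dist
            self.y += speed * (trainee_y - self.y) / dist
        else:
            self.x += speed  # simple patrol drift along x

npc = Character(x=0.0, y=0.0)
for _ in range(3):
    npc.update(trainee_x=2.0, trainee_y=1.0)
    print(npc.state, round(npc.x, 2), round(npc.y, 2))
```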

    Man and Machine: Questions of Risk, Trust and Accountability in Today's AI Technology

    Artificial Intelligence began as a field probing some of the most fundamental questions of science - the nature of intelligence and the design of intelligent artifacts. But it has grown into a discipline that is deeply entwined with commerce and society. Today's AI technology, such as expert systems and intelligent assistants, poses difficult questions of risk, trust and accountability. In this paper, we present these concerns, examining them in the context of historical developments that have shaped the nature and direction of AI research. We also suggest the exploration and further development of two paradigms, human intelligence-machine cooperation and a sociological view of intelligence, which might help address some of these concerns. Comment: Preprint

    Ways of Applying Artificial Intelligence in Software Engineering

    As Artificial Intelligence (AI) techniques have become more powerful and easier to use, they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs, it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI, but we lack methods to classify the ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate those risks. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy, which categorises applications according to their point of AI application, the type of AI technology used, and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. The results show that the taxonomy allows classification of distinct AI applications and provides insights into the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and in creating strategies for its use.
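    The abstract names the taxonomy's three dimensions: point of AI application, type of AI technology, and automation level. Below is a minimal sketch of how a classification along those dimensions could be recorded; the concrete enum values are illustrative guesses, not AI-SEAL's own labels.

```python
# Sketch of the three AI-SEAL dimensions named in the abstract; the concrete
# enum values below are illustrative guesses, not the taxonomy's own labels.
from dataclasses import dataclass
from enum import Enum

class ApplicationPoint(Enum):   # where in the software AI is applied
    PROCESS = "development process"
    PRODUCT = "shipped product"
    RUNTIME = "runtime adaptation"

class AITechnology(Enum):       # type of AI technology used
    RULE_BASED = "rule-based"
    MACHINE_LEARNING = "machine learning"

class AutomationLevel(Enum):    # how much autonomy is allowed
    ADVISORY = 1                # human decides
    SUPERVISED = 2              # human can veto
    AUTONOMOUS = 3              # system decides

@dataclass
class AISEALClassification:
    application: str
    point: ApplicationPoint
    technology: AITechnology
    automation: AutomationLevel

paper = AISEALClassification(
    application="ML-based test-case prioritisation",
    point=ApplicationPoint.PROCESS,
    technology=AITechnology.MACHINE_LEARNING,
    automation=AutomationLevel.SUPERVISED,
)
print(paper)
```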