131,869 research outputs found

    Human-Intelligence/Machine-Intelligence Decision Governance: An Analysis from an Ontological Point of View

    The increasing CPU power and memory capacity of computers, and now computing appliances, in the 21st century have allowed accelerated integration of artificial intelligence (AI) into organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational processes, including medical diagnosis, automated stock trading, integrated robotic production systems, telecommunications routing systems, and automobile fuzzy logic controllers. Self-driving automobiles are just the latest extension of AI. This thrust of AI into organizations and everyday life rests on the AI community’s unstated assumption that “…every aspect of human learning and intelligence could be so precisely described that it could be simulated in AI. With the exception of knowledge specific areas …, sixty years later the AI community is not close to coding global human intelligence into AI.” (Cotter, 2015). Thus, in complex mission-environment situations it is still debatable whether and when human or machine decision capacity should govern, or when a joint human-intelligence/machine-intelligence (HI-MI) decision capacity is required. Most importantly, there has been no research into the governance and management of human-intelligent/machine-intelligent decision processes. To address this gap, research has been initiated into an HI-MI decision governance body of knowledge and discipline. This paper reports progress in one track of that research, specifically establishing the ontological basis of HI-MI decision governance, which will form the theoretical foundation of a systemic HI-MI decision governance body of knowledge.

    Artificial Intelligence and Bank Soundness: Between the Devil and the Deep Blue Sea - Part 2

    Banks have experienced chronic weaknesses as well as frequent crises over the years. As bank failures are costly and affect global economies, banks are under constant, intense scrutiny by regulators, making banking the most highly regulated industry in the world today. As banks grow into the 21st-century framework, they need to embrace Artificial Intelligence (AI) not only to provide personalized, world-class service to their large customer bases but, most importantly, to survive. The chapter provides a taxonomy of bank soundness in the face of AI through the lens of CAMELS, where CAMELS stands for C (Capital), A (Asset), M (Management), E (Earnings), L (Liquidity), and S (Sensitivity). The taxonomy partitions the challenges under each strand of CAMELS into distinct AI-related categories: 1 (C), 4 (A), 17 (M), 8 (E), 1 (L), and 2 (S), which banks and regulatory teams need to consider when evaluating AI use in banks. Although AI offers numerous opportunities to enable banks to operate more efficiently and effectively, banks also need to give assurance that AI ‘does no harm’ to stakeholders. Posing many unresolved questions, it seems that banks are trapped between the devil and the deep blue sea for now.

    Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor

    In this decade and progressing into the 21st century, NASA will have missions including Space Station and the Earth-related planetary sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data processing throughput rate are necessary. Meeting these challenges requires intelligent machines, designed to support the necessary automation in remote, hazardous space environments. There are two approaches to designing these intelligent machines. One is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This passing increases reliability, for AI can follow the NI-formulated algorithm exactly and can provide the context knowledge base as the constraints of neurocomputing. The mini-max cost function that solves for the unknown features can furthermore give us a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator, and the separation of the class centers in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
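    The abstract describes this cost only in words. As a minimal sketch, one plausible LaTeX form, with per-class sample variances in the numerator and class-center separation in the denominator (the exact expression used in the paper is an assumption, not quoted from it), is:

        % One plausible mini-max cost (assumed form, not the paper's exact
        % expression): per-class sample variance over class-center separation.
        E \;=\; \frac{\sum_{c=1}^{C} \sigma_c^{2}}
                     {\sum_{c<c'} \lVert \mu_c - \mu_{c'} \rVert^{2}},
        \qquad
        \sigma_c^{2} \;=\; \frac{1}{N_c} \sum_{i \in \mathcal{C}_c} \lVert x_i - \mu_c \rVert^{2}

    Because the two conflicting goals sit in the numerator and denominator of a single ratio, one minimization (here via Cauchy simulated annealing) drives intraclass clustering and interclass segregation at the same time.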

    Has China caught up to the US in AI research? An exploration of mimetic isomorphism as a model for late industrializers

    Artificial Intelligence (AI), a cornerstone of 21st-century technology, has seen remarkable growth in China. In this paper, we examine China's AI development process, demonstrating that it is characterized by rapid learning and differentiation, surpassing the export-oriented growth propelled by Foreign Direct Investment seen in earlier Asian industrializers. Our data indicate that China currently leads the USA in the volume of AI-related research papers. However, when we delve into the quality of these papers based on specific metrics, the USA retains a slight edge. Nevertheless, the pace and scale of China's AI development remain noteworthy. We attribute China's accelerated AI progress to several factors, including global trends favoring open access to algorithms and research papers, contributions from China's broad diaspora and returnees, and relatively lax data protection policies. As part of this research, we have developed a novel measure for gauging China's imitation of US research. Our analysis shows that by 2018, the time lag between China and the USA in addressing AI research topics had evaporated. This finding suggests that China has effectively bridged a significant knowledge gap and could potentially be setting out on an independent research trajectory. While this study compares China and the USA exclusively, it is important to note that research collaborations between these two nations have produced more highly cited work than either country has produced independently. This underscores the power of international cooperation in driving scientific progress in AI.

    I DREAM, THEREFORE I AM AN ARCHITECT

    In this review of the exhibition of students’ research projects in the master’s class Digital Design Studio of the Architecture Program at the International University of Sarajevo, mentors and curators Lamila Simisic Pasic and Meliha Teparic analyze the setting, aims, and purpose of the show. The exhibition documents an attempt to follow the novelties that the 21st century is bringing to the creative process, such as the involvement of artificial intelligence (AI) in creative fields. Students started by discovering, analyzing, and classifying the results of visual impressions from their travel from home to school. The synthesis emerged from a mixture of the artificial and the real. They then merged their physical experiences, transformed into visual imagery and digital outputs discovered through the lens of AI, into one coherent and intuitive experience. Finally, students used machine learning as a direct collaborator for expanding their imaginations, particularly the diffusion model, which visualizes images out of text, better known as text-to-image or, in its extension, text-to-animation. Using these techniques, students reconstructed their voyages into more visionary landscapes, trying to emphasize and enlarge present-day dilemmas and concerns and to refract a multisensory experience to tell the story. The exhibition was held in the Art Gallery of the International University of Sarajevo, Bosnia and Herzegovina, at the end of 2022.

    Forcing and response in simulated 20th and 21st century surface energy and precipitation trends

    A simple methodology is applied to a transient integration of the Met Office Hadley Centre Global Environmental Model version 1 (UKMO-HadGEM1) fully coupled atmosphere-ocean general circulation model in order to separate forcing from climate response in simulated 20th-century and future global mean surface energy and precipitation trends. Forcings include any fast responses that are caused by the forcing agent and that are independent of global temperature change. Results reveal that surface radiative forcing is dominated by shortwave forcing over the 20th and 21st centuries, which is strongly negative. However, when fast responses of surface turbulent heat fluxes are separated from climate feedbacks and included in the forcing, the net surface forcing becomes positive. The nonradiative forcings are the result of rapid surface and tropospheric adjustments and impact 20th-century, as well as future, evaporation and precipitation trends. A comparison of energy balance changes in eight different climate models finds that all models exhibit a positive surface energy imbalance by the late 20th century. However, there is considerable disagreement in how this imbalance is partitioned between the longwave, shortwave, latent heat, and sensible heat fluxes. In particular, all models show reductions in shortwave radiation absorbed at the surface by the late 20th century compared to the pre-industrial control state, but the spread of this reduction leads to differences in the sign of their latent heat flux changes and thus in the sign of their hydrological responses.
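    The separation described above matches the standard forcing-feedback framework; as a minimal sketch (the generic form, assumed here rather than quoted from the paper):

        % Generic forcing-feedback decomposition of the surface energy budget
        % (assumed standard form, not the paper's exact regression method).
        % \Delta N: change in net surface flux; F: forcing, including fast
        % adjustments independent of global temperature change \Delta T;
        % \lambda: feedback parameter.
        \Delta N \;=\; F \;+\; \lambda\,\Delta T,
        \qquad
        \Delta N \;=\; \Delta\mathrm{SW} + \Delta\mathrm{LW} + \Delta\mathrm{LH} + \Delta\mathrm{SH}

    Under this framing, the fast turbulent-flux adjustments are folded into the forcing F rather than into the \lambda\,\Delta T response term, which is why the sign of the net surface forcing changes once they are included.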

    Humanity’s Capability of Transcendence through Artificial Intelligence

    In the 21st century, artificial intelligence (AI) technologies such as robots, mind uploading, and cyborgs have probed questions such as: Who is in control, the machine or the human? What is the purpose of autonomous robots to humanity? If a machine can pass the Turing Test, is it then appropriate for it to act in society of its own free will? How would a robot understand how to make someone feel loved? Do their deep expressions of understanding the human condition make them like us? Once a human adopts cyborg technology, are they then human or machine? How and why would we want to upload our minds? Will we sooner or later give up our bodies to technology? If we store all of our memories and processes in a mechanical being, have we as a person died? Will technology be able to dictate its own future without our intervention? The purpose of this paper is to reach a greater understanding of the ethical concerns around AI and its technologies, specifically how these technologies have changed the concept of what it means to be human through consciousness, emotions, movements, and interactions.

    Making Sense of AI: Our Algorithmic World, by Anthony Elliott (2022), Polity Press.

    How can we understand the consequences and make sense of an event when we are at its epicentre? Would it be possible to gain a deep understanding of the situation without putting it into a certain perspective or broader context? The book Making Sense of AI invites us to resist a natural inclination to make fast inferences based on proximity and salient experiences and, instead, to engage in slow thinking and to ponder the evolution of human technology. Effortful as this reading may turn out to be, the exercise is nevertheless worth undertaking for 21st-century students interested in recognizing future opportunities, coping with challenges, and understanding complex phenomena such as AI.