8,090 research outputs found

    Do Artificial Intelligence Systems Understand?

    Are intelligent machines really intelligent? Is the underlying philosophical concept of intelligence satisfactory for describing how the present systems work? Is understanding a necessary and sufficient condition for intelligence? If a machine could understand, should we attribute subjectivity to it? This paper addresses the problem of deciding whether the so-called "intelligent machines" are capable of understanding, instead of merely processing signs. It deals with the relationship between syntax and semantics. The main thesis concerns the inevitability of semantics in any discussion about the possibility of building conscious machines, condensed into the following two tenets: "If a machine is capable of understanding (in the strong sense), then it must be capable of combining rules and intuitions"; "If semantics cannot be reduced to syntax, then a machine cannot understand." Our conclusion states that it is not necessary to attribute understanding to a machine in order to explain its exhibited "intelligent" behavior; a merely syntactic and mechanistic approach to intelligence as a task-solving tool suffices to justify the range of operations it can display in the current state of technological development.

    Towards Robust Artificial Intelligence Systems

    Adoption of deep neural networks (DNNs) into safety-critical and high-assurance systems has been hindered by the inability of DNNs to handle adversarial and out-of-distribution input. State-of-the-art DNNs misclassify adversarial input and give high-confidence output for out-of-distribution input. We attempt to solve this problem with two approaches: first, by detecting adversarial input and, second, by developing a confidence metric that can indicate when a DNN system has reached its limits and is not performing to the desired specifications. The effectiveness of our method at detecting adversarial input is demonstrated against the popular DeepFool adversarial image generation method. On a benchmark of 50,000 randomly chosen ImageNet adversarial images generated for the CaffeNet and GoogLeNet DNNs, our method can recover the correct label with 95.76% and 97.43% accuracy, respectively. The proposed attribution-based confidence (ABC) metric utilizes the attributions used to explain a DNN's output to characterize whether the output corresponding to a given input can be trusted. The attribution-based approach removes the need to store training or test data or to train an ensemble of models to obtain confidence scores; hence, the ABC metric can be used when only the trained DNN is available during inference. We test the effectiveness of the ABC metric against both adversarial and out-of-distribution input. We experimentally demonstrate that the ABC metric is high for ImageNet input and low for adversarial input generated by the FGSM, PGD, DeepFool, CW, and adversarial patch methods. For a DNN trained on MNIST images, the ABC metric is high for in-distribution MNIST input and low for out-of-distribution Fashion-MNIST and notMNIST input.
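
    The abstract describes the attribution-based confidence (ABC) metric only at a high level, so the snippet below is just a minimal PyTorch sketch of the general idea rather than the paper's method: input-gradient attributions are recomputed under small random perturbations of the input, and their mutual agreement is used as a rough confidence proxy. The function name attribution_confidence, the noise scale, and the sampling scheme are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def attribution_confidence(model, x, n_samples=8, noise_std=0.05):
    """Rough confidence proxy (illustrative only, not the paper's exact ABC
    metric): how stable the input-gradient attribution is under small noise."""
    model.eval()
    x = x.detach()

    def input_gradient(inp):
        inp = inp.clone().requires_grad_(True)
        logits = model(inp)
        top_class = logits.argmax(dim=1)
        score = logits.gather(1, top_class.unsqueeze(1)).sum()
        grad, = torch.autograd.grad(score, inp)
        return grad.flatten(1)

    base = input_gradient(x)
    sims = []
    for _ in range(n_samples):
        noisy = x + noise_std * torch.randn_like(x)
        sims.append(F.cosine_similarity(base, input_gradient(noisy), dim=1))
    # High mean similarity -> attributions are stable -> higher confidence.
    return torch.stack(sims).mean(dim=0)
```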

    Robotics and Artificial Intelligence in Gastrointestinal Endoscopy: Updated Review of the Literature and State of the Art

    Abstract Purpose of Review Gastrointestinal endoscopy includes a wide range of procedures that have dramatically evolved over the past decades. Robotic endoscopy and artificial intelligence are expanding the horizons of traditional techniques and will play a key role in clinical practice in the near future. Understanding the main available devices and procedures is a key unmet need. This review aims to assess the current and future applications of the most recently developed endoscopy robots. Recent Findings Even though a few devices have gained approval for clinical application, the majority of robotic and artificial intelligence systems are yet to become an integral part of the current endoscopic instrumentarium. Some of the innovative endoscopic devices and artificial intelligence systems are dedicated to complex procedures such as endoscopic submucosal dissection, whereas others aim to improve diagnostic techniques such as colonoscopy. Summary A review of flexible endoscopic robotics and artificial intelligence systems is presented here, covering the most recently approved and experimental devices and artificial intelligence systems for diagnosis and robotic endoscopy.

    Ethical dilemmas of artificial intelligence systems

    Article translated from Russian. First published as: Ageev A.I., "Ethical dilemmas of artificial intelligence systems" (Этические дилеммы систем искусственного интеллекта), in: Socio-humanitarian Aspects of Digital Transformations of Artificial Intelligence, edited by V.E. Lepsky and A.N. Raikov. Moscow: Kogito-Tsentr, 2022, Chapter 3.5.

    Evaluating common sense using artificial intelligence systems

    This work evaluates machine common sense with an artificial intelligence system by building a model capable of solving a multiple-choice quiz involving different kinds of questions, such as predicate, entity, pronoun, and discourse-type questions.
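
    The abstract does not describe the model itself, so the following is only a hedged sketch of one common way to score a multiple-choice common-sense question: each candidate answer is appended to the question and scored with an off-the-shelf causal language model, and the option with the lowest loss is chosen. The use of GPT-2 via the transformers library and the Winograd-style example question are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative only: the paper's actual model and quiz format are not
# described in the abstract; GPT-2 is used here purely as a stand-in scorer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_multiple_choice(question, choices):
    """Pick the choice whose question+answer text the language model assigns
    the lowest average negative log-likelihood (loss)."""
    losses = []
    for choice in choices:
        enc = tokenizer(question + " " + choice, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        losses.append(out.loss.item())
    return choices[losses.index(min(losses))]

# A Winograd-style pronoun question of the kind the abstract mentions.
print(answer_multiple_choice(
    "The trophy does not fit in the suitcase because it is too",
    ["large", "small"]))
```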

    Knowledge representation by connection matrices: A method for the on-board implementation of large expert systems

    Extremely large knowledge sources and efficient knowledge access will characterize future real-life artificial intelligence applications, and they represent crucial requirements for on-board artificial intelligence systems because of the obvious computing time and storage constraints on spacecraft. A type of knowledge representation and a corresponding reasoning mechanism are proposed that are particularly suited to the efficient processing of such large knowledge bases in expert systems.
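
    The abstract does not spell out the connection-matrix scheme, so the sketch below shows one plausible reading for illustration only: propositions are indexed, implications between them are stored as a connection (adjacency) matrix, and forward inference is a repeated matrix-vector product until a fixed point is reached. The proposition names and rules are invented for the example.

```python
import numpy as np

# Illustrative sketch (an assumed reading of "connection matrices", not the
# paper's exact scheme): propositions are indexed 0..n-1 and C[i, j] = 1
# means "proposition i implies proposition j".
facts = ["low_battery", "switch_to_safe_mode", "pause_experiments", "alert_ground"]
index = {name: i for i, name in enumerate(facts)}

C = np.zeros((len(facts), len(facts)), dtype=int)
for premise, conclusion in [
    ("low_battery", "switch_to_safe_mode"),
    ("switch_to_safe_mode", "pause_experiments"),
    ("switch_to_safe_mode", "alert_ground"),
]:
    C[index[premise], index[conclusion]] = 1

def infer(known):
    """Forward-chain with matrix-vector products until a fixed point is reached."""
    state = np.zeros(len(facts), dtype=bool)
    for name in known:
        state[index[name]] = True
    while True:
        # One inference step applies every stored implication at once.
        new_state = state | ((state.astype(int) @ C) > 0)
        if np.array_equal(new_state, state):
            return {facts[i] for i in np.flatnonzero(state)}
        state = new_state

# Starting from a single observed fact, all of its consequences are derived.
print(infer({"low_battery"}))
```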

    Human-centred explanations for artificial intelligence systems

    As Artificial Intelligence (AI) systems increase in capability, there are growing concerns over the ways in which the recommendations they provide can affect people's everyday life and decisions. The field of Explainable AI (XAI) aims to address such concerns, but the human is often neglected in this process. We present a formal definition of human-centred XAI and illustrate the application of this formalism to the design of a user interface. The user interface supports users in indicating their preferences relevant to a situation and in comparing those preferences with the ones used by a computer recommendation system. A user trial is conducted to evaluate the resulting user interface. The trial suggests that users are able to appreciate how their preferences can influence computer recommendations, and how these might contrast with the preferences used by the computer. We provide guidelines for implementing human-centred XAI.
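
    The abstract does not detail the interface or the formalism, so the following is only a small illustrative sketch of the kind of comparison it describes: the user's stated importance weights for a few criteria are compared with the weights a recommender reports using, and the largest disagreements are surfaced. All criteria, weights, and the preference_gaps helper are hypothetical.

```python
# Illustrative sketch only: the paper's interface and formalism are not
# reproduced here. Weights are importance scores on a 0-1 scale per criterion.
user_weights = {"price": 0.9, "battery_life": 0.6, "camera": 0.2}
system_weights = {"price": 0.3, "battery_life": 0.5, "camera": 0.8}

def preference_gaps(user, system):
    """Order shared criteria by how strongly the user and the recommender disagree."""
    gaps = {c: system[c] - user[c] for c in user if c in system}
    return sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True)

for criterion, gap in preference_gaps(user_weights, system_weights):
    direction = "more" if gap > 0 else "less"
    print(f"The recommender weighs {criterion} {direction} heavily than you do ({gap:+.1f})")
```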

    How Artificial Intelligence Systems Could Threaten Democracy

    U.S. technology giant Microsoft has teamed up with a Chinese military university to develop artificial intelligence systems that could potentially enhance government surveillance and censorship capabilities. Two U.S. senators publicly condemned the partnership, but what China's National University of Defense Technology wants from Microsoft isn't the only concern.

    Combating terrorist financing with artificial intelligence systems

    Terrorism is a phenomenon that changes very quickly with time. Key to its survival and success are its flexibility and the ease with which it mutates into new forms, adapting its actions to its goals and to its opportunities for funding. International terrorism uses the structure and management methods of international corporations, adapted to new technologies, to produce a new form of decentralized terrorism that is, at present, difficult to fight with only the classical tools of law enforcement agencies.