    Unifying an Introduction to Artificial Intelligence Course through Machine Learning Laboratory Experiences

    This paper presents work on a collaborative project funded by the National Science Foundation that uses machine learning as a unifying theme to teach the fundamental concepts typically covered in an introductory Artificial Intelligence course. The project involves the development of an adaptable framework for the presentation of core AI topics, accomplished through the development, implementation, and testing of a suite of adaptable, hands-on laboratory projects that can be closely integrated into the AI course. Through the design and implementation of learning systems that enhance commonly deployed applications, our model acknowledges that intelligent systems are best taught through their application to challenging problems. The goals of the project are to (1) enhance the student learning experience in the AI course, (2) increase student interest and motivation to learn AI by providing a framework for the presentation of the major AI topics that emphasizes the strong connection between AI and computer science and engineering, and (3) highlight the bridge that machine learning provides between AI technology and modern software engineering.

    Making metaethics work for AI: realism and anti-realism

    Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses the greater risks, while, on the other hand, anti-realism undermines the motivation for engineering a moral AI in the first place.

    Sensemaking Practices in the Everyday Work of AI/ML Software Engineering

    This paper considers sensemaking as it relates to everyday software engineering (SE) work practices and draws on a multi-year ethnographic study of SE projects at a large, global technology company building digital services infused with artificial intelligence (AI) and machine learning (ML) capabilities. Our findings highlight the breadth of sensemaking practices in AI/ML projects, noting developers' efforts to make sense of AI/ML environments (e.g., algorithms/methods and libraries), of AI/ML model ecosystems (e.g., pre-trained models and "upstream" models), and of business-AI relations (e.g., how the AI/ML service relates to the domain context and business problem at hand). This paper builds on recent scholarship drawing attention to the integral role of sensemaking in everyday SE practices by empirically investigating how and in what ways AI/ML projects present software teams with emergent sensemaking requirements and opportunities.

    A knowledge based software engineering environment testbed

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include the introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.
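    As a rough illustration of the two reasoning styles mentioned above, the sketch below contrasts data-directed (forward-chaining) and goal-directed (backward-chaining) inference over a hypothetical rule base about software objects; the rules, facts, and names are invented for illustration and are not taken from the Carnegie Group/Boeing testbed.

        # Hypothetical rule base over software-object facts; all names are illustrative.
        RULES = [
            # (premises, conclusion)
            ({"module_changed", "tests_failing"}, "regression_suspected"),
            ({"regression_suspected", "interface_changed"}, "bug_in_interface"),
            ({"config_mismatch"}, "invalid_configuration"),
        ]

        def forward_chain(facts):
            """Data-directed reasoning: derive everything the observed facts support."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in RULES:
                    if premises <= derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        def backward_chain(goal, facts):
            """Goal-directed reasoning: check whether one specific conclusion is supported."""
            if goal in facts:
                return True
            return any(
                conclusion == goal and all(backward_chain(p, facts) for p in premises)
                for premises, conclusion in RULES
            )

        observed = {"module_changed", "tests_failing", "interface_changed"}
        print(forward_chain(observed))                            # infers a likely cause of the bug
        print(backward_chain("invalid_configuration", observed))  # evaluates a configuration hypothesis

    Forward chaining starts from what is already known about the software objects and derives consequences; backward chaining starts from a single hypothesis (here, a configuration being invalid) and checks whether the known facts support it.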

    A brief network analysis of Artificial Intelligence publication

    In this paper, we present an illustrated history of Artificial Intelligence (AI) through a statistical analysis of publications since 1940. We collected and mined the IEEE publication database to analyse the geographical and chronological variation in the activeness of AI research. The connections between different institutes are shown. The results show that the leading communities of AI research are mainly in the USA, China, Europe, and Japan. The key institutes, authors, and research hotspots are revealed. We find that the research institutes in fields such as Data Mining, Computer Vision, Pattern Recognition, and other fields of Machine Learning are quite consistent, implying a strong interaction between the communities of each field. We also show that research in Electronic Engineering and in industrial or commercial applications is very active in California, and that Japan publishes many papers in robotics. Due to limitations of the data source, the results might be overly influenced by the raw number of published articles; we mitigate this as far as possible by applying network key-node analysis to the research community instead of merely counting publications. Comment: 18 pages, 7 figures
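    A minimal sketch of the kind of key-node analysis mentioned above, here using degree centrality on a tiny, made-up collaboration graph with networkx; both the edge list and the choice of degree centrality are illustrative assumptions rather than the paper's actual data or metric.

        # Rank institutes by connectivity instead of raw publication counts.
        # The edge list and the metric (degree centrality) are illustrative assumptions only.
        import networkx as nx

        G = nx.Graph()
        G.add_edges_from([
            ("Institute A", "Institute B"),
            ("Institute B", "Institute C"),
            ("Institute C", "Institute D"),
            ("Institute B", "Institute D"),
        ])

        centrality = nx.degree_centrality(G)
        for institute, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{institute}: {score:.2f}")

    Ranking by a network measure of this kind, rather than by publication counts alone, is one way to reduce the bias toward institutes that simply publish more.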

    Increasing the Numeric Expressiveness of the Planning Domain Definition Language

    The technology of artificial intelligence (AI) planning is being adopted across many different disciplines. This has resulted in the wider use of the Planning Domain Definition Language (PDDL), where it is being used to model planning problems of different natures. One such area where AI planning is particularly attractive is engineering, where the optimisation problems are mathematically rich. The example used throughout this paper is the optimisation (minimisation) of machine tool measurement uncertainty. This planning problem highlights the limits of PDDL's numerical expressiveness in the absence of the square root function. A workaround method using the Babylonian algorithm is then evaluated before the extension of PDDL to include more mathematical functions is discussed.
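    For context, the Babylonian algorithm referred to above is Heron's iterative method for approximating a square root by repeatedly averaging a guess with the radicand divided by that guess. The following is a minimal sketch in Python rather than in PDDL numeric fluents; the tolerance, starting guess, and function name are illustrative assumptions, not values from the paper.

        # Babylonian (Heron's) method: average the guess with n / guess until it converges.
        # The tolerance and starting guess are illustrative assumptions, not taken from the paper.
        def babylonian_sqrt(n: float, tolerance: float = 1e-9) -> float:
            if n < 0:
                raise ValueError("negative radicand")
            if n == 0:
                return 0.0
            guess = n / 2.0 if n >= 1.0 else 1.0
            while abs(guess * guess - n) > tolerance:
                guess = (guess + n / guess) / 2.0
            return guess

        # Example: the value whose square is 2.0 (e.g., a standard uncertainty from a combined variance).
        print(babylonian_sqrt(2.0))  # ~1.4142135

    Encoded in PDDL, each such iteration would typically have to be unrolled into successive numeric effects, which is what makes the workaround costly compared with a native square-root function.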

    Artificial intelligence and the space station software support environment

    In a software system the size of the Space Station Software Support Environment (SSE), no single software development or implementation methodology is presently powerful enough to provide safe, reliable, maintainable, cost-effective real-time or near-real-time software. In an environment that must survive one of the harshest and longest lifetimes, software must be produced that will perform as predicted, from the first time it is executed to the last. Many of the software challenges that will be faced will require strategies borrowed from Artificial Intelligence (AI). AI is the only development area mentioned as an example of a legitimate reason for a waiver from the overall requirement to use the Ada programming language for software development. The limits of the applicability of the Ada language, the Ada Programming Support Environment (of which the SSE is a special case), and software engineering to AI solutions are defined by describing a scenario that involves many facets of AI methodologies.

    Do Chatbots Dream of Androids? Prospects for the Technological Development of Artificial Intelligence and Robotics

    The article discusses the main trends in the development of artificial intelligence systems and robotics (AI&R). The main question considered in this context is whether artificial systems are going to become more and more anthropomorphic, both intellectually and physically. The author analyzes the current state and prospects of the technological development of artificial intelligence and robotics, and also identifies the main aspects of the impact of these technologies on society and the economy, indicating the geopolitical, strategic nature of this influence. The author considers various approaches to the definition of artificial intelligence and robotics, focusing on the subject-oriented and functional ones. The article also compares AI&R abilities with human abilities in areas such as categorization, pattern recognition, planning and decision making, etc. Based on this comparison, we investigate in which areas AI&R’s performance is inferior to a human’s, and in which cases it is superior. The modern achievements in the field of robotics and artificial intelligence create the necessary basis for further discussion of the applicability of goal setting in engineering, in the form of a Turing test. It is shown that the development of AI&R is associated with certain contradictions that impede the application of Turing’s methodology in its usual format. The basic contradictions in the development of AI&R technologies imply a transition to a post-Turing methodology for assessing engineering implementations of artificial intelligence and robotics. In such implementations, on the one hand, the ‘Turing wall’ is removed, and on the other hand, artificial intelligence gets its physical implementation.