22,465 research outputs found

    Making intelligent systems team players: Case studies and design issues. Volume 1: Human-computer interaction design

    Get PDF
    Initial results are reported from a multi-year, interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. The objective is to achieve more effective human-computer interaction (HCI) for systems with real-time fault management capabilities. Intelligent fault management systems within NASA were evaluated for insight into the design of systems with complex HCI. Preliminary results include: (1) a description of real-time fault management in aerospace domains; (2) recommendations and examples for improving intelligent system design and user interface design; (3) identification of issues requiring further research; and (4) recommendations for a development methodology that integrates HCI design into intelligent system design.

    Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples

    Full text link
    Machine learning has been a big success story during the AI resurgence, and one particular standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability to understand and exploit multimodal data, and in the continued incorporation of knowledge into learning techniques.
    Comment: Pre-print of the paper accepted at the 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770
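
    To make the knowledge-plus-learning idea concrete, the following is a minimal sketch, not the paper's method: a toy lookup table stands in for a knowledge base, and a hypothetical classify() rule shows how KB-derived features can surface an implicit entity ("diabetes") that never appears in the text.

```python
# Minimal sketch (not the paper's method): enriching text features with
# knowledge-base facts before classification. The tiny KB and the classify()
# rule are hypothetical illustrations.

KB = {
    "insulin": {"type": "Drug", "related_condition": "diabetes"},
    "metformin": {"type": "Drug", "related_condition": "diabetes"},
    "a1c": {"type": "LabTest", "related_condition": "diabetes"},
}

def knowledge_features(text: str) -> dict:
    """Map surface mentions to KB-derived features (entity types, conditions)."""
    feats = {}
    for token in text.lower().split():
        entry = KB.get(token.strip(".,!?"))
        if entry:
            type_key = f"type={entry['type']}"
            feats[type_key] = feats.get(type_key, 0) + 1
            feats[f"condition={entry['related_condition']}"] = 1
    return feats

def classify(text: str) -> str:
    """Toy decision rule: the implicit entity 'diabetes' is inferred when
    KB-linked evidence accumulates, even if the word never appears."""
    feats = knowledge_features(text)
    return "about-diabetes" if feats.get("condition=diabetes") else "other"

print(classify("Started metformin and will recheck A1C next month."))
# -> "about-diabetes", although 'diabetes' is never mentioned explicitly
```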

    Fourth Conference on Artificial Intelligence for Space Applications

    Get PDF
    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Full text link
    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and a survey of its applications to networking.
    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
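
    As a concrete illustration of the kind of property that network verification automates, here is a small sketch with a hypothetical topology and forwarding tables, not anything from the survey itself: it follows forwarding entries hop by hop and reports delivery, a black hole, or a forwarding loop.

```python
# Illustrative sketch only: a toy version of the reachability checks that
# network verification tools perform. Topology, forwarding tables, and the
# property are hypothetical examples, not from the survey.

# Per-switch forwarding table: destination host -> next hop.
FORWARDING = {
    "s1": {"h2": "s2"},
    "s2": {"h2": "s3"},
    "s3": {"h2": "h2"},   # delivers directly to the host
}

def check_reachability(src_switch: str, dst_host: str, max_hops: int = 16):
    """Follow forwarding entries hop by hop; report delivery, black hole, or loop."""
    visited, node = [], src_switch
    for _ in range(max_hops):
        if node == dst_host:
            return ("DELIVERED", visited + [node])
        if node in visited:
            return ("LOOP", visited + [node])
        visited.append(node)
        next_hop = FORWARDING.get(node, {}).get(dst_host)
        if next_hop is None:
            return ("BLACK_HOLE", visited)   # no rule for this destination
        node = next_hop
    return ("TOO_MANY_HOPS", visited)

print(check_reachability("s1", "h2"))
# -> ('DELIVERED', ['s1', 's2', 's3', 'h2'])
```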

    Framework for software architecture visualization assessment.

    Get PDF
    In order to assess software architecture visualisation strategies, we qualitatively characterise them and then construct an assessment framework comprising 7 key areas and 31 features. The framework is used to evaluate and compare various strategies from multiple stakeholder perspectives. Six existing software architecture visualisation tools and a seventh research tool were evaluated; all exhibited shortcomings when assessed against the framework.
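
    A rough sketch of how such a feature-based assessment can be applied in practice follows; the areas, features, and tool scores are invented placeholders, not the paper's 7 key areas and 31 features.

```python
# Hypothetical sketch of applying a feature-based assessment framework.
# Areas, features, and scores below are invented placeholders.

FRAMEWORK = {
    "Stakeholder support": ["architect view", "developer view"],
    "Navigation": ["zooming", "filtering"],
}

# 1 = feature present, 0 = absent (toy data for two imaginary tools).
SCORES = {
    "ToolA": {"architect view": 1, "developer view": 0, "zooming": 1, "filtering": 1},
    "ToolB": {"architect view": 1, "developer view": 1, "zooming": 0, "filtering": 0},
}

def coverage(tool: str) -> dict:
    """Per-area coverage ratio for one tool."""
    return {
        area: sum(SCORES[tool].get(f, 0) for f in feats) / len(feats)
        for area, feats in FRAMEWORK.items()
    }

for tool in SCORES:
    print(tool, coverage(tool))
# ToolA {'Stakeholder support': 0.5, 'Navigation': 1.0}
# ToolB {'Stakeholder support': 1.0, 'Navigation': 0.0}
```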

    Induction, complexity, and economic methodology

    Get PDF
    This paper focuses on induction, because the supposed weaknesses of that process are the main reason for favouring falsificationism, which plays an important part in scientific methodology generally; the paper is part of a wider study of economic methodology. The standard objections to, and paradoxes of, induction are reviewed, and this leads to the conclusion that the supposed ‘problem’ or ‘riddle’ of induction is a false one. It is an artefact of two assumptions: that the classic two-valued logic (CL) is appropriate for the contexts in which induction is relevant; and that it is the touchstone of rational thought. The status accorded to CL is the result of historical and cultural factors. The material we need to reason about falls into four distinct domains; these are explored in turn, while progressively relaxing the restrictions that are essential to the valid application of CL. The restrictions include the requirement for a pre-existing, independently-guaranteed classification, into which we can fit all new cases with certainty; and non-ambiguous relationships between antecedents and consequents. Natural kinds, determined by the existence of complex entities whose characteristics cannot be unbundled and altered in a piecemeal, arbitrary fashion, play an important part in the review; so also does fuzzy logic (FL). These are used to resolve two famous paradoxes about induction (the grue and raven paradoxes); and the case for believing that conventional logic is a subset of fuzzy logic is outlined. The latter disposes of all questions of justifying induction deductively. The concept of problem structure is used as the basis for a structured concept of rationality that is appropriate to all four of the domains mentioned above. The rehabilitation of induction supports an alternative definition of science: that it is the business of developing networks of contrastive, constitutive explanations of reproducible, inter-subjective (‘objective’) data. Social and psychological obstacles ensure the progress of science is slow and convoluted; however, the relativist arguments against such a project are rejected.
    Keywords: induction; economics; methodology; complexity
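
    The claim that conventional two-valued logic is a subset of fuzzy logic can be illustrated with a short sketch using the standard min/max/complement operators (an illustration, not the paper's own formalism): restricted to the crisp values 0 and 1, the fuzzy operators reproduce the classical truth tables exactly.

```python
# Sketch of 'classical logic as a special case of fuzzy logic' using the
# standard min/max/complement operators; not the paper's own formalism.

def f_and(a: float, b: float) -> float:
    return min(a, b)

def f_or(a: float, b: float) -> float:
    return max(a, b)

def f_not(a: float) -> float:
    return 1 - a

# Graded membership: a bird can be '0.8 black' rather than black/not-black.
print(f_and(0.8, 0.6))   # 0.6 -- partial truth, no forced yes/no classification
print(f_not(0.8))        # ~0.2 (floating-point)

# Restricted to crisp values, the operators coincide with two-valued logic.
for a in (0, 1):
    for b in (0, 1):
        assert f_and(a, b) == (a and b)
        assert f_or(a, b) == (a or b)
    assert f_not(a) == (not a)
```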

    Technology assessment of advanced automation for space missions

    Get PDF
    Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous, world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.

    A unified view of data-intensive flows in business intelligence systems : a survey

    Get PDF
    Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data from a multitude of data sources in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time, operational data flows that integrate source data at runtime. Both academia and industry thus must have a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing challenges that are still to be addressed and showing how current solutions can be applied to address them.
    Peer Reviewed. Postprint (author's final draft).
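
    As a minimal illustration of the batch side of such flows, the sketch below shows a toy extract-transform-load step; the source rows, schema, and in-memory "warehouse" are hypothetical placeholders, not taken from the survey.

```python
# Illustrative sketch (not from the survey): a minimal extract-transform-load
# step of the kind data-intensive flows orchestrate. Source rows, schema, and
# the in-memory 'warehouse' are hypothetical placeholders.

from datetime import date

def extract():
    """Batch extract: rows as delivered by an operational source system."""
    return [
        {"order_id": 1, "amount": "19.90", "day": "2024-03-01"},
        {"order_id": 2, "amount": "5.00",  "day": "2024-03-01"},
    ]

def transform(rows):
    """Cleanse and conform to the warehouse schema (typed amount, date key)."""
    for r in rows:
        yield {
            "order_id": r["order_id"],
            "amount": float(r["amount"]),
            "date_key": date.fromisoformat(r["day"]),
        }

def load(rows, warehouse):
    """Append-only load into a fact 'table'; real flows would also handle
    incremental keys, late-arriving data, and near-real-time sources."""
    warehouse.setdefault("fact_orders", []).extend(rows)

dw = {}
load(transform(extract()), dw)
print(len(dw["fact_orders"]), "rows loaded")   # 2 rows loaded
```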