
    Kant's cognitive architecture

    Imagine a machine, equipped with sensors, receiving a stream of sensory information. It must somehow make sense of this stream of sensory data. But what, exactly, does this involve? We have an intuitive understanding of what is involved in “making sense” of sensory data – but can we specify precisely what is involved? Can this intuitive notion be formalized? In this thesis, we make three contributions. First, we provide a precise formalization of what it means to “make sense” of a sensory sequence. According to our definition, making sense means constructing a symbolic causal theory that explains the sensory sequence and satisfies a set of unity conditions inspired by Kant’s discussion in the first half of the Critique of Pure Reason. On our interpretation, making sense of sensory input is a type of program synthesis – but unsupervised program synthesis.
    Our second contribution is a computer implementation, the Apperception Engine, designed to satisfy these requirements for making sense of a sensory sequence. Our system is able to produce interpretable, human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the Kantian unity constraints. A causal theory produced by our system is able to predict future sensor readings, retrodict earlier readings, and impute missing readings – indeed, it is able to do all three tasks simultaneously. The engine is implemented in Answer Set Programming (ASP) and induces theories expressed in an extension of Datalog that includes causal rules and constraints. We test the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence-induction IQ tests. In each domain, we test our engine’s ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The Apperception Engine performs well in all these domains, significantly outperforming neural-net baselines. These results are significant because neural nets typically struggle to solve the binding problem (where information from different modalities must somehow be combined into different aspects of one unified object) and fail to solve occlusion tasks (in which objects are sometimes visible and sometimes obscured from view). We note in particular that in the sequence-induction IQ tasks, our system achieves human-level performance. This is notable because the Apperception Engine was not designed to solve these IQ tasks; it is not a bespoke, hand-engineered solution to this particular domain. Rather, it is a general-purpose system that attempts to make sense of any sensory sequence, and it just happens to be able to solve these IQ tasks “out of the box”.
    Our third contribution is a major extension of the engine to handle noisy and ambiguous data. While the initial implementation assumes the sensory input has already been preprocessed into ground atoms of first-order logic, our extension makes sense of raw unprocessed input – a sequence of pixel images from a video camera, for example. The resulting system is a neuro-symbolic framework for distilling interpretable theories out of streams of raw, unprocessed sensory experience.
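    The "unsupervised program synthesis" view can be illustrated with a deliberately simplified toy sketch (this is not the Apperception Engine, which induces Datalog theories via ASP): given a sequence of binary sensor rows generated by an unknown local update rule, exhaustively search the space of width-3 rule tables for a "theory" consistent with every observed transition, then use the induced rule to predict the next reading.

```python
from itertools import product

# Toy analogue of making sense of a sensory sequence: induce a
# deterministic local update rule (a width-3 neighbourhood rule table,
# as in an elementary cellular automaton) that explains an observed
# sequence of binary sensor rows, then predict the next row.

def step(row, rule):
    """Apply a neighbourhood->bit rule table to one row (boundary cells 0)."""
    padded = [0] + list(row) + [0]
    return tuple(rule[(padded[i - 1], padded[i], padded[i + 1])]
                 for i in range(1, len(padded) - 1))

def induce_rule(history):
    """Search all 2^8 rule tables for one consistent with every transition."""
    neighbourhoods = list(product([0, 1], repeat=3))
    for bits in product([0, 1], repeat=8):
        rule = dict(zip(neighbourhoods, bits))
        if all(step(a, rule) == b for a, b in zip(history, history[1:])):
            return rule
    return None  # no deterministic local theory explains the sequence

# Observed "sensory sequence": rows produced by cellular-automaton rule 110.
rule110 = {n: (110 >> (4 * n[0] + 2 * n[1] + n[2])) & 1
           for n in product([0, 1], repeat=3)}
history = [(0, 0, 0, 1, 0, 0, 0)]
for _ in range(4):
    history.append(step(history[-1], rule110))

theory = induce_rule(history)             # the induced causal "theory"
prediction = step(history[-1], theory)    # predict the next sensor reading
```

    Because the five observed rows happen to exercise all eight neighbourhoods, the induced rule is unique and the prediction is fully determined; retrodiction and imputation can be framed as the same consistency search run backwards or over rows with holes.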

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural-network learning and symbolic learning. For symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk-analysis function, i.e., failure mode and effect analysis (FMEA), and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that currently the best design of RAs is to embed a model-based RA, MORA (as software), in a neural-network-based computer system (as hardware). However, further improvements can be made through the application of Machine Learning: by implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.

    Technology assessment of advanced automation for space missions

    Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.

    A two-level structure for advanced space power system automation

    The tasks to be carried out during the three-year project period are: (1) performing extensive simulation using existing mathematical models to build a specific knowledge base of the operating characteristics of space power systems; (2) carrying out the necessary basic research on hierarchical control structures, real-time quantitative algorithms, and decision-theoretic procedures; (3) developing a two-level automation scheme for fault detection and diagnosis, maintenance and restoration scheduling, and load management; and (4) testing and demonstration. Also addressed are the outline of the proposed system structure that served as a master plan for the project, the work accomplished, concluding remarks, and ideas for future work.

    Report on the Second Workshop on Distributed AI

    On June 24, 1981, twenty-five participants from organizations around the country gathered in MIT's Endicott House for the Second Annual Workshop on Distributed AI. The three-day workshop was designed as an informal meeting, centered mainly around brief research reports presented by each group, along with an invited talk. In keeping with the spirit of the meeting, this report was prepared as a distributed document, with each speaker contributing a summary of his remarks. MIT Artificial Intelligence Laboratory

    Load Balancing in Wireless Mobile Ad Hoc Networks

    Ad hoc networks consist of a set of homogeneous nodes (computers or embedded devices) that move independently and communicate with other nodes in the topology over a wireless channel. Such networks are logically organized as a set of clusters, formed by grouping together nodes that are in close proximity to one another, either directly or through another wireless node. A Cluster Head is a node that communicates with the other nodes within its communication range. Cluster Heads form a virtual backbone and may be used to route packets for the nodes in their clusters. Nodes in an ad hoc network are presumed to have a non-deterministic mobility pattern. Different heuristics employ different policies to elect Cluster Heads, and many of these policies are biased in favor of some nodes. As a result, those nodes shoulder greater responsibility, which may deplete their energy faster due to the higher volume of communication they handle, causing them to drop out of the network. There is therefore a need for load balancing among Cluster Heads, so that all nodes have the opportunity to serve as a Cluster Head. I propose a few enhancements to existing algorithms that remove the unbalanced distribution of nodes under Cluster Heads and increase the active life of a node in the network.
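    A load-aware cluster-head election of the kind described above can be sketched as follows. This is a hypothetical illustration, not the thesis's actual algorithm: heads are chosen greedily by residual energy and degree, and a `max_members` cap prevents any head from being overloaded.

```python
# Hypothetical cluster-head election sketch (names and weights invented):
# prefer high-energy, well-connected nodes as heads, but cap the number
# of members per head so no single head carries a disproportionate load.

def elect_cluster_heads(nodes, neighbors, max_members=4):
    """nodes: {id: residual_energy}; neighbors: {id: set of neighbor ids}."""
    # Rank candidates by (residual energy, degree), highest first.
    order = sorted(nodes, key=lambda n: (nodes[n], len(neighbors[n])),
                   reverse=True)
    heads, member_of = [], {}
    for n in order:
        if n in member_of:          # already assigned to some cluster
            continue
        heads.append(n)
        member_of[n] = n            # a head belongs to its own cluster
        taken = 0
        # Attach low-energy neighbors first (they benefit most), up to cap.
        for m in sorted(neighbors[n], key=lambda m: nodes[m]):
            if m not in member_of and taken < max_members:
                member_of[m] = n
                taken += 1
    return heads, member_of

# Small six-node topology with symmetric wireless links.
nodes = {1: 0.9, 2: 0.8, 3: 0.7, 4: 0.6, 5: 0.5, 6: 0.4}
neighbors = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
heads, member_of = elect_cluster_heads(nodes, neighbors)
```

    Re-running the election periodically with updated energy values rotates the head role away from drained nodes, which is the essence of the load-balancing enhancements discussed above.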

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work on clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and a survey of its applications to networking. Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
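    The flavor of property one verifies with formal methods in networking can be shown with a toy check (switch names, tables, and topology are invented for illustration; real tools such as SDN verifiers work over symbolic packet sets rather than single traces): given per-switch forwarding tables, exhaustively follow the next-hop entries to confirm that traffic for a destination actually reaches it and never loops.

```python
# Toy forwarding-correctness check: follow per-destination next-hop
# entries from a starting switch, flagging loops and hop-limit overruns.

def trace(ftables, start, dst, max_hops=16):
    """Return (path, ok): the hops visited and whether dst was reached."""
    node, seen = start, []
    while node != dst:
        if node in seen or len(seen) > max_hops:
            return seen + [node], False   # forwarding loop or hop limit hit
        seen.append(node)
        node = ftables[node][dst]         # next hop for this destination
    return seen + [dst], True

# A correct three-switch chain delivering to host "h2"...
ftables = {"s1": {"h2": "s2"}, "s2": {"h2": "s3"}, "s3": {"h2": "h2"}}
path, reachable = trace(ftables, "s1", "h2")

# ...and a misconfigured pair of switches that forward in a cycle.
bad = {"s1": {"h2": "s2"}, "s2": {"h2": "s1"}}
loop_path, ok = trace(bad, "s1", "h2")
```

    Formal tools generalize this idea by checking such reachability and loop-freedom properties for all packets and all failure cases at once, rather than one trace at a time.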

    Report on the Workshop on Distributed AI

    On June 9-11, 22 people gathered at Endicott House for the first workshop on the newly emerging topic of Distributed AI. They came with a wide range of views on the topic, and indeed a wide range of views of what precisely the topic was. In keeping with the spirit of the workshop, this report describing it was prepared in a distributed fashion: each of the speakers contributed a summary of his comments. Sessions during the workshop included both descriptions of work done or in progress and group discussions focused on a range of topics. The report reflects this organization, with nine short articles describing research efforts and four summarizing the informal comments used as the foci for the group discussions. MIT Artificial Intelligence Laboratory