    A New Constructivist AI: From Manual Methods to Self-Constructive Systems

    The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require significantly more complex system architectures than attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way. One way to address the challenge of artificial general intelligence (AGI) is to replace a top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code – what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today's software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles – the "seeds" – from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.
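    As a rough sketch of the "seed" idea (the paper gives no code; the names Seed, Component, and grow below are invented for illustration), a self-constructive system would start from a few primitives plus a composition rule, and synthesize new components itself rather than having each one hand-crafted:

```python
# Toy illustration, not the paper's method: a "seed" that grows new
# components at runtime by composing existing ones.

import random


class Component:
    """A primitive skill the system can combine with others."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn


class Seed:
    """Minimal kernel: a few primitives plus one rule for composing new ones."""

    def __init__(self):
        self.components = [
            Component("inc", lambda x: x + 1),
            Component("dbl", lambda x: x * 2),
        ]

    def grow(self):
        # Self-construction step: synthesize a new component from two
        # existing ones, rather than having a human design it directly.
        a, b = random.sample(self.components, 2)
        composed = Component(f"{a.name}>{b.name}",
                             lambda x, a=a, b=b: b.fn(a.fn(x)))
        self.components.append(composed)
        return composed


seed = Seed()
for _ in range(3):
    c = seed.grow()
    print(c.name, c.fn(1))
```

    The hand-written part is only the kernel; everything past the initial primitives is generated by the system's own growth rule, which is the distinction the abstract draws between constructionist and constructivist approaches.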

    Simplifying the construction of domain-specific automatic programming systems: The NASA automated software development workstation project

    An overview is presented of the Automated Software Development Workstation Project, an effort to explore knowledge-based approaches to increasing software productivity. The project focuses on applying the concept of domain-specific automatic programming systems (D-SAPSs) to application domains at NASA's Johnson Space Center. A version of a D-SAPS developed in Phase 1 of the project for the domain of space station momentum management is described. How problems encountered during its implementation led researchers to concentrate on simplifying the process of building and extending such systems is discussed. Researchers propose to do this by attacking three observed bottlenecks in the D-SAPS development process through the increased automation of the acquisition of programming knowledge and the use of an object-oriented development methodology at all stages of program design. How these ideas are being implemented in the Bauhaus, a prototype workstation for D-SAPS development, is discussed.
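    A minimal sketch of the D-SAPS concept (this is not the Bauhaus; the spec format and generator below are invented for illustration): the programmer supplies a declarative, domain-level specification, and the system derives executable code from it rather than having the code written by hand.

```python
# Hypothetical domain spec and generator, for illustration only.
SPEC = {
    "task": "log_telemetry",
    "fields": ["timestamp", "momentum_x", "momentum_y", "momentum_z"],
}


def generate_program(spec):
    """Translate a domain-level spec into executable source code."""
    lines = [
        f"def {spec['task']}(record):",
        f"    keys = [{', '.join(repr(f) for f in spec['fields'])}]",
        "    return {k: record[k] for k in keys}",
    ]
    return "\n".join(lines)


source = generate_program(SPEC)
print(source)            # inspect the generated code
namespace = {}
exec(source, namespace)  # turn it into a callable
record = {"timestamp": 0, "momentum_x": 1.0,
          "momentum_y": 2.0, "momentum_z": 3.0}
print(namespace["log_telemetry"](record))
```

    The "bottlenecks" the abstract mentions then become questions about this pipeline: where the programming knowledge in generate_program comes from, and how new domain concepts are added to the spec vocabulary.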

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

        Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent. (McLuhan 1962, p.5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy.

    Advanced Knowledge Technologies (AKT) aims to address this problem, to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper will explore some of the services, technologies and methodologies that have been developed. We hope to give a sense in this paper of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure.

    The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central to the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge.

    AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so the capabilities of the services will vary. As well as providing useful KM services in their own right, AKT will aim to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together considerable expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed, along with our proposed technologies.

    Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
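    To make one of these tasks concrete, here is a hedged sketch of ontology mapping, the alignment of concept names from two independently built vocabularies. The two ontologies and the similarity threshold are invented for illustration; real SW tooling would operate on RDF/OWL structures, not plain strings.

```python
# Toy ontology alignment by lexical similarity; illustrative only.
from difflib import SequenceMatcher

ONTOLOGY_A = ["Person", "Organisation", "ResearchProject", "Publication"]
ONTOLOGY_B = ["person", "organization", "project", "paper"]


def align(a_terms, b_terms, threshold=0.7):
    """Propose candidate mappings between two concept vocabularies."""
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    mappings = []
    for a in a_terms:
        best = max(b_terms, key=lambda b: sim(a, b))
        if sim(a, best) >= threshold:
            mappings.append((a, best, round(sim(a, best), 2)))
    return mappings


for a, b, score in align(ONTOLOGY_A, ONTOLOGY_B):
    print(f"{a} <-> {b}  (similarity {score})")
```

    Note that "ResearchProject" and "project" fall below the threshold here, which is the point: lexical matching alone leaves gaps, and the conflicts of reference the abstract mentions require structural and semantic evidence as well.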

    A Product Line Systems Engineering Process for Variability Identification and Reduction

    Software Product Line Engineering has attracted attention in the last two decades due to its promising capabilities to reduce costs and time to market through reuse of requirements and components. In practice, developing system-level product lines in a large-scale company is not an easy task, as there may be thousands of variants and multiple disciplines involved. The manual reuse of legacy system models at domain engineering to build reusable system libraries and configurations of variants to derive target products can be infeasible. To tackle this challenge, a Product Line Systems Engineering process is proposed. Specifically, the process extends research on the System Orthogonal Variability Model to support hierarchical variability modeling with formal definitions; utilizes Systems Engineering concepts and legacy system models to build the hierarchy for the variability model and to identify essential relations between variants; and finally, analyzes the identified relations to reduce the number of variation points. The process, which is automated by computational algorithms, is demonstrated through an illustrative example on generalized Rolls-Royce aircraft engine control systems. To evaluate the effectiveness of the process in the reduction of variation points, it is further applied to case studies in different engineering domains at different levels of complexity. Subject to system model availability, reductions of 14% to 40% in the number of variation points are demonstrated in the case studies.

    Comment: 12 pages, 6 figures, 2 tables; submitted to the IEEE Systems Journal on 3rd June 201
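    The paper's own algorithms are not reproduced here, but the core reduction idea can be sketched under simple assumptions: if every variant of one variation point is tied by a "requires" relation to a distinct variant of another, the first point offers no independent choice and can be folded away. The variation points, variants, and relations below are invented for illustration.

```python
# Toy variation-point reduction; illustrative, not the paper's algorithm.
VARIATION_POINTS = {
    "ControlMode": ["single_channel", "dual_channel"],
    "SensorSet": ["basic_sensors", "redundant_sensors"],
    "FuelMetering": ["hydromechanical", "electronic"],
}

# requires[v] = variant that must be selected whenever v is selected
REQUIRES = {
    "single_channel": "basic_sensors",
    "dual_channel": "redundant_sensors",
}


def reducible(vp_a, vp_b, points, requires):
    """vp_b is redundant if each variant of vp_a forces a distinct vp_b variant."""
    forced = [requires.get(v) for v in points[vp_a]]
    return None not in forced and sorted(forced) == sorted(points[vp_b])


for a in VARIATION_POINTS:
    for b in VARIATION_POINTS:
        if a != b and reducible(a, b, VARIATION_POINTS, REQUIRES):
            print(f"{b} can be folded into {a}: its choice is fully implied")
```

    On this toy model, SensorSet is eliminated because its selection is always determined by ControlMode, while FuelMetering remains a genuine point of choice; the case-study reductions of 14% to 40% come from systematically finding such implied choices at scale.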

    A Framework for Evaluating Model-Driven Self-adaptive Software Systems

    In the last few years, Model Driven Development (MDD), Component-based Software Development (CBSD), and context-oriented software have become interesting alternatives for the design and construction of self-adaptive software systems. In general, the ultimate goal of these technologies is to reduce development costs and effort, while improving the modularity, flexibility, adaptability, and reliability of software systems. An analysis of these technologies shows them all to include the principle of the separation of concerns, and their further integration is a key factor in obtaining high-quality and self-adaptable software systems. Each technology identifies different concerns and deals with them separately in order to specify the design of self-adaptive applications, and, at the same time, support software with adaptability and context-awareness. This research studies the development methodologies that employ the principles of model-driven development in building self-adaptive software systems. To this aim, this article proposes an evaluation framework for analysing and evaluating the features of model-driven approaches and their ability to support software with self-adaptability and dependability in highly dynamic contextual environments. Such an evaluation framework can help software developers select a development methodology that suits their software requirements and reduces the development effort of building self-adaptive software systems. This study highlights the major drawbacks of the model-driven approaches proposed in the related works, and emphasises the need to consider the volatile aspects of self-adaptive software in the analysis, design and implementation phases of the development methodologies. In addition, we argue that the development methodologies should leave the selection of modelling languages and modelling tools to the software developers.

    Comment: model-driven architecture, COP, AOP, component composition, self-adaptive application, context-oriented software development
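    A minimal sketch of how such an evaluation framework could be operationalised, under the assumption that it reduces to weighted criteria (the criteria, weights, and scores below are invented placeholders, not the article's actual framework or data):

```python
# Hypothetical criteria-based scoring of MDD approaches; illustrative only.
CRITERIA = {  # criterion -> weight (weights sum to 1.0)
    "separation_of_concerns": 0.3,
    "context_awareness": 0.25,
    "runtime_adaptability": 0.25,
    "tool_independence": 0.2,
}

APPROACHES = {  # approach -> per-criterion scores on a 0..5 scale
    "Approach_A": {"separation_of_concerns": 4, "context_awareness": 3,
                   "runtime_adaptability": 2, "tool_independence": 5},
    "Approach_B": {"separation_of_concerns": 3, "context_awareness": 5,
                   "runtime_adaptability": 4, "tool_independence": 1},
}


def evaluate(scores, criteria):
    """Weighted sum of an approach's scores over the framework's criteria."""
    return sum(criteria[c] * scores[c] for c in criteria)


for name, scores in sorted(APPROACHES.items(),
                           key=lambda kv: -evaluate(kv[1], CRITERIA)):
    print(f"{name}: {evaluate(scores, CRITERIA):.2f} / 5")
```

    A ranking like this is only as good as its criteria; the abstract's argument that volatile, context-dependent aspects must be weighed across analysis, design, and implementation amounts to saying those criteria belong in the table.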