
    Generic object classification for autonomous robots

    One of the main problems in autonomous robot interaction is knowledge of the scene. Recognition is central to addressing this problem and to allowing robots to interact in uncontrolled environments. In this paper, we present a practical application for object fitting, normalization, and classification of triangular and circular signs. The system is deployed on Sony's Aibo robot to improve the robot's interaction behaviour. The presented methodology has been tested in simulations and on real categorization problems, such as traffic sign classification, with very promising results.
    Note: this document originally contains other material and/or software that can only be consulted at the Biblioteca de Ciència i Tecnologia
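    The abstract names a three-stage pipeline (fitting, normalization, classification) but does not spell out the normalized representation or the classifier. The sketch below is a minimal, hypothetical illustration of such a pipeline: the 32x32 canonical window, the unit-norm feature vector and the nearest-centroid rule are assumptions made for illustration, not the paper's actual method.

    import numpy as np

    CANONICAL = (32, 32)   # assumed canonical window size, not taken from the paper

    def normalize(sign_region: np.ndarray) -> np.ndarray:
        """Resample a cropped (grayscale) sign region to a canonical window and scale to unit norm."""
        ys = np.linspace(0, sign_region.shape[0] - 1, CANONICAL[0]).astype(int)
        xs = np.linspace(0, sign_region.shape[1] - 1, CANONICAL[1]).astype(int)
        patch = sign_region[np.ix_(ys, xs)].astype(float).ravel()
        return patch / (np.linalg.norm(patch) + 1e-9)

    def classify(patch: np.ndarray, centroids: dict) -> str:
        """Assign the label of the nearest class centroid, e.g. 'triangular' or 'circular'."""
        return min(centroids, key=lambda label: np.linalg.norm(patch - centroids[label]))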

    Continuations and Web Servers


    A Rational Deconstruction of Landin's SECD Machine

    Landin's SECD machine was the first abstract machine for the lambda-calculus viewed as a programming language. Both theoretically as a model of computation and practically as an idealized implementation, it has set the tone for the subsequent development of abstract machines for functional programming languages. However, and even though variants of the SECD machine have been presented, derived, and invented, the precise rationale for its architecture and modus operandi has remained elusive. In this article, we deconstruct the SECD machine into a lambda-interpreter, i.e., an evaluation function, and we reconstruct lambda-interpreters into a variety of SECD-like machines. The deconstruction and reconstructions are transformational: they are based on equational reasoning and on a combination of simple program transformations--mainly closure conversion, transformation into continuation-passing style, and defunctionalization. The evaluation function underlying the SECD machine provides a precise rationale for its architecture: it is an environment-based eval-apply evaluator with a callee-save strategy for the environment, a data stack of intermediate results, and a control delimiter. Each of the components of the SECD machine (stack, environment, control, and dump) is therefore rationalized and so are its transitions. The deconstruction and reconstruction method also applies to other abstract machines and other evaluation functions. It makes it possible to systematically extract the denotational content of an abstract machine in the form of a compositional evaluation function, and the (small-step) operational content of an evaluation function in the form of an abstract machine
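    As a concrete illustration of the transformations the abstract relies on, the sketch below writes a small environment-based evaluator for the pure lambda-calculus in continuation-passing style and then defunctionalizes its continuations into first-order records, the step that makes an explicit, SECD-like control component appear. The term and value representations are illustrative assumptions and deliberately omit the data stack and dump of the full machine.

    from dataclasses import dataclass

    # Terms of the pure lambda-calculus.
    @dataclass
    class Var: name: str
    @dataclass
    class Lam: param: str; body: object
    @dataclass
    class App: fun: object; arg: object

    # Values: closures pairing a lambda with its environment (closure conversion).
    @dataclass
    class Closure: param: str; body: object; env: dict

    # Defunctionalized continuations: one constructor per continuation shape
    # of the CPS-transformed evaluator.
    @dataclass
    class Halt: pass
    @dataclass
    class EvalArg: arg: object; env: dict; k: object   # operator value arrives next
    @dataclass
    class ApplyFun: fun: Closure; k: object            # operand value arrives next

    def eval_term(term, env, k):
        if isinstance(term, Var):
            return apply_cont(k, env[term.name])
        if isinstance(term, Lam):
            return apply_cont(k, Closure(term.param, term.body, env))
        if isinstance(term, App):
            return eval_term(term.fun, env, EvalArg(term.arg, env, k))

    def apply_cont(k, value):
        if isinstance(k, Halt):
            return value
        if isinstance(k, EvalArg):
            return eval_term(k.arg, k.env, ApplyFun(value, k.k))
        if isinstance(k, ApplyFun):
            c = k.fun
            return eval_term(c.body, {**c.env, c.param: value}, k.k)

    # ((lambda x. x) (lambda y. y)) evaluates to the identity closure.
    result = eval_term(App(Lam("x", Var("x")), Lam("y", Var("y"))), {}, Halt())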

    The History of Computer Science and Technology. 50 more references...

    A project conceived some time ago, and now finally completed. One hundred references to create a thousand more researchers

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Programming languages seldom find proper anchorage in the philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming-language topics. The logic programming paradigm and Prolog are thus the most adequate paradigm and programming language for working on this subject, combining natural language processing and linguistics, logic programming, and construction methodology for both algorithms and procedures, within an overall philosophizing, declarative outlook. Not only this, but the dimension of the Fifth Generation Computer Systems project related to strong AI, wherein Prolog took a major role, and its historical frame within the crucial dialectic between procedural and declarative paradigms and between structuralist and empiricist biases, serve, in exemplary form, to treat the philosophy of logic, language and science in the contemporary age as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes, of which we exemplify some here: the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, up to the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, together with the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon and others on computability, studied thoroughly and in parallel, permits us to interpret the evolving realm of programming languages. The line from the lambda-calculus to the Algol family, the declarative and procedural split embodied by the C language and Prolog, and the ensuing branching, explosion and later delimitation of programming languages are thereupon inspected so as to relate them to the syntax, semantics and philosophical élan of logic programming and Prolog

    Toward an Expert System for Terrain Analysis.

    Terrain analysis is the systematic study of image patterns relating to the origin and composition of distinct terrain units called landforms. It takes into account and provides information about physical site factors that civil engineers use to evaluate the suitability of a site for a terrain-related engineering application. Terrain analysis is a time-consuming, labor-intensive process and requires a significant degree of expertise. In this dissertation, an expert system paradigm has been adopted for developing a computational approach to terrain analysis problem solving. A methodology was developed for the representation and management of uncertain terrain knowledge. The vagueness inherent in the descriptions of terrain analysis terms was represented using fuzzy models. The Dempster-Shafer theory of evidence was adopted to establish hypotheses about the type of terrain based on observed evidence. A goal-directed, backward form of reasoning was employed to evaluate the suitability of a site for a terrain-related engineering application. The reasoning strategy was formalized in production rules, and the fuzzy models of terrain terms were formalized in frames. Procedural computations were formalized in LISP code. The methodology was implemented in the Terrain Analysis eXpert (TAX) system. TAX was developed using the expert system shell KEE (Knowledge Engineering Environment) and the image processing package ELAS (Earth Resources Laboratory Applications Software). TAX was tested with a real data set consisting of a digitized color infrared photograph and digital elevation data. The conclusions arrived at by TAX compared favorably to those reached by an expert who analyzed the same site using traditional photointerpretation techniques
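    The evidence-combination step the abstract refers to is Dempster's rule. The sketch below combines two hypothetical bodies of evidence about a site over a tiny frame of landforms; the frame, the mass values and the landform names are illustrative assumptions, not figures from the dissertation (whose procedural computations are written in LISP within KEE rather than Python).

    from itertools import product

    def combine(m1: dict, m2: dict) -> dict:
        """Dempster's rule: intersect focal elements, multiply masses, renormalize."""
        combined, conflict = {}, 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y          # mass assigned to contradictory pairs
        return {h: m / (1.0 - conflict) for h, m in combined.items()}

    # Two hypothetical sources of evidence about a site's landform.
    frame = frozenset({"floodplain", "terrace"})
    m_photo = {frozenset({"floodplain"}): 0.6, frame: 0.4}                                # image-pattern evidence
    m_elev = {frozenset({"floodplain"}): 0.5, frozenset({"terrace"}): 0.2, frame: 0.3}    # elevation evidence
    print(combine(m_photo, m_elev))   # belief concentrates on 'floodplain'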
