
    Multiple graph matching and applications

    In pattern recognition applications, attributed graphs are, to a great extent, appropriate and advantageous. Usually, the vertices of a graph represent local parts of an object while the edges represent relations between these local parts. However, these advantages come with a severe drawback: the distance between two graphs cannot be optimally computed in polynomial time. Given this characteristic, the use of graph prototypes becomes ubiquitous. Graph prototypes are widely applicable, the most common applications being clustering, classification, object recognition, object characterization and graph databases, to name a few. Across all these applications, however, the objective of a graph prototype is the same: the representation of a set of graphs. To synthesize a prototype, all elements of the training set must be mutually labeled. This mutual labeling consists in identifying which nodes of which graphs represent the same information in the training set. Once this mutual labeling is done, the local attributes can be combined to create a graph prototype. We call this initial labeling a common labeling. Up to now, state-of-the-art algorithms to compute a common labeling have lacked either performance or a theoretical basis. In this thesis, we formally describe the common labeling problem and give a clear taxonomy of the types of algorithms. Six new algorithms that rely on different techniques are described to compute a suboptimal solution to the common labeling problem. The performance of the proposed algorithms is evaluated on an artificial dataset and several real datasets. In addition, the algorithms have been evaluated on several real applications, including graph databases and group-wise image registration. In most of the tests and applications evaluated, the presented algorithms show a clear improvement over the state of the art.
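
    As a concrete illustration of the prototype-building pipeline the abstract describes, the following sketch aligns the nodes of several small attributed graphs with a simple greedy nearest-attribute matching and then averages the matched attributes into a prototype. This is a minimal sketch only: the greedy matching is a naive stand-in, not one of the six algorithms proposed in the thesis, and all names and data are invented.

    # Hypothetical sketch: build a graph prototype from a common labeling.
    # Graphs are dicts mapping node id -> attribute vector; edges are omitted
    # for brevity. Assumes all graphs have the same number of nodes.
    import numpy as np

    def greedy_labeling(reference, graph):
        """Map each node of `graph` to the closest unassigned reference node."""
        mapping, used = {}, set()
        for node, attr in graph.items():
            best, best_d = None, float("inf")
            for ref_node, ref_attr in reference.items():
                if ref_node in used:
                    continue
                d = np.linalg.norm(np.asarray(attr) - np.asarray(ref_attr))
                if d < best_d:
                    best, best_d = ref_node, d
            mapping[node] = best
            used.add(best)
        return mapping

    def build_prototype(graphs):
        """Average the attributes of commonly labeled nodes across the set."""
        reference = graphs[0]
        sums = {n: np.asarray(a, dtype=float) for n, a in reference.items()}
        counts = {n: 1 for n in reference}
        for g in graphs[1:]:
            for node, ref_node in greedy_labeling(reference, g).items():
                sums[ref_node] += np.asarray(g[node], dtype=float)
                counts[ref_node] += 1
        return {n: sums[n] / counts[n] for n in reference}

    # Example: three noisy observations of the same two-part object.
    graphs = [
        {"a": [1.0, 0.0], "b": [0.0, 1.0]},
        {"x": [1.1, 0.1], "y": [0.1, 0.9]},
        {"p": [0.9, -0.1], "q": [-0.1, 1.1]},
    ]
    print(build_prototype(graphs))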

    Spelling correction in the NLP system 'LOLITA': dictionary organisation and search algorithms

    This thesis describes the design and implementation of a spelling correction system and associated dictionaries for the Natural Language Processing system 'LOLITA'. The dictionary storage is based on a trie (M-ary tree) data structure. The design of the dictionary is described, as is the way in which the data structure is implemented. The spelling correction system makes use of the trie structure to limit repetition and 'garden path' searching. The spelling correction algorithms used are a variation on the 'reverse minimum edit-distance' technique, modified to place more emphasis on generating corrections in order of likelihood. The system will correct up to two simple errors (i.e. insertion, omission, substitution or transposition of characters) per word. The individual algorithms are presented in turn, and their combination into a unified strategy to correct misspellings is demonstrated. The system was implemented in the programming language Haskell: a pure functional, class-based language with non-strict semantics and polymorphic type checking. The use of several features of this language, in particular lazy evaluation, and their corresponding advantages over more traditional languages are described. The dictionaries and spelling correction facilities are in use in the LOLITA system. Issues pertaining to 'real word' error correction, arising from the system's use in an NLP context, are also discussed.
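
    To make the approach concrete, here is a minimal sketch of a trie-backed dictionary combined with candidate generation over the four simple error types (insertion, omission, substitution, transposition). It is written in Python for brevity, whereas the actual LOLITA system is in Haskell; it handles only a single error, does not rank candidates by likelihood, and does not walk the trie during generation as a full 'reverse minimum edit-distance' implementation would. All words and names are invented.

    # Hypothetical sketch: trie dictionary plus single-error correction.
    class TrieNode:
        def __init__(self):
            self.children = {}
            self.is_word = False

    class Trie:
        def __init__(self, words=()):
            self.root = TrieNode()
            for w in words:
                self.insert(w)

        def insert(self, word):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.is_word = True

        def contains(self, word):
            node = self.root
            for ch in word:
                node = node.children.get(ch)
                if node is None:
                    return False
            return node.is_word

    def single_edit_candidates(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
        """Generate all strings one simple error away from `word`."""
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        omissions = [l + r[1:] for l, r in splits if r]
        transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
        substitutions = [l + c + r[1:] for l, r in splits if r for c in alphabet]
        insertions = [l + c + r for l, r in splits for c in alphabet]
        return set(omissions + transposes + substitutions + insertions)

    dictionary = Trie(["spell", "spelt", "smell", "shell"])
    misspelt = "spedll"
    print(sorted(c for c in single_edit_candidates(misspelt)
                 if dictionary.contains(c)))  # -> ['spell']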

    Hybrid approaches based on computational intelligence and semantic web for distributed situation and context awareness

    This research work focuses on Situation Awareness and Context Awareness. Specifically, Situation Awareness involves being aware of what is happening in the vicinity in order to understand how information, events, and one's own actions will impact goals and objectives, both immediately and in the near future. Situation Awareness is therefore especially important in application domains where the information flow can be quite high and poor decision making may lead to serious consequences. Context Awareness, on the other hand, is considered a process that supports user applications in adapting interfaces, tailoring the set of application-relevant data, increasing the precision of information retrieval, discovering services, making user interaction implicit, or building smart environments. Despite being slightly different, Situation and Context Awareness involve common problems: the lack of support for the acquisition and aggregation of dynamic environmental information from the field (i.e. sensors, cameras, etc.); the lack of formal approaches to knowledge representation (i.e. contexts, concepts, relations, situations, etc.) and processing (reasoning, classification, retrieval, discovery, etc.); and the lack of automated, distributed systems with considerable computing power to support reasoning over the large quantities of knowledge extracted from sensor data. The thesis therefore investigates new approaches to distributed Context and Situation Awareness and applies them to related research objectives such as knowledge representation, semantic reasoning, pattern recognition and information retrieval. The work starts from a study and analysis of the state of the art in terms of techniques, technologies, tools and systems supporting Context/Situation Awareness. The main aim is to develop a new contribution to this field by integrating techniques from the Semantic Web, Soft Computing and Computational Intelligence. From an architectural point of view, several frameworks are defined according to the multi-agent paradigm. Furthermore, preliminary experimental results have been obtained in application domains such as Airport Security, Traffic Management, Smart Grids and Healthcare. Finally, future challenges lie in the following directions: Semantic Modeling of Fuzzy Control, Temporal Issues, Automatic Ontology Elicitation, Extension to other Application Domains, and More Experiments.
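
    As a small illustration of the kind of Semantic Web knowledge representation the thesis integrates, the sketch below stores sensor-derived context facts as RDF triples and retrieves high-severity events with a SPARQL query. It assumes the third-party rdflib library; the namespace, property names, and severity threshold are all invented for the example and do not come from the thesis.

    # Hypothetical sketch: context facts as RDF triples, queried with SPARQL.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/context#")
    g = Graph()
    g.bind("ex", EX)

    # Facts as they might be extracted from field sensors.
    g.add((EX.camera3, EX.locatedIn, EX.terminal2))
    g.add((EX.camera3, EX.reports, EX.unattendedBag))
    g.add((EX.unattendedBag, EX.severity, Literal(0.8)))

    # Retrieve high-severity events and where they were observed.
    results = g.query("""
        PREFIX ex: <http://example.org/context#>
        SELECT ?sensor ?place ?event WHERE {
            ?sensor ex:locatedIn ?place ;
                    ex:reports ?event .
            ?event ex:severity ?s .
            FILTER(?s > 0.5)
        }
    """)
    for sensor, place, event in results:
        print(sensor, place, event)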

    The Next Generation Space Telescope

    In Space Science in the Twenty-First Century, the Space Science Board of the National Research Council identified high-resolution interferometry and high-throughput instruments as the imperative new initiatives for NASA in astronomy for the two decades spanning 1995 to 2015. In the optical range, the study recommended an 8 to 16-meter space telescope, destined to be the successor of the Hubble Space Telescope (HST) and to complement the ground-based 8 to 10-meter-class telescopes presently under construction. It might seem too early to start planning for a successor to HST. In fact, we are late. The lead time for such major missions is typically 25 years, and HST has been in the making even longer, with its inception dating back to the early 1960s. The maturity of space technology and a more substantial technological base may lead to a shorter time scale for the development of the Next Generation Space Telescope (NGST). Optimistically, one could therefore anticipate that NGST could be flown as early as 2010. On the other hand, the planned lifetime of HST is 15 years, so even under the best circumstances there will be a five-year gap between the end of HST and the start of NGST. The purpose of this first workshop dedicated to NGST was to survey its scientific potential and technical challenges. The three-day meeting brought together 130 astronomers and engineers from government, industry and universities. Participants explored the technologies needed for building and operating the observatory, reviewed the current status and future prospects for astronomical instrumentation, and discussed the launch and space support capabilities likely to be available in the next decade. To focus discussion, the invited speakers were asked to base their presentations on two nominal concepts: a 10-meter telescope in high earth orbit, and a 16-meter telescope on the moon. The workshop closed with a panel discussion focused mainly on the scientific case, siting, and the programmatic approach needed to bring NGST into being. The essential points of this panel discussion have been incorporated into a series of recommendations that represent the conclusions of the workshop. Speakers were asked to provide manuscripts of their presentations; those received are reproduced here with only minor editorial changes. The few missing papers have been replaced by the presentation viewgraphs. The discussion that follows each speaker's paper was derived from the question and answer sheets or, if unavailable, from the tapes of the meeting. In the latter case, the editors have made every effort to represent the discussion faithfully.

    Proceedings of the 2nd Int'l Workshop on Enterprise Modelling and Information Systems Architectures - Concepts and Applications (EMISA'07)

    The 2nd International Workshop on "Enterprise Modelling and Information Systems Architectures – Concepts and Applications" (EMISA'07) addresses all aspects relevant to enterprise modelling as well as to designing enterprise architectures in general and information systems architectures in particular. It was jointly organized by the GI Special Interest Group on Modelling Business Information Systems (GI-SIG MoBIS) and the GI Special Interest Group on Design Methods for Information Systems (GI-SIG EMISA). These proceedings feature a selection of 15 high-quality contributions from academia and practice on enterprise architecture models, business process management, information systems engineering, and other important issues in enterprise modelling and information systems architectures.