
    Deliverable D4.1 Specification of user profiling and contextualisation

    This deliverable presents a comprehensive review of past work in capturing and interpreting user preferences and context, together with an overview of relevant digital-media-specific techniques. The aim is to provide insights and ideas for innovative context-aware user preference learning and to justify the user modelling strategies considered within LinkedTV’s WP4. Based on this review and a study of the specific technical and conceptual requirements of LinkedTV, a prototypical design for profiling and contextualising user needs in a linked media environment is specified.
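
    As a rough illustration of the kind of context-aware preference learning discussed above, the following Python sketch keeps per-topic preference weights separately for each viewing context; the topic names, context labels, and learning rate are hypothetical and are not taken from the deliverable.

```python
from collections import defaultdict

class ContextualProfile:
    """Minimal sketch of a context-aware user preference model.

    Preferences are kept separately per context (e.g. "morning", "evening"),
    so the same user can exhibit different interests in different situations.
    """

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        # context label -> topic -> preference weight in [0, 1]
        self.weights = defaultdict(lambda: defaultdict(float))

    def update(self, context, topic, feedback):
        """Move the weight for (context, topic) towards the observed feedback.

        feedback is 1.0 for positive signals (watched, bookmarked) and
        0.0 for negative ones (skipped).
        """
        w = self.weights[context][topic]
        self.weights[context][topic] = w + self.learning_rate * (feedback - w)

    def score(self, context, topic):
        """Return the current preference estimate for a topic in a context."""
        return self.weights[context][topic]


profile = ContextualProfile()
profile.update("evening", "documentary", 1.0)   # user watched a documentary
profile.update("morning", "documentary", 0.0)   # but skipped one in the morning
print(profile.score("evening", "documentary"))  # higher than the morning score
print(profile.score("morning", "documentary"))
```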

    Interoperability of Enterprise Software and Applications

    A framework for context-aware sensor fusion

    Sensor fusion is a mature but very active research field within the broader discipline of information fusion. It studies how to combine data coming from different sensors so that the resulting information is better in some sense, for example more complete, accurate, or stable, than any of the original sources used individually. Context is defined as everything that constrains or affects the process of solving a problem without being part of the problem or the solution itself. In recent years, the scientific community has shown remarkable interest in exploiting this context information to build smarter systems that make better use of the available information. Traditional sensor fusion systems are based on fixed processing schemes over a predefined set of sensors, where both the employed algorithms and the domain are assumed to remain unchanged over time. Nowadays, affordable mobile and embedded systems offer high sensory, computational, and communication capabilities, making them a good base for building sensor fusion applications. This represents an opportunity to explore fusion systems that are larger and more complex, but it also poses the challenge of offering optimal performance under changing and unexpected circumstances. This thesis proposes a framework supporting the creation of sensor fusion systems with self-adaptive capabilities, in which context information plays a crucial role; these two aspects had not previously been integrated into a common approach to the sensor fusion problem. The proposal includes a preliminary theoretical analysis of both aspects of the problem, the design of a generic architecture capable of hosting any type of centralized sensor fusion application, and a description of the process to follow when applying the architecture to a sensor fusion problem. The experimental section shows, step by step, how to apply the proposal to create a context-aware sensor fusion system with self-adaptive capabilities. The process is illustrated for two different domains: a maritime/coastal surveillance application and ground vehicle navigation in urban environments. The results demonstrate the viability and validity of the implemented prototypes, as well as the benefit of including context information to enhance sensor fusion processes. (International doctoral mention. Official Doctoral Programme in Computer Science and Technology. Committee: Chair: Javier Bajo Pérez; Secretary: Antonio Berlanga de Jesús; Member: Lauro Snidar)
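
    To make the idea of context-driven adaptation more concrete, here is a minimal Python sketch (not taken from the thesis) in which the fusion weights assigned to two position sensors are adjusted according to a context variable; the sensor names, context labels, and noise figures are invented for illustration.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of scalar estimates.

    estimates is a list of (value, variance) pairs; the result is the
    classical minimum-variance linear combination of the inputs.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total


def contextual_variance(sensor, context):
    """Adapt each sensor's assumed noise level to the current context.

    The numbers are purely illustrative: GPS degrades in an 'urban canyon'
    context, while odometry is unaffected by it.
    """
    base = {"gps": 4.0, "odometry": 9.0}[sensor]
    if sensor == "gps" and context == "urban_canyon":
        return base * 10.0   # multipath makes GPS far less trustworthy
    return base


for context in ("open_sky", "urban_canyon"):
    readings = [("gps", 12.3), ("odometry", 11.7)]
    estimates = [(value, contextual_variance(sensor, context))
                 for sensor, value in readings]
    fused_value, fused_var = fuse(estimates)
    print(f"{context}: position={fused_value:.2f}, variance={fused_var:.2f}")
```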

    Can human association norm evaluate latent semantic analysis?

    This paper presents a comparison of word association norms created in a psycholinguistic experiment with association lists generated by algorithms operating on text corpora. We compare lists generated by the Church and Hanks algorithm with lists generated by the LSA algorithm, and present an argument on how well these automatically generated lists reflect real semantic relations.
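
    For context, the Church and Hanks measure referred to above scores word associations by pointwise mutual information between co-occurring words. The Python sketch below is an illustrative reconstruction, not the authors' code: it estimates PMI from sentence-level co-occurrence counts over a toy corpus, whereas the original work uses word windows over large corpora.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_associations(sentences, min_count=1):
    """Pointwise mutual information over word pairs co-occurring in a sentence.

    PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) ), estimated from counts.
    Sentences are given as lists of tokens; a toy stand-in for a corpus.
    """
    word_counts = Counter()
    pair_counts = Counter()
    for tokens in sentences:
        word_counts.update(set(tokens))
        pair_counts.update(frozenset(p) for p in combinations(set(tokens), 2))

    n = len(sentences)
    scores = {}
    for pair, c_xy in pair_counts.items():
        if c_xy < min_count or len(pair) < 2:
            continue
        x, y = tuple(pair)
        p_xy = c_xy / n
        p_x, p_y = word_counts[x] / n, word_counts[y] / n
        scores[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return scores

corpus = [["doctor", "nurse", "hospital"],
          ["doctor", "hospital"],
          ["nurse", "patient"],
          ["car", "road"]]
for pair, score in sorted(pmi_associations(corpus).items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```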

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support, and the challenges arising from the transformation.

    Acta Polytechnica Hungarica 2015

    Applying Wikipedia to Interactive Information Retrieval

    There are many opportunities to improve the interactivity of information retrieval systems beyond the ubiquitous search box. One idea is to use knowledge bases (e.g. controlled vocabularies, classification schemes, thesauri and ontologies) to organize, describe and navigate the information space. These resources are popular in libraries and specialist collections, but have proven too expensive and narrow to be applied to everyday web-scale search. Wikipedia has the potential to bring structured knowledge into more widespread use. This online, collaboratively generated encyclopaedia is one of the largest and most consulted reference works in existence. It is broader, deeper and more agile than the knowledge bases put forward to assist retrieval in the past. Rendering this resource machine-readable is a challenging task that has captured the interest of many researchers. Many see it as a key step required to break the knowledge acquisition bottleneck that crippled previous efforts. This thesis claims that the roadblock can be sidestepped: Wikipedia can be applied effectively to open-domain information retrieval with minimal natural language processing or information extraction. The key is to focus on gathering and applying human-readable rather than machine-readable knowledge. To demonstrate this claim, the thesis tackles three separate problems: extracting knowledge from Wikipedia; connecting it to textual documents; and applying it to the retrieval process. First, we demonstrate that a large thesaurus-like structure can be obtained directly from Wikipedia, and that accurate measures of semantic relatedness can be efficiently mined from it. Second, we show that Wikipedia provides the necessary features and training data for existing data mining techniques to accurately detect and disambiguate topics when they are mentioned in plain text. Third, we provide two systems and user studies that demonstrate the utility of the Wikipedia-derived knowledge base for interactive information retrieval.
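
    One of the ideas summarised above, mining semantic relatedness from Wikipedia, can be illustrated with the well-known link-overlap approach (the Wikipedia Link-based Measure of Milne and Witten). The Python sketch below is a simplified reconstruction with invented inlink sets, not code from the thesis.

```python
import math

def link_relatedness(links_a, links_b, total_articles):
    """Wikipedia Link-based Measure: relatedness from shared inlinks.

    links_a / links_b are sets of articles linking to concepts A and B;
    the formula mirrors the normalised Google distance, mapped to [0, 1].
    """
    shared = links_a & links_b
    if not shared:
        return 0.0
    distance = ((math.log(max(len(links_a), len(links_b))) - math.log(len(shared)))
                / (math.log(total_articles) - math.log(min(len(links_a), len(links_b)))))
    return max(0.0, 1.0 - distance)

# Invented inlink sets standing in for real Wikipedia link data.
inlinks = {
    "Jaguar (animal)": {"Felidae", "Big cat", "Amazon rainforest", "Predator"},
    "Leopard":         {"Felidae", "Big cat", "Africa", "Predator"},
    "Jaguar Cars":     {"Automotive industry", "Coventry", "Luxury vehicle"},
}
N = 6_000_000  # rough order of magnitude of Wikipedia's article count
print(link_relatedness(inlinks["Jaguar (animal)"], inlinks["Leopard"], N))      # high
print(link_relatedness(inlinks["Jaguar (animal)"], inlinks["Jaguar Cars"], N))  # zero
```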

    EG-ICE 2021 Workshop on Intelligent Computing in Engineering

    The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, cope with approximate models, provide effective engineer-computer interaction, search multi-dimensional solution spaces, accommodate uncertainty, include specialist domain knowledge, perform sensor-data interpretation and deal with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.