53 research outputs found

    Improving personalized elderly care: an approach using cognitive agents to better assist elderly people

    Thesis by compendium of publications. Population ageing is an increasingly present global reality, and its consequences have a growing impact on the proper functioning and structure of society. In this context, we speak of consequences for economic growth, lifestyles (and retirement), family relationships, the government resources available to the oldest age group and, inevitably, the prevalence of chronic diseases. Against this background arises the need to develop and promote effective strategies for accompanying, preventing and stimulating active and healthy ageing, so that older people continue to play a relevant role in society instead of being subjected to isolation and the easy deterioration of their physical, cognitive, emotional and social capacities. It therefore makes sense to take advantage of the technological developments of recent years, particularly the advances in mobile devices, artificial intelligence and monitoring systems, and to create solutions able to provide daily support by collecting data and health-status indicators and, in response, offering personalized actions that motivate the adoption of better health habits and provide the means to achieve this active and healthy ageing. The challenge is to motivate this population to reconcile their daily lives with an interest in, and willingness to use, applications and systems that provide such personalized support.
Some of the approaches recently explored in the literature with this goal, and which have achieved promising results, rely on gamification techniques that encourage the completion of health challenges (as if the person were playing a game) and on personalized interactions with objects (whether physical, such as robots, or virtual, such as avatars) capable of providing more personal feedback, thus creating a closer connection between the two parties. The work presented here combines these ideas into an intelligent approach for promoting the well-being of the elderly population through a personalized healthcare system. The system incorporates several gamification techniques to promote better habits and behaviours, together with a cognitive virtual assistant able to understand the user's needs and interests, enabling personalized feedback and interaction that help and motivate the user to meet the challenges and goals identified. The proposed approach was validated in a study with 12 elderly users, achieving significant results in terms of usability, acceptance and health effects. Specifically, the results support the importance and positive effect of combining gamification techniques with interaction with a cognitive virtual assistant that conveys the user's health progress, since significant improvements in health outcomes were achieved after the intervention. In addition, the usability results obtained from a usability questionnaire confirmed good adherence to the presented approach. These results validate the research hypothesis studied in this dissertation

    Artificial Intelligence in the development of modern infrastructures

    Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform tasks as humans do. Most of the AI examples you hear about today, from computers playing chess to self-driving cars, rely heavily on deep learning and natural language processing

    Prediction of friction degradation in highways with linear mixed models

    The aim of this study is to develop a linear mixed model describing the degradation of friction on flexible road pavements, for inclusion in pavement management systems. It also aims to show that, at the network level, factors such as temperature, rainfall, hypsometry, type of layer, and geometric alignment features may influence the degradation of friction over time. A dataset with 7204 sections from six districts of Portugal was made available by the Ascendi Concession highway network. Linear mixed models with random effects in the intercept were developed for two-level and three-level datasets involving time, section, and district. While the three-level models are region-specific, the two-level models can be adapted to other areas. For both levels, two approaches were taken: one integrating into the model only the variables inherent to traffic and climate conditions, and the other also including the factors intrinsic to the highway characteristics. The prediction accuracy of the model improved when the variables hypsometry, geometrical features, and type of layer were considered. Therefore, accurate predictions of friction evolution over time are available to help the network manager optimize the overall level of road safety. This research was funded by FCT (Fundação para a Ciência e Tecnologia, Foundation for Science and Technology), Grants No. UIDB/04029/2020 and UIDB/00319/2020
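
The two-level random-intercept structure can be illustrated with a small sketch. This is a simplified method-of-moments approximation on made-up section data, not the actual mixed-model (e.g., REML) fit used in the study; `fit_random_intercept` and the variable names are hypothetical.

```python
# Sketch of a two-level random-intercept model (hypothetical data):
#   friction_it = beta0 + beta1 * age_it + u_i + e_it,
# where u_i is a per-section random intercept. The fixed slope is
# estimated by within-group (de-meaned) OLS, and the random intercepts
# by group-mean residuals: a rough stand-in for a proper REML fit.
from collections import defaultdict

def fit_random_intercept(data):
    """data: list of (section_id, age_in_years, friction) tuples."""
    groups = defaultdict(list)
    for sec, age, y in data:
        groups[sec].append((age, y))

    # Within-group de-meaning removes the section intercepts, so the
    # pooled slope estimate is not biased by section-level differences.
    xs, ys = [], []
    for obs in groups.values():
        ma = sum(a for a, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for a, y in obs:
            xs.append(a - ma)
            ys.append(y - my)
    beta1 = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

    # Overall intercept, then per-section random effects as mean residuals.
    all_obs = [(a, y) for obs in groups.values() for a, y in obs]
    beta0 = (sum(y for _, y in all_obs)
             - beta1 * sum(a for a, _ in all_obs)) / len(all_obs)
    u = {sec: sum(y - beta0 - beta1 * a for a, y in obs) / len(obs)
         for sec, obs in groups.items()}
    return beta0, beta1, u
```

With noiseless synthetic data (two sections sharing slope -0.01 but with different intercepts), the slope, overall intercept, and section effects are recovered exactly.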

    A step towards a reinforcement learning de novo genome assembler

    The use of reinforcement learning has proven very promising for solving complex tasks without human supervision during the learning process. However, its successful applications are predominantly focused on fictional and entertainment problems, such as games. This work therefore aims to shed light on the application of reinforcement learning to a relevant real-world problem: genome assembly. By expanding the only approach found in the literature that addresses this problem, we carefully explored the aspects of intelligent agent learning, performed by the Q-learning algorithm, to understand its suitability for scenarios whose characteristics are closer to those faced by real genome projects. The improvements proposed here include changing the previously proposed reward system and adding state-space exploration optimization strategies based on dynamic pruning and mutual collaboration with evolutionary computing. These investigations were carried out on 23 new environments with larger inputs than those used previously. All these environments are freely available on the internet so that the scientific community can build on this research. The results suggest consistent performance progress with the proposed improvements; however, they also demonstrate their limitations, especially those related to the high dimensionality of the state and action spaces. Finally, we outline paths for tackling genome assembly efficiently in real scenarios, considering recent successful reinforcement learning applications, including deep reinforcement learning, from other domains dealing with high-dimensional inputs
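
As a rough illustration of the tabular Q-learning at the core of this line of work, the sketch below trains an agent on a toy one-dimensional chain, a deliberately tiny stand-in for the assembly environments, whose states, actions, and reward system are far richer.

```python
# Minimal tabular Q-learning sketch on a toy chain environment:
# states 0..n-1, action 1 moves right (towards the terminal "assembled"
# state), action 0 moves left. Reward 1 only on reaching the goal.
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.3):
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection.
            a = random.randrange(2) if random.random() < eps else \
                (0 if Q[s][0] > Q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0
            # Standard Q-learning update rule.
            best_next = 0.0 if s2 == goal else max(Q[s2])
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy moves right in every state, and the learned values decay geometrically (by the factor gamma) with distance from the goal, which is what the pruning and collaboration strategies above attempt to scale to much larger state spaces.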

    Copyright Policies for Scientific Publications in Institutional Repositories: The Case of INESC TEC

    The progressive transformation of scientific practices, driven by the development of new Information and Communication Technologies (ICT), has made it possible to increase access to information, gradually moving towards an opening of the research cycle. In the long term, this opening can resolve an adversity long faced by researchers: the existence of barriers, whether geographical or financial, that limit the conditions of access. Although scientific production is dominated mostly by large commercial publishers, and is therefore subject to the rules they impose, the Open Access movement, whose first public declaration, the Budapest Declaration (BOAI), dates from 2002, proposes significant changes that benefit both authors and readers. The movement has been gaining importance in Portugal since 2003, with the creation of the first institutional repository at the national level. Institutional repositories emerged as a tool for disseminating an institution's scientific production, with the aim of opening up research results both before publication and peer review (preprint) and after (postprint), and consequently increasing the visibility of the work carried out by a researcher and the respective institution. The study presented here, based on an analysis of the copyright policies of INESC TEC's most relevant scientific publications, showed not only that publishers increasingly adopt policies that allow the self-archiving of publications in institutional repositories, but also that considerable awareness-raising work remains to be done, for researchers, for the institution, and for society as a whole.
The production of a set of recommendations, including the implementation of an institutional policy that encourages the self-archiving, in the repository, of publications produced within the institution, serves as a starting point for a greater appreciation of INESC TEC's scientific production

    Rapport: a fact-based question answering system for Portuguese

    Question answering is one of the longest-standing problems in natural language processing. Although natural language interfaces for computer systems can be considered more common these days, the same still does not apply to accessing specific textual information. Any full-text search engine can easily retrieve documents containing user-specified or closely related terms; however, it is typically unable to answer user questions with small passages or short answers. The problem with question answering is that text is hard to process, due to its syntactic structure and, to a greater degree, to its semantic content. At the sentence level, although the syntactic aspects of natural language follow well-known rules, the size and complexity of a sentence may make it difficult to analyze its structure. Furthermore, semantic aspects remain arduous to address, with text ambiguity being one of the hardest problems to handle. There is also the need to correctly process the question in order to determine its target, and then to select and process the answers found in a text. Additionally, the selected text that may yield the answer to a given question must be further processed in order to present just a passage instead of the full text. These issues also take longer to address in languages other than English, such as Portuguese, which have far fewer people working on them. This work focuses on question answering for Portuguese. In other words, our interest is in presenting short answers, passages, and possibly full sentences, but not whole documents, in response to questions formulated in natural language. For that purpose, we developed a system, RAPPORT, built upon open information extraction techniques for extracting triples, so-called facts, characterizing information in text files, and then storing and using them to answer user queries posed in natural language.
These facts, in the form of subject, predicate and object, alongside other metadata, constitute the basis of the answers presented by the system. Facts work both by storing short, direct information found in a text, typically entity-related information, and by containing in themselves the answers to questions, already in the form of small passages. As for the results, although there is room for improvement, they are tangible proof of the adequacy of our approach and of its different modules for storing information and retrieving answers in question answering systems. In the process, in addition to contributing a new approach to question answering for Portuguese and validating the application of open information extraction to question answering, we developed a set of tools that has been used in other natural language processing work, such as the lemmatizer LEMPORT, which was built from scratch and achieves high accuracy. Many of these tools result from improving those found in the Apache OpenNLP toolkit, by pre-processing their input, post-processing their output, or both, and by training models for use in those tools or in others, such as MaltParser. Other tools include interfaces to resources containing, for example, synonyms, hypernyms, and hyponyms, and rule-based lists of, for instance, relations between verbs and agents
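
The fact-triple idea can be sketched in a few lines. The function names (`store_fact`, `answer`) and the exact-match lookup strategy below are illustrative inventions, far simpler than RAPPORT's actual extraction and retrieval pipeline.

```python
# Hypothetical sketch: index (subject, predicate, object) triples and
# answer a question by matching its parsed target entity and predicate.
facts = []  # list of (subject, predicate, object) triples

def store_fact(subject, predicate, obj):
    # Normalize case on the lookup keys; keep the answer text verbatim.
    facts.append((subject.lower(), predicate.lower(), obj))

def answer(question_entity, question_predicate):
    """Return the object of the first matching fact, or None."""
    for s, p, o in facts:
        if (s == question_entity.lower()
                and p == question_predicate.lower()):
            return o
    return None

store_fact("Camões", "wrote", "Os Lusíadas")
```

A question such as "What did Camões write?" would, after parsing, reduce to `answer("Camões", "wrote")`, returning the stored object as a short answer.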

    Social Media Text Processing and Semantic Analysis for Smart Cities

    With the rise of social media, people obtain and share information almost instantly on a 24/7 basis. Many research areas have tried to gain valuable insights from these large volumes of freely available user-generated content. With the goal of extracting knowledge from social media streams that might be useful in the context of intelligent transportation systems and smart cities, we designed and developed a framework that provides functionality for the parallel collection of geo-located tweets from multiple pre-defined bounding boxes (cities or regions), including filtering of non-complying tweets, text pre-processing for the Portuguese and English languages, topic modeling, and transportation-specific text classifiers, as well as aggregation and data visualization. We performed an exploratory data analysis of geo-located tweets in 5 different cities: Rio de Janeiro, São Paulo, New York City, London and Melbourne, comprising a total of more than 43 million tweets over a period of 3 months. Furthermore, we performed a large-scale topic modelling comparison between Rio de Janeiro and São Paulo. Interestingly, most topics are shared between the two cities, which, despite being in the same country, are considered very different in terms of population, economy and lifestyle. We take advantage of recent developments in word embeddings and train such representations from the collections of geo-located tweets. We then use a combination of bag-of-embeddings and traditional bag-of-words to train travel-related classifiers in both Portuguese and English, separating travel-related content from unrelated content. We created specific gold-standard data to perform an empirical evaluation of the resulting classifiers. The results are in line with research in other application areas, showing the robustness of using word embeddings to learn word similarities that bag-of-words is not able to capture
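
The combination of bag-of-words and bag-of-embeddings features can be sketched as follows. The tiny vocabulary and two-dimensional embedding table are made up for demonstration; the real classifiers use word embeddings trained on the tweet collections, with far higher dimensionality.

```python
# Illustrative feature extractor: bag-of-words counts concatenated with
# a bag-of-embeddings (the mean of the per-word embedding vectors).
VOCAB = ["bus", "delay", "food", "music"]
EMB = {  # hypothetical 2-d word embeddings
    "bus": [0.9, 0.1], "delay": [0.8, 0.2],
    "food": [0.1, 0.9], "music": [0.2, 0.8],
}

def features(tweet):
    tokens = tweet.lower().split()
    bow = [tokens.count(w) for w in VOCAB]            # bag-of-words
    vecs = [EMB[t] for t in tokens if t in EMB]
    if vecs:                                          # bag-of-embeddings
        boe = [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]
    else:
        boe = [0.0, 0.0]
    return bow + boe  # concatenated vector fed to the classifier
```

The bag-of-words part captures exact term occurrences, while the averaged embeddings let the classifier generalize to semantically similar words never seen in the training vocabulary.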

    Multi-step time series prediction intervals using neuroevolution

    Multi-step time series forecasting (TSF) is a crucial element in supporting tactical decisions (e.g., designing production or marketing plans several months in advance). While most TSF research addresses only single-point prediction, prediction intervals (PIs) are useful for reducing the uncertainty related to important decision-making variables. In this paper, we explore a large set of neural network methods for multi-step TSF that directly optimize PIs. This includes multi-step adaptations of recently proposed PI methods, such as lower-upper bound estimation (LUBET), its ensemble extension (LUBEXT), a multi-objective evolutionary algorithm LUBE (MLUBET), and a two-phase learning multi-objective evolutionary algorithm (M2LUBET). We also explore two new ensemble variants of the evolutionary approaches, based on two PI coverage-width split methods (radial slices and clustering), leading to the MLUBEXT, M2LUBEXT, MLUBEXT2 and M2LUBEXT2 methods. A robust comparison was conducted using the rolling window procedure, nine time series from several real-world domains with different characteristics, two PI quality measures (coverage error and width), and the Wilcoxon statistic. Overall, the best results were achieved by the M2LUBET neuroevolution method, which requires a reasonable computational effort for time series with a few hundred observations. This article is a result of project NORTE-01-0247-FEDER-017497, supported by the Norte Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF). We would also like to thank the anonymous reviewers for their helpful suggestions
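
The two PI quality measures mentioned above can be computed as in this minimal sketch; the function name and the example intervals are illustrative, not taken from the paper.

```python
# Coverage error: deviation of the empirical coverage rate from the
# nominal level (e.g. 95%). Width: mean size of the intervals.
# Good PIs have coverage error near zero with the narrowest width.
def pi_quality(y_true, lower, upper, nominal=0.95):
    n = len(y_true)
    covered = sum(1 for y, l, u in zip(y_true, lower, upper)
                  if l <= y <= u)
    coverage_error = covered / n - nominal   # > 0 means over-coverage
    mean_width = sum(u - l for l, u in zip(lower, upper)) / n
    return coverage_error, mean_width
```

These two criteria conflict: widening every interval improves coverage but degrades width, which is why the multi-objective evolutionary methods above search for a trade-off front rather than a single solution.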

    μGIM - Microgrid intelligent management system based on a multi-agent approach and the active participation of end-users

    Power and energy systems are shifting from their traditional centralized paradigm to decentralized systems. The emergence of smart grids enables the integration of decentralized energy resources and promotes inclusive management that involves end-users, driven by demand-side management, transactive energy, and demand response. Ensuring the scalability and stability of the service provided by the grid is harder in this new smart-grid paradigm because there is no single centralized operations room where all decisions are made. Successfully implementing smart grids requires combining efforts between electrical engineering and computer engineering. Electrical engineering must guarantee the correct physical operation of smart grids and their components, laying the foundations for adequate monitoring, control, management, and operation methods. Computer engineering plays an important role by providing the computational models and tools needed to manage and operate the smart grid and its constituent parts, adequately representing all the different actors involved. These models must consider the actors' individual and common goals, providing the basis for competitive and cooperative interactions able to satisfy the individual actors and to meet common requirements regarding the technical, environmental, and economic sustainability of the system. The distributed nature of smart grids enables, encourages, and greatly benefits from the active participation of end-users, from large actors down to smaller ones such as residential consumers.
One of the main problems in planning and operating electricity grids is the variation in energy demand, which often more than doubles during peak hours compared with off-peak demand. Traditionally, this variation resulted in the construction of power generation plants and large investments in grid lines and substations. The massive use of renewable energy sources brings greater volatility on the generation side, making it harder to balance consumption and generation. The participation of smart-grid actors, enabled by transactive energy and demand response, can provide flexibility on the demand side, facilitating system operation and coping with the growing share of renewables. Within smart grids it is possible to build and operate smaller networks, called microgrids. These are geographically limited networks with local management and operation. They can be seen as restricted geographical areas whose electricity network usually operates physically connected to the main grid, but which can also operate in island mode, providing independence from the main grid. This doctoral research, conducted under the Doctoral Programme in Computer Engineering of the University of Salamanca, addresses the study and analysis of microgrid management, considering the active participation of end-users and the energy management of end-users' electrical loads and energy resources. This research analysed the use of computer engineering concepts, particularly from the field of artificial intelligence, to support microgrid management, proposing a microgrid intelligent management system (μGIM) based on a multi-agent approach and the active participation of end-users.
The solution comprises three systems combining hardware and software: the virtual-to-reality emulator (V2R), the environmental-awareness Internet of Things smart plug (EnAPlug), and the agent-based single-board computer for energy (S4E), which enable demand-side management and transactive energy. These systems were conceived, developed, and tested to validate microgrid management methodologies, namely for end-user participation and intelligent resource optimization. This document presents the main models and results obtained during this doctoral research, covering the state-of-the-art analysis, system conception, system development, experimental results, and main findings. The systems were evaluated in real scenarios, from laboratories to pilot sites. In total, twenty scientific papers were published, nine of them in specialized journals. This doctoral research contributed to two H2020 projects (DOMINOES and DREAM-GO), two ITEA projects (M2MGrids and SPEAR), three Portuguese projects (SIMOCE, NetEffiCity and AVIGAE), and one project with H2020 cascade funding (Eco-Rural-IoT)
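
The demand-response idea at the heart of this kind of system can be reduced to a toy sketch: an agent shifts its flexible load away from expensively priced peak hours. The price signal, threshold, and `respond` function below are hypothetical simplifications of the multi-agent negotiation a system like μGIM performs.

```python
# Toy demand-response sketch: flexible load scheduled in hours whose
# price exceeds a threshold is moved to the cheapest hour of the day.
def respond(load_by_hour, flexible, price, threshold=0.20):
    """load_by_hour: aggregate load per hour; flexible: list of
    (hour, amount) flexible loads; price: per-hour price signal.
    Returns the adjusted aggregate load profile."""
    out = list(load_by_hour)
    cheapest = min(range(len(price)), key=lambda h: price[h])
    for hour, amount in flexible:
        if price[hour] > threshold:
            out[hour] -= amount      # curtail at the expensive hour
            out[cheapest] += amount  # reschedule at the cheapest hour
    return out
```

Even this crude rule flattens the peak in the load profile, which is the system-level flexibility that transactive energy and demand response are meant to provide.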