
    The Monarch Initiative in 2024: an analytic platform integrating phenotypes, genes and diseases across species.

    Bridging the gap between genetic variations, environmental determinants, and phenotypic outcomes is critical for supporting clinical diagnosis and understanding mechanisms of diseases. It requires integrating open data at a global scale. The Monarch Initiative advances these goals by developing open ontologies, semantic data models, and knowledge graphs for translational research. The Monarch App is an integrated platform combining data about genes, phenotypes, and diseases across species. Monarch's APIs enable access to carefully curated datasets and advanced analysis tools that support the understanding and diagnosis of disease for diverse applications such as variant prioritization, deep phenotyping, and patient profile-matching. We have migrated our system into a scalable, cloud-based infrastructure; simplified Monarch's data ingestion and knowledge graph integration systems; enhanced data mapping and integration standards; and developed a new user interface with novel search and graph navigation features. Furthermore, we advanced Monarch's analytic tools by developing a customized plugin for OpenAI's ChatGPT to increase the reliability of its responses about phenotypic data, allowing us to interrogate the knowledge in the Monarch graph using state-of-the-art Large Language Models. The resources of the Monarch Initiative can be found at monarchinitiative.org and its corresponding code repository at github.com/monarch-initiative/monarch-app.
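
    As a minimal illustration of the kind of programmatic access the abstract describes, the sketch below queries a Monarch-style REST service for disease-phenotype associations. The base URL, endpoint path, query parameters, and response shape are assumptions for illustration only, not documented Monarch API calls; consult monarchinitiative.org for the actual interface.

    # Hypothetical sketch: fetching phenotype associations for a disease from a
    # Monarch-style REST API. The URL, endpoint path, and parameters below are
    # assumptions for illustration, not the documented Monarch endpoints.
    import requests

    BASE_URL = "https://api.monarchinitiative.org"   # assumed base URL
    DISEASE_ID = "MONDO:0007739"                     # example: Huntington disease

    def fetch_phenotype_associations(disease_id: str) -> list[dict]:
        """Return phenotype association records for a disease (hypothetical endpoint)."""
        resp = requests.get(
            f"{BASE_URL}/association",               # assumed endpoint path
            params={"subject": disease_id,
                    "category": "biolink:DiseaseToPhenotypicFeatureAssociation"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json().get("items", [])          # assumed response shape

    if __name__ == "__main__":
        for assoc in fetch_phenotype_associations(DISEASE_ID)[:10]:
            print(assoc.get("object"), assoc.get("object_label"))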

    Digital twin modeling method based on IFC standards for building construction processes

    Intelligent construction is a necessary way to improve traditional construction methods, and the digital twin can be a crucial technology for promoting intelligent construction. However, the construction field currently lacks a unified method for building a standardized and universally applicable digital twin model, which is a significant challenge. Therefore, this paper proposes a general method to construct a digital twin construction process model based on the Industry Foundation Classes (IFC) standard, aiming to realize real-time monitoring, control, and visualization management of the construction site. The method constructs a digital twin fusion model at three levels: geometric model, resource model, and behavioral model, by establishing an IFC semantic model of the construction process, storing the fusion model data and the construction site data in a database, and completing the dynamic interaction of the twin data in the database. At the same time, a digital twin platform is developed to realize the visualization and control of the construction site. The implementation of the method is demonstrated and verified through practical cases and analysis. The results show that the method can adapt to different scenarios on the construction site, which is conducive to promoting the application of digital twins in construction and provides a reference for both the theory and practice of digital twins.
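
    To make the IFC-based modelling step concrete, here is a minimal sketch using the open-source ifcopenshell library to read an IFC file and extract basic element records of the kind a digital twin database could store. The file name and the record layout are assumptions for illustration; the paper's own fusion model and platform are not shown.

    # Illustrative sketch (not the paper's implementation): reading an IFC model
    # with ifcopenshell and extracting basic element records that a digital twin
    # database could store. The file name and record fields are assumptions.
    import ifcopenshell

    model = ifcopenshell.open("construction_site.ifc")   # assumed example file

    records = []
    for element in model.by_type("IfcBuildingElement"):  # walls, slabs, beams, ...
        records.append({
            "global_id": element.GlobalId,
            "ifc_class": element.is_a(),
            "name": element.Name,
        })

    print(f"Extracted {len(records)} building elements")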

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals since the fourth volume was released in 2015, or they are new. The contributions in each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the publication of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
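
    As a concrete reference for the PCR rules discussed above, the following sketch implements the classical two-source PCR5 combination on a small frame of discernment, following the standard definition from the earlier DSmT volumes; the optimized and improved variants collected in this book are not reproduced here.

    # Minimal sketch of the classical two-source PCR5 rule (standard DSmT definition).
    # Focal elements are frozensets over the frame; masses are dicts mapping
    # focal elements to basic belief assignment values.
    from itertools import product

    def pcr5(m1: dict, m2: dict) -> dict:
        result = {}
        # Conjunctive consensus on non-empty intersections.
        for (x1, v1), (x2, v2) in product(m1.items(), m2.items()):
            inter = x1 & x2
            if inter:
                result[inter] = result.get(inter, 0.0) + v1 * v2
        # Each partial conflict m1(X)*m2(Y), X∩Y=∅, is redistributed back to X and Y
        # proportionally to m1(X) and m2(Y).
        for (x, vx), (y, vy) in product(m1.items(), m2.items()):
            if not (x & y) and vx + vy > 0:
                result[x] = result.get(x, 0.0) + vx**2 * vy / (vx + vy)
                result[y] = result.get(y, 0.0) + vy**2 * vx / (vx + vy)
        return result

    A, B = frozenset("A"), frozenset("B")
    m1 = {A: 0.6, B: 0.4}
    m2 = {A: 0.3, B: 0.7}
    print(pcr5(m1, m2))  # combined masses sum to 1 after redistribution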

    Development and application of a platform for harmonisation and integration of metabolomics data

    Integrating diverse metabolomics data for molecular epidemiology analyses provides both opportunities and challenges in the field of human health research. Combining patient cohorts may improve power and sensitivity of analyses but is challenging due to significant technical and analytical variability. Additionally, current systems for the storage and analysis of metabolomics data suffer from scalability, query-ability, and integration issues that limit their adoption for molecular epidemiological research. Here, a novel platform for integrative metabolomics is developed, which addresses issues of storage, harmonisation, querying, scaling, and analysis of large-scale metabolomics data. Its use is demonstrated through an investigation of molecular trends of ageing in an integrated four-cohort dataset, where the advantages and disadvantages of combining balanced and unbalanced cohorts are explored, and robust metabolite trends are successfully identified and shown to be concordant with previous studies.
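
    One common harmonisation step when pooling cohorts, sketched below with pandas, is per-cohort scaling of metabolite features before joint analysis. This is a generic illustration under assumed column names, not the thesis's actual pipeline.

    # Illustrative sketch only: per-cohort z-scoring of metabolite features before
    # pooling, a simple harmonisation step. Column names ("cohort", metabolite
    # feature columns) and the data are assumptions for this example.
    import pandas as pd

    def harmonise(df: pd.DataFrame, feature_cols: list[str]) -> pd.DataFrame:
        """Z-score each metabolite feature within its cohort, then pool."""
        out = df.copy()
        grouped = out.groupby("cohort")[feature_cols]
        out[feature_cols] = grouped.transform(lambda col: (col - col.mean()) / col.std())
        return out

    # Example with two small synthetic cohorts measured on different scales.
    df = pd.DataFrame({
        "cohort": ["c1", "c1", "c2", "c2"],
        "age": [34, 58, 41, 70],
        "metab_1": [1.2, 2.9, 105.0, 230.0],
    })
    print(harmonise(df, ["metab_1"]))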

    Automatic management tool for attribution and monitorization of projects/internships

    In their final academic year, students at ISEP must complete a final project to obtain the academic degree they aim to achieve. ISEP provides a digital platform where all the projects that students can apply for can be viewed. Despite its advantages, the platform also has some problems, such as the difficulty of selecting projects suited to the student due to the excessive offering and the lack of filtering mechanisms. Additionally, there is increased difficulty in selecting a supervisor compatible with the chosen project. Once the student has chosen the project and the supervisor, the monitoring phase begins, which has its own issues, such as the use of various tools that may lead to communication problems and difficulty in maintaining a version history of the work done.
    To address these problems, an in-depth study of recommendation systems based on Machine Learning and of Learning Management Systems was conducted. For each of these themes, similar systems capable of solving the proposed problem were analysed, including recommendation systems described in scientific papers, commercial applications, and tools like ChatGPT. From this analysis of the state of the art, it was concluded that the solution would be a web application for students and supervisors that combines the two analysed themes. The developed recommendation system uses collaborative filtering with matrix factorization and content-based filtering with cosine similarity (see the sketch after this abstract). The technologies used in the system are centred around Python on the backend (with TensorFlow and NumPy for the Machine Learning functionality) and Svelte on the frontend. The system follows a microservices-inspired architecture, in which each service runs in its own Docker container, and it was made available online through a public domain. The system was evaluated in terms of performance, reliability, and usability, using the Quantitative Evaluation Framework to define dimensions, factors, and requirements (and their respective scores). The students who tested the solution rated the recommendation system at approximately 7 on a scale of 1 to 10, and the precision, recall, false positive rate, and F-Measure values were 0.51, 0.71, 0.23, and 0.59, respectively. Additionally, both groups rated the application as intuitive and easy to use, with ratings around 8 on a scale of 1 to 10.
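
    To illustrate the content-based half of the recommendation approach described above, the sketch below ranks projects for a student by cosine similarity between feature vectors. The features and data are invented for the example, and the thesis's TensorFlow matrix-factorization component is not reproduced.

    # Illustrative sketch (not the thesis code): content-based filtering with
    # cosine similarity over project/student feature vectors built with NumPy.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Assumed toy features: [machine_learning, web_dev, embedded, databases]
    student_profile = np.array([0.9, 0.4, 0.0, 0.2])
    projects = {
        "ML-based recommender": np.array([1.0, 0.3, 0.0, 0.1]),
        "IoT firmware":         np.array([0.1, 0.0, 1.0, 0.0]),
        "Web portal":           np.array([0.2, 1.0, 0.0, 0.5]),
    }

    # Rank projects by similarity to the student's profile, most similar first.
    ranking = sorted(projects.items(),
                     key=lambda kv: cosine_similarity(student_profile, kv[1]),
                     reverse=True)
    for name, _ in ranking:
        print(name)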

    The American Multi-modal Energy System: Model Development with Structural and Behavioral Analysis using Hetero-functional Graph Theory

    In the 21st century, infrastructure is playing an ever greater role in our daily lives. Presidential Policy Directive 21 emphasizes that infrastructure is critical to public confidence, the nation's safety, and its well-being. With global climate change demanding a host of changes across at least four critical energy infrastructures: the electric grid, the natural gas system, the oil system, and the coal system, it is imperative to study models of these infrastructures to guide future policies and infrastructure developments. Traditionally these energy systems have been studied independently, usually in their own fields of study. Therefore, infrastructure datasets often lack the structural and dynamic elements to describe the interdependencies with other infrastructures. This thesis refers to the integration of the aforementioned energy infrastructures into a singular system-of-systems within the context of the United States of America as the American Multi-modal Energy System (AMES). This work develops an open-source structural and behavioral model of the AMES using Hetero-functional Graph Theory (HFGT), a data-driven approach, and model-based systems engineering practices in the following steps. First, the HFGT toolbox code is made available on GitHub and advanced to produce HFGs of systems on the scale of the AMES using the languages Python and Julia. Second, the analytical insights that HFGs can provide relative to formal graphs are investigated through structural analysis of the American Electric Power System, which demonstrates how HFGs are better equipped to describe changes in system behavior. Third, a reference architecture of the AMES is developed, providing a standardized foundation to develop future models of the AMES. Fourth, the AMES reference architecture is instantiated into a structural model from which structural properties are investigated. Finally, a physically informed Weighted Least Squares Error Hetero-functional Graph State Estimation analysis of the AMES' socio-economic behavior is implemented to investigate the behavior of the AMES with asset-level granularity. These steps provide a reproducible and reusable structural and behavioral model of the AMES for guiding future policies and infrastructural developments to critical energy infrastructures.
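
    The final step above applies a Weighted Least Squares state estimation. For reference, the standard linear WLS estimate x_hat = (H^T W H)^-1 H^T W z is sketched below with NumPy on invented numbers; it omits the hetero-functional graph structure the thesis builds on top of this.

    # Illustrative sketch: standard linear Weighted Least Squares state estimation,
    # x_hat = (H^T W H)^{-1} H^T W z, on made-up measurements. It does not include
    # the hetero-functional graph formulation used in the thesis.
    import numpy as np

    H = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])          # measurement model (assumed)
    z = np.array([1.02, 1.98, 3.05])    # measurements (assumed)
    W = np.diag([1.0, 1.0, 0.5])        # weights = inverse measurement variances

    # Solve the normal equations rather than inverting H^T W H explicitly.
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    residuals = z - H @ x_hat
    print("state estimate:", x_hat)
    print("residuals:", residuals)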

    Context-aware and user behavior-based continuous authentication for zero trust access control in smart homes

    Advisor: Aldri Luiz dos Santos. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 24/02/2023. Includes references: pp. 96-106. Area of concentration: Computer Science.
    Although smart homes have become popular recently, people are still highly concerned about security, safety, and privacy issues. Studies have revealed that privacy issues cause physiological and financial harm because smart homes are intimate living environments. Further, our research disclosed that impersonation attacks are one of the most severe threats against smart homes because they compromise confidentiality, authenticity, integrity, and non-repudiation. Typically, approaches to building security for Smart Home Systems (SHS) require historical data to implement access control and Intrusion Detection Systems (IDS), which is a vulnerability to the inhabitants' privacy. Additionally, most works rely on cloud computing or resources in the cloud to perform security tasks, which attackers can exploit to target confidentiality, integrity, and availability. Moreover, researchers do not consider how SHS are misused when users are forced to interact with devices only through their smartphones or tablets, as users typically interact by any means, such as virtual assistants and the devices themselves. Therefore, the security system requirements for smart homes should comprise privacy perception, low-latency response, spatial and temporal locality, device extensibility, protection against impersonation, device isolation, access control enforcement, and up-to-date verification by a trustworthy system. To meet those requirements, we propose the ZASH (Zero-Aware Smart Home) system to provide access control for users' actions on smart devices in smart homes. In contrast to current works, it leverages continuous authentication with the Zero Trust paradigm supported by configured ontologies, real-time context, and user activity. Edge computing and a Markov Chain enable ZASH to prevent and mitigate impersonation attacks that aim to compromise users' security. The system relies only on resources inside the house, is self-sufficient, and is less exposed to outside exploitation. Furthermore, it works from day zero without requiring historical data, though it uses data accumulated over time to monitor users' behavior. ZASH requires proof of identity for users to confirm their authenticity through strong features of the Something You Are class. The system enforces access control on the smart devices themselves, so it does not depend on intermediaries and considers any user-device interaction. An initial test of the algorithms with a synthetic dataset demonstrated the system's capability to dynamically adapt to new users' behaviors while blocking impersonation attacks. Finally, we implemented ZASH in the ns-3 network simulator and analyzed its robustness, efficiency, extensibility, and performance. According to our analysis, it protects users' privacy, responds quickly (around 4.16 ms), copes with adding and removing devices, blocks most impersonation attacks (up to 99% with a proper configuration), isolates smart devices, and enforces access control for all interactions.
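
    To make the Markov-chain behaviour modelling concrete, the sketch below learns transition probabilities between user actions and flags transitions whose learned probability falls below a threshold. The actions, threshold, and update scheme are assumptions for illustration, not ZASH's actual algorithm.

    # Illustrative sketch (not ZASH itself): a first-order Markov chain over user
    # actions; transitions with low learned probability are flagged as suspicious.
    from collections import defaultdict

    class BehaviorMarkovChain:
        def __init__(self, threshold: float = 0.05):
            self.counts = defaultdict(lambda: defaultdict(int))
            self.threshold = threshold  # assumed anomaly cut-off

        def observe(self, prev_action: str, action: str) -> None:
            self.counts[prev_action][action] += 1

        def probability(self, prev_action: str, action: str) -> float:
            total = sum(self.counts[prev_action].values())
            return self.counts[prev_action][action] / total if total else 0.0

        def is_suspicious(self, prev_action: str, action: str) -> bool:
            return self.probability(prev_action, action) < self.threshold

    chain = BehaviorMarkovChain()
    history = ["unlock_door", "turn_on_lights", "start_coffee_maker"] * 20
    for prev, curr in zip(history, history[1:]):
        chain.observe(prev, curr)

    print(chain.is_suspicious("unlock_door", "turn_on_lights"))  # False: frequent transition
    print(chain.is_suspicious("unlock_door", "open_garage"))     # True: never observed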