64 research outputs found

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: The 6th International Conference on Information and Computer Technologies, Raleigh, NC, United States, March 24-26, 2023, pages 529-542. https://doi.org/10.1007/978-981-99-3236-

    We provide a model for systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time that informs supply chains and resource allocation in anticipation of a surge in COVID-19 cases.

    An Information Exchange Framework For BIM, BAS, And IoT Data Using Semantic Web Technologies

    With digital technologies like Building Information Modeling (BIM), the Internet of Things (IoT), and Building Automation Systems (BAS), an increasing amount of data is being created, and data silos have emerged in the Architecture, Engineering and Construction (AEC) industry: BIM-based building contextual information, IoT devices' time-series data, and BAS metadata remain isolated from one another. This research aims to develop a framework that facilitates information exchange between BIM-based building contextual data, IoT devices' time-series data, and BAS metadata using Semantic Web technology. This research: i) conducts a comprehensive literature review on BIM and IoT integration, organized by application domain and integration method, to summarize an optimal current approach; ii) proposes a framework that enables information exchange among semantically described building contextual data, BAS metadata, and time-series data; iii) uses the BOT and Brick schemas in the proposed framework to describe building contextual data and BAS metadata; iv) creates an MVD for BIM-assisted BAS design and information exchange using a BACnet and IFC use case; v) validates the framework with the use case and data from Georgia Tech campus buildings.
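    To make the intended exchange concrete, here is a minimal sketch (not the dissertation's implementation) of how BOT building topology, Brick point metadata, and a reference to a time-series store might coexist in one RDF graph, using the Python rdflib library. BOT and Brick are the real vocabularies named in the abstract; the building, room, and sensor identifiers and the ref: predicate are hypothetical.

```python
# Sketch: one RDF graph linking BIM context (BOT), BAS metadata (Brick),
# and a time-series pointer. Instance names are made-up examples.
from rdflib import Graph, Namespace, Literal, RDF

BOT = Namespace("https://w3id.org/bot#")
BRICK = Namespace("https://brickschema.org/schema/Brick#")
REF = Namespace("https://example.org/ref#")      # hypothetical time-series link
EX = Namespace("https://example.org/building#")  # hypothetical instance data

g = Graph()
g.bind("bot", BOT)
g.bind("brick", BRICK)

# Building topology exported from the BIM model (BOT)
g.add((EX.Building1, RDF.type, BOT.Building))
g.add((EX.Room101, RDF.type, BOT.Space))
g.add((EX.Building1, BOT.hasSpace, EX.Room101))

# BAS point metadata (Brick) attached to the same space
g.add((EX.TempSensor1, RDF.type, BRICK.Zone_Air_Temperature_Sensor))
g.add((EX.TempSensor1, BRICK.hasLocation, EX.Room101))

# Pointer from the sensor to its time-series record (hypothetical predicate)
g.add((EX.TempSensor1, REF.hasTimeseriesId, Literal("ts-0042")))

# Query across the former silos: sensors located in spaces of a building
q = """
PREFIX bot: <https://w3id.org/bot#>
PREFIX brick: <https://brickschema.org/schema/Brick#>
SELECT ?sensor ?space WHERE {
  ?b bot:hasSpace ?space .
  ?sensor brick:hasLocation ?space .
}
"""
for row in g.query(q):
    print(row.sensor, "is located in", row.space)
```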

    pHealth 2021. Proc. of the 18th Internat. Conf. on Wearable Micro and Nano Technologies for Personalised Health, 8-10 November 2021, Genoa, Italy

    Smart mobile systems – microsystems, smart textiles, smart implants, sensor-controlled medical devices – together with related body, local and wide-area networks up to cloud services, have become important enablers for telemedicine and the next generation of healthcare services. The multilateral benefits of pHealth technologies offer enormous potential for all stakeholder communities, not only in terms of improvements in medical quality and industrial competitiveness, but also for the management of healthcare costs and, last but not least, the improvement of patient experience. This book presents the proceedings of pHealth 2021, the 18th in a series of conferences on wearable micro and nano technologies for personalized health with personal health management systems, hosted by the University of Genoa, Italy, and held as an online event from 8–10 November 2021. The conference focused on digital health ecosystems in the transformation of healthcare towards personalized, participative, preventive, predictive precision medicine (5P medicine). The book contains 46 peer-reviewed papers (1 keynote, 5 invited papers, 33 full papers, and 7 poster papers). Subjects covered include the deployment of mobile technologies, micro-nano-bio smart systems, bio-data management and analytics, autonomous and intelligent systems, the Health Internet of Things (HIoT), as well as potential risks for security and privacy, and the motivation and empowerment of patients in care processes. Providing an overview of current advances in personalized health and health management, the book will be of interest to all those working in the field of healthcare today.

    Modeling of query languages and applications in code refactoring and code optimization

    The query containment problem is one of the fundamental problems in computer science, originally defined for relational queries. With the growing popularity of the SPARQL query language, the problem has become relevant and timely in this new context, too. This thesis introduces a new approach for solving the problem, based on a reduction to satisfiability in first-order logic. The approach covers containment under the RDF Schema entailment regime, and it can deal with the subsumption relation as a weaker form of containment. The thesis proves soundness and completeness of the approach for a wide range of language constructs. It also describes an implementation of the approach as the open-source solver SPECS, whose code is publicly available. A detailed experimental evaluation on relevant benchmarks shows that SPECS is efficient and that, compared to state-of-the-art solvers, it gives more precise results in a shorter amount of time while supporting a larger fragment of SPARQL constructs. Modeling of query languages can also be useful during the refactoring of database-driven applications, where simultaneous changes to both a query and the host-language code that invokes it are very common. Such changes can preserve the overall equivalence of the code without preserving the equivalence of the two parts considered separately. Because they can guarantee the absence of behavioral differences between two versions of the code, tools that automatically verify code equivalence are of great benefit for reliable software development. With this motivation, the thesis also develops a custom first-order-logic modeling of SQL queries, which enables automated reasoning about the equivalence of C/C++ programs with embedded SQL; the approach is implemented as the publicly available open-source framework SQLAV.
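    As a toy illustration of the reduction idea (not the SPECS or SQLAV encoding itself), the sketch below checks containment of two conjunctive triple-pattern queries by asserting the negation of the containment formula and asking a first-order solver whether it is unsatisfiable; the z3-solver Python package and the specific queries are assumptions for the example.

```python
# Containment as unsatisfiability:
#   Q1(x) = EXISTS y . T(x, p, y) AND T(x, q, y)
#   Q2(x) = EXISTS y . T(x, p, y)
# Q1 is contained in Q2 iff FORALL x . Q1(x) -> Q2(x) is valid,
# i.e. its negation is unsatisfiable.
from z3 import (DeclareSort, Function, BoolSort, Consts,
                ForAll, Exists, And, Implies, Not, Solver, unsat)

Node = DeclareSort("Node")                       # RDF terms
T = Function("T", Node, Node, Node, BoolSort())  # triple relation T(s, p, o)
p, q = Consts("p q", Node)                       # fixed predicate IRIs
x, y = Consts("x y", Node)                       # query variables

q1 = Exists([y], And(T(x, p, y), T(x, q, y)))
q2 = Exists([y], T(x, p, y))

s = Solver()
s.add(Not(ForAll([x], Implies(q1, q2))))  # negate the containment claim
print("contained" if s.check() == unsat else "not contained")
```

    Running this prints "contained", since every binding of x satisfying both triple patterns of Q1 trivially satisfies the single pattern of Q2. Features such as RDF Schema entailment or richer SPARQL operators require a considerably more elaborate encoding, which is what the thesis develops.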

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a Final Publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. As their level of abstraction rises to allow a better discernment of the domain at hand, their representation becomes increasingly demanding for computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    31st International Conference on Information Modelling and Knowledge Bases

    Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics, and management science are relevant areas, too. The conference will feature three categories of presentations: full papers, short papers, and position papers.

    BIM in the construction industry

    In recent decades, the term building information modeling (BIM) has been mentioned in a wide range of construction research endeavors. BIM is a new solution to an unprecedented recession in the construction industry, i.e., productivity loss, labor shortage, cost overruns, and severe competitiveness. BIM technology provides many benefits: prompt design clash detection, automatic design regulatory check algorithms, augmented/virtual reality visualization, and a collaborative work environment. BIM experts as well as industry practitioners stress the importance of BIM applications in the field of construction. Given the rapid development and adoption of BIM in the architecture, engineering, and construction (AEC) industry, new trends relevant to BIM research are emerging, which are exceedingly helpful not only for academics but also for practitioners.

    Structural and Lexical Methods for Auditing Biomedical Terminologies

    Biomedical terminologies serve as knowledge sources for a wide variety of biomedical applications including information extraction and retrieval, data integration and management, and decision support. Quality issues in biomedical terminologies, if not addressed, could affect all downstream applications that use them as knowledge sources. Therefore, Terminology Quality Assurance (TQA) has become an integral part of the terminology management lifecycle. However, identifying potential quality issues is challenging due to the ever-growing size and complexity of biomedical terminologies. Auditing them manually is time-consuming and labor-intensive, and hence automated TQA methods are highly desirable. In this dissertation, systematic and scalable methods to audit biomedical terminologies using their structural as well as lexical information are proposed. Two inference-based methods, two non-lattice-based methods, and a deep learning-based method are developed to identify potentially missing hierarchical (or is-a) relations, erroneous is-a relations, and missing concepts in biomedical terminologies including the Gene Ontology (GO), the National Cancer Institute thesaurus (NCIt), and SNOMED CT.

    In the first inference-based method, GO concept names are represented using a set-of-words model and a sequence-of-words model, respectively. Inconsistencies derived between hierarchically linked and unlinked concept pairs are leveraged to detect potentially missing or erroneous is-a relations. The set-of-words model detects a total of 5,359 potential inconsistencies in the 03/28/2017 release of GO and the sequence-of-words model detects 4,959. Domain experts' evaluation shows that the set-of-words model achieves a precision of 53.78% (128 out of 238) and the sequence-of-words model achieves a precision of 57.55% (122 out of 212) in identifying inconsistencies.

    In the second inference-based method, a Subsumption-based Sub-term Inference Framework (SSIF) is developed by introducing a novel term algebra on top of a sequence-based representation of GO concepts. The sequence-based representation utilizes the part of speech of concept names, sub-concepts (concept names appearing inside another concept name), and antonyms appearing in concept names. Three conditional rules (the monotonicity, intersection, and sub-concept rules) are developed for backward subsumption inference. Applying SSIF to the 10/03/2018 release of GO suggests 1,938 potentially missing is-a relations. Domain experts' evaluation of 210 randomly selected potentially missing is-a relations shows that SSIF achieves a precision of 60.61%, 60.49%, and 46.03% for the monotonicity, intersection, and sub-concept rules, respectively.

    In the first non-lattice-based method, lexical patterns of concepts in Non-Lattice Subgraphs (NLSs: graph fragments with a higher tendency to contain quality issues) are mined to detect potentially missing is-a relations and missing concepts in NCIt. Six lexical patterns are leveraged: containment, union, intersection, union-intersection, inference-contradiction, and inference-union. Each pattern indicates a potential specific type of error and suggests a potential type of remediation. This method identifies 809 NLSs exhibiting these patterns in the 16.12d version of NCIt, achieving a precision of 66% (33 out of 50).

    In the second non-lattice-based method, enriched lexical attributes from concept ancestors are leveraged to identify potentially missing is-a relations in NLSs. The lexical attributes of a concept are inherited in two ways: from ancestors within the NLS, and from all ancestors. For a pair of concepts without a hierarchical relation, if the lexical attributes of one concept are a subset of those of the other, a potentially missing is-a relation between the two concepts is suggested. This method identifies a total of 1,022 potentially missing is-a relations in the 19.01d release of NCIt, with a precision of 84.44% (76 out of 90) when inheriting lexical attributes from ancestors within the NLS and 89.02% (73 out of 82) when inheriting from all ancestors.

    For the non-lattice-based methods, similar NLSs may contain similar quality issues, so exhaustive examination of NLSs would involve redundant work. A hybrid method is introduced to identify similar NLSs and avoid redundant analyses. Given an input NLS, a graph isomorphism algorithm is used to obtain its structurally identical NLSs. A similarity score between the input NLS and each of its structurally identical NLSs is computed based on the semantic similarity between their corresponding concept names. To compute this similarity, the concept names are converted to vectors using the Doc2Vec document embedding model and the cosine similarity of the two vectors is computed. All structurally identical NLSs with a similarity score above 0.85 are considered similar to the input NLS. Applying this method to 10 different structures of NLSs in the 02/12/2018 release of GO reveals that 38.43% of these NLSs have at least one similar NLS.

    Finally, a deep learning-based method is explored to facilitate the suggestion of missing is-a relations in NCIt and SNOMED CT, focusing on concept pairs exhibiting the containment pattern. The problem is framed as a binary classification task: given a pair of concepts, the deep learning model learns to predict whether or not the two concepts have an is-a relation. Positive training samples are existing is-a relations in the terminology that exhibit the containment pattern; negative training samples are concept pairs without is-a relations that also exhibit the containment pattern. A graph neural network model is constructed for this task and trained on subgraphs enclosing the concept pairs in the samples. To evaluate the model trained on each terminology, two evaluation sets are created, treating newer releases of each terminology as a partial reference standard. The model trained on NCIt achieves a precision of 0.5, a recall of 0.75, and an F1 score of 0.6; the model trained on SNOMED CT achieves a precision of 0.51, a recall of 0.64, and an F1 score of 0.56.
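    The following is a minimal sketch of the subset-style lexical check described above, approximating lexical attributes by the bags of words in concept names. The concept names and IDs are made up, and the dissertation's methods additionally enrich attributes from ancestors and restrict attention to NLSs, both of which this toy omits.

```python
# Sketch: suggest a candidate "A is-a B" whenever B's name words are a
# proper subset of A's and no is-a relation is already asserted.
from itertools import combinations

concepts = {
    "ID:a": "regulation of cell growth",
    "ID:b": "positive regulation of cell growth",
    "ID:c": "cell growth",
}
existing_isa = {("ID:b", "ID:a")}  # already-asserted (child, parent) pairs

def words(name: str) -> frozenset:
    """Set-of-words approximation of a concept's lexical attributes."""
    return frozenset(name.lower().split())

candidates = []
for c1, c2 in combinations(concepts, 2):
    for child, parent in ((c1, c2), (c2, c1)):
        if (child, parent) in existing_isa:
            continue
        if words(concepts[parent]) < words(concepts[child]):  # proper subset
            candidates.append((child, parent))

for child, parent in candidates:
    print(f"suggest: {concepts[child]!r} is-a {concepts[parent]!r}")
```

    Such suggestions are only candidates for domain-expert review; lexical subsumption does not always imply an ontological is-a relation, which is why the reported methods are evaluated by precision against expert judgment.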

    Foundations of Fuzzy Logic and Semantic Web Languages

    This book is the first to combine coverage of fuzzy logic and Semantic Web languages. It provides in-depth insight into fuzzy Semantic Web languages for readers who are not experts in fuzzy set theory and fuzzy logic, and it helps researchers from outside the Semantic Web field gain a better understanding of the theoretical fundamentals of Semantic Web languages. The first part of the book covers all the theoretical and logical aspects of classical (two-valued) Semantic Web languages. The second part explains how to generalize these languages to cope with fuzzy set theory and fuzzy logic.
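    To give a flavor of how the fuzzy generalization relates to the two-valued case, the sketch below (illustrative only, not taken from the book) evaluates a conjunction under three common t-norms and checks that all three collapse to Boolean AND on crisp degrees in {0, 1}; the degrees and statements are assumptions for the example.

```python
# Sketch: fuzzy conjunction via t-norms, with classical AND as the
# special case on crisp truth degrees. Degrees below are made up.
godel = min                                    # Goedel t-norm
def product(a: float, b: float) -> float:      # product t-norm
    return a * b
def lukasiewicz(a: float, b: float) -> float:  # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

# Degrees to which two fuzzy statements hold, e.g. "x is Tall", "x is Heavy"
tall, heavy = 0.8, 0.6
for name, tnorm in [("Goedel", godel), ("product", product),
                    ("Lukasiewicz", lukasiewicz)]:
    print(f"{name:12s} degree of (Tall AND Heavy): {tnorm(tall, heavy):.2f}")

# On crisp degrees {0, 1} all three t-norms coincide with Boolean AND.
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert godel(a, b) == product(a, b) == lukasiewicz(a, b) == (a and b)
```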