
    Navigating Data Warehousing Implementation in Jordanian Healthcare Sector: Challenges and Opportunities

    Introduction: The implementation of data warehouse systems offers great potential for improving patient care, operational efficiency, and strategic decision-making. This study explores the challenges and opportunities of implementing data warehousing solutions in the Jordanian healthcare sector. Objectives: To investigate current data management practices, perceptions of data warehouses, and factors influencing adoption readiness among IT professionals in Jordanian healthcare organizations. Methods: A survey was conducted involving 102 IT professionals from various healthcare organizations in Jordan. Participants responded to a structured questionnaire, providing insights into key benefits, expected challenges, technical requirements, and future prospects for data warehousing in their organizations. Results: The study demonstrated the critical role of data warehouses in enhancing decision-making, patient care coordination, and operational efficiency within the Jordanian healthcare system. However, significant challenges were identified, including data integration, security concerns, and regulatory compliance. Conclusions: The paper provides recommendations to address these challenges and maximize the benefits of healthcare data warehouses in Jordan. Key strategies include investing in technical expertise, ensuring compatibility with existing systems, and improving data management practices. This study enhances understanding of the complexities associated with implementing data warehousing in the Jordanian healthcare sector and offers valuable insights for future research and practice in this evolving field.

    Data Extraction, Transformation, and Loading Process Automation for Algorithmic Trading Machine Learning Modelling and Performance Optimization

    A data warehouse efficiently prepares data for fast, effective analysis and modelling with machine learning algorithms. This paper discusses existing solutions for automating the Data Extraction, Transformation, and Loading (ETL) process for algorithmic trading. Integrating data warehouses, and in the future data lakes, with machine learning algorithms opens up substantial research opportunities when performance and data processing time become critical non-functional requirements.
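    As a rough illustration of the kind of ETL automation discussed here (not the paper's actual pipeline), the sketch below extracts raw daily price data, derives simple model features, and loads them into a local SQLite warehouse. The file name, column names, and chosen features are assumptions made for the example.

```python
# Minimal ETL sketch: extract raw trade data, derive model features,
# and load them into a local SQLite "warehouse" table.
# File name, column names, and features are illustrative assumptions.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw daily price data (date, symbol, close, volume).
    return pd.read_csv(path, parse_dates=["date"])

def transform(prices: pd.DataFrame) -> pd.DataFrame:
    # Transform: compute features commonly fed to trading models.
    prices = prices.sort_values(["symbol", "date"]).copy()
    grouped = prices.groupby("symbol")["close"]
    prices["return_1d"] = grouped.pct_change()
    prices["sma_20"] = grouped.transform(lambda s: s.rolling(20).mean())
    return prices.dropna()

def load(features: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    # Load: append the feature table so ML jobs can query it later.
    with sqlite3.connect(db_path) as conn:
        features.to_sql("price_features", conn, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("daily_prices.csv")))
```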

    In-memory business intelligence: a Wits context

    The organisational demand for real-time, flexible, and cheaper approaches to Business Intelligence is reshaping the Business Intelligence ecosystem. In-memory databases, in-memory analytics, the availability of 64-bit computing power, and the reduced cost of memory are the enabling technologies meeting this demand. This research report examines whether these technologies will have an evolutionary or a revolutionary impact on traditional Business Intelligence implementations. An in-memory analytic solution was developed for the University of the Witwatersrand Procurement Office to evaluate the benefits claimed for the in-memory approach to Business Intelligence in the development, reporting, and analysis processes. A survey was used to collect data on users' experience with the in-memory solution. The results indicate that the in-memory solution offers a fast, flexible, and visually rich user experience. However, certain key steps of the traditional BI approach cannot be omitted. The conclusion reached is that the in-memory approach to Business Intelligence can co-exist with the traditional approach, so that the merits of both can be leveraged to enhance value for an organisation.
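    A minimal sketch of the in-memory principle the report evaluates (not the Wits solution itself): the whole dataset is held in RAM, here via SQLite's :memory: mode, so BI-style aggregations run without disk-bound I/O. The table and column names are invented for the example.

```python
# Illustrative in-memory analytics: the database lives entirely in RAM.
import sqlite3

conn = sqlite3.connect(":memory:")  # no on-disk storage involved
conn.execute("""
    CREATE TABLE purchase_orders (
        department TEXT, supplier TEXT, amount REAL
    )
""")
conn.executemany(
    "INSERT INTO purchase_orders VALUES (?, ?, ?)",
    [("Science", "Acme", 1200.0), ("Science", "Beta", 300.0),
     ("Law", "Acme", 450.0)],
)

# A typical BI-style aggregation, answered directly from memory.
for row in conn.execute("""
    SELECT department, SUM(amount) AS total_spend
    FROM purchase_orders
    GROUP BY department
    ORDER BY total_spend DESC
"""):
    print(row)
```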

    Automating the integration of scientific output into institutional research management systems

    This doctoral thesis starts from the hypothesis that it is possible to automate the integration of bibliographic information by ingesting it from structured databases into research management systems.
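    As a hedged sketch of the kind of automation hypothesised here (the thesis's actual sources and target system are not named in this abstract), the example below harvests structured bibliographic records from the public Crossref REST API and normalises them into the fields a research-information system ingest might need. The affiliation query value is an invented example.

```python
# Hypothetical harvesting step: pull structured bibliographic records
# from Crossref so they can be pushed into an institutional system.
import json
import urllib.parse
import urllib.request

def harvest_crossref(affiliation: str, rows: int = 5) -> list[dict]:
    params = urllib.parse.urlencode(
        {"query.affiliation": affiliation, "rows": rows}
    )
    url = f"https://api.crossref.org/works?{params}"
    with urllib.request.urlopen(url) as response:
        items = json.load(response)["message"]["items"]
    # Normalise each record to the fields an ingest pipeline would need.
    return [
        {"doi": it.get("DOI"), "title": (it.get("title") or [""])[0]}
        for it in items
    ]

for record in harvest_crossref("Universidad de Murcia"):  # example value
    print(record)  # a real pipeline would load these into the CRIS
```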

    Federated architectures for biomedical data integration

    Doctoral programme in Computer Science. The last decades have been characterized by a continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data across heterogeneous systems. Distinct data types are currently generated, manipulated, and stored in the several institutions where patients are treated. Data sharing and integrated access to this information will allow relevant knowledge to be extracted, which can lead to better diagnostics and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple and heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to data type and usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution for searching and retrieving data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from dispersed and heterogeneous databases in a European context. All the proposed solutions were deployed and validated in real-world use cases.
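    A minimal sketch of the federated pattern described above (not the thesis's actual platform): a broker fans a query out to independent source adapters and merges the patient-level results, tagging each record with its origin. The two in-memory "sources" stand in for real hospital systems.

```python
# Federated query fan-out over independent data sources.
from concurrent.futures import ThreadPoolExecutor

SOURCES = {
    "hospital_a": [{"patient": "p1", "dx": "I10"}, {"patient": "p2", "dx": "E11"}],
    "hospital_b": [{"patient": "p3", "dx": "I10"}],
}

def query_source(name: str, dx_code: str) -> list[dict]:
    # In a real deployment this would call a remote service; each record
    # is tagged with its origin so provenance survives the merge.
    return [dict(r, source=name) for r in SOURCES[name] if r["dx"] == dx_code]

def federated_query(dx_code: str) -> list[dict]:
    # Fan out to every registered source in parallel and concatenate.
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda n: query_source(n, dx_code), SOURCES)
    return [record for part in parts for record in part]

print(federated_query("I10"))
```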

    From Data to Knowledge in Secondary Health Care Databases

    The advent of big data in health care is a topic receiving increasing attention worldwide. In the UK, over the last decade, the National Health Service (NHS) programme for Information Technology has boosted big data by introducing electronic infrastructures in hospitals and GP practices across the country. This ever-growing amount of data promises to expand our understanding of services, processes, and research. Potential benefits include reduced costs, optimisation of services, knowledge discovery, and patient-centred predictive modelling. This thesis explores the above by studying over ten years' worth of electronic data and systems in a hospital treating over 750 thousand patients a year. The hospital's information systems store routinely collected data, used primarily by health practitioners to support and improve patient care. This raw data is recorded in several different systems but rarely linked or analysed. This thesis explores the secondary uses of such data through two case studies, one on prostate cancer and another on stroke. In each study, the journey from data to knowledge traverses critical steps: data retrieval, linkage, integration, preparation, mining, and analysis. Throughout, novel methods and computational techniques are introduced and the value of routinely collected data is assessed. In particular, this thesis discusses in detail the methodological aspects of developing clinical data warehouses from routine heterogeneous data and introduces methods to model, visualise, and analyse the journeys that patients take through care. This work has provided lessons in hospital IT provision, integration, visualisation, and analytics of complex electronic patient records and databases, and has enabled the use of raw routine data for management decision-making and clinical research in both case studies.
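    As an illustration of the linkage step on this data-to-knowledge path (a simplification, not the thesis's method), the sketch below deterministically joins records from two routinely collected systems on a shared identifier before they would be loaded into a clinical data warehouse. All data are invented.

```python
# Deterministic record linkage across two routine hospital systems.
import pandas as pd

admissions = pd.DataFrame({
    "nhs_number": ["111", "222", "333"],
    "admitted":   ["2014-01-03", "2014-02-11", "2014-03-09"],
})
pathology = pd.DataFrame({
    "nhs_number": ["111", "333", "333"],
    "psa_ng_ml":  [4.2, 11.8, 9.5],
})

# An inner join keeps only patients present in both systems; real
# pipelines add probabilistic matching to handle imperfect keys.
linked = admissions.merge(pathology, on="nhs_number", how="inner")
print(linked)
```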

    A model-driven methodology for testing a distributed multi-agent manufacturing system

    Market pressures have pushed manufacturing companies to reduce costs while improving their products, specialising in the activities where they can add value and collaborating with specialists from other areas for the rest. These distributed manufacturing systems bring new challenges, since it is difficult to integrate the various information systems and organise them coherently. This has led researchers to propose a variety of abstractions, architectures, and specifications that attempt to tackle this complexity. Among them, holonic manufacturing systems have received particular attention: they view enterprises as networks of holons, entities that are simultaneously composed of, and part of, several other holons. Until now, holons have been implemented for manufacturing control as self-aware intelligent agents, but their learning curve and the difficulty of integrating them with traditional systems have hindered their adoption in industry. Moreover, their emergent behaviour may be undesirable when tasks must meet certain guarantees, as happens in business-to-business and business-to-customer relationships and in high-level plant management operations. This thesis proposes a more flexible view of the holon concept, allowing it to sit on a broader spectrum of intelligence levels, and argues that business holons are better implemented as services: software components that can be reused through standard technologies from any part of the organisation. These services are usually organised into coherent catalogues, known as Service Oriented Architectures (SOA). A successful SOA initiative can yield important benefits, but it is not a trivial undertaking. For this reason, many SOA methodologies have been proposed in the literature, but none of them explicitly covers the need to test the services. Considering that the goal of SOA is to increase software reuse across the organisation, this is a significant gap: high-quality services are crucial for a successful SOA. The main objective of this thesis is therefore to define an extended methodology that helps users test the services that implement their business holons. After considering the available options, the model-driven methodology SODM was taken as a starting point and largely rewritten with the open-source Epsilon framework, allowing users to model their partial knowledge of the expected performance of the services. This partial knowledge is exploited by several new performance-requirement inference algorithms, which derive the specific requirements of each service. While the requests-per-second inference algorithm is simple, the time-limit inference algorithm went through numerous revisions before reaching the desired level of functionality and performance. After a first formulation based on linear programming, it was replaced with a simple ad-hoc graph-traversal algorithm and later with a much faster and more advanced incremental algorithm. The incremental algorithm produces equivalent results in far less time, even on large models.
    To get more out of the models, this thesis also proposes a general approach for generating test artefacts for multiple technologies from the models annotated by the algorithms. To evaluate the feasibility of this approach, it was implemented for two use cases: reusing unit tests written in Java as performance tests, and generating complete performance-testing projects with The Grinder framework for any Web Service described with the Web Services Description Language standard. The complete methodology is finally applied successfully to a case study based on a rectified ceramic tile manufacturing area of a Spanish group of companies. The case study starts from a high-level description of the business and ends with the implementation of part of one of the holons and the generation of performance tests for one of its Web Services. With its support for both designing and implementing performance tests of services, it can be concluded that SODM+T helps users gain greater confidence in the implementations of the business holons observed in their companies.
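    To make the time-limit inference idea above concrete, here is a deliberately simplified illustration (not the thesis's actual algorithm): given a global deadline at the root of a service-call graph, each invoked service inherits a share of its caller's time budget. The graph and the even-split policy are assumptions for the example.

```python
# Simplified deadline propagation over a hypothetical service-call tree.
GRAPH = {
    "order": ["billing", "warehouse"],
    "billing": [],
    "warehouse": ["carrier"],
    "carrier": [],
}

def infer_time_limits(root: str, budget_ms: float) -> dict[str, float]:
    limits = {root: budget_ms}
    children = GRAPH[root]
    for child in children:
        # Each invoked service inherits an equal share of the caller's budget.
        limits.update(infer_time_limits(child, budget_ms / len(children)))
    return limits

print(infer_time_limits("order", 1000.0))
# {'order': 1000.0, 'billing': 500.0, 'warehouse': 500.0, 'carrier': 500.0}
```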

    GLOBIS-B Deliverable D3.1: Technical issues and risks associated with general challenges of provisioning research infrastructures to deliver capabilities for EBV processing.

    This GLOBIS-B project deliverable describes technical issues and risks associated with the general challenges of provisioning Research Infrastructures to deliver capabilities for processing Essential Biodiversity Variables (EBVs). It is the result of preparations for two workshops and of the workshops themselves, documenting insights into how cooperating biodiversity Research Infrastructures can contribute to the harmonised implementation of EBVs by offering data, workflows, and computational services. Several general challenges and considerations associated with harmonised EBV implementation remain unresolved, and this report places the technical issues and risks in their context. These general challenges concern assigning responsibility for EBV production, understanding the EBV production cycle, and the needs in terms of data model structure and applicable standards. New infrastructure will likely be needed to provide the tools and workflows for EBV production, and a single, shared understanding of the strategy for the technical aspects of producing EBV data products is needed among all stakeholders. A significant unresolved problem, described in this report, concerns the translation process required to move from frontier research that proves the principles of EBVs to the regular mass production of EBV data products, akin to climate variable data products. We recommend developing a technical roadmap for the next 3–5 years, in conjunction with identifying first steps to gain practical experience of the issues associated with producing EBV data products.