
    Digital capabilities and innovation capabilities in Vietnamese SMEs

    The objective of the study is to clarify the influence of digital capabilities on innovation capability, considering the direct and indirect impacts of digital leadership and digital culture in small and medium enterprises (SMEs) in Vietnam. The study used a mixed-methods design, combining qualitative in-depth interviews with a quantitative Partial Least Squares Structural Equation Model (PLS-SEM) analysis of 271 responses. The results show that digital capabilities have a positive impact on SMEs’ innovation capability, and that digital leadership and digital culture affect firms’ digital capabilities and innovation capabilities both directly and indirectly. On this basis, the study confirms its theoretical contributions and proposes suggestions for SMEs to develop digital capabilities and, as a result, innovation capabilities.
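    The distinction between direct and indirect (mediated) effects that the PLS-SEM analysis tests can be illustrated with a minimal regression sketch on synthetic data. This is not the authors' model or data; variable names and effect sizes are invented for illustration, and ordinary least squares stands in for the full structural equation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 271  # sample size matching the study

# Synthetic illustration (not the authors' data):
# digital leadership -> digital capability -> innovation capability
leadership = rng.normal(size=n)
capability = 0.6 * leadership + rng.normal(scale=0.5, size=n)
innovation = 0.5 * capability + 0.2 * leadership + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# a-path: leadership -> capability
a = ols_slope(leadership, capability)

# b-path and direct path: innovation on capability, controlling for leadership
X = np.column_stack([np.ones(n), leadership, capability])
beta, *_ = np.linalg.lstsq(X, innovation, rcond=None)
direct, b = beta[1], beta[2]
indirect = a * b  # mediated (indirect) effect of leadership via capability

print(f"direct={direct:.2f} indirect={indirect:.2f}")
```

    A positive indirect term alongside a positive direct term is the pattern the abstract reports: leadership and culture shape innovation capability both on their own and through digital capabilities.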

    Long-term Contracts for Network-supportive Flexibility in Local Flexibility Markets

    With the ongoing energy transition, the electric network is increasingly challenged. Handling congestion is a major responsibility of network operators. In recent years, market-based approaches to utilizing network-supportive flexibility, especially local flexibility markets (LFMs), have been discussed as a possible future development of congestion management processes. LFMs are a promising opportunity for the efficient, transparent and non-discriminatory integration of new flexibility options, in particular demand-side flexibility. Despite a wide body of supporting literature and several pilot implementations, there is still no common commitment to the concept of LFMs in the European Union. Here we address decision makers in the European energy economy, especially network operators, and discuss a possible flexibility product design using a four-step methodological approach. First, we review the theoretical background of LFMs, considering both network operators' views and the possibility of demand response as a flexibility provider. Based on this review, we formulate an interim conclusion regarding requirements for flexibility product design in general. Second, using an existing framework, we propose a concrete, capacity-based, long-term flexibility product specification. Third, we assess the proposed product design against the defined requirements to highlight the relevance of key design parameters and identify further research needs. Finally, we derive policy implications for network operators' decision makers regarding the implementation of LFMs.
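    The kind of design parameters a capacity-based, long-term flexibility product must pin down can be sketched as a data structure. The field names below are assumptions for illustration only, not the specification proposed in the paper.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class FlexibilityProduct:
    """Illustrative capacity-based, long-term flexibility product.

    Field names are invented for illustration; they are not the
    product specification proposed in the paper.
    """
    provider: str              # e.g. an aggregator of demand-side flexibility
    grid_location: str         # network node where congestion relief is needed
    capacity_mw: float         # contracted flexibility capacity
    direction: str             # "reduce" or "increase" consumption
    start: date                # contract period start
    end: date                  # contract period end
    max_activations: int       # cap on activation calls per contract period
    availability_price: float  # EUR/MW for being available over the period
    activation_price: float    # EUR/MWh when the flexibility is actually called

    def availability_payment(self) -> float:
        """Capacity payment for the whole contract period."""
        return self.capacity_mw * self.availability_price

contract = FlexibilityProduct(
    provider="AggregatorA", grid_location="Node-42",
    capacity_mw=2.0, direction="reduce",
    start=date(2025, 1, 1), end=date(2025, 12, 31),
    max_activations=20, availability_price=10_000.0,
    activation_price=150.0,
)
print(contract.availability_payment())
```

    Separating an availability payment from an activation payment is one common way such capacity-based products remunerate providers for standing ready even when the flexibility is rarely called.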

    Domesticating AI in medical diagnosis

    We consider the anticipated adoption of Artificial Intelligence (AI) in medical diagnosis. We examine how seemingly compelling claims are tested as AI tools move into real-world settings and discuss how analysts can develop effective understandings in novel and rapidly changing settings. Four case studies highlight the challenges of utilising diagnostic AI tools at differing stages in their innovation journey: two ‘upstream’ cases seeking to demonstrate the practical applicability of AI, and two ‘downstream’ cases focusing on the roll-out and scaling of more established applications. We observed an unfolding, uncoordinated process of social learning comprising two key moments: i) experiments to create and establish the clinical potential of AI tools; and ii) attempts to verify their dependability in clinical settings while extending their scale and scope. Health professionals critically appraise tool performance, relying on tools selectively where their results can be demonstrably trusted, in a de facto model of responsible use. We note a shift from procuring stand-alone solutions to deploying suites of AI tools through platforms, which facilitates adoption and reduces the costs of procurement, implementation and evaluation that impede the viability of stand-alone solutions. New conceptual frameworks and methodological strategies are needed to address the rapid evolution of AI tools as they move from research settings into real-world care across multiple settings. We observe how, in this process of deployment, AI tools become ‘domesticated’. We propose longitudinal, multisite ‘biographical’ investigations of medical AI rather than snapshot studies of emerging technologies that fail to capture change and variation in performance across contexts.

    Computer-Aided Drug Design and Drug Discovery: A Prospective Analysis

    In the dynamic landscape of drug discovery, Computer-Aided Drug Design (CADD) emerges as a transformative force, bridging the realms of biology and technology. This paper overviews CADD's historical evolution, its categorization into structure-based and ligand-based approaches, and its crucial role in rationalizing and expediting drug discovery. As CADD advances, incorporating diverse biological data and ensuring data privacy become paramount. Challenges persist, demanding the optimization of algorithms and robust ethical frameworks. Integrating Machine Learning and Artificial Intelligence amplifies CADD's predictive capabilities, yet ethical considerations and scalability challenges linger. Collaborative efforts and global initiatives, exemplified by platforms like Open-Source Malaria, underscore the democratization of drug discovery. The convergence of CADD with personalized medicine offers tailored therapeutic solutions, though ethical dilemmas and accessibility concerns must be navigated. Emerging technologies like quantum computing, immersive technologies, and green chemistry promise to redefine the future of CADD. The trajectory of CADD, marked by rapid advancements, anticipates challenges in ensuring accuracy, addressing biases in AI, and incorporating sustainability metrics. This paper concludes by highlighting the need for proactive measures in navigating the ethical, technological, and educational frontiers of CADD to shape a healthier, brighter future in drug discovery.
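    The ligand-based branch of CADD often ranks candidate molecules by fingerprint similarity to a known active compound. A minimal sketch of the idea, with hand-made toy bit sets standing in for real chemical fingerprints (no real chemistry is encoded here):

```python
# Ligand-based virtual screening in miniature: rank candidates by
# Tanimoto similarity of binary fingerprints to a known active.
# The fingerprints are invented toy bit sets, not real descriptors.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity of two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

query = {1, 4, 7, 9, 12}          # fingerprint of a known active ligand
library = {
    "cand_A": {1, 4, 7, 9, 13},   # close analogue of the query
    "cand_B": {2, 5, 8},          # unrelated scaffold
    "cand_C": {1, 4, 9, 15},      # partial overlap
}

ranked = sorted(library, key=lambda m: tanimoto(query, library[m]), reverse=True)
print(ranked)  # most similar candidate first
```

    Structure-based approaches, by contrast, score candidates against a 3D model of the target's binding site rather than against known actives.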

    The Human Phenotype Ontology in 2024: phenotypes around the world.

    The Human Phenotype Ontology (HPO) is a widely used resource that comprehensively organizes and defines the phenotypic features of human disease, enabling computational inference and supporting genomic and phenotypic analyses through semantic similarity and machine learning algorithms. The HPO has widespread applications in clinical diagnostics and translational research, including genomic diagnostics, gene-disease discovery, and cohort analytics. In recent years, groups around the world have developed translations of the HPO from English to other languages, and the HPO browser has been internationalized, allowing users to view HPO term labels and in many cases synonyms and definitions in ten languages in addition to English. Since our last report, a total of 2,239 new HPO terms and 49,235 new HPO annotations have been developed, many in collaboration with external groups in the fields of psychiatry, arthrogryposis, immunology and cardiology. The Medical Action Ontology (MAxO) is a new effort to model treatments and other measures taken for clinical management. Finally, the HPO consortium is contributing to efforts to integrate the HPO and the GA4GH Phenopacket Schema into electronic health records (EHRs), with the goal of more standardized and computable integration of rare disease data in EHRs.
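    The semantic-similarity computations the HPO enables can be sketched on a toy ontology: two phenotype terms are compared via the overlap of their ancestor sets in the DAG. The term names below are invented for illustration (real HPO terms carry identifiers like HP:0001250), and the Jaccard index stands in for the more sophisticated measures used in practice.

```python
# Toy sketch of phenotype semantic similarity over an HPO-like DAG.
# Term names are invented; the Jaccard index of ancestor sets stands
# in for measures such as Resnik similarity used with the real HPO.

parents = {
    "bradycardia": {"arrhythmia"},
    "tachycardia": {"arrhythmia"},
    "arrhythmia": {"abnormal_heart"},
    "abnormal_heart": {"phenotypic_abnormality"},
    "seizure": {"abnormal_nervous_system"},
    "abnormal_nervous_system": {"phenotypic_abnormality"},
    "phenotypic_abnormality": set(),
}

def ancestors(term: str) -> set:
    """The term plus all of its ancestors in the DAG."""
    result = {term}
    for p in parents[term]:
        result |= ancestors(p)
    return result

def similarity(t1: str, t2: str) -> float:
    """Jaccard overlap of the two terms' ancestor sets."""
    a, b = ancestors(t1), ancestors(t2)
    return len(a & b) / len(a | b)

print(similarity("bradycardia", "tachycardia"))  # share the arrhythmia lineage
print(similarity("bradycardia", "seizure"))      # share only the root
```

    Terms that share a deep common lineage score high, which is what lets phenotype profiles of patients be matched against disease annotations computationally.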

    Digital twin modeling method based on IFC standards for building construction processes

    Intelligent construction is a necessary way to improve traditional construction methods, and the digital twin can be a crucial technology to promote intelligent construction. However, the construction field currently lacks a unified method for building a standardized and universally applicable digital twin model, which is particularly challenging in construction. Therefore, this paper proposes a general method to construct a digital twin model of the construction process based on the Industry Foundation Classes (IFC) standard, aiming to realize real-time monitoring, control, and visualization management of the construction site. The method constructs a digital twin fusion model at three levels (geometric model, resource model, and behavioral model) by establishing an IFC semantic model of the construction process, storing the fusion model data and the construction site data in a database, and completing the dynamic interaction of the twin data in the database. A digital twin platform is also developed to realize visualization and control of the construction site. The implementation of the method is demonstrated and verified with practical cases and analysis. The results show that the method adapts to different scenarios on the construction site, which helps promote the application of digital twins in construction and provides a reference for further research and practice of digital twin theory.
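    The three-level fusion model and the dynamic data interaction described above can be sketched as a small data model. The class and field names below are illustrative assumptions, not the IFC schema or the paper's implementation; an in-memory dictionary stands in for the twin database.

```python
from dataclasses import dataclass, field

# Minimal sketch of a three-level twin fusion model: each element
# carries a geometric, a resource, and a behavioral view, and site
# updates are merged into a shared store. Names are illustrative
# assumptions, not the IFC schema or the paper's implementation.

@dataclass
class TwinElement:
    guid: str                                      # IFC-style global identifier
    geometry: dict = field(default_factory=dict)   # geometric model
    resources: dict = field(default_factory=dict)  # resource model (crew, material)
    behavior: dict = field(default_factory=dict)   # behavioral model (state, progress)

class TwinStore:
    """In-memory stand-in for the twin database."""
    def __init__(self):
        self.elements = {}

    def register(self, element: TwinElement) -> None:
        self.elements[element.guid] = element

    def apply_site_update(self, guid: str, progress: float) -> None:
        """Dynamic interaction: push construction-site data into the twin."""
        self.elements[guid].behavior["progress"] = progress

store = TwinStore()
wall = TwinElement(guid="2O2Fr$t4X7Zf8NOew3FLOH",
                   geometry={"type": "IfcWall", "height_m": 3.0},
                   resources={"material": "concrete"})
store.register(wall)
store.apply_site_update(wall.guid, progress=0.4)
print(store.elements[wall.guid].behavior)
```

    Keeping the three views on one element keyed by a stable IFC-style identifier is what lets site sensor data, schedules, and geometry stay synchronized through the shared store.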

    Edge-Enabled Metaverse: The Convergence of Metaverse and Mobile Edge Computing

    The Metaverse is a virtual environment in which users are represented by avatars to navigate a virtual world that has strong links with its physical counterpart. State-of-the-art Metaverse architectures rely on a cloud-based approach for avatar physics emulation and graphics rendering. This centralized architecture is unfavorable, as it suffers from several drawbacks caused by the long latency of cloud access, such as low-quality visualization. To this end, we propose a Fog-Edge hybrid computing architecture for Metaverse applications that leverages an edge-enabled distributed computing paradigm. Metaverse applications use edge devices' computing power to perform heavy tasks, such as collision detection in the virtual universe and high-computational 3D physics in virtual simulations. The computations for a Metaverse entity, such as collision detection or physics emulation, are performed at the device of the associated physical entity. To validate the effectiveness of the proposed architecture, we simulate a distributed social Metaverse application. The simulation results show that the proposed architecture can reduce latency by 50% compared with cloud-based Metaverse applications.
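    The latency argument can be illustrated with a back-of-the-envelope simulation comparing avatar updates served from a distant cloud versus a nearby edge device. The latency and jitter figures below are invented assumptions for illustration, not the paper's measurements or simulation setup.

```python
import random

# Toy comparison of round-trip latency for avatar state updates:
# remote cloud data centre versus nearby edge node. All numbers
# are illustrative assumptions, not measurements from the paper.

random.seed(42)

def round_trip(base_ms: float, jitter_ms: float) -> float:
    """One round trip: fixed base latency plus uniform jitter."""
    return base_ms + random.uniform(0.0, jitter_ms)

N = 10_000
cloud = [round_trip(base_ms=80.0, jitter_ms=40.0) for _ in range(N)]
edge = [round_trip(base_ms=8.0, jitter_ms=4.0) for _ in range(N)]

avg_cloud = sum(cloud) / N
avg_edge = sum(edge) / N
print(f"cloud avg {avg_cloud:.0f} ms, edge avg {avg_edge:.0f} ms")
```

    Placing a Metaverse entity's computation on the device of its associated physical entity shortens the network path for every update, which is where the reported latency reduction comes from.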