
    Design and implementation of an automatic nursing assessment system based on CDSS technology

    BACKGROUND: Various quantitative and qualitative assessment tools are currently used in nursing to evaluate a patient's physiological, psychological, and socioeconomic status. The results play important roles in evaluating the efficiency of healthcare, improving treatment plans, and lowering relevant clinical risks. However, the manual assessment process imposes a substantial burden and can lead to errors in digitalization. To fill these gaps, we proposed an automatic nursing assessment system based on a clinical decision support system (CDSS). The framework underlying the CDSS included experts, evaluation criteria, and voting roles for selecting electronic assessment sheets over paper ones. METHODS: We developed the framework based on an expert voting flow to choose electronic assessment sheets. The CDSS was constructed on a nursing process workflow model, using a multilayer architecture with independent modules. The performance of the proposed system was evaluated by comparing the incidence of adverse events and the average time for regular daily assessment before and after implementation. RESULTS: After implementation of the system, the incidence of adverse nursing events decreased significantly from 0.43% to 0.37% in the first year and further to 0.27% in the second year (p-value: 0.04). Meanwhile, the median time for regular daily assessments decreased from 63 s to 51 s. CONCLUSIONS: The automatic assessment system helps to reduce nurses' workload and the incidence of adverse nursing events.
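    As a minimal illustration of the kind of before/after comparison reported above, the sketch below runs a chi-square test on hypothetical adverse-event counts. The abstract gives only incidence rates and a p-value, so the denominators here are assumed for illustration and the computed p-value will not match the paper's.

```python
# Hypothetical before/after comparison of adverse nursing event incidence.
# The counts below are illustrative assumptions; the abstract reports only
# incidence rates (0.43% before, 0.27% after) and a p-value of 0.04.
from scipy.stats import chi2_contingency

# [adverse events, non-adverse assessments] per period (assumed denominators)
before = [43, 9957]   # ~0.43% of an assumed 10,000 assessments
after = [27, 9973]    # ~0.27% of an assumed 10,000 assessments

chi2, p, dof, expected = chi2_contingency([before, after])
print(f"chi2={chi2:.2f}, p={p:.3f}")
```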

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models on many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
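    To make the spatial versus appearance invariance trade-off concrete, here is a minimal torchvision-style sketch of the two augmentation families typically used by contrastive learners; the specific transforms and parameter values are assumptions for illustration, not the configuration used in the thesis.

```python
# Illustrative split of contrastive-learning augmentations into the two
# families discussed above; parameter values are assumptions.
from torchvision import transforms

# Spatial transforms: encourage invariance to cropping, scale and flips.
spatial = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
])

# Appearance transforms: encourage invariance to colour and texture changes.
appearance = transforms.Compose([
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
])

# A contrastive learner typically composes both families to build two "views"
# of each image, then pulls the views' representations together.
contrastive_view = transforms.Compose([spatial, appearance, transforms.ToTensor()])
```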

    Research on the structure function recognition of PLOS

    Purpose: The present study investigates the efficiency of deep learning models in identifying discourse structure and functional features, and explores the potential application of natural language processing (NLP) techniques in text mining, information measurement, and scientific communication. Method: The PLOS literature series has been utilized to obtain full-text data, and four deep learning models, including BERT, RoBERTa, SciBERT, and SsciBERT, have been employed for structure-function recognition. Result: The experimental findings reveal that the SciBERT model performs outstandingly, surpassing the other models in F1 score. Additionally, the performance on different paragraph structures has been analyzed, and it has been found that the model performs well on paragraphs such as method and result. Conclusion: The study's outcomes suggest that deep learning models can recognize structural and functional elements at the discourse level, particularly in scientific literature, where the SciBERT model performs remarkably well. Moreover, NLP techniques have extensive prospects in various fields, including text mining, information measurement, and scientific communication. By automatically parsing and identifying structural and functional information in text, the efficiency of literature management and retrieval can be improved, thereby expediting scientific research progress. Therefore, deep learning and NLP technologies hold significant value in scientific research.
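    As a hedged sketch of how such structure-function recognition could be set up with SciBERT, the snippet below builds a paragraph classifier over an assumed rhetorical label set; the study's exact labels, data splits, and fine-tuning procedure are not given in the abstract, so everything beyond the model name is an assumption.

```python
# Minimal sketch: classify a paragraph into a rhetorical section.
# The label set is an assumption; the paper's exact labels are not given here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["introduction", "method", "result", "discussion"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/scibert_scivocab_uncased", num_labels=len(LABELS)
)

paragraph = "We trained four models on PLOS full texts and report macro-F1 scores."
inputs = tokenizer(paragraph, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is freshly initialised, so predictions are arbitrary
# until the model is fine-tuned on labelled paragraphs.
print(LABELS[int(logits.argmax(dim=-1))])
```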

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Digitalization and Development

    This book examines the diffusion of digitalization and Industry 4.0 technologies in Malaysia by focusing on the ecosystem critical for its expansion. The chapters examine digital proliferation in the major sectors of agriculture, manufacturing, e-commerce and services, as well as the intermediary organizations essential for the orderly performance of socioeconomic agents. The book incisively reviews policy instruments critical for the effective and orderly development of the embedding organizations, and the regulatory framework needed to quicken the appropriation of socioeconomic synergies from digitalization and Industry 4.0 technologies. It highlights the importance of collaboration between government, academic and industry partners, and makes key recommendations on how to encourage adoption of IR4.0 technologies in the short and long term. This book bridges the concepts and applications of digitalization and Industry 4.0 and will be a must-read for policy makers seeking to quicken the adoption of its technologies.

    Natural and Technological Hazards in Urban Areas

    Natural hazard events and technological accidents are distinct causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. In recent times, combined natural and man-made hazards have also emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.

    AI Imagery and the Overton Window

    AI-based text-to-image generation has undergone a significant leap in the production of visually comprehensive and aesthetic imagery over the past year, to the point where differentiating between a man-made piece of art and an AI-generated image is becoming more difficult. Generative models such as Stable Diffusion, Midjourney and others are expected to affect several major industries in technological and ethical aspects. Striking the balance between raising the human standard of life and work versus exploiting one group of people to enrich another is a complex and crucial part of the discussion. Due to the rapid growth of this technology, the way in which its models operate, and gray-area legalities, visual and artistic domains, including the video game industry, are at risk of being taken over from creators by AI infrastructure owners. This paper is a literature review examining the concerns facing both AI developers and users today, including identity theft, data laundering and more. It discusses legalization challenges and ethical concerns, and concludes with how AI generative models can be tremendously useful in streamlining the process of visual creativity in both static and interactive media, given proper regulation. Keywords: AI text-to-image generation, Midjourney, Stable Diffusion, AI Ethics, Game Design, Digital Art, Data Laundering. Comment: 18 pages, 18 figures, awaiting peer review. Due to the rapidly evolving nature of text-to-image generation models, this literature review includes some references to time-sensitive elements such as news articles, legal reports and other non-academic documents.

    AI-based design methodologies for hot form quench (HFQ®)

    This thesis aims to develop advanced design methodologies that fully exploit the capabilities of the Hot Form Quench (HFQ®) stamping process for forming complex geometric features in high-strength aluminium alloy structural components. While previous research has focused on material models for FE simulations, these simulations are not suitable for early-phase design due to their high computational cost and expertise requirements. This project has two main objectives: first, to develop design guidelines for the early-stage design phase; and second, to create a machine learning-based platform that can optimise 3D geometries under hot stamping constraints, for both early and late-stage design. With these methodologies, the aim is to facilitate the incorporation of HFQ capabilities into component geometry design, enabling the full realisation of its benefits. To achieve the objectives of this project, two main efforts were undertaken. Firstly, the analysis of aluminium alloys for stamping deep corners was simplified by identifying the effects of corner geometry and material characteristics on post-form thinning distribution. New equation sets were proposed to model trends, and design maps were created to guide component design at early stages. Secondly, a platform was developed to optimise 3D geometries for stamping, using deep learning technologies to incorporate manufacturing capabilities. This platform combined two neural networks: a geometry generator based on Signed Distance Functions (SDFs), and an image-based manufacturability surrogate model. The platform used gradient-based techniques to update the inputs to the geometry generator based on the surrogate model's manufacturability information. The effectiveness of the platform was demonstrated on two geometry classes, Corners and Bulkheads, with five case studies conducted to optimise under post-stamped thinning constraints. Results showed that the platform allowed for free morphing of complex geometries, leading to significant improvements in component quality. The research outcomes represent a significant contribution to the field of technologically advanced manufacturing methods and offer promising avenues for future research. The developed methodologies provide practical solutions for designers to identify optimal component geometries, ensuring manufacturing feasibility and reducing design development time and costs. The potential applications of these methodologies extend to real-world industrial settings and can significantly contribute to the continued advancement of the manufacturing sector.
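    A minimal, generic sketch of the gradient-based loop the abstract describes (updating the inputs of a geometry generator using gradients from a manufacturability surrogate) might look as follows. The network shapes, loss, and thinning threshold are assumptions for illustration, not the thesis's implementation.

```python
# Generic sketch of optimising a latent geometry code against a surrogate
# manufacturability model; all architectures and thresholds are assumptions.
import torch

class GeometryGenerator(torch.nn.Module):
    """Maps a latent code to an SDF-like geometry representation (toy stand-in)."""
    def __init__(self, latent_dim=64, out_dim=32 * 32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, out_dim),
        )
    def forward(self, z):
        return self.net(z)

class ManufacturabilitySurrogate(torch.nn.Module):
    """Predicts a post-form thinning measure from the geometry (toy stand-in)."""
    def __init__(self, in_dim=32 * 32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
        )
    def forward(self, geom):
        return self.net(geom)

generator, surrogate = GeometryGenerator(), ManufacturabilitySurrogate()
z = torch.randn(1, 64, requires_grad=True)       # latent geometry code
opt = torch.optim.Adam([z], lr=1e-2)             # only the latent code is updated
thinning_limit = torch.tensor([[0.15]])          # assumed thinning constraint

for step in range(100):
    opt.zero_grad()
    geom = generator(z)
    predicted_thinning = surrogate(geom)
    # Penalise predicted thinning above the assumed limit; gradients flow
    # through the surrogate and generator back to the latent code.
    loss = torch.relu(predicted_thinning - thinning_limit).sum()
    loss.backward()
    opt.step()
```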