Design Challenges for GDPR RegTech
The Accountability Principle of the GDPR requires that an organisation can
demonstrate compliance with the regulations. A survey of GDPR compliance
software solutions shows significant gaps in their ability to demonstrate
compliance. In contrast, RegTech has recently brought great success to
financial compliance, resulting in reduced risk, cost savings and enhanced
financial regulatory compliance. The survey shows that many GDPR solutions
lack interoperability features such as standard APIs, metadata, or reports,
and that they are not supported by published methodologies or evidence of
their validity or even utility. A proof-of-concept prototype, built around a
regulator-based self-assessment checklist, was explored to establish whether
RegTech best practice could improve the demonstration of GDPR compliance. The
application of a RegTech approach provides opportunities for demonstrable and
validated GDPR compliance, in addition to the risk reductions and cost
savings that RegTech can deliver. This paper demonstrates that a RegTech
approach to GDPR compliance can help an organisation meet its accountability
obligations.
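The checklist-driven demonstration of compliance described above can be sketched in a few lines. Everything here is a hypothetical illustration: the checklist items, weights and scoring rule are invented for this sketch, not taken from the paper's regulator-based instrument.

```python
# Hypothetical sketch of a regulator-style GDPR self-assessment checklist.
# Items and weights are illustrative assumptions, not the paper's instrument.
CHECKLIST = {
    "records_of_processing_maintained": 1.0,
    "dpo_appointed_where_required": 1.0,
    "breach_notification_procedure": 1.5,  # weighted higher: hard obligation
    "standard_api_for_audit_export": 0.5,  # interoperability feature
}

def compliance_score(answers: dict) -> float:
    """Return the weighted fraction of checklist items answered 'yes'."""
    total = sum(CHECKLIST.values())
    achieved = sum(w for item, w in CHECKLIST.items() if answers.get(item))
    return round(achieved / total, 3)

answers = {
    "records_of_processing_maintained": True,
    "dpo_appointed_where_required": True,
    "breach_notification_procedure": False,
    "standard_api_for_audit_export": True,
}
print(compliance_score(answers))  # 2.5 / 4.0 = 0.625
```

A score like this is only demonstrable in the RegTech sense if the answers are backed by exportable evidence (reports, logs, metadata), which is exactly the interoperability gap the survey identifies.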
BIM adoption and implementation for architectural practices
Severe issues with data acquisition and management arise during design creation and development due to complexity, uncertainty and ambiguity. BIM (Building Information Modelling) is a tool for a team-based lean design approach towards improved architectural practice across the supply chain. However, moving from a CAD (Computer Aided Design) approach to BIM represents a fundamental change for individual disciplines and the construction industry as a whole. Although BIM has been implemented by large practices, it is not widely used by SMEs (Small and Medium-Sized Enterprises).
Purpose: This paper aims to present a systematic approach to BIM implementation for architectural SMEs at the organizational level.
Design/Methodology/Approach: The research is undertaken through a KTP (Knowledge Transfer Partnership) project between the University of Salford and John McCall Architects (JMA), an SME based in Liverpool. The overall aim of the KTP is to develop lean design practice through BIM adoption. The BIM implementation approach takes a socio-technical view, considering not only the implementation of technology but also the socio-cultural environment that provides the context for its implementation. Action-research-oriented qualitative and quantitative methods are used for discovery, comparison and experimentation, as they provide "learning by doing".
Findings: The strategic approach to BIM adoption incorporated people, process and technology equally and led to capacity building through the improvements in process, technological infrastructure and upskilling of JMA staff to attain efficiency gains and competitive advantages.
Originality/Value: This paper introduces a systematic approach for BIM adoption based on the action research philosophy and demonstrates a roadmap for BIM adoption at the operational level for SMEs.
Development of preliminary design concept for multifunction display and control system for Orbiter crew station. Task 3: Concept analysis
The access schema developed to access both individual switch functions and automated or semi-automated procedures for the orbital maneuvering system and the electrical power distribution and control system is discussed, and the operation of the system is described. Feasibility tests and analyses used to define display parameters and to select applicable hardware for use in such a system are presented, and the results are discussed.
Design for safety: theoretical framework of the safety aspect of BIM system to determine the safety index
Despite the safety improvement drive that has been implemented in the construction industry in Singapore for many years, the industry continues to report the highest number of workplace fatalities compared to other industries. The purpose of this paper is to discuss the theoretical framework of the safety aspect of a proposed BIM system to determine a Safety Index. An online questionnaire survey was conducted to ascertain the current workplace safety and health situation in the construction industry and to explore how BIM can be used to improve safety performance in the industry. A safety hazard library was developed based on the main contributors to fatal accidents in the construction industry, determined from formal records, the existing literature, and a series of discussions with representatives from the Workplace Safety and Health Institute (WSH Institute) in Singapore. The results from the survey suggest that the majority of firms have implemented the necessary policies, programmes and procedures on Workplace Safety and Health (WSH) practices. However, BIM is still not widely applied or explored beyond the mandatory requirement that building plans be submitted to the authorities for approval in BIM format. This paper presents a discussion of the safety aspect of the Intelligent Productivity and Safety System (IPASS) developed in the study. IPASS is an intelligent system incorporating the buildable design concept, theory on the detection, prevention and control of hazards, and the Construction Safety Audit Scoring System (ConSASS). The system is based on the premise that safety should be considered at the design stage, and that BIM can be an effective tool to facilitate efforts to enhance safety performance. IPASS allows users to analyse and monitor key aspects of the safety performance of a project before it starts and as it progresses.
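As a rough illustration of what a BIM-linked safety index could compute, the sketch below scores residual risk against a tiny hazard library. The hazard categories, severity/likelihood weights and index formula are all assumptions made for this sketch; the abstract does not disclose IPASS's actual model.

```python
# Illustrative only: a design-stage safety index over a small hazard
# library, loosely in the spirit of IPASS. All values are invented.
HAZARD_LIBRARY = {
    "fall_from_height": {"severity": 5, "likelihood": 4},
    "struck_by_object": {"severity": 4, "likelihood": 3},
    "electrocution":    {"severity": 5, "likelihood": 2},
}

def safety_index(mitigated: set) -> float:
    """0-100 score: higher means more risk has been designed out."""
    total_risk = sum(h["severity"] * h["likelihood"]
                     for h in HAZARD_LIBRARY.values())
    residual = sum(h["severity"] * h["likelihood"]
                   for name, h in HAZARD_LIBRARY.items()
                   if name not in mitigated)
    return round(100 * (1 - residual / total_risk), 1)

print(safety_index({"fall_from_height"}))  # 47.6
```

In a real system the mitigated set would be derived from the BIM model itself (e.g. guardrails present at slab edges), which is the premise of considering safety at the design stage.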
Using Informatics to Improve Autism Screening in a Pediatric Primary Care Practice
Background: According to the most recent report from the CDC (2018), autism spectrum disorder (ASD) affects approximately one in 59 children in the United States (U.S.). In 2007, the American Academy of Pediatrics (AAP) issued a strong recommendation that all primary care providers screen children for autism, using a validated tool, at the 18- and 24-month well-child visits, in order to begin the referral process for more formal testing and intervention promptly. Despite the strong stance of the AAP and evidence supporting the importance of early intervention for children with ASD, not all primary care providers are screening for ASD or developmental delay.
Purpose: To improve the percentage of eligible children presenting for 18- and 24-month well-child visits in a pediatric primary care office who are screened for ASD, by integrating the Modified Checklist for Autism in Toddlers (M-CHAT) screening tool into the electronic medical record with tablets. The specific aims were to increase the percentage of children screened and to improve the documentation of the screens performed.
Methods: This quality improvement project utilized a before-after quantitative design to support the improvement. Reports were obtained for three months prior to the implementation of the tablets and process change, and again for three months following the implementation. Manual chart reviews were also performed to verify the data from the reports. The definition used for complete screening for this project included 1) presence of the completed screen in the medical record, 2) provider documentation of the result, interpretation, and plan if indicated, and 3) CPT code entry for charge capture completed in the electronic medical record.
Results: The results of the project revealed improvements in the overall percentage of eligible children screened for autism at D-H Nashua Pediatrics. The percentage of complete screening increased from 64.7% to 73.9% following the implementation of the project, a statistically significant change (t=31.6105, df=16, p=0.05). Each individual element was also tracked, and those results showed that 1) the completeness of provider documentation related to the screening increased from 93.6% to 96% (t=41.3321, df=16, p=0.05) and 2) the M-CHAT screen was present in the electronic health record (EHR) 98.9% of the time, an increase from 84.6% (t=295.4084, df=16, p=0.05). The charge capture completion rate remained statistically unchanged at 76.5% (t=0.4664, df=16, p=0.05). Additionally, only one screening was missed altogether, out of 280 eligible children. Prior to the project, there were four missed screenings (out of 156 eligible children) captured by the chart reviews conducted over the three months prior to implementation. Overall, the results show that the project produced an increase in the percentage of M-CHAT screening, an increase in the presence of source documentation in the electronic health record (EHR), and more complete provider documentation related to the screening.
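As a quick sanity check of the missed-screening counts reported above (4 of 156 eligible children before the workflow change, 1 of 280 after), the miss rates can be computed directly:

```python
# Arithmetic check of the missed-screening counts reported in the abstract.
def miss_rate(missed: int, eligible: int) -> float:
    """Missed screenings as a percentage of eligible well-child visits."""
    return round(100 * missed / eligible, 2)

before = miss_rate(4, 156)   # ~2.56% of eligible visits missed
after = miss_rate(1, 280)    # ~0.36% of eligible visits missed
print(before, after)
```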
Data extraction methods for systematic review (semi)automation: Update of a living systematic review [version 2; peer review: 3 approved]
Background: The reliable and usable (semi)automation of data extraction can support the field of systematic review by reducing the workload required to gather information about the conduct and results of the included studies. This living systematic review examines published approaches for data extraction from reports of clinical studies.
Methods: We systematically and continually search PubMed, ACL Anthology, arXiv, OpenAlex via EPPI-Reviewer, and the dblp computer science bibliography. Full text screening and data extraction are conducted within an open-source living systematic review application created for the purpose of this review. This living review update includes publications up to December 2022 and OpenAlex content up to March 2023.
Results: 76 publications are included in this review. Of these, 64 (84%) addressed extraction of data from abstracts, while 19 (25%) used full texts. A total of 71 (93%) publications developed classifiers for randomised controlled trials. Over 30 entities were extracted, with PICOs (population, intervention, comparator, outcome) being the most frequently extracted. Data are available from 25 (33%) publications, and code from 30 (39%). Six (8%) implemented publicly available tools.
Conclusions: This living systematic review presents an overview of (semi)automated data-extraction literature of interest to different types of literature review. We identified a broad evidence base of publications describing data extraction for interventional reviews, and a small number of publications extracting epidemiological or diagnostic accuracy data. Between review updates, trends for sharing data and code increased strongly: in the base review, data and code were available for 13% and 19% of publications, respectively; these figures rose to 78% and 87% among the 23 new publications. Compared with the base review, we also observed a research trend away from straightforward data extraction and towards additionally extracting relations between entities or automatic text summarisation. With this living review we aim to review the literature continually.
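To make the extraction task concrete, the sketch below is a toy rule-based tagger that flags PICO-related sentences in an abstract. The systems covered by the review use trained classifiers rather than keyword rules; the cue lists here are invented purely for illustration.

```python
# Toy illustration of PICO entity tagging. Real systems in the review
# train classifiers; these keyword cues are invented for this sketch.
PICO_CUES = {
    "population":   ("patients", "participants", "adults", "children"),
    "intervention": ("received", "treated with", "administered"),
    "outcome":      ("primary outcome", "mortality", "improvement"),
}

def tag_sentence(sentence: str) -> list:
    """Return PICO labels whose cue phrases appear in the sentence."""
    s = sentence.lower()
    return [label for label, cues in PICO_CUES.items()
            if any(cue in s for cue in cues)]

print(tag_sentence("120 adults were treated with the study drug."))
# ['population', 'intervention']
```

The gap the review highlights is that without shared gold-standard test data, even toy baselines like this cannot be compared fairly against published systems.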
PRISMA-DFLLM: An Extension of PRISMA for Systematic Literature Reviews using Domain-specific Finetuned Large Language Models
With the proliferation of open-sourced Large Language Models (LLMs) and
efficient finetuning techniques, we are on the cusp of the emergence of
numerous domain-specific LLMs that have been finetuned for expertise across
specialized fields and applications for which the current general-purpose LLMs
are unsuitable. In academia, this technology has the potential to revolutionize
the way we conduct systematic literature reviews (SLRs), access knowledge and
generate new insights. This paper proposes an AI-enabled methodological
framework that combines the power of LLMs with the rigorous reporting
guidelines of the Preferred Reporting Items for Systematic Reviews and
Meta-Analyses (PRISMA). By finetuning LLMs on domain-specific academic papers
that have been selected as a result of a rigorous SLR process, the proposed
PRISMA-DFLLM (for Domain-specific Finetuned LLMs) reporting guidelines offer
the potential to achieve greater efficiency, reusability and scalability, while
also opening the potential for conducting incremental living systematic reviews
with the aid of LLMs. Additionally, the proposed approach for leveraging LLMs
for SLRs enables the dissemination of finetuned models, empowering researchers
to accelerate advancements and democratize cutting-edge research. This paper
presents the case for the feasibility of finetuned LLMs to support rigorous
SLRs and the technical requirements for realizing this. This work then proposes
the extended PRISMA-DFLLM checklist of reporting guidelines as well as the
advantages, challenges, and potential implications of implementing
PRISMA-DFLLM. Finally, a future research roadmap to develop this line of
AI-enabled SLRs is presented, paving the way for a new era of evidence
synthesis and knowledge discovery.
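One way to picture a PRISMA-DFLLM-style reporting record is as something machine-checkable. In the sketch below, the field names (base model, finetuning corpus, SLR search date, model sharing) are assumptions drawn from the themes in the abstract, not the published checklist items themselves.

```python
# Hedged sketch: a machine-checkable reporting record for a finetuned-LLM
# SLR. Field names are assumptions, not the actual PRISMA-DFLLM checklist.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FinetunedSLRReport:
    base_model: str                          # e.g. an open-sourced LLM
    finetuning_corpus: str                   # papers selected via the SLR
    search_date: str                         # when the search was run
    shared_model_url: Optional[str] = None   # dissemination of the model

    def missing_items(self) -> list:
        """Names of reporting items left empty, for a quick audit."""
        return [name for name, value in vars(self).items() if not value]

report = FinetunedSLRReport("open-llm-7b", "included-studies.jsonl",
                            "2023-06-01")
print(report.missing_items())  # ['shared_model_url']
```

Encoding reporting items as data rather than prose is one route to the reusability and incremental living-review updates the paper argues for.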
Data extraction methods for systematic review (semi)automation: A living systematic review [version 1; peer review: awaiting peer review]
Background: The reliable and usable (semi)automation of data
extraction can support the field of systematic review by reducing the
workload required to gather information about the conduct and
results of the included studies. This living systematic review examines
published approaches for data extraction from reports of clinical
studies.
Methods: We systematically and continually search MEDLINE,
Institute of Electrical and Electronics Engineers (IEEE), arXiv, and the
dblp computer science bibliography databases. Full text screening and
data extraction are conducted within an open-source living systematic
review application created for the purpose of this review. This
iteration of the living review includes publications up to a cut-off date
of 22 April 2020.
Results: In total, 53 publications are included in this version of our
review. Of these, 41 (77%) addressed extraction of data from
abstracts, while 14 (26%) used full texts. A total of 48 (90%)
publications developed and evaluated classifiers that used
randomised controlled trials as the main target texts. Over 30 entities
were extracted, with PICOs (population, intervention, comparator,
outcome) being the most frequently extracted. A description of their
datasets was provided by 49 publications (94%), but only seven (13%)
made the data publicly available. Code was made available by 10 (19%)
publications, and five (9%) implemented publicly available tools.
Conclusions: This living systematic review presents an overview of
(semi)automated data-extraction literature of interest to different
types of systematic review. We identified a broad evidence base of
publications describing data extraction for interventional reviews and
a small number of publications extracting epidemiological or diagnostic
accuracy data. The lack of publicly available gold-standard data for
evaluation, and the lack of application thereof, make it difficult to
draw conclusions on which is the best-performing system for each data
extraction target. With this living review we aim to review the
literature continually.