Weak signal identification with semantic web mining
We investigate the automated identification of weak signals, in the sense of Ansoff, to improve strategic planning and technological forecasting. The literature shows that weak signals can be found in an organization's environment and that they appear in different contexts. We use internet information to represent the organization's environment and select those websites that are related to a given hypothesis. In contrast to related research, we provide a methodology that uses latent semantic indexing (LSI) to identify weak signals. This improves on existing knowledge-based approaches because LSI considers aspects of meaning and is therefore able to identify similar textual patterns in different contexts. A new weak signal maximization approach is introduced that replaces the prediction modeling approach commonly used with LSI. It enables the calculation of the largest number of relevant weak signals represented by singular value decomposition (SVD) dimensions. A case study identifies and analyses weak signals to predict trends in the field of on-site medical oxygen production, supporting the planning of research and development (R&D) for a medical oxygen supplier. The results show that the proposed methodology enables organizations to identify weak signals from the internet for a given hypothesis, helping strategic planners to react ahead of time.
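The LSI step the abstract relies on can be illustrated with a truncated SVD of a term-document matrix. The matrix below is a toy example invented for illustration, not data from the study; in the paper's setting, columns would correspond to websites selected for a given hypothesis.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents (web pages).
# Values are term counts; purely illustrative.
A = np.array([
    [2, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 2, 0, 1],
    [0, 0, 1, 2],
], dtype=float)

# Latent semantic indexing: a truncated SVD keeps only the k strongest
# latent dimensions, so documents sharing meaning (co-occurrence structure)
# end up close together even when they share few surface terms.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs_k = (np.diag(s[:k]) @ Vt[:k]).T  # each row: one document in latent space

def cosine(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(docs_k[0], docs_k[1])
```

The weak signal maximization approach described in the abstract would then operate on the SVD dimensions themselves rather than on a prediction model; that step is specific to the paper and is not sketched here.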
IRS II: a framework and infrastructure for semantic web services
In this paper we describe IRS–II (Internet Reasoning Service II), a framework and implemented infrastructure whose main goal is to support the publication, location, composition and execution of heterogeneous web services, augmented with semantic descriptions of their functionalities. IRS–II has three main classes of features which distinguish it from other work on semantic web services. Firstly, it supports one-click publishing of standalone software: IRS–II automatically creates the appropriate wrappers, given pointers to the standalone code. Secondly, it explicitly distinguishes between tasks (what to do) and methods (how to achieve tasks) and as a result supports capability-driven service invocation, flexible mappings between services and problem specifications, and dynamic, knowledge-based service selection. Finally, IRS–II services are web service compatible: standard web services can be trivially published through the IRS–II, and any IRS–II service automatically appears as a standard web service to other web service infrastructures. In the paper we illustrate the main functionalities of IRS–II through a scenario involving a distributed application in the healthcare domain.
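The task/method distinction can be sketched as a broker that resolves a task (what to do) to an applicable method (how to do it) at invocation time. This is a hypothetical minimal model of capability-driven selection, not IRS–II's actual API; all names below are invented.

```python
# Registry mapping each task name to candidate methods, where a method is
# a pair (applicability check, implementation).
methods = {}

def publish(task, applicable, impl):
    """Register a method as one way of achieving a task."""
    methods.setdefault(task, []).append((applicable, impl))

def achieve(task, **inputs):
    """Capability-driven invocation: pick the first method whose
    applicability condition holds for the given inputs."""
    for applicable, impl in methods.get(task, []):
        if applicable(inputs):
            return impl(**inputs)
    raise LookupError(f"no applicable method for task {task!r}")

# Two services achieving the same task under different conditions.
publish("exchange_rate",
        lambda inp: inp["currency"] == "EUR",
        lambda currency: 0.92)
publish("exchange_rate",
        lambda inp: True,          # fallback method
        lambda currency: 1.0)

rate = achieve("exchange_rate", currency="EUR")  # selects the EUR method
```

The point of the split is that callers name only the task; swapping, adding, or re-ranking methods requires no change on the caller's side.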
Ranking Significant Discrepancies in Clinical Reports
Medical errors are a major public health concern and a leading cause of death worldwide. Many healthcare centers and hospitals use reporting systems where medical practitioners write a preliminary medical report and the report is later reviewed, revised, and finalized by a more experienced physician. The revisions range from stylistic to corrections of critical errors or misinterpretations of the case. Due to the large quantity of reports written daily, it is often difficult to manually and thoroughly review all the finalized reports to find such errors and learn from them. To address this challenge, we propose a novel ranking approach, consisting of textual and ontological overlaps between the preliminary and final versions of reports. The approach learns to rank the reports based on the degree of discrepancy between the versions. This allows medical practitioners to easily identify and learn from the reports in which their interpretation most substantially differed from that of the attending physician (who finalized the report). This is a crucial step towards uncovering potential errors and helping medical practitioners to learn from such errors, thus improving patient care in the long run. We evaluate our model on a dataset of radiology reports and show that our approach outperforms both previously-proposed approaches and more recent language models by 4.5% to 15.4%. Comment: ECIR 2020 (short paper)
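The core idea of ranking report pairs by discrepancy can be illustrated with a simple textual-overlap proxy. This is not the paper's model, which also uses ontological overlap and learns the ranking; the score below is a plain Jaccard distance over word sets, and the report snippets are invented.

```python
def token_jaccard_distance(a: str, b: str) -> float:
    """1 minus the Jaccard similarity of the two reports' word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)

# (preliminary, final) report pairs; purely illustrative text.
reports = [
    ("no acute findings", "no acute findings"),                   # unchanged
    ("possible small effusion", "large right pleural effusion"),  # revised
]

# Sort descending by discrepancy so the most heavily revised reports --
# the ones practitioners most need to review -- surface first.
ranked = sorted(reports, key=lambda pair: token_jaccard_distance(*pair),
                reverse=True)
```

Here the heavily revised pair ranks first; a learned ranker plays the same role but weighs overlap features according to which discrepancies actually matter clinically.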
National Mesothelioma Virtual Bank: A standard based biospecimen and clinical data resource to enhance translational research
Background: Advances in translational research have led to the need for well-characterized biospecimens for research. The National Mesothelioma Virtual Bank is an initiative which collects annotated datasets relevant to human mesothelioma to develop an enterprising biospecimen resource that fulfills researchers' needs. Methods: The National Mesothelioma Virtual Bank architecture is based on three major components: (a) common data elements (based on the College of American Pathologists protocol and North American Association of Central Cancer Registries standards), (b) clinical and epidemiologic data annotation, and (c) data query tools. These tools work interoperably to standardize the entire process of annotation. The National Mesothelioma Virtual Bank tool is based upon the caTISSUE Clinical Annotation Engine, developed by the University of Pittsburgh in cooperation with the Cancer Biomedical Informatics Grid™ (caBIG™, see http://cabig.nci.nih.gov). This application provides a web-based system for annotating, importing and searching mesothelioma cases. The underlying information model is constructed using Unified Modeling Language class diagrams, hierarchical relationships and Enterprise Architect software. Results: The database provides researchers with real-time access to richly annotated specimens and integral information related to mesothelioma. Data disclosure is tightly regulated according to the user's authorization and the participating institution, subject to local Institutional Review Board and regulatory committee review. Conclusion: The National Mesothelioma Virtual Bank currently has over 600 annotated cases available for researchers, including paraffin-embedded tissues, tissue microarrays, serum and genomic DNA. The National Mesothelioma Virtual Bank is a virtual biospecimen registry with robust translational biomedical informatics support to facilitate basic science, clinical, and translational research. Furthermore, it protects patient privacy by disclosing only de-identified datasets, ensuring that biospecimens can be made accessible to researchers. © 2008 Amin et al; licensee BioMed Central Ltd
DEVELOPMENT OF A MEDICAL STAFF RECRUITMENT SYSTEM FOR TEACHING HOSPITALS IN NIGERIA
Recruitment of staff into teaching hospitals in Nigeria acts as the first step towards creating competitive strength and strategic advantage for such institutions. However, one of the major problems associated with these institutions in the south-western part of Nigeria is their mode of staff recruitment. In this research paper, we developed a suitable staff recruitment system for some health institutions in Nigeria, focusing specifically on teaching hospitals. Three teaching hospitals in south-west Nigeria were visited, and relevant information was collated through personal interviews and questionnaire administration to the staff of the Human Resource Departments and other relevant health professionals of these teaching hospitals. The design and development of the system employs a 3-tier web architecture. The system design of the staff recruitment system consisted of design activities that produce system specifications satisfying the functional requirements developed during system analysis. A formal model of the staff recruitment system was built using the Unified Modeling Language (UML). UML, as a modeling system, provides a set of conventions used to describe the software system in terms of objects, and offers diagrams that provide different perspective views of the system's parts. The Web-based Medical Recruitment System (WBMRS) was designed to be user-friendly and easy to navigate.