
    Security for networked smart healthcare systems: A systematic review

    Background and Objectives Smart healthcare systems use technologies such as wearable devices, the Internet of Medical Things and mobile internet technologies to dynamically access health information, connect patients to health professionals and health institutions, and actively manage and respond intelligently to the medical ecosystem's needs. However, smart healthcare systems face many challenges in their implementation and maintenance. Key among these is ensuring the security and privacy of patient health information. To address this challenge, several mitigation measures have been proposed and some have been implemented. Techniques that have been used include data encryption and biometric access. In addition, blockchain is an emerging security technology that is expected to address security issues because its distributed and decentralized architecture is similar to that of smart healthcare systems. This study reviewed articles that identified security requirements and risks, proposed potential solutions, and explained the effectiveness of these solutions in addressing security problems in smart healthcare systems.

    Methods This review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was framed using the Problem, Intervention, Comparator, and Outcome (PICO) approach to investigate and analyse the concepts of interest. However, the comparator was not applicable: because the concept of smart healthcare systems is an emerging one, there were no previously deployed security solutions against which the reviewed measures could be compared. The search strategy involved identifying studies from several databases, including the Cumulative Index to Nursing and Allied Health Literature (CINAHL), Scopus, PubMed, Web of Science, Medline, the Excerpta Medica database (EMBASE), EBSCOhost and the Cochrane Library, for articles that focused on security for smart healthcare systems. The selection process involved removing duplicate studies and excluding studies after reading the titles, abstracts, and full texts against predefined inclusion and exclusion criteria; studies whose records could not be retrieved were also excluded. The remaining articles were then screened for eligibility, and a data extraction form was used to capture details of the screened studies after reading the full text. Of the searched databases, only three yielded results when the search strategy was applied (Scopus, Web of Science and Medline), giving a total of 1742 articles. 436 duplicate studies were removed. Of the remaining articles, 801 were excluded after reading the title and 342 were excluded after reading the abstract, leaving 163, of which 4 could not be retrieved. 159 articles were therefore screened for eligibility after reading the full text. Of these, 14 studies were included for detailed review using the formulated research questions and the PICO framework. Each of the 14 included articles presented a description of a smart healthcare system and identified the security requirements, risks and solutions to mitigate the risks. Each article also summarized the effectiveness of the proposed security solution.

    Results The key security requirements reported were confidentiality, integrity and availability of data within the system, with authorisation and authentication used to support these key requirements. The identified security risks include loss of data confidentiality due to eavesdropping on wireless communication media, authentication vulnerabilities in user devices and storage servers, and data fabrication and message modification attacks during transmission as well as while data is at rest in databases and other storage devices. The proposed mitigation measures included the use of biometric access devices; data encryption for protecting the confidentiality and integrity of data; blockchain technology to address confidentiality, integrity, and availability of data; network slicing techniques to isolate patient health data in 5G mobile systems; and multi-factor authentication when accessing IoT devices, servers, and other components of smart healthcare systems. The effectiveness of the proposed solutions was demonstrated through their ability to provide a high level of data security in smart healthcare systems. For example, proposed encryption algorithms demonstrated better energy efficiency, improved operational speed, reduced computational overhead, better scalability, more efficient data processing, and easier deployment.

    Conclusion This systematic review has shown that the use of blockchain technology, biometrics (fingerprints), data encryption techniques, multi-factor authentication and, in the case of 5G smart healthcare systems, network slicing has the potential to alleviate possible security risks in smart healthcare systems. The benefits of these solutions include a high level of security and privacy for Electronic Health Record (EHR) systems and, through the use of blockchain, improved speed of data transactions without the need for a centralized third party. However, the proposed solutions do not address data protection in cases where an intruder has already gained access to the system; this is a potential avenue for further research and inquiry.
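    To make one of the reviewed mitigation measures concrete (data encryption protecting both confidentiality and integrity of a health record), the sketch below uses symmetric authenticated encryption. It is an illustration only, not code from any of the 14 included studies; the record fields and the key handling are assumptions for demonstration.

```python
# Minimal sketch: authenticated symmetric encryption of a patient record.
# Requires the 'cryptography' package; key handling is simplified for
# illustration -- a real system would use a key management service.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, issued and rotated by a KMS
cipher = Fernet(key)

record = {"patient_id": "P-0001", "heart_rate": 72}   # hypothetical fields
token = cipher.encrypt(json.dumps(record).encode())   # confidentiality + integrity

# Decryption raises an exception if the ciphertext was tampered with,
# which counters the message-modification attacks noted in the review.
restored = json.loads(cipher.decrypt(token).decode())
assert restored == record
```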

    Flexible data input layer architecture (FDILA) for quick-response decision making tools in volatile manufacturing systems

    This paper proposes the foundation for a flexible data input management system as a vital part of a generic solution for quick-response decision making. The lack of a comprehensive data input layer between data acquisition and processing systems has been identified as a gap. The proposed FDILA is applicable to a wide variety of volatile manufacturing environments. It provides a generic platform that enables system designers to define any number of data entry points and types in a standard fashion, regardless of their make and specifications. This is achieved by providing a variable definition layer immediately on top of the data acquisition layer and before the data pre-processing layer. For proof of concept, National Instruments' LabVIEW data acquisition software is used to simulate a typical shop-floor data acquisition system. The extracted data can then be fed into a data mining module that builds cost modeling functions involving the plant's Key Performance Factors.
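    The paper does not publish code; the sketch below is a hypothetical illustration of the variable-definition-layer idea it describes: data entry points are declared generically (name, type, units) and bound to whatever source supplies them, so downstream processing need not know the make of the device. All names and values are invented.

```python
# Hypothetical sketch of a generic data input (variable definition) layer.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class EntryPoint:
    name: str                   # e.g. "spindle_temp"
    dtype: type                 # expected data type
    units: str                  # engineering units
    read: Callable[[], Any]     # bound acquisition callback (simulated here)

class DataInputLayer:
    def __init__(self) -> None:
        self._points: Dict[str, EntryPoint] = {}

    def define(self, point: EntryPoint) -> None:
        """Register an entry point regardless of its source or make."""
        self._points[point.name] = point

    def sample(self) -> Dict[str, Any]:
        """Collect one reading from every defined entry point."""
        return {n: p.dtype(p.read()) for n, p in self._points.items()}

# Usage with a simulated source standing in for a LabVIEW-style DAQ channel:
layer = DataInputLayer()
layer.define(EntryPoint("spindle_temp", float, "degC", read=lambda: 41.7))
print(layer.sample())   # {'spindle_temp': 41.7}
```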

    City Data Fusion: Sensor Data Fusion in the Internet of Things

    The Internet of Things (IoT) has gained substantial attention recently and plays a significant role in smart city application deployments. A number of such smart city applications depend on sensor fusion capabilities in the cloud from diverse data sources. We introduce the concept of IoT and present in detail ten different parameters that govern our sensor data fusion evaluation framework. We then evaluate the current state of the art in sensor data fusion against our framework. Our main goal is to examine and survey different sensor data fusion research efforts based on our evaluation framework. The major open research issues related to sensor data fusion are also presented.
    Comment: Accepted to be published in International Journal of Distributed Systems and Technologies (IJDST), 201
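    As a toy illustration of the kind of cloud-side sensor fusion the survey evaluates (not a method proposed in the paper), the sketch below combines redundant readings of the same quantity from several IoT sensors by inverse-variance weighting; the values and variances are assumed.

```python
# Toy sensor fusion: inverse-variance weighted average of redundant readings.
def fuse(readings):
    """readings: list of (value, variance) pairs from independent sensors."""
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
    fused_var = 1.0 / sum(weights)          # variance of the fused estimate
    return fused, fused_var

# Three city temperature sensors reporting on the same location (made-up data):
print(fuse([(21.4, 0.5), (21.9, 0.2), (20.8, 1.0)]))
```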

    TechNews digests: Jan - Mar 2010

    TechNews is a technology, news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service ran digests from September 2004 to May 2010; analysis pieces and news were combined and published every 2 to 3 months.

    The Most Influential Paper Gerard Salton Never Wrote

    Gerard Salton is often credited with developing the vector space model (VSM) for information retrieval (IR). Citations to Salton give the impression that the VSM must have been articulated as an IR model sometime between 1970 and 1975. However, the VSM as it is understood today evolved over a longer time period than is usually acknowledged, and an articulation of the model and its assumptions did not appear in print until several years after those assumptions had been criticized and alternative models proposed. An often cited overview paper titled "A Vector Space Model for Information Retrieval" (alleged to have been published in 1975) does not exist, and citations to it represent a confusion of two 1975 articles, neither of which was an overview of the VSM as a model of information retrieval. Until the late 1970s, Salton did not present vector spaces as models of IR generally but rather as models of specific computations. Citations to the phantom paper reflect an apparently widely held misconception that the operational features and explanatory devices now associated with the VSM must have been introduced at the same time it was first proposed as an IR model.
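    For readers unfamiliar with the model under discussion, the sketch below shows the VSM as it is understood today: documents and queries represented as term-weight vectors and ranked by cosine similarity. The tiny corpus is invented purely to illustrate the computation and is not drawn from the article.

```python
# Minimal vector space model: term-frequency vectors ranked by cosine similarity.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "d1": Counter("information retrieval with vector spaces".split()),
    "d2": Counter("probabilistic models of retrieval".split()),
}
query = Counter("vector space retrieval".split())

# Rank documents by similarity to the query (d1 ranks first here).
print(sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True))
```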

    Japanese/English Cross-Language Information Retrieval: Exploration of Query Translation and Transliteration

    Cross-language information retrieval (CLIR), where queries and documents are in different languages, has of late become one of the major topics within the information retrieval community. This paper proposes a Japanese/English CLIR system in which we combine query translation and retrieval modules. We currently target the retrieval of technical documents, and therefore the performance of our system is highly dependent on the quality of the translation of technical terms. However, technical term translation is still problematic in that technical terms are often compound words, and thus new terms are progressively created by combining existing base words. In addition, Japanese often represents loanwords using its special phonograms. Consequently, existing dictionaries find it difficult to achieve sufficient coverage. To counter the first problem, we produce a Japanese/English dictionary for base words and translate compound words on a word-by-word basis. We also use a probabilistic method to resolve translation ambiguity. For the second problem, we use a transliteration method, which maps words not listed in the base word dictionary to their phonetic equivalents in the target language. We evaluate our system using a test collection for CLIR, and show that both the compound word translation and transliteration methods improve the system performance.
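    As a hypothetical illustration of the word-by-word compound translation with probabilistic disambiguation described above (the dictionary entries and probabilities below are invented, not taken from the paper), a compound term is split into its base words and, for each, the highest-probability English translation is chosen.

```python
# Hypothetical sketch of word-by-word compound translation with probabilistic
# disambiguation; the base-word dictionary and probabilities are invented.

# P(english | base_word): each base word maps to candidate translations.
base_dict = {
    "jouhou": {"information": 0.9, "intelligence": 0.1},
    "kensaku": {"retrieval": 0.8, "search": 0.2},
}

def translate_compound(base_words):
    """Translate each base word independently, picking the most probable sense."""
    return " ".join(max(base_dict[w], key=base_dict[w].get) for w in base_words)

# A compound term segmented into its base words:
print(translate_compound(["jouhou", "kensaku"]))   # -> "information retrieval"
```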