18 research outputs found

    A Comparative Study of Bluetooth SPP, PAN and GOEP for Efficient Exchange of Healthcare Data

    Objectives: Current research aims to address the challenges of exchanging healthcare information, since when this information has to be shared, this happens through specifically designed medical applications or even by the patients themselves. Among the problems that the Health Information Exchange (HIE) initiative faces are that (i) third-party health data cannot be accessed without internet, (ii) there are crucial delays in accessing citizens’ data, and (iii) direct HIE can only happen among Healthcare Institutions. Methods: Towards the solution of these issues, a Device-to-Device (D2D) protocol has been specified, running on top of the Bluetooth protocol for efficient data exchange. This research focuses on this D2D protocol, comparing the different Bluetooth profiles that can be used for transmitting this data, based on specific metrics considering the probabilities of transferring erroneous data. Findings: An evaluation of three Bluetooth profiles takes place, concluding that two of the three profiles must be used to respect the D2D protocol nature and be fully supported by the main market vendors’ operating systems. Novelty: Based on this evaluation, the specified D2D protocol has been built on top of state-of-the-art short-range communication technologies, fully supporting the healthcare ecosystem towards the HIE paradigm. Doi: 10.28991/esj-2021-01276
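    As a rough illustration of the kind of error-probability metric such a comparison relies on, the sketch below computes the chance that a record crosses a Bluetooth link intact under an independent-bit-error assumption; the record size, profile names, and BER values are placeholders, not measurements from the study.

```python
# Minimal sketch (not the paper's model): probability of an error-free transfer
# of a healthcare record over a Bluetooth link, assuming independent bit errors
# with a given bit error rate (BER). Profile names and BER values are
# illustrative placeholders.

def p_error_free(record_bytes: int, ber: float) -> float:
    """Probability that every bit of the record arrives intact."""
    return (1.0 - ber) ** (8 * record_bytes)

if __name__ == "__main__":
    record_size = 50_000  # e.g. a 50 KB health record bundle (illustrative)
    for profile, ber in [("SPP", 1e-6), ("PAN", 1e-6), ("GOEP", 1e-6)]:
        print(f"{profile}: P(error-free) = {p_error_free(record_size, ber):.4f}")
```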

    A Comparative Study of Collaborative Filtering in Product Recommendation

    Product recommendation is a well-known technique for bringing customers and products together. With applications in music, electronic shops, and almost any platform the user deals with daily, a recommendation system’s sole scope is to help customers, and attract new ones, to discover new products. Through product recommendation, transaction costs can also be decreased, improving overall decision-making and quality. To perform recommendations, a recommendation system must utilize customer feedback, such as habits, interests, and prior transactions, as well as information used in customer profiling, and finally deliver suggestions. Hence, data is the key factor in choosing the appropriate recommendation method and drawing specific suggestions. This research investigates the data challenges of recommendation systems, specifying collaborative-based, content-based, and hybrid-based recommendations. In this context, collaborative filtering is explored, with the Surprise library and LightFM embeddings being analysed and compared on top of foodservice transactional data. The involved algorithms’ metrics are identified and parameterized, and hyperparameters are tuned on this transactional data, concluding that LightFM provides more efficient recommendation results according to the evaluation’s precision and recall outcomes. Nevertheless, even though the Surprise library is outperformed, it should be preferred when constructing user-friendly models requiring low code and few technicalities. Doi: 10.28991/ESJ-2023-07-01-01
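    The sketch below is a minimal, hedged illustration of the comparison described above, using the public APIs of the Surprise library (SVD with cross-validation) and LightFM (WARP embeddings with precision/recall at k). The tiny transactional dataframe and the hyperparameter values are invented for the example; the paper's foodservice data and tuned settings are not reproduced.

```python
import pandas as pd

# Synthetic stand-in for foodservice transactions (illustrative only).
df = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u2", "u3"],
    "item_id": ["pizza", "salad", "pizza", "pasta", "salad"],
    "rating":  [5, 3, 4, 5, 2],
})

# --- Surprise: explicit-feedback matrix factorisation (SVD) ---------------
from surprise import Dataset, Reader, SVD
from surprise.model_selection import cross_validate

data = Dataset.load_from_df(df[["user_id", "item_id", "rating"]],
                            Reader(rating_scale=(1, 5)))
cross_validate(SVD(), data, measures=["RMSE", "MAE"], cv=2, verbose=True)

# --- LightFM: implicit-feedback embeddings with a WARP loss ---------------
from lightfm import LightFM
from lightfm.data import Dataset as LFMDataset
from lightfm.evaluation import precision_at_k, recall_at_k

lfm_data = LFMDataset()
lfm_data.fit(df["user_id"].unique(), df["item_id"].unique())
interactions, _ = lfm_data.build_interactions(zip(df["user_id"], df["item_id"]))

model = LightFM(loss="warp", no_components=16)  # illustrative hyperparameters
model.fit(interactions, epochs=10)

# Evaluated on the training interactions purely for illustration.
print("precision@3:", precision_at_k(model, interactions, k=3).mean())
print("recall@3:   ", recall_at_k(model, interactions, k=3).mean())
```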

    5G & SLAs: Automated proposition and management of agreements towards QoS enforcement

    Efficient Service Level Agreement (SLA) management and anticipation of Service Level Objective (SLO) breaches become mandatory to guarantee the required service quality in software-defined and 5G networks. To create an operational Network Service, it is highly desirable to associate it with the network-related parameters that reflect the corresponding quality levels. These parameters are included in policies, but since SLAs usually target business users, mechanisms are needed that bridge this abstraction gap. In this paper, a generic black-box approach is used to map high-level requirements expressed by users in SLAs to low-level network parameters included in policies, enabling Quality of Service (QoS) enforcement by triggering the required policies and managing the infrastructure accordingly. In addition, a mechanism for determining the importance of different QoS parameters is presented, mainly used for recommending “relevant” QoS metrics in the SLA template.
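    A minimal sketch of the abstraction-gap mapping idea, assuming a hypothetical set of SLA targets and policy parameters: high-level SLO values are translated into low-level network settings by simple rules that stand in for the paper's black-box model.

```python
# All names, thresholds, and mapping rules below are hypothetical illustrations,
# not the paper's actual model.
from dataclasses import dataclass

@dataclass
class SLATarget:
    availability_pct: float      # e.g. 99.9
    max_latency_ms: float        # e.g. 20.0
    min_throughput_mbps: float   # e.g. 100.0

def to_policy(sla: SLATarget) -> dict:
    """Map business-level SLOs to network-level policy parameters."""
    return {
        "redundancy_level": 2 if sla.availability_pct >= 99.9 else 1,
        "queue_scheduling": "priority" if sla.max_latency_ms < 50 else "best_effort",
        "min_bandwidth_mbps": sla.min_throughput_mbps * 1.2,  # 20% headroom
        "scale_out_cpu_threshold_pct": 70,
    }

if __name__ == "__main__":
    print(to_policy(SLATarget(99.95, 15.0, 200.0)))
```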

    Batch and Streaming Data Ingestion towards Creating Holistic Health Records

    The healthcare sector has been moving toward Electronic Health Record (EHR) systems that produce enormous amounts of healthcare data, due to the increased emphasis on getting the appropriate information to the right person, wherever they are, at any time. This highlights the need for a holistic approach to ingest, exploit, and manage these huge amounts of data in order to achieve better health management and promotion in general. This manuscript proposes such an approach, providing a mechanism that allows all health ecosystem entities to obtain actionable knowledge from heterogeneous data in a multimodal way. The mechanism includes diverse techniques for automatically ingesting healthcare-related information from heterogeneous sources that produce batch/streaming data, and for managing, fusing, and aggregating this data into new data structures, namely Holistic Health Records (HHRs). The latter enable the aggregation of data coming from different sources, such as Internet of Medical Things (IoMT) devices and online/offline platforms. To effectively construct the HHRs, the mechanism develops various data management techniques covering the overall data path, from data acquisition and cleaning to data integration, modelling, and interpretation. The mechanism has been evaluated on different healthcare scenarios, ranging from hospital-retrieved data to patient platforms, combined with data obtained from IoMT devices, and has produced useful insights towards its successful and wide adoption in this domain. To implement a paradigm shift away from heterogeneous and independent data sources, limited data exploitation, and conventional health records, the mechanism combines multidisciplinary technologies toward HHRs. Doi: 10.28991/ESJ-2023-07-02-03
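    The sketch below illustrates, under assumed field names and an assumed HHR layout, how batch and streaming records might be fused into a per-citizen Holistic Health Record; it is not the manuscript's actual ingestion mechanism.

```python
# Illustrative sketch only: the HHR layout and source names are assumptions.
from collections import defaultdict
from typing import Iterable

def ingest(records: Iterable[dict], hhrs: dict) -> None:
    """Merge heterogeneous records into the citizen's HHR, grouped by source."""
    for rec in records:
        hhr = hhrs[rec["citizen_id"]]
        hhr.setdefault(rec["source"], []).append(
            {"type": rec["type"], "value": rec["value"], "ts": rec["ts"]})

hhrs: dict = defaultdict(dict)

batch_records = [   # e.g. a nightly hospital export
    {"citizen_id": "c1", "source": "hospital", "type": "diagnosis",
     "value": "hypertension", "ts": "2023-01-10T09:00:00"},
]
stream_records = [  # e.g. IoMT readings arriving continuously
    {"citizen_id": "c1", "source": "iomt", "type": "heart_rate",
     "value": 72, "ts": "2023-01-11T08:30:00"},
]

ingest(batch_records, hhrs)
ingest(stream_records, hhrs)
print(hhrs["c1"])   # one record holding both the hospital and the IoMT data
```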

    Internet of Medical Things (IoMT): Acquiring and Transforming Data into HL7 FHIR through 5G Network Slicing

    The Healthcare 4.0 era is surrounded by challenges ranging from the collection of Internet of Medical Things (IoMT) devices’ data to its integration and interpretation. Several techniques have been developed that, however, do not propose solutions applicable to different scenarios or domains. When dealing with healthcare data, depending on the severity and the application of the results, the data should be provided almost in real time, without any errors, inconsistencies, or misunderstandings. Hence, in this manuscript a platform is proposed for efficiently managing healthcare data by taking advantage of the latest techniques in Data Acquisition, 5G Network Slicing, and Data Interoperability. In this platform, IoMT devices’ data and network specifications can be acquired and segmented into different 5G network slices according to the severity and the computation requirements of different medical scenarios. Subsequently, transformations are performed on the data of each network slice to address data heterogeneity issues and to deliver the data of each slice in an HL7 FHIR-compliant format for further analysis.
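    As an illustration of the interoperability step, the sketch below wraps a heart-rate reading from an IoMT device into an HL7 FHIR R4 Observation resource; the patient and device identifiers and the slice tag are assumptions, while the Observation structure follows the standard layout.

```python
import json

def to_fhir_observation(reading: dict) -> dict:
    """Wrap a raw device reading in a FHIR R4 Observation resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{reading['patient_id']}"},
        "device": {"reference": f"Device/{reading['device_id']}"},
        "effectiveDateTime": reading["timestamp"],
        "valueQuantity": {"value": reading["bpm"], "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }

reading = {"patient_id": "p-01", "device_id": "wearable-7",
           "timestamp": "2023-05-04T10:15:00Z", "bpm": 68,
           "slice": "urgent-care"}   # 5G slice tag used for routing (illustrative)
print(json.dumps(to_fhir_observation(reading), indent=2))
```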

    Incremental data collection and analysis applied to the management and control of health cyber physical systems

    No full text
    The internet represents a digital space where large amounts of information, services, and data are added and exchanged daily, influencing and transforming the way people interact and communicate with each other. The great capabilities of the internet, in conjunction with the developments in the speed of data recording and retrieval, have enabled the creation of new intelligent systems that offer increased efficiency, productivity, security, and speed. In this context, Cyber-Physical Systems (CPS) have been developed, where digital and physical systems are now able to communicate directly with each other by recording the information contained in physical devices, ensuring their safe, efficient, and intelligent operation. To achieve this, CPS utilize their different layers of sensors/actuators, communication, and applications so as to cope with the emerging needs and requirements. The communication layer is also the point where the combination of CPS technologies and Internet of Things (IoT) technologies emerges. IoT is a very powerful part of CPS, which is growing at a rapid pace and includes a variety of devices (e.g. smartphones, tablets, portable devices, sensors, cameras, etc.). It is reported that out of the 19.4 billion connected devices in 2019, 8 billion were IoT-connected devices, a number that is expected to increase rapidly over the next few years, producing large volumes of data. This fact strengthens the vision of developing new communication technologies and finding new ways to synchronize all these huge numbers of existing devices. However, this vision is accompanied by a number of challenges, such as the fact that these devices produce data that often do not fit the nature of the platforms that collect them, and thus cannot be successfully analyzed by those platforms to produce the required useful knowledge and insights. Consequently, there is a great need to develop new solutions for the adaptive selection, management, and analysis of all the data produced by these heterogeneous devices. However, in order to address this challenge, it is assumed that all this data has been successfully collected from the underlying heterogeneous devices. Therefore, the challenge of integrating devices and collecting their data needs to be resolved before tackling the difficult task of successfully analyzing and exploiting the data. IoT, however, is limited in monitoring and controlling the enormous quantities of existing devices, as the emerging technologies that need to support these devices have to anticipate not only the dramatic increase in connected devices but also their heterogeneity, since they have different specifications, capabilities, and functions. Thus, it is necessary to adopt new technologies for automatically recognizing and understanding the nature of all the existing IoT devices, so that it becomes feasible to automatically collect and analyze the data generated by these devices. However, existing management and analysis techniques are able neither to cope with the complexity of the devices nor to recognize their nature, as they are both static and sensitive to new or changing functions and requirements of the existing devices.
Hence, what is needed is a generalized automated approach that can connect and integrate heterogeneous devices, provided as a basis for efficiently retrieving data from all the underlying heterogeneous devices. In addition, since IoT devices are typically characterized by a high degree of heterogeneity, they are recognized as reliable to different degrees, thus providing data of different levels of reliability. Therefore, the research challenge that emerges is, on the one hand, the difficulty of managing all these huge amounts of heterogeneous devices with their own specifications and interfaces, and on the other hand, the need for the devices themselves and the data they produce to be fully reliable. Towards this direction, what is needed is both an automated, effective way of connecting and integrating the heterogeneous devices into different IoT platforms so as to collect their data, and an automated, effective way of measuring the reliability of these devices in combination with the reliability of the data they produce. In order to effectively address all of the aforementioned challenges, the current Ph.D. thesis has as its primary scope to study, design, and implement a model for integrating data received from different autonomous and heterogeneous IoT devices, of both known and unknown nature. For this purpose, a general plug’n’play devices approach is proposed for the automatic management of heterogeneous IoT devices, estimating their levels of reliability and finally collecting data only from the devices that are reliable and relevant to each platform. This approach is implemented through an innovative interoperable mechanism that can both connect to different IoT platforms and facilitate the automatic recognition of, interaction with, and access to all underlying heterogeneous devices. In particular, the mechanism consists of four (4) stages: device discovery and connection, device type recognition, device data collection, and device and data quality estimation. Through this process, the proposed mechanism achieves three (3) distinct goals. First of all, through this mechanism it becomes feasible to identify the nature of the devices that are available each time for connection to the corresponding platforms, thus giving these platforms the ability to connect only with the devices that are associated with them, in the sense that their data is relevant and useful to the nature of these platforms. Apart from this, the mechanism can automatically collect the data of the connected devices, thereby enabling the interoperability of each corresponding platform without it needing further knowledge of the interfaces of these devices, and thus without needing to be parameterized in order to collect their data. Finally, the mechanism is able to automatically evaluate the reliability of both the connected devices and their generated data, thereby enabling each corresponding platform to use and exploit only the reliable, high-quality data produced by trusted devices. As a result, it minimizes the risk of producing erroneous result reports and of performing erroneous handling actions that could emerge from incorrect or incomplete data. The mechanism was evaluated through various experiments on different scenarios in the healthcare domain, producing quite reliable results and thus indicating its feasibility and applicability in this domain.
However, this mechanism can be widely used in many different domains beyond the healthcare domain, since it addresses the challenge of the optimal and efficient use of IoT devices regardless of their use in a specific application domain.
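    A skeleton sketch of the four-stage mechanism outlined above (device discovery and connection, type recognition, data collection, and device/data quality estimation); the class names, the quality formula, and the 0.8 threshold are hypothetical stand-ins for the thesis's actual models.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    advertised_profile: dict
    readings: list = field(default_factory=list)

def discover_devices(network: list[Device]) -> list[Device]:
    return list(network)                       # stage 1: discovery & connection

def recognise_type(device: Device) -> str:     # stage 2: device type recognition
    return device.advertised_profile.get("kind", "unknown")

def collect_data(device: Device) -> list:      # stage 3: data collection
    return device.readings

def quality_score(device: Device, data: list) -> float:  # stage 4: quality estimation
    completeness = sum(1 for d in data if d is not None) / max(len(data), 1)
    availability = device.advertised_profile.get("uptime", 0.0)
    return 0.5 * completeness + 0.5 * availability        # illustrative weighting

def ingest_reliable(network: list[Device], wanted_kind: str, threshold: float = 0.8):
    accepted = {}
    for dev in discover_devices(network):
        if recognise_type(dev) != wanted_kind:
            continue                            # irrelevant to this platform
        data = collect_data(dev)
        if quality_score(dev, data) >= threshold:
            accepted[dev.device_id] = data      # keep only reliable, relevant data
    return accepted

sensors = [Device("hr-1", {"kind": "heart_rate", "uptime": 0.99}, [70, 71, None, 69]),
           Device("cam-1", {"kind": "camera", "uptime": 0.90}, [b"frame"])]
print(ingest_reliable(sensors, wanted_kind="heart_rate"))
```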

    EverAnalyzer: A Self-Adjustable Big Data Management Platform Exploiting the Hadoop Ecosystem

    No full text
    Big Data is a phenomenon that affects today’s world, with new data being generated every second. Today’s enterprises face major challenges from increasingly diverse data, as well as from indexing, searching, and analyzing such enormous amounts of data. In this context, several frameworks and libraries for processing and analyzing Big Data exist. Among these frameworks, Hadoop MapReduce, Mahout, Spark, and MLlib appear to be the most popular, although it is unclear which of them is best suited to, and performs best in, various data processing and analysis scenarios. This paper proposes EverAnalyzer, a self-adjustable Big Data management platform built to fill this gap by exploiting all of these frameworks. The platform is able to collect data in both a streaming and a batch manner, utilizing the metadata obtained from the processing and analytical activities its users apply to the collected data. Based on this metadata, the platform recommends the optimum framework for the data processing/analytical activities that the users aim to execute. To verify the platform’s efficiency, numerous experiments were carried out using 30 diverse datasets related to various diseases. The results revealed that EverAnalyzer correctly suggested the optimum framework in 80% of the cases, indicating that the platform made the best selection in the majority of the experiments.
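    A hedged sketch of the recommendation idea behind such a platform: choose the framework that performed best on the most similar past run, judged from simple job metadata. The metadata features and past-run records below are invented for illustration and do not reflect EverAnalyzer's actual logic.

```python
import math

past_runs = [  # (job metadata, framework that performed best) - illustrative
    ({"size_gb": 50, "streaming": 0, "ml_task": 0}, "Hadoop MapReduce"),
    ({"size_gb": 60, "streaming": 0, "ml_task": 1}, "Mahout"),
    ({"size_gb": 20, "streaming": 1, "ml_task": 0}, "Spark"),
    ({"size_gb": 25, "streaming": 1, "ml_task": 1}, "MLlib"),
]

def distance(a: dict, b: dict) -> float:
    """Euclidean distance over the shared metadata features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def recommend(job_metadata: dict) -> str:
    """Return the framework of the nearest past run."""
    return min(past_runs, key=lambda run: distance(run[0], job_metadata))[1]

print(recommend({"size_gb": 30, "streaming": 1, "ml_task": 1}))  # -> "MLlib"
```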

    IoT in Healthcare: Achieving Interoperability of High-Quality Data Acquired by IoT Medical Devices

    No full text
    It is an undeniable fact that Internet of Things (IoT) technologies have become a milestone advancement in the digital healthcare domain, since the number of IoT medical devices has grown exponentially, and it is now anticipated that by 2020 there will be over 161 million of them connected worldwide. Therefore, in an era of continuous growth, IoT healthcare faces various challenges, such as the collection, quality estimation, interpretation, and harmonization of the data deriving from the existing huge amounts of heterogeneous IoT medical devices. Even though various approaches have been developed for solving each one of these challenges, none of them proposes a holistic approach for successfully achieving data interoperability between high-quality data deriving from heterogeneous devices. For that reason, in this manuscript a mechanism is proposed for effectively addressing the intersection of these challenges. Through this mechanism, the different devices’ datasets are first collected and then cleaned. Subsequently, the cleaning results are used to capture the overall data quality level of each dataset, in combination with measurements of the availability and reliability of the device that produced it. Consequently, only the high-quality data is kept and translated into a common format, ready for further utilization. The proposed mechanism is evaluated through a specific scenario, producing reliable results, achieving data interoperability of 100% accuracy and data quality of more than 90% accuracy.
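    The sketch below illustrates the final steps described above under assumed field names and thresholds: datasets passing a combined quality gate (cleaning outcome, device availability, reliability) are translated into a common format, while the rest are discarded.

```python
# Illustrative sketch, not the paper's mechanism: field names, the unit
# conversion, and the 0.9 quality threshold are assumptions.

def dataset_quality(cleaned_ratio: float, availability: float, reliability: float) -> float:
    """Combine the cleaning outcome with device availability and reliability."""
    return (cleaned_ratio + availability + reliability) / 3

def harmonise(record: dict) -> dict:
    """Translate a heterogeneous reading into one common representation."""
    value, unit = record["value"], record["unit"].lower()
    if unit in ("f", "fahrenheit"):            # normalise temperature to Celsius
        value, unit = (value - 32) * 5 / 9, "celsius"
    return {"metric": record["metric"], "value": round(value, 2), "unit": unit}

datasets = [
    {"quality": dataset_quality(0.98, 0.99, 0.95),
     "records": [{"metric": "body_temperature", "value": 98.6, "unit": "F"}]},
    {"quality": dataset_quality(0.60, 0.80, 0.70),   # fails the gate, discarded
     "records": [{"metric": "body_temperature", "value": 41.0, "unit": "celsius"}]},
]

common = [harmonise(r) for d in datasets if d["quality"] >= 0.9 for r in d["records"]]
print(common)   # only the high-quality dataset survives, now in a common format
```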

    HealthFetch: An Influence-Based, Context-Aware Prefetch Scheme in Citizen-Centered Health Storage Clouds

    No full text
    Over the past few years, increasing attention has been given to the health sector and the integration of new technologies into it. Cloud computing and storage clouds have become essentially state-of-the-art solutions for other major areas and have rapidly started to make their presence felt in the health sector as well. More and more companies are working toward a future that will allow healthcare professionals to engage more with such infrastructures, enabling a vast number of possibilities. While this is a very important step, less attention has been given to the citizens. For this reason, in this paper a citizen-centered storage cloud solution is proposed that allows citizens to hold their health data in their own hands while also enabling the exchange of these data with healthcare professionals during emergency situations. Moreover, in order to reduce the health data transmission delay, a novel context-aware prefetch engine enriched with deep learning capabilities is proposed. The proposed prefetch scheme, along with the proposed storage cloud, is put under a two-fold evaluation in several deployment and usage scenarios in order to examine its performance with respect to data transmission times, while its outcomes are also compared to other state-of-the-art solutions. The results show that the proposed solution significantly improves the download speed when compared with the storage cloud, especially when large data are exchanged. In addition, the evaluation of the proposed scheme shows that it improves the overall predictions, considering the coefficient of determination (R2 > 0.94) and the mean of errors (RMSE < 1), while also reducing the training data by 12%.
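    A hedged sketch of a context-aware prefetch decision: a toy frequency model stands in for the deep-learning predictor, ranking which records are likely to be requested next in a given context and pulling them into a local cache ahead of time. Record names and the cache API are illustrative.

```python
from collections import Counter

access_log = [  # (context, record accessed) pairs observed in the past - illustrative
    ("emergency", "allergies"), ("emergency", "blood_type"),
    ("emergency", "allergies"), ("checkup", "vaccinations"),
]

def prefetch_candidates(context: str, top_n: int = 2) -> list[str]:
    """Rank records by how often they were needed in this context."""
    counts = Counter(rec for ctx, rec in access_log if ctx == context)
    return [rec for rec, _ in counts.most_common(top_n)]

cache: dict[str, bytes] = {}

def prefetch(context: str, fetch) -> None:
    """Pull the likely-needed records into the cache before they are requested."""
    for record in prefetch_candidates(context):
        cache.setdefault(record, fetch(record))

prefetch("emergency", fetch=lambda name: f"<{name} payload>".encode())
print(sorted(cache))   # ['allergies', 'blood_type'] are already local
```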