4 research outputs found
Geographic information extraction from texts
A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although substantial progress has been made in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data, to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, and also to identify research gaps in geographic information extraction from texts.
IoT Health Devices: Exploring Security Risks in the Connected Landscape
The concept of the Internet of Things (IoT) spans decades, and the same can be said for its inclusion in healthcare. The IoT is an attractive target in medicine; it offers considerable potential in expanding care. However, applying the IoT in healthcare is fraught with an array of challenges and vulnerabilities: because patient-specific data are available to access, attack surfaces widen and the potential damage deepens, both to consumers and to their confidence in health systems. Further, when IoT health devices (IoTHDs) are developed, a diverse range of attacks are possible. To understand the risks in this new landscape, it is important to understand the architecture of IoTHDs, their operations, and the social dynamics that may govern their interactions. This paper aims to document and map the IoTHD landscape, lay the groundwork for better understanding security risks in emerging IoTHD modalities through a multi-layer approach, and suggest means for improved governance and interaction. We also discuss technological innovations expected to set the stage for novel exploits leading into the middle and latter parts of the 21st century.
A Novel Fragile Zero-Watermarking Algorithm for Digital Medical Images
The wireless transmission of patients’ particulars and medical data to a specialised centre after an initial screening at a remote health facility may pose threats to patients’ data privacy and integrity. Although watermarking can be used to mitigate such risks, it should not degrade the medical data, because any change in the data characteristics may lead to a false diagnosis. Hence, zero watermarking can be helpful in these circumstances. At the same time, the transmitted data must raise a warning in case of tampering or a malicious attack; thus, the watermarking should be fragile in nature. Consequently, a novel hybrid approach using fragile zero watermarking is proposed in this study. Visual cryptography and chaotic randomness are major components of the proposed algorithm, preventing any breach of information through an illegitimate attempt. The proposed algorithm is evaluated using two datasets: the Digital Database for Screening Mammography and the Mini Mammographic Image Analysis Society database. In addition, a breast cancer detection system using a convolutional neural network is implemented to analyse the diagnosis in case of a malicious attack and after watermark insertion. The experimental results indicate that the proposed algorithm is reliable for privacy protection and data authentication.
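The abstract above does not give the algorithm's details, but the general idea behind fragile zero watermarking with chaotic randomness can be sketched: the medical image is never modified; instead, a binary feature map is extracted from it and combined with a chaotic key stream and the watermark to form an ownership share. Any tampering changes the recomputed features and so corrupts the recovered watermark. The sketch below is a minimal toy illustration of that principle, not the paper's actual method; all function names, the logistic-map key, and the block-mean feature are hypothetical choices.

```python
import numpy as np

def logistic_sequence(length, x0=0.714, r=3.99):
    """Chaotic logistic-map key stream; x0 and r act as the secret key."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        seq[i] = x
    return (seq > 0.5).astype(np.uint8)  # binarise to a bit stream

def feature_bits(image, block=8):
    """Binary feature map: each block's mean compared with the global mean.
    Pixel changes flip nearby bits, which is what makes the scheme fragile."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    blocks = image[:h, :w].reshape(h // block, block, w // block, block)
    means = blocks.mean(axis=(1, 3))
    return (means > image.mean()).astype(np.uint8).ravel()

def make_ownership_share(image, watermark_bits):
    """Zero watermarking: the image itself is untouched. The stored share
    is features XOR chaotic key XOR watermark."""
    f = feature_bits(image)
    n = min(len(f), len(watermark_bits))
    key = logistic_sequence(n)
    return f[:n] ^ key ^ watermark_bits[:n]

def verify(image, share, watermark_bits):
    """Recompute features; tampering corrupts the recovered watermark."""
    f = feature_bits(image)
    n = len(share)
    key = logistic_sequence(n)
    recovered = f[:n] ^ key ^ share
    return np.array_equal(recovered, watermark_bits[:n])
```

Because the share, not the image, carries the watermark, diagnostic quality is preserved by construction; the fragility comes from recomputing the features at verification time.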
Design and implementation of WCET analyses: including a case study on multi-core processors with shared buses
For safety-critical real-time embedded systems, the worst-case execution time (WCET) analysis — determining an upper bound on the possible execution times of a program — is an important part of the system verification. Multi-core processors share resources (e.g. buses and caches) between multiple processor cores and, thus, complicate the WCET analysis, as the execution times of a program executed on one processor core significantly depend on the programs executed in parallel on the concurrent cores. We refer to this phenomenon as shared-resource interference. This thesis proposes a novel way of modeling shared-resource interference during WCET analysis. It enables an efficient analysis — as it only considers one processor core at a time — and it is sound for hardware platforms exhibiting timing anomalies. Moreover, this thesis demonstrates how to realize a timing-compositional verification on top of the proposed modeling scheme. In this way, this thesis closes the gap between modern hardware platforms, which exhibit timing anomalies, and existing schedulability analyses, which rely on timing compositionality. In addition, this thesis proposes a novel method for calculating an upper bound on the amount of interference that a given processor core can generate in any time interval of at most a given length. Our experiments demonstrate that the novel method is more precise than existing methods.
Funded by the Deutsche Forschungsgemeinschaft (DFG) as part of the Transregional Collaborative Research Centre SFB/TR 14 (AVACS).
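To give a feel for how a per-interval interference bound plugs into a timing-compositional WCET bound, here is a deliberately simplified Python sketch. It is not the thesis's analysis: all parameters (a minimum gap between bus accesses, a fixed per-access bus delay) are hypothetical, and real analyses derive such curves from the programs and the microarchitecture. The sketch only shows the compositional structure: isolation WCET plus the interference every concurrent core can inject, iterated to a fixed point because added interference lengthens the execution interval, which in turn admits more interference.

```python
import math

def interference_bound(delta, min_gap, access_delay):
    """Upper bound (in cycles) on the bus interference one core can
    generate in any interval of length `delta`: at most one access per
    `min_gap` cycles, each occupying the shared bus for `access_delay`
    cycles. Toy model with hypothetical parameters."""
    if min_gap <= 0:
        raise ValueError("min_gap must be positive")
    max_accesses = math.ceil(delta / min_gap)
    return max_accesses * access_delay

def compositional_wcet(wcet_alone, cores):
    """Timing-compositional bound: WCET in isolation plus the interference
    all concurrent cores can inject while the task runs. Iterates to a
    fixed point; converges when each core's access_delay < min_gap."""
    bound = wcet_alone
    while True:
        new_bound = wcet_alone + sum(
            interference_bound(bound, gap, delay) for (gap, delay) in cores)
        if new_bound == bound:
            return bound  # fixed point: the interval admits no extra interference
        bound = new_bound
```

For example, a task with an isolation WCET of 1000 cycles, run alongside one core that accesses the bus at most once per 50 cycles for 10 cycles per access, converges to a bound of 1250 cycles. Considering only one concurrent core's behavior at a time is what keeps this style of analysis efficient.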