
    A Multitier Deep Learning Model for Arrhythmia Detection

    The electrocardiogram (ECG) is employed as a primary tool for diagnosing cardiovascular diseases (CVD) in hospitals and often helps in the early detection of such ailments. ECG signals provide a framework to probe the underlying properties of the heart and enhance the initial diagnosis obtained via traditional tools and patient-doctor dialogues, giving cardiologists inferences regarding more serious cases. Notwithstanding its proven utility, deciphering large datasets to extract the appropriate information remains a challenge in ECG-based CVD diagnosis and treatment. Our study presents a deep neural network (DNN) strategy to ameliorate these difficulties. The strategy consists of a learning stage in which classification accuracy is improved via robust feature extraction, followed by a genetic algorithm (GA) process that selects the best combination of feature extraction and classification. The MIT-BIH Arrhythmia database was employed for validation, identifying five arrhythmia categories based on the Association for the Advancement of Medical Instrumentation (AAMI) standard. Compared with the state of the art in the area, the proposed technique achieves an average accuracy of 0.94 and an F1 score of 0.953. The proposed model could serve as an analytic module that alerts users and/or medical experts when anomalies are detected in the acquired ECG data in a smart healthcare framework.
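
    A minimal, hypothetical sketch of the idea described above: a genetic algorithm that searches for a good combination of feature extraction and classifier configuration for five-class (AAMI) beat classification. The synthetic beats, the candidate feature extractors, and the hidden-layer widths are illustrative assumptions, not the authors' actual pipeline.

    # GA search over (feature extractor, classifier width) combinations -- illustrative only.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 180))          # stand-in for segmented ECG beats (assumed shape)
    y = rng.integers(0, 5, size=400)         # stand-in for the five AAMI class labels

    FEATURES = ["raw", "fft", "stats"]       # candidate feature extractors (assumed)
    WIDTHS = [16, 32, 64]                    # candidate hidden-layer sizes (assumed)

    def extract(name, X):
        if name == "fft":
            return np.abs(np.fft.rfft(X, axis=1))
        if name == "stats":
            return np.column_stack([X.mean(1), X.std(1), X.min(1), X.max(1)])
        return X

    def fitness(genome):
        """Cross-validated accuracy of a small network trained on the chosen features."""
        feat, width = genome
        clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=300, random_state=0)
        return cross_val_score(clf, extract(feat, X), y, cv=3).mean()

    def mutate(genome):
        feat, width = genome
        if rng.random() < 0.5:
            feat = str(rng.choice(FEATURES))
        else:
            width = int(rng.choice(WIDTHS))
        return (feat, width)

    # Simple GA loop: keep the fitter half of the population, refill by mutating survivors.
    population = [(str(rng.choice(FEATURES)), int(rng.choice(WIDTHS))) for _ in range(6)]
    for generation in range(3):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]
        population = survivors + [mutate(g) for g in survivors]

    print("best feature/classifier combination:", max(population, key=fitness))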

    VirIoT: A Cloud of Things That Offers IoT Infrastructures as a Service

    Many cloud providers offer IoT services that simplify the collection and processing of IoT information. However, the IoT infrastructure of sensors and actuators that produces this information remains outside the cloud; therefore, application developers must install, connect, and manage it themselves. This requirement can be a market barrier, especially for small and medium software companies that cannot afford the associated infrastructural costs and would prefer to focus only on IoT application development. Motivated by the wish to eliminate this barrier, this paper proposes a Cloud of Things platform, called VirIoT, which fully brings the Infrastructure-as-a-Service model typical of cloud computing to the world of the Internet of Things. VirIoT provides users with virtual IoT infrastructures (Virtual Silos) composed of virtual things, with which users interact through dedicated, standardized broker servers whose technology can be chosen from those offered by the platform, such as oneM2M, NGSI and NGSI-LD. VirIoT allows developers to focus their efforts exclusively on IoT applications without worrying about infrastructure management, and it allows cloud providers to expand their IoT service portfolios. VirIoT uses external things and cloud/edge computing resources to deliver its IoT virtualization services. Its open-source architecture is microservice-based and runs on top of a distributed Kubernetes platform with nodes in central and edge data centers. The architecture is scalable, efficient, and able to support the continuous integration of heterogeneous things and IoT standards, taking care of interoperability issues. Using a VirIoT deployment spanning data centers in Europe and Japan, we conducted a performance evaluation with a two-fold objective: showing the efficiency and scalability of the architecture, and leveraging VirIoT's ability to integrate different IoT standards in order to make a fair comparison of some open-source IoT broker implementations, namely Mobius for oneM2M, Orion for NGSIv2, and Orion-LD and Scorpio for NGSI-LD.
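
    As an illustration of how an application might consume a Virtual Silo, the sketch below creates and reads an entity through a standard NGSI-LD broker (such as Orion-LD or Scorpio) using the NGSI-LD entity API. The broker URL and the entity layout are placeholders assumed for the example, not part of VirIoT itself.

    # Talking to a virtual silo's NGSI-LD broker -- illustrative sketch, placeholder endpoint.
    import requests

    BROKER = "http://my-virtual-silo.example.com:1026"   # placeholder broker endpoint (assumed)
    CONTEXT = "https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"

    entity = {
        "id": "urn:ngsi-ld:TemperatureSensor:vthing-001",
        "type": "TemperatureSensor",
        "temperature": {"type": "Property", "value": 21.5, "unitCode": "CEL"},
        "@context": CONTEXT,
    }

    # Create the virtual thing's entity in the silo's broker.
    resp = requests.post(
        f"{BROKER}/ngsi-ld/v1/entities",
        json=entity,
        headers={"Content-Type": "application/ld+json"},
    )
    resp.raise_for_status()

    # Read the entity back; downstream applications consume it through the same standard API.
    resp = requests.get(
        f"{BROKER}/ngsi-ld/v1/entities/{entity['id']}",
        headers={"Accept": "application/ld+json"},
    )
    print(resp.json())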

    Implementation of Blockchain-Assisted Source Routing for Traffic Management in Software-Defined Networks

    Software-Defined Networks (SDNs) split the network into control and infrastructure layers. With the control and infrastructure planes separated, new network applications can be developed more simply and with greater independence. On the other hand, the disadvantages of SDN raise a slew of questions. In large-scale networks, such as Wide Area Networks (WANs) covering huge areas, longer propagation delays contribute substantially to network convergence time. In addition, traditional SDN restricts network design flexibility because controller placement influences network performance in large-scale networks. SDN-based source routing (SR), in which the packet header specifies a packet's route, has emerged as a viable solution to the issues above. This study presents an SR-based End-to-End (E2E) traffic management framework called SoRBlock. In SoRBlock, inter-domain routing uses blockchain technology, while intra-domain routing relies on the SR technique in SDNs. The simulation results show that the proposed SR-based SoRBlock framework outperforms the traditional hierarchical routing approach (HRA) in SDN networks by lowering path setup time (PST) and the number of controller messages. When identical (i.e., same origin and target) service requests were used for all simulation runs, SoRBlock achieves almost three times lower total PST (between 45 ms and 65 ms) than the HRA method (between 130 ms and 200 ms), owing to the HRA approach's added node-controller and controller-controller latencies. When different (i.e., different origin and target) service requests were used, SoRBlock shows two times lower PST (between 75 ms and 90 ms) than HRA (between 150 ms and 175 ms). Concerning Controller Messages Processed (CMP), HRA handles nearly 50% more controller messages (between 7 and 15) than SoRBlock (between 3 and 10) as the number of domains varies, while SoRBlock's CMP (between 10 and 17) approaches that of the HRA framework (between 15 and 20) as the number of nodes per domain grows.
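
    A minimal sketch of the source-routing idea the framework builds on (not SoRBlock's implementation): the ingress node writes the whole path into the packet header as a stack of output ports, and each transit switch simply pops its own entry instead of consulting controller-installed flow state. Node and port names are invented for illustration.

    # Source routing via a port stack in the packet header -- illustrative sketch only.
    from dataclasses import dataclass, field

    @dataclass
    class Packet:
        payload: str
        route: list = field(default_factory=list)   # stack of output ports, one per hop

    def ingress_encode(packet, path_ports):
        """The domain controller computes the path once; the ingress pushes it into the header."""
        packet.route = list(path_ports)
        return packet

    def switch_forward(packet, switch_id):
        """Each transit switch pops the next output port; no per-flow table entries are needed."""
        out_port = packet.route.pop(0)
        print(f"{switch_id}: forwarding on port {out_port}, remaining route {packet.route}")
        return out_port

    # Example: a 3-hop intra-domain path computed by the domain controller.
    pkt = ingress_encode(Packet(payload="hello"), path_ports=[2, 5, 1])
    for sw in ("s1", "s2", "s3"):
        switch_forward(pkt, sw)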

    Explainable and Resource-Efficient Stream Processing Through Provenance and Scheduling

    In our era of big data, information is captured at unprecedented volumes and velocities, with technologies such as Cyber-Physical Systems making quick decisions based on the processing of streaming, unbounded datasets. In such scenarios, it can be beneficial to process the data in an online manner, using the stream processing paradigm implemented by Stream Processing Engines (SPEs). While SPEs enable high-throughput, low-latency analysis, they face challenges connected to evolving deployment scenarios, like the increasing use of heterogeneous, resource-constrained edge devices together with cloud resources and the increasing user expectations for usability, control, and resource-efficiency, on par with features provided by traditional databases. This thesis tackles open challenges regarding making stream processing more user-friendly, customizable, and resource-efficient. The first part outlines our work, providing high-level background information, descriptions of the research problems, and our contributions. The second part presents our three state-of-the-art frameworks for explainable data streaming using data provenance, which can help users of streaming queries to identify important data points, explain unexpected behaviors, and aid query understanding and debugging. (A) GeneaLog provides backward provenance, allowing users to identify the inputs that contributed to the generation of each output of a streaming query. (B) Ananke is the first framework to provide a duplicate-free graph of live forward provenance, enabling easy bidirectional tracing of input-output relationships in streaming queries and identifying data points that have finished contributing to results. (C) Erebus is the first framework that allows users to define expectations about the results of a streaming query, validating whether these expectations are met or otherwise providing explanations in the form of why-not provenance. The third part presents techniques for execution efficiency through custom scheduling, introducing our state-of-the-art scheduling frameworks that control resource allocation and achieve user-defined performance goals. (D) Haren is an SPE-agnostic user-level scheduler that can efficiently enforce user-defined scheduling policies. (E) Lachesis is a standalone scheduling middleware that requires no changes to SPEs but instead directly guides the scheduling decisions of the underlying Operating System. Our extensive evaluations using real-world SPEs and workloads show that our work significantly improves over the state of the art while introducing only small performance overheads.
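
    A simplified sketch of the backward-provenance idea (assumed for illustration, not GeneaLog's actual mechanism): every output of a streaming operator carries the identifiers of the source tuples that contributed to it, so each result can be traced back to its inputs. The tumbling-window average and tuple layout below are invented for the example.

    # Backward provenance through a streaming aggregate -- illustrative sketch only.
    from dataclasses import dataclass
    from collections import deque

    @dataclass(frozen=True)
    class STuple:
        id: int
        value: float
        provenance: frozenset        # ids of contributing source tuples

    def tumbling_avg(stream, size=3):
        """A tumbling-window average that unions the provenance of the tuples it consumes."""
        window, out_id = deque(), 1000
        for t in stream:
            window.append(t)
            if len(window) == size:
                avg = sum(x.value for x in window) / size
                prov = frozenset().union(*(x.provenance for x in window))
                yield STuple(out_id, avg, prov)
                out_id += 1
                window.clear()

    inputs = [STuple(i, float(i), frozenset({i})) for i in range(6)]
    for result in tumbling_avg(inputs):
        print(f"output {result.id}: avg={result.value:.1f}, derived from inputs {sorted(result.provenance)}")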

    Secure and Efficient Models for Retrieving Data from Encrypted Databases in Cloud

    Recently, database users have begun to use cloud database services to outsource their databases. The reason for this is the high computation speed and the huge storage capacity that cloud owners provide at low prices. However, despite the attractiveness of the cloud computing environment to database users, privacy issues remain a cause for concern for database owners since data access is out of their control. Encryption is the only way of assuaging users' fears surrounding data privacy, but executing Structured Query Language (SQL) queries over encrypted data is a challenging task, especially if the data are encrypted by a randomized encryption algorithm. Many researchers have addressed the privacy issues by encrypting the data using deterministic, onion-layer, or homomorphic encryption. Nevertheless, even with these systems, the encrypted data can still be subjected to attack. In this research, we first propose an indexing scheme to encode the original table's tuples into bit vectors (BVs) prior to encryption. The resulting index is then used to narrow the range of encrypted records retrieved from the cloud to a small set of candidates for the user's query. Based on the indexing scheme, we then design three different models to execute SQL queries over the encrypted data. The data are encrypted by a single randomized encryption algorithm, namely the Advanced Encryption Standard in CBC mode (AES-CBC). In each proposed scheme, we use a different (secure) method for storing and maintaining the index values (BVs), either on the user's side or at the cloud server, and we extend each system to support most relational algebra operators, such as select and join. Implementation and evaluation of the proposed systems reveal that they are practical and efficient at reducing both the computation and space overhead when compared with state-of-the-art systems such as CryptDB.
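
    A simplified, hypothetical sketch of the indexing idea: encode each tuple into a bit vector over value buckets before encrypting the tuple with AES-CBC, then use the bit vectors to narrow the set of ciphertexts a query must fetch. Bucket boundaries, key handling, and the record layout are assumptions made purely for illustration.

    # Bit-vector index over buckets plus AES-CBC record encryption -- illustrative sketch only.
    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = os.urandom(32)                                  # AES-256 key kept by the data owner
    BUCKETS = [(0, 20), (20, 40), (40, 60), (60, 120)]    # assumed ranges over an "age" column

    def bit_vector(age):
        """Set one bit per bucket the value falls into."""
        bv = 0
        for i, (lo, hi) in enumerate(BUCKETS):
            if lo <= age < hi:
                bv |= 1 << i
        return bv

    def encrypt_record(record):
        iv = os.urandom(16)
        padder = padding.PKCS7(128).padder()
        data = padder.update(record) + padder.finalize()
        enc = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
        return iv + enc.update(data) + enc.finalize()

    # Outsourced table: (bit vector, AES-CBC ciphertext) pairs stored at the cloud server.
    rows = [("alice", 25), ("bob", 52), ("carol", 17)]
    outsourced = [(bit_vector(age), encrypt_record(f"{name},{age}".encode())) for name, age in rows]

    # Query "age BETWEEN 20 AND 59": build the query bit vector, fetch only matching ciphertexts.
    query_bv = bit_vector(25) | bit_vector(52)
    candidates = [ct for bv, ct in outsourced if bv & query_bv]
    print(f"{len(candidates)} of {len(outsourced)} ciphertexts retrieved for decryption")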

    Defining Service Level Agreements in Serverless Computing

    The emergence of serverless computing has brought significant advancements to the delivery of computing resources to cloud users. With the abstraction of infrastructure, ecosystem, and execution environments, users could focus on their code while relying on the cloud provider to manage the abstracted layers. In addition, desirable features such as autoscaling and high availability became a provider's responsibility and can be adopted by the user's application at no extra overhead. Despite such advancements, significant challenges must be overcome as applications transition from monolithic stand-alone deployments to the ephemeral and stateless microservice model of serverless computing. These challenges pertain to the uniqueness of the conceptual and implementation models of serverless computing. One of the notable challenges is the complexity of defining Service Level Agreements (SLA) for serverless functions. As the serverless model shifts the administration of resources, ecosystem, and execution layers to the provider, users become mere consumers of the provider's abstracted platform with no insight into its performance. Suboptimal conditions of the abstracted layers are not visible to the end-user, who has no means to assess their performance. Thus, SLA in serverless computing must take into consideration the unique abstraction of its model. This work investigates the Service Level Agreement (SLA) modeling of serverless functions' and serverless chains' executions. We highlight how serverless SLA fundamentally differs from earlier cloud delivery models. We then propose an approach to define SLA for serverless functions by utilizing resource utilization fingerprints for functions' executions and a method to assess whether executions adhere to that SLA. We evaluate the approach's accuracy in detecting SLA violations for a broad range of serverless application categories. Our validation results illustrate a high accuracy in detecting SLA violations resulting from resource contentions and provider's ecosystem degradations. We conclude by presenting the empirical validation of our proposed approach, which could detect Execution-SLA violations with accuracy up to 99%.
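
    A hypothetical sketch of the core idea: characterize a serverless function by a resource-utilization "fingerprint" (per-metric mean and spread over reference executions) and flag a new execution as a potential Execution-SLA violation when its metrics fall outside the fingerprint's tolerance. The metric names and the 3-sigma rule below are illustrative assumptions, not the paper's exact method.

    # Fingerprint-based SLA violation check -- illustrative sketch only.
    import statistics

    METRICS = ("duration_ms", "cpu_util", "mem_mb")

    def build_fingerprint(reference_runs):
        """Per-metric mean and standard deviation over reference executions."""
        return {
            m: (statistics.mean(r[m] for r in reference_runs),
                statistics.stdev(r[m] for r in reference_runs))
            for m in METRICS
        }

    def violated_metrics(execution, fingerprint, k=3.0):
        """Return the metrics that deviate more than k standard deviations from the fingerprint."""
        return [
            m for m in METRICS
            if abs(execution[m] - fingerprint[m][0]) > k * max(fingerprint[m][1], 1e-9)
        ]

    reference = [
        {"duration_ms": 120, "cpu_util": 0.42, "mem_mb": 128},
        {"duration_ms": 131, "cpu_util": 0.45, "mem_mb": 130},
        {"duration_ms": 125, "cpu_util": 0.40, "mem_mb": 127},
    ]
    fp = build_fingerprint(reference)

    suspect = {"duration_ms": 410, "cpu_util": 0.91, "mem_mb": 129}   # e.g. a run under contention
    print("violated metrics:", violated_metrics(suspect, fp))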

    Who's Behind ICE: The Tech and Data Companies Fueling Deportations

    Tech is transforming immigration enforcement. As advocates have known for some time, the immigration and criminal justice systems have powerful allies in Silicon Valley and Congress, with technology companies playing an increasingly central role in facilitating the expansion and acceleration of arrests, detentions, and deportations. What is less known outside of Silicon Valley is the long history of the technology industry's "revolving door" relationship with federal agencies, how the technology industry and its products and services are now actually circumventing city- and state-level protections for vulnerable communities, and what we can do to expose and hold these actors accountable. Mijente, the National Immigration Project, and the Immigrant Defense Project, immigration and Latinx-focused organizations working at the intersection of new technology, policing, and immigration, commissioned Empower LLC to undertake critical research about the multi-layered technology infrastructure behind the accelerated and expansive immigration enforcement we're seeing today, and the companies that are behind it. The report opens a window into the Department of Homeland Security's (DHS) plans for immigration policing through a scheme of tech and database policing, the mass scale and scope of the tech-based systems, the contracts that support it, and the connections between Washington, D.C., and Silicon Valley. It surveys and investigates the key contracts that technology companies have with DHS, particularly within Immigration and Customs Enforcement (ICE), and their success in signing new contracts through intensive and expensive lobbying.