16 research outputs found

    Agent Productivity Modeling in a Call Center Domain Using Attentive Convolutional Neural Networks

    Article number 5489
    Measuring the productivity of an agent in a call center domain is a challenging task, and current systems commonly rely on subjective measures for evaluation. In this paper, we propose an objective framework for modeling agent productivity in real estate call centers based on speech signal processing. The problem is formulated as a binary classification task using deep learning methods. We explore several designs for the classifier based on convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and an attention layer. The corpus consists of seven hours of calls collected and annotated from three different call centers. The results show that the speech-based approach can lead to a significant improvement (1.57% absolute) over a robust text-based baseline system.
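The attention layer mentioned in the abstract can be illustrated with a minimal sketch (hypothetical shapes and names, not the authors' implementation): given frame-level features from a CNN/LSTM front end, attention assigns a softmax weight to each frame and pools the sequence into a single utterance-level vector for the binary classifier.

```python
import numpy as np

def attention_pool(features, w):
    """Pool a (time, dim) feature sequence into one (dim,) vector.

    features: frame-level outputs of a CNN/LSTM front end.
    w: learned scoring vector of shape (dim,).
    """
    scores = features @ w                          # one score per frame
    scores = scores - scores.max()                 # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights over time
    return alpha @ features                        # weighted sum over frames

rng = np.random.default_rng(0)
feats = rng.standard_normal((50, 16))   # 50 frames, 16-dim features
w = rng.standard_normal(16)
utterance_vec = attention_pool(feats, w)
print(utterance_vec.shape)  # (16,)
```

With a zero scoring vector the weights are uniform and the pooled vector is just the frame mean, which is a convenient sanity check.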

    A High-Accuracy and Power-Efficient Self-Optimizing Wireless Water Level Monitoring IoT Device for Smart City

    In this paper, a novel self-optimizing water level monitoring methodology is proposed for smart city applications. Considering system maintenance, the efficiency of power consumption and accuracy are important for Internet of Things (IoT) devices and systems. A multi-step measurement mechanism and a power self-charging process are proposed in this study to improve the efficiency of a device for water level monitoring applications. The proposed methodology improved accuracy by 0.16–0.39% by moving the sensor to estimate the distance relative to different locations. Additional power is generated while executing multi-step measurements, and the power self-optimizing process dynamically adjusts the settings to balance the charging and discharging currents. The battery level can be kept above 50% in a stable charging simulation. These methodologies were successfully implemented using an embedded control device, an ultrasonic sensor module, a LoRa transmission module, and a stepper motor. According to the experimental results, the proposed multi-step methodology offers high accuracy and efficient power consumption for water level monitoring applications.
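The multi-step measurement idea can be sketched with a toy simulation (hypothetical names and noise values, not the paper's firmware): the stepper motor moves the sensor to several offsets, a distance reading is taken at each, the known offset is subtracted so every reading estimates the same level, and the readings are averaged to reduce the error of any single measurement.

```python
import statistics

def multi_step_level(read_distance, offsets):
    """Estimate the water level by averaging readings from several positions.

    read_distance(offset): measured distance (with noise) after the stepper
    moves the ultrasonic sensor by `offset` units. Subtracting the offset
    makes every reading an estimate of the same underlying level.
    """
    readings = [read_distance(o) - o for o in offsets]
    return statistics.mean(readings)

# Simulated sensor: true distance 100.0, a fixed error at each position.
noise = {0: 0.4, 5: -0.2, 10: 0.1, 15: -0.3}
fake_sensor = lambda o: 100.0 + o + noise[o]

level = multi_step_level(fake_sensor, [0, 5, 10, 15])
print(round(level, 3))  # 100.0
```

Any single position here is off by up to 0.4 units, while the four-step average cancels the errors, which is the accuracy gain the abstract attributes to moving the sensor.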

    A dense neural network approach for detecting clone ID attacks on the RPL protocol of the IoT

    At present, new data sharing technologies, such as those used in the Internet of Things (IoT) paradigm, are being extensively adopted. For this reason, intelligent security controls have become imperative. According to good practices and security information standards, particularly those regarding defense in depth, several defensive layers are required to protect information assets. Within the context of IoT cyber-attacks, it is fundamental to continuously adapt new detection mechanisms for growing IoT threats, specifically for those becoming more sophisticated within mesh networks, such as identity theft and cloning. Therefore, current applications, such as Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and Security Information and Event Management Systems (SIEM), are becoming inadequate for accurately handling novel security incidents, due to their signature-based detection procedures, which rely on matching and flagging anomalous patterns. This project focuses on a seldom-investigated identity attack, the Clone ID attack, directed at the Routing Protocol for Low Power and Lossy Networks (RPL), the underlying technology for most IoT devices. Hence, a robust Artificial Intelligence-based protection framework is proposed, in order to tackle major identity impersonation attacks, which classical applications are prone to misidentifying. On this basis, unsupervised pre-training techniques are employed to select key characteristics from RPL network samples. Then, a Dense Neural Network (DNN) is trained to maximize deep feature engineering, with the aim of improving classification results to protect against malicious counterfeiting attempts.
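The unsupervised feature-selection step described above can be illustrated with a simple stand-in (variance-based selection in plain NumPy; a generic technique, not necessarily the authors' pre-training method): columns that barely vary across RPL traffic samples carry little signal and are dropped before the dense network is trained.

```python
import numpy as np

def select_features(X, min_variance=0.01):
    """Keep the columns of X whose sample variance exceeds a threshold."""
    keep = X.var(axis=0) > min_variance
    return X[:, keep], keep

# Toy RPL-style samples: the second column is constant and gets dropped.
X = np.array([[1.0, 5.0, 0.2],
              [3.0, 5.0, 0.9],
              [2.0, 5.0, 0.4]])
reduced, mask = select_features(X)
print(mask.tolist())  # [True, False, True]
```

The reduced matrix then feeds the DNN classifier; in practice an autoencoder or similar unsupervised model would play this role on real RPL captures.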

    SACA: Self-Aware Communication Architecture for IoT Using Mobile Fog Servers


    A smart water metering deployment based on the fog computing paradigm

    In this paper, we look into smart water metering infrastructures that enable continuous, on-demand and bidirectional data exchange between metering devices, water flow equipment, utilities and end-users. We focus on the design, development and deployment of such infrastructures as part of larger smart city infrastructures. Until now, such critical smart city infrastructures have been developed following a cloud-centric paradigm where all the data are collected and processed centrally using cloud services to create real business value. Cloud-centric approaches need to address several performance issues at all levels of the network, as massive metering datasets are transferred to distant machine clouds while respecting issues like security and data privacy. Our solution uses the fog computing paradigm to provide a system where the computational resources already available throughout the network infrastructure are utilized to greatly facilitate the analysis of fine-grained water consumption data collected by the smart meters, thus significantly reducing the overall load on network and cloud resources. Details of the system's design are presented along with a pilot deployment in a real-world environment. The performance of the system is evaluated in terms of network utilization and computational performance. Our findings indicate that the fog computing paradigm can be applied to a smart grid deployment to effectively reduce the data volume exchanged between the different layers of the architecture and provide better overall computational, security and privacy capabilities to the system.
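The load reduction described above comes from aggregating fine-grained readings at the fog layer and forwarding only summaries to the cloud. A minimal sketch (hypothetical window size and summary format, not the deployed system):

```python
def fog_summarize(readings, window):
    """Aggregate fine-grained consumption readings into per-window summaries.

    Instead of forwarding every raw meter reading to the cloud, the fog
    node sends one (total, min, max) triple per window of `window` samples,
    shrinking the data volume crossing the fog-to-cloud link.
    """
    out = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        out.append((sum(chunk), min(chunk), max(chunk)))
    return out

raw = [0.2, 0.1, 0.0, 0.3, 0.2, 0.2, 0.1, 0.0]   # 8 raw samples
summary = fog_summarize(raw, window=4)
print(len(raw), "->", len(summary), "messages")   # 8 -> 2 messages
```

The trade-off is the usual one in fog architectures: fine-grained data stays near the meters for local analysis, while the cloud receives only what it needs for billing and long-term analytics.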

    A Practical Evaluation of a High-Security Energy-Efficient Gateway for IoT Fog Computing Applications

    Fog computing extends cloud computing to the edge of a network, enabling new Internet of Things (IoT) applications and services, which may involve critical data that require privacy and security. In an IoT fog computing system, three elements can be distinguished: IoT nodes that collect data, the cloud, and interconnected IoT gateways that exchange messages with the IoT nodes and with the cloud. This article focuses on securing IoT gateways, which are assumed to be constrained in terms of computational resources, but are able to offload some processing from the cloud and to reduce the latency of responses to the IoT nodes. However, it is usually taken for granted that IoT gateways have direct access to the electrical grid, which is not always the case: in mission-critical applications like natural disaster relief or environmental monitoring, it is common to deploy IoT nodes and gateways in large areas where electricity comes from solar or wind energy that charges the batteries powering every device. In this article, how to secure IoT gateway communications while minimizing power consumption is analyzed. The throughput and power consumption of Rivest–Shamir–Adleman (RSA) and Elliptic Curve Cryptography (ECC) are considered, since they are very popular but have not been thoroughly analyzed when applied to IoT scenarios. Moreover, the most widespread Transport Layer Security (TLS) cipher suites use RSA as the main public key-exchange algorithm, but the key sizes needed are not practical for most IoT devices and cannot be scaled to high security levels. In contrast, ECC represents a much lighter and more scalable alternative. Thus, RSA and ECC are compared at equivalent security levels, and power consumption and data throughput are measured using a testbed of IoT gateways. The measurements obtained indicate that, in the specific fog computing scenario proposed, ECC is clearly a much better alternative than RSA, obtaining energy consumption reductions of up to 50% and a data throughput that doubles that of RSA in most scenarios. These conclusions are then corroborated by a frame temporal analysis of Ethernet packets. In addition, current data compression algorithms are evaluated, concluding that, when dealing with the small payloads typical of IoT applications, they do not pay off in terms of real data throughput and power consumption.
    Funding: Galicia, Consellería de Cultura, Educación e Ordenación Universitaria (ED431C 2016-045, ED341D2016/012, ED431G/0); Agencia Estatal de Investigación (España) (TEC2013-47141-C4-1-R, TEC2015-69648-REDC, TEC2016-75067-C4-1-R).
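The key-size scaling behind the RSA/ECC comparison can be made concrete with the NIST SP 800-57 comparable-strength table (reference figures, not the article's measurements): ECC key sizes grow roughly linearly with the security level, while RSA key sizes grow much faster, which is why RSA becomes impractical for constrained gateways at high security levels.

```python
# NIST SP 800-57 comparable key sizes (bits) per symmetric security level.
EQUIVALENT_KEYS = {
    112: {"rsa": 2048, "ecc": 224},
    128: {"rsa": 3072, "ecc": 256},
    192: {"rsa": 7680, "ecc": 384},
    256: {"rsa": 15360, "ecc": 521},
}

for level, sizes in EQUIVALENT_KEYS.items():
    ratio = sizes["rsa"] / sizes["ecc"]
    print(f"{level}-bit security: RSA {sizes['rsa']} vs ECC {sizes['ecc']} "
          f"(RSA key ~{ratio:.1f}x larger)")
```

At the 256-bit level an RSA key is nearly 30 times larger than its ECC counterpart, and the cost of RSA private-key operations grows even faster than the key size, which matches the battery-driven argument for ECC made in the article.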

    An Efficient Deep-Learning-Based Detection and Classification System for Cyber-Attacks in IoT Communication Networks

    With the rapid expansion of intelligent resource-constrained devices and high-speed communication technologies, the Internet of Things (IoT) has earned wide recognition as the primary standard for low-power lossy networks (LLNs). Nevertheless, IoT infrastructures are vulnerable to cyber-attacks due to the constraints in computation, storage, and communication capacity of the endpoint devices. On one side, the majority of newly developed cyber-attacks are formed by slightly mutating formerly established cyber-attacks to produce a new attack that tends to be treated as normal traffic through the IoT network. On the other side, coupling deep learning techniques with the cybersecurity field has become a recent inclination of many security applications due to their impressive performance. In this paper, we provide the comprehensive development of a new intelligent and autonomous deep-learning-based detection and classification system for cyber-attacks in IoT communication networks that leverages the power of convolutional neural networks, abbreviated as IoT-IDCS-CNN (IoT-based Intrusion Detection and Classification System using Convolutional Neural Networks). The proposed IoT-IDCS-CNN makes use of high-performance computing that employs robust Compute Unified Device Architecture (CUDA) based Nvidia GPUs (Graphics Processing Units) and parallel processing that employs high-speed Intel Core i9-based CPUs. In particular, the proposed system is composed of three subsystems: a feature engineering subsystem, a feature learning subsystem, and a traffic classification subsystem. All subsystems were developed, verified, integrated, and validated in this research. To evaluate the developed system, we employed the Network Security Laboratory-Knowledge Discovery Databases (NSL-KDD) dataset, which includes all the key attacks in IoT computing.
    The simulation results demonstrated greater than 99.3% and 98.2% cyber-attack classification accuracy for the binary-class classifier (normal vs. anomaly) and the multiclass classifier (five categories), respectively. The proposed system was validated using a K-fold cross-validation method and was evaluated using the confusion matrix parameters (i.e., true negative (TN), true positive (TP), false negative (FN), false positive (FP)), along with other classification performance metrics, including precision, recall, F1-score, and false alarm rate. The test and evaluation results of the IoT-IDCS-CNN system outperformed many recent machine-learning-based IDCS systems in the same area of study.
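The evaluation metrics listed above follow directly from the confusion-matrix counts; a minimal reference implementation of the standard formulas (generic definitions, not the authors' code, with hypothetical counts):

```python
def classification_metrics(tp, tn, fp, fn):
    """Precision, recall, F1, false-alarm rate and accuracy from counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. detection rate
    f1 = 2 * precision * recall / (precision + recall)
    false_alarm = fp / (fp + tn)       # normal traffic flagged as attack
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "false_alarm": false_alarm, "accuracy": accuracy}

# Hypothetical binary-class counts (normal vs. anomaly) on a test split.
m = classification_metrics(tp=980, tn=990, fp=10, fn=20)
print(round(m["accuracy"], 4))   # 0.985
```

Reporting the false-alarm rate alongside accuracy matters for intrusion detection because a classifier that looks accurate can still flood operators with false positives on imbalanced traffic.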

    A novel ensemble of hybrid intrusion detection system for detecting internet of things attacks

    The Internet of Things (IoT) has been rapidly evolving and now affects everything from everyday life to large industrial systems. Unfortunately, this has attracted the attention of cybercriminals, who have made IoT a target of malicious activities, opening the door to possible attacks on end nodes. Due to the large number and diverse types of IoT devices, it is a challenging task to protect the IoT infrastructure using a traditional intrusion detection system. To protect IoT devices, a novel ensemble Hybrid Intrusion Detection System (HIDS) is proposed by combining a C5 classifier and a One-Class Support Vector Machine classifier. HIDS combines the advantages of a Signature Intrusion Detection System (SIDS) and an Anomaly-based Intrusion Detection System (AIDS). The aim of this framework is to detect both well-known intrusions and zero-day attacks with high detection accuracy and low false-alarm rates. The proposed HIDS is evaluated using the Bot-IoT dataset, which includes legitimate IoT network traffic and several types of attacks. Experiments show that the proposed hybrid IDS provides a higher detection rate and a lower false positive rate than the SIDS and AIDS techniques alone.
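The SIDS/AIDS combination can be sketched as a toy two-stage detector (hypothetical features, signatures and threshold, standing in for the C5 and one-class SVM stages): known signatures are matched first, and traffic that matches no signature is passed to an anomaly scorer that flags large deviations from a learned benign profile, which is how zero-day attacks can still be caught.

```python
def hybrid_ids(flow, signatures, normal_mean, threshold):
    """Two-stage detector: signature match first, anomaly score second.

    flow: feature vector of one network flow.
    signatures: set of known-malicious flows (stage 1, SIDS-like).
    normal_mean / threshold: profile of benign traffic (stage 2, AIDS-like).
    """
    if tuple(flow) in signatures:
        return "known attack"
    # Euclidean distance from the benign profile as a crude anomaly score.
    dist = sum((a - b) ** 2 for a, b in zip(flow, normal_mean)) ** 0.5
    return "zero-day suspect" if dist > threshold else "benign"

sigs = {(9.0, 9.0)}
print(hybrid_ids([9.0, 9.0], sigs, [1.0, 1.0], 3.0))  # known attack
print(hybrid_ids([8.0, 8.0], sigs, [1.0, 1.0], 3.0))  # zero-day suspect
print(hybrid_ids([1.5, 0.5], sigs, [1.0, 1.0], 3.0))  # benign
```

The second flow matches no signature yet sits far from the benign profile, so only the anomaly stage catches it, mirroring the zero-day coverage the ensemble is designed for.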