15 research outputs found

    Efficient Next-Hop Selection in Multi-Hop Routing for IoT Enabled Wireless Sensor Networks

    No full text
    The Internet of Things (IoT) paradigm allows the integration of the cyber and physical worlds and other emerging technologies. IoT-enabled wireless sensor networks (WSNs) are rapidly gaining interest due to their ability to aggregate sensed data and transmit it towards central or intermediate repositories, such as computational clouds and fogs. This paper presents an efficient multi-hop routing protocol (EMRP) for data dissemination in IoT-enabled WSNs that involves hierarchy-based, energy-efficient routing. It employs a rank-based next-hop selection mechanism that considers the residual energy of each device when choosing the route for data exchange. We extracted the residual energy at each node and evaluated it against the connection degree to determine the maximum rank. This allowed us to identify the time slots for measuring the lifetime of the network. We also used the battery expiry time of the first node to identify the network expiry time. We validated our work through extensive simulations using Network Simulator, implementing TCL scripts and C code to configure low-power sensing devices, cluster heads, and sink nodes, and extracting results from the trace files using AWK scripts. Results demonstrate that the proposed EMRP outperforms existing related schemes in terms of average lifetime, packet delivery ratio, time slots, communication loss, communication area, first-node expiry, number of alive nodes, and residual energy.
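
    The rank-based selection described above fits in a few lines. The following is a minimal sketch, assuming a rank defined as the product of residual energy and connection degree; the field names and weighting are illustrative assumptions, not the authors' exact formulation:

```python
# Minimal sketch of EMRP-style next-hop ranking (illustrative assumptions:
# rank = residual_energy * degree; dict-based node records).

def rank(node):
    """Rank a candidate by residual energy weighted by its connection degree."""
    return node["residual_energy"] * node["degree"]

def select_next_hop(neighbors):
    """Pick the neighbor with the maximum rank; None if no candidate remains."""
    alive = [n for n in neighbors if n["residual_energy"] > 0.0]
    return max(alive, key=rank, default=None)

neighbors = [
    {"id": "n1", "residual_energy": 0.8, "degree": 3},
    {"id": "n2", "residual_energy": 0.5, "degree": 6},
]
print(select_next_hop(neighbors)["id"])  # n2: higher energy-degree product
```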

    Reliability-Aware Cooperative Routing with Adaptive Amplification for Underwater Acoustic Wireless Sensor Networks

    No full text
    Protocols in underwater acoustic wireless sensor networks (UAWSNs) that address reliability in packet forwarding usually consider the connectivity of the routing paths only up to one or two hops. Since sensor nodes are connected with one another through other nodes in their neighborhood, such protocols have compromised reliability: they do not guarantee the presence of neighbors beyond the selected one or two hops for connectivity and path establishment. This is further worsened by the harshness and unpredictability of the underwater environment. In addition, establishing the routing paths usually requires the nodes' undersea geographical locations, which is infeasible because water currents cause the nodes to move from one position to another. To overcome these challenges, this paper presents two routing schemes for UAWSNs: reliability-aware routing (RAR) and reliability-aware cooperative routing with adaptive amplification (RACAA). RAR considers complete path connectivity to advance packets to the sea surface, which overcomes packet loss when connectivity is not established and forwarder nodes are not available for data routing. For all the established paths, the probability of successfully transmitting data packets is calculated, avoiding adverse channel effects. However, the sea channel is unpredictable and fluctuating, and its properties may change after this computation and prior to information transmission. Therefore, cooperative routing with adaptive power control of relays is added to RAR, yielding the RACAA protocol. In RACAA, a relay node increases its transmit power above the normal level when the error in the data it receives from the sender exceeds 50%, before forwarding it to the destination. This further increases reliability when such packets are forwarded. Unlike the conventional approach, the proposed protocols do not require knowledge of the nodes' geographical locations to establish routes, which is computationally challenging to obtain due to node movements with ocean currents and tides. Simulation results show that RAR and RACAA outperform the counterpart scheme in delivering packets to the water surface.
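
    The two core rules can be sketched compactly. Below is a minimal illustration, assuming independent per-link delivery probabilities for the path computation and a simple doubling of transmit power for the amplification rule; both are assumptions standing in for details the abstract does not fix:

```python
# Minimal sketch of RAR-style path selection and the RACAA amplification rule
# (illustrative assumptions: independent links, 2x power boost).

def path_success_probability(link_probs):
    """Probability a packet traverses every hop of a candidate path,
    assuming independent per-link delivery probabilities."""
    p = 1.0
    for q in link_probs:
        p *= q
    return p

def relay_transmit_power(base_power, bit_error_fraction, boost=2.0):
    """RACAA rule: amplify above the normal level when more than half of the
    received data is in error; otherwise transmit at the normal level."""
    return base_power * boost if bit_error_fraction > 0.5 else base_power

paths = {"A": [0.9, 0.8, 0.95], "B": [0.99, 0.6, 0.9]}
best = max(paths, key=lambda k: path_success_probability(paths[k]))
print(best, relay_transmit_power(1.0, 0.6))  # A 2.0
```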

    Smarter Traffic Prediction Using Big Data, In-Memory Computing, Deep Learning and GPUs

    No full text
    Road transportation is the backbone of modern economies, although it annually costs 1.25 million deaths and trillions of dollars to the global economy, and damages public health and the environment. Deep learning is among the leading-edge methods used for transportation-related predictions; however, the existing works are in their infancy and fall short in multiple respects, including the use of datasets with limited sizes and scopes and insufficient depth of the deep learning studies. This paper provides a novel and comprehensive approach toward large-scale, faster, and real-time traffic prediction by bringing together four complementary cutting-edge technologies: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). We trained deep networks using over 11 years of data provided by the California Department of Transportation (Caltrans), the largest dataset that has been used in deep learning studies. Several combinations of the input attributes of the data, along with various network configurations of the deep learning models, were investigated for training and prediction purposes. The use of the pre-trained model for real-time prediction was also explored. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for smart cities, big data, high performance computing, and their convergence.
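
    As an illustration of the kind of model such a pipeline trains, here is a minimal feed-forward regressor in Keras; the features, layer sizes, and synthetic data are assumptions standing in for the Caltrans attributes and network configurations the paper explores:

```python
# Minimal sketch of a deep feed-forward traffic regressor (illustrative
# assumptions: 4 input features, 64-unit layers, synthetic data).
import numpy as np
import tensorflow as tf

# Toy stand-in for loop-detector records: [hour, day_of_week, occupancy, speed]
X = np.random.rand(1024, 4).astype("float32")
y = np.random.rand(1024, 1).astype("float32")   # e.g. normalized flow

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                    # predicted traffic measure
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# A pre-trained model like this can then serve low-latency real-time queries.
print(model.predict(X[:1], verbose=0).shape)     # (1, 1)
```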

    Rapid Transit Systems: Smarter Urban Planning Using Big Data, In-Memory Computing, Deep Learning, and GPUs

    No full text
    Rapid transit systems, or metros, are a popular choice for high-capacity public transport in urban areas due to several advantages, including safety, dependability, speed, cost, and lower risk of accidents. Existing studies on metros have not considered appropriate holistic urban transport models and the integrated use of cutting-edge technologies. This paper proposes a comprehensive approach toward large-scale and faster prediction of metro system characteristics by integrating four leading-edge technologies: big data, deep learning, in-memory computing, and Graphics Processing Units (GPUs). Using the London Metro as a case study, and the real-world Rolling Origin and Destination Survey (RODS) dataset, we predict, for six time intervals, the number of passengers (a) using various access transport modes to reach the train stations (buses, walking, etc.); (b) using various egress modes to travel from the metro station to their next points of interest (PoIs); (c) traveling between different origin-destination (OD) pairs of stations; and (d) against the distance between the OD stations. The prediction allows better spatiotemporal planning of the whole urban transport system, including the metro subsystem and its various access and egress modes. The paper contributes novel deep learning models, algorithms, implementations, an analytics methodology, and a software tool for the analysis of metro systems.
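
    To make the prediction targets concrete, the following sketch aggregates RODS-style trip records into passenger counts per OD pair and time interval; the field names and station names are illustrative assumptions about the dataset layout, not the actual RODS schema:

```python
# Minimal sketch of shaping survey records into (OD pair, interval) counts
# (illustrative assumptions: dict records with origin/destination/interval).
from collections import defaultdict

records = [
    {"origin": "Oxford Circus", "destination": "Bank", "interval": "AM-peak"},
    {"origin": "Oxford Circus", "destination": "Bank", "interval": "AM-peak"},
    {"origin": "Victoria", "destination": "Bank", "interval": "Off-peak"},
]

# Aggregate trips into (origin, destination, time interval) -> passenger count.
od_counts = defaultdict(int)
for r in records:
    od_counts[(r["origin"], r["destination"], r["interval"])] += 1

for key, count in sorted(od_counts.items()):
    print(key, count)
# Such counts, with access/egress mode and OD distance as extra features,
# would form the training targets for the deep models described above.
```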

    Large Field-Size Throughput/Area Accelerator for Elliptic-Curve Point Multiplication on FPGA

    No full text
    This article presents a throughput/area-optimized accelerator for elliptic-curve point multiplication over GF(2^571). To optimize throughput, we propose an efficient hardware architecture built around a fully recursive Karatsuba multiplier that performs polynomial multiplications in one clock cycle. To minimize hardware resources, we reuse the proposed Karatsuba multiplier for modular square implementations. Moreover, the Itoh-Tsujii algorithm for modular inverse computation operates using the multiplier resources. These strategies permit us to reduce the hardware resources of our accelerator over the large field size of 571 bits. A controller is implemented to provide control functionalities. Our accelerator is implemented in Verilog HDL using the Vivado IDE tool. Post place-and-route results are given for Xilinx Virtex-6 and Virtex-7 devices. The utilized slices on the Virtex-6 and Virtex-7 devices are 6107 and 5683, respectively. For the same FPGA devices, our accelerator operates at a maximum of 319 MHz and 361 MHz, with latencies of 28.73 μs and 25.38 μs. The comparison with the state of the art shows that the proposed architecture achieves better throughput/area values. Thus, our accelerator architecture is suitable for cryptographic applications that demand high throughput and low area simultaneously.
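
    The recursive Karatsuba structure at the heart of the design can be modeled in software. The sketch below multiplies GF(2) polynomials represented as integer bit vectors; it mirrors only the recursion, not the single-cycle unrolled hardware, and omits reduction modulo the GF(2^571) field polynomial:

```python
# Minimal software model of recursive Karatsuba multiplication over GF(2)
# (bits of a Python int are polynomial coefficients; XOR replaces addition).

def gf2_karatsuba(a, b):
    """Carry-less polynomial product of a and b over GF(2)."""
    if a < 2 or b < 2:                      # operand is 0 or 1: trivial case
        return a * b
    m = max(a.bit_length(), b.bit_length()) // 2
    a_hi, a_lo = a >> m, a & ((1 << m) - 1)
    b_hi, b_lo = b >> m, b & ((1 << m) - 1)
    z0 = gf2_karatsuba(a_lo, b_lo)
    z2 = gf2_karatsuba(a_hi, b_hi)
    # Characteristic-2 middle term: additions and subtractions are both XOR.
    z1 = gf2_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi) ^ z0 ^ z2
    return (z2 << (2 * m)) ^ (z1 << m) ^ z0

# Squaring can reuse the same multiplier, as in the paper; field reduction
# and Itoh-Tsujii inversion would be layered on top and are omitted here.
print(bin(gf2_karatsuba(0b101, 0b11)))  # (x^2+1)(x+1) = x^3+x^2+x+1 -> 0b1111
```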

    Machine Learning-Enabled Internet of Things (IoT): Data, Applications, and Industry Perspective

    No full text
    Machine learning (ML) allows the Internet of Things (IoT) to gain hidden insights from the treasure trove of sensed data and be truly ubiquitous without explicitly looking for knowledge and data patterns. Without ML, IoT cannot withstand the future requirements of businesses, governments, and individual users. The primary goal of IoT is to perceive what is happening in our surroundings and allow automation of decision-making through intelligent methods that mimic the decisions made by humans. In this paper, we classify and discuss the literature on ML-enabled IoT from three perspectives: data, application, and industry. We elaborate on dozens of cutting-edge methods and applications through a review of around 300 published sources on how ML and IoT work together to play a crucial role in making our environments smarter. We also discuss emerging IoT trends, including the Internet of Behavior (IoB), pandemic management, connected autonomous vehicles, edge and fog computing, and lightweight deep learning. Further, we classify the challenges to IoT into four classes: technological, individual, business, and societal. This paper will help in exploiting IoT opportunities and addressing its challenges to make our societies more prosperous and sustainable.

    QoS Enabled Layered Based Clustering for Reactive Flooding in the Internet of Things

    No full text
    The Internet of Things has gained substantial attention over the last few years because it connects everyday things across a wide range of applications and domains. A large number of sensors require bandwidth and network resources to exchange queries in a heterogeneous IoT network. Network flooding is a key querying strategy for the successful exchange of queries. However, basic flooding is prone to unwanted and redundant network queries, which may lead to heavy network traffic. Redundant, unwanted, and flooded queries are major causes of inefficient resource utilization: IoT devices consume more energy and computational time, and more queries lead to higher bandwidth consumption, higher cost, and degraded QoS. Existing approaches have focused primarily on speeding up basic routing for IoT devices, whereas solutions for flooding have not been addressed. In this paper, we propose cluster-based flooding (CBF) as an interoperable solution for network- and sensor-layer devices, which is also capable of minimizing energy consumption, cost, and network flooding, and of identifying and eliminating redundant flooding queries using query-control mechanisms. The proposed CBF divides the network into different clusters: local queries for information are proactively maintained by the intralayer cluster (IALC), while the interlayer cluster (IELC) is responsible for reactively obtaining the routing queries to destinations outside the cluster. CBF is a hybrid approach with the potential to be more efficient than traditional schemes in terms of query traffic generation. However, in the absence of appropriate redundant-query detection and termination techniques, CBF may generate more control traffic than standard flooding techniques. In this work, we used the Cooja simulator to evaluate the performance of the proposed CBF. According to the simulation results, the proposed technique is superior in terms of traffic delay, QoS/throughput, and energy consumption under various performance metrics compared with traditional flooding and the state of the art.