
    A generalized laser simulator algorithm for optimal path planning in constraints environment

    Path planning plays a vital role in autonomous mobile robot navigation, and it has thus become one of the most studied areas in robotics. Path planning refers to a robot's search for a collision-free and optimal path from a start point to a predefined goal position in a given environment. This research focuses on developing a novel path planning algorithm, called the Generalized Laser Simulator (GLS), to solve the path planning problem of mobile robots in a constrained environment. The approach finds a path for a mobile robot while avoiding obstacles, searching for the goal, considering constraints, and finding an optimal path during robot movement in both known and unknown environments. A feasible path is determined between the start and goal positions by generating a wave of points in all directions towards the goal point while adhering to the constraints. A simulation study employing the proposed approach is applied to grid map settings to determine a collision-free path from the start to the goal position. First, a grid map of the robot's workspace environment is constructed, and then the borders of the workspace are detected based on a newly proposed function that guides the robot toward the desired goal. Two criteria are used to find the best candidate point to move to next: minimum distance to the goal and maximum index distance to the boundary, integrated via a negative probability to select the most preferred point for determining the robot trajectory. To construct an optimal collision-free path, an optimization step finds the minimum distance among the candidate points determined by GLS while adhering to particular constraint rules and avoiding obstacles. The proposed algorithm switches its working pattern based on the minimum goal distance and the maximum boundary index distance. For static obstacle avoidance, the boundaries of the obstacle(s) are treated as borders of the environment; for dynamic obstacles, the algorithm detects an obstacle as a new border once it appears in front of the GLS waves. The proposed method has been tested in several environments with different degrees of complexity: twenty arbitrary environments categorized into four types (simple, complex, narrow, and maze), with five test environments in each. The results demonstrate that the proposed method can generate an optimal collision-free path. Moreover, the results of the proposed algorithm are compared with those of common algorithms such as A*, Probabilistic Road Map (PRM), Rapidly-exploring Random Tree (RRT), Bi-directional RRT, and the Laser Simulator (LS) algorithm to demonstrate its effectiveness. The proposed algorithm outperforms the compared methods in terms of path cost, smoothness, and search time. A statistical test was used to demonstrate the efficiency of the proposed algorithm over the compared methods. The GLS is 7.8 and 5.5 times faster than A* and LS, respectively, and generates paths 1.2 and 1.5 times shorter than those of A* and LS. The mean path cost achieved by the proposed approach is 4% and 15% lower than that of PRM and RRT, respectively, whereas the mean path cost generated by the LS algorithm is 14% higher than that generated by PRM. Finally, to verify the performance of the developed method in generating a collision-free path, experimental studies were carried out using an existing wheeled mobile robot (WMR) platform in lab and road environments.
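    The candidate-point selection described above can be illustrated with a short sketch. The Python code below is not the authors' implementation; it is a minimal illustration of combining the two stated criteria (minimum distance to the goal and maximum index distance to the boundary), where the boundary_index_distance helper and the simple difference used as a score are assumptions for illustration only.

        import math

        def select_next_point(candidates, goal, boundary_index_distance):
            # candidates: (x, y) points generated by the GLS wave toward the goal.
            # goal: (x, y) goal position.
            # boundary_index_distance: assumed helper returning a point's index
            # distance to the nearest border of the workspace or obstacle.
            def goal_distance(p):
                return math.hypot(p[0] - goal[0], p[1] - goal[1])

            # Prefer points closer to the goal and farther from the boundary.
            # The abstract integrates the two criteria via a "negative
            # probability"; the plain difference used here is an assumption.
            def score(p):
                return goal_distance(p) - boundary_index_distance(p)

            return min(candidates, key=score)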
    The experimental work investigates complete autonomous WMR path planning in lab and road environments using live video streaming. The local maps were built from live video streaming data using real-time image processing to detect the segments of the lab and road environments. The image processing includes several operations to prepare the local map for applying GLS. The proposed algorithm then generates a path within the prepared local map between the start and goal positions while avoiding obstacles and adhering to constraints. The experimental tests show that the proposed method generates a shorter path and a smoother trajectory from the start to the goal point than the Laser Simulator.
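    As a companion to the experimental description, the sketch below shows one plausible way a local occupancy grid could be derived from a segmented video frame before running a grid-based planner such as GLS; the segmentation labels and the free_label value are assumptions, not details given in the abstract.

        import numpy as np

        def build_local_map(segmented_frame, free_label=0):
            # Cells labelled as drivable lab floor or road surface are free;
            # every other label (walls, lane borders, obstacles) is occupied.
            occupancy = (segmented_frame != free_label).astype(np.uint8)
            return occupancy

        # A grid-based planner would then search this map for a start-to-goal
        # path, rebuilding the map as each new frame arrives from the stream.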

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as enhanced Mobile Broadband (eMBB), massive Machine Type Communications (mMTC) and Ultra-Reliable and Low Latency Communications (URLLC), mMTC brings the unique technical challenge of supporting a huge number of MTC devices in cellular networks, which is the main focus of this paper. The related challenges include Quality of Service (QoS) provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and Narrowband IoT (NB-IoT). Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the applications of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in the mMTC scenario along with the recent advances towards enhancing its learning performance and convergence. Finally, we discuss some open research challenges and promising future research directions.
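    To make the Q-learning discussion concrete, the following is a minimal, stateless Q-learning sketch of the kind often applied to random-access resource selection by MTC devices; the per-slot Q-table, reward values, and epsilon-greedy policy are illustrative assumptions rather than a scheme taken from the paper.

        import random

        class RASlotQLearning:
            # One Q-value per random-access slot (or preamble); a device learns
            # which resource tends to be collision-free in its cell.
            def __init__(self, num_slots, alpha=0.1, epsilon=0.05):
                self.q = [0.0] * num_slots
                self.alpha = alpha      # learning rate
                self.epsilon = epsilon  # exploration probability

            def choose_slot(self):
                # Epsilon-greedy: occasionally explore, otherwise exploit.
                if random.random() < self.epsilon:
                    return random.randrange(len(self.q))
                return max(range(len(self.q)), key=lambda s: self.q[s])

            def update(self, slot, success):
                # Assumed reward: +1 for a collision-free transmission, -1 otherwise.
                reward = 1.0 if success else -1.0
                self.q[slot] += self.alpha * (reward - self.q[slot])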