5 research outputs found

    Road conditions monitoring using semantic segmentation of smartphone motion sensor data

    Many studies have examined the use of moving-object analysis to locate a specific item or recover a lost object in video sequences. With semantic analysis, it can be challenging to pinpoint the meaning of each moving object and to follow its motion. Some machine learning algorithms rely on a correct interpretation of photos or video recordings to communicate coherently, converting visual patterns and features into a visual language using dense and sparse optical flow algorithms. This paper proposes a redesigned U-Net architecture with integrated bidirectional Long Short-Term Memory (LSTM) layers to semantically segment smartphone motion sensor data for video categorisation. Experiments show that the proposed technique outperforms several existing semantic segmentation algorithms using z-axis accelerometer and z-axis gyroscope features. The numerous moving elements of the video sequence are synchronised with one another to follow the scenario. A further objective of this work is to evaluate the proposed model on roadways and other moving objects using five datasets (a self-made dataset and the pothole600 dataset). After mapping or tracking an object, the results are reported together with the diagnosis of the moving object and its synchronisation with the video clips. The model's goals were met using a machine learning method that combines the validity of the results with the precision of locating the required moving parts. The project was implemented in Python 3.7, a user-friendly and highly efficient platform.
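The abstract above segments road anomalies from z-axis motion sensor signals. As a point of reference only, a naive baseline (not the paper's U-Net/BiLSTM model) can label rough-road samples by thresholding the moving variance of the z-axis accelerometer trace; the window size, threshold, and synthetic trace below are illustrative assumptions:

```python
import numpy as np

def segment_anomalies(z_accel, window=5, threshold=0.5):
    """Label each sample 1 (rough road) or 0 (smooth road) by comparing
    the local variance of the z-axis accelerometer signal to a threshold."""
    z = np.asarray(z_accel, dtype=float)
    pad = window // 2
    padded = np.pad(z, pad, mode="edge")
    # Moving variance over a centred window of `window` samples
    var = np.array([padded[i:i + window].var() for i in range(len(z))])
    return (var > threshold).astype(int)

# Synthetic trace: flat road, a pothole-like spike, then flat road again
trace = np.concatenate([np.zeros(20), [3.0, -2.5, 2.0], np.zeros(20)])
labels = segment_anomalies(trace)
```

A learned model such as the paper's U-Net with bidirectional LSTM layers replaces this hand-set threshold with per-sample predictions trained on labelled sequences.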

    Automatic image annotation system using deep learning method to analyse ambiguous images

    Image annotation has received considerable attention recently because of how quickly image data has grown. Together with image analysis and interpretation, image annotation, which can semantically describe images, has a variety of uses in allied industries such as urban planning engineering. Without big data and image recognition technologies, manually analysing a diverse variety of photos is challenging. Improvements to Automated Image Annotation (AIA) labelling systems have been the subject of several scholarly studies. In this paper, the authors discuss how to use image databases and the AIA system. The proposed method extracts image features using an improved VGG-19 and then uses neighbouring features to automatically predict image labels, accounting for both correlations between labels and images and correlations within images. The number of labels is also estimated using a label quantity prediction (LQP) model, which improves label prediction precision. The suggested method addresses automatic annotation of pixel-level images of unusual objects while incorporating supervisory information via interactive spherical skins. The genuine objects that were converted into metadata and identified as being connected to pre-existing categories were classified using a supervised deep learning approach, a convolutional neural network (CNN). Certain object monitoring systems strive for a high detection rate (true positives) together with a low false-alarm rate (false positives). To speed up annotation and to account for the captured image background, the authors built a KD-tree on top of k-nearest neighbours (KNN). The proposed method transforms the conventional two-class object detection problem into a multi-class classification problem, relaxing the independent-and-identically-distributed assumptions of standard machine learning methodologies.
    The method is also simple to use because it requires only pixel information and ignores other supporting elements from various colour schemes. Five different AIA approaches are compared on the following factors: main idea, significant contribution, computational framework, computing speed, and annotation accuracy. A set of publicly accessible photos that serve as benchmarks for assessing AIA methods is also provided, along with a brief description of four common assessment indicators.
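The KD-tree acceleration of KNN label lookup mentioned in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 2-D feature vectors and labels are toy assumptions standing in for VGG-19 descriptors.

```python
def build_kdtree(points, depth=0):
    """Recursively build a KD-tree over (feature_vector, label) pairs,
    cycling the splitting coordinate with tree depth."""
    if not points:
        return None
    k = len(points[0][0])                      # feature dimensionality
    axis = depth % k
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
        "axis": axis,
    }

def nearest(node, query, best=None):
    """Return (squared_distance, (vector, label)) for the stored point
    closest to `query`, pruning subtrees that cannot hold a closer one."""
    if node is None:
        return best
    vec, _ = node["point"]
    dist = sum((a - b) ** 2 for a, b in zip(vec, query))
    if best is None or dist < best[0]:
        best = (dist, node["point"])
    axis = node["axis"]
    diff = query[axis] - vec[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if diff ** 2 < best[0]:                    # far side may still hold a closer point
        best = nearest(far, query, best)
    return best

# Toy annotated features: 2-D descriptors with string labels (assumed data)
data = [((0.1, 0.9), "sky"), ((0.8, 0.2), "road"), ((0.5, 0.5), "tree")]
tree = build_kdtree(data)
_, (vec, label) = nearest(tree, (0.75, 0.25))
```

Because whole subtrees are pruned whenever the splitting plane is farther than the current best match, queries take roughly logarithmic time on balanced data, which is the speed-up over brute-force KNN that the abstract refers to.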