
    Automatic image annotation system using deep learning method to analyse ambiguous images

    Image annotation has received considerable attention recently because of the rapid growth of image data. Together with image analysis and interpretation, image annotation, which can semantically describe images, has a variety of uses in allied industries such as urban planning engineering. Without big data and image-recognition technologies, it is challenging to manually analyze a diverse variety of photographs. Improvements to Automatic Image Annotation (AIA) labelling systems have been the subject of several scholarly studies. In this paper, the authors discuss the use of image databases and the AIA system. The proposed method extracts image features from photographs using an improved VGG-19 and then uses neighbouring features to automatically predict image labels. The proposed study accounts for correlations between labels and images as well as correlations within images. The number of labels is also estimated using a label quantity prediction (LQP) model, which improves label-prediction precision. The suggested method addresses automatic annotation of pixel-level images of unusual objects while incorporating supervisory information via interactive spherical skins. Real objects that were converted into metadata and identified as belonging to pre-existing categories were classified by the authors using a supervised deep learning approach, a convolutional neural network (CNN). Object-monitoring systems strive for a high detection rate (true positives) together with a low false-positive rate. To speed up annotation, the authors built a KD-tree for k-nearest-neighbour (KNN) search, which also accounts for the captured image background. The proposed method transforms the conventional two-class object-detection problem into a multi-class classification problem, relaxing the independent and identically distributed assumptions of standard machine-learning methodologies. It is also simple to use because it requires only pixel information and ignores other supporting elements from various colour schemes. Five different AIA approaches are compared along the following factors: main idea, significant contribution, computational framework, computing speed, and annotation accuracy. A set of publicly available images that serve as benchmarks for assessing AIA methods is also provided, along with a brief description of four common evaluation indicators.
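    The KD-tree-accelerated KNN annotation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, label count, and voting threshold are all assumptions, and the random features stand in for an actual VGG-19 backbone.

    ```python
    # Hypothetical sketch: KNN label propagation accelerated with a KD-tree,
    # assuming precomputed image feature vectors (e.g. from a VGG-19 backbone).
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    # Toy annotated database: 200 images, 64-d features, 5 possible labels.
    db_features = rng.normal(size=(200, 64))
    db_labels = rng.integers(0, 2, size=(200, 5))   # multi-label indicator matrix

    tree = cKDTree(db_features)                     # build once, query many times

    def annotate(query_feat, k=10, threshold=0.3):
        """Predict labels for one image by voting among its k nearest neighbours."""
        _, idx = tree.query(query_feat, k=k)
        votes = db_labels[idx].mean(axis=0)         # fraction of neighbours per label
        return (votes >= threshold).astype(int)

    pred = annotate(rng.normal(size=64))
    print(pred.shape)  # (5,)
    ```

    The KD-tree reduces each neighbour query from a linear scan over the database to roughly logarithmic time, which is what makes per-frame annotation of large collections tractable.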

    Graph Cuts with Arbitrary Size Constraints Through Optimal Transport

    A common way of partitioning graphs is through minimum cuts. One drawback of classical minimum-cut methods is that they tend to produce small groups, which is why more balanced variants such as normalized and ratio cuts have seen more success. However, with these variants the balance constraints can be too restrictive for some applications, such as clustering imbalanced datasets, while not being restrictive enough when searching for perfectly balanced partitions. Here, we propose a new graph cut algorithm for partitioning graphs under arbitrary size constraints. We formulate the graph cut problem as a regularized Gromov-Wasserstein problem and solve it with an accelerated proximal gradient descent algorithm that has global convergence guarantees, yields sparse solutions, and incurs only an additional O(log n) factor over the classical spectral clustering algorithm, while being more efficient in practice.
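    The core idea of encoding arbitrary cluster sizes as optimal-transport marginals can be illustrated with a toy example. This is not the paper's Gromov-Wasserstein formulation or its accelerated proximal solver; it is a simplified sketch using a from-scratch entropic Sinkhorn solver, where nodes (here, 2-D embeddings) are transported to cluster centroids whose target marginal encodes the desired 30/10 split.

    ```python
    # Illustrative sketch (not the paper's algorithm): enforcing arbitrary
    # cluster sizes via entropic optimal transport. The target marginal b
    # encodes the required size of each cluster.
    import numpy as np

    def sinkhorn(cost, a, b, reg=0.1, n_iter=200):
        """Entropic OT: returns a coupling with row sums ~a and column sums ~b."""
        K = np.exp(-cost / reg)
        u = np.ones_like(a)
        for _ in range(n_iter):
            v = b / (K.T @ u)
            u = a / (K @ v)
        return u[:, None] * K * v[None, :]

    rng = np.random.default_rng(0)
    emb = np.vstack([rng.normal(0, 0.2, (30, 2)),      # 30 nodes near centroid 0
                     rng.normal(3, 0.2, (10, 2))])     # 10 nodes near centroid 1
    centroids = np.array([[0.0, 0.0], [3.0, 3.0]])

    cost = ((emb[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    a = np.full(40, 1 / 40)            # uniform mass on nodes
    b = np.array([0.75, 0.25])         # arbitrary size constraint: 30/10 split

    plan = sinkhorn(cost, a, b)
    labels = plan.argmax(axis=1)       # hard assignment from the soft coupling
    print(np.bincount(labels))         # -> [30 10]
    ```

    Changing `b` to any other distribution (e.g. `[0.5, 0.5]`) forces the corresponding partition sizes, which is the sense in which OT marginals express "arbitrary size constraints".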

    Analysis of Automatic Annotations of Real Video Surveillance Images

    The results of the analysis of automatic annotations of real video surveillance sequences are presented. Annotations were generated for the frames of surveillance sequences of a university campus parking lot. The purpose of the analysis is to evaluate the quality of the descriptions and the correspondence between the semantic content of the images and the corresponding annotations. To perform the tests, a fixed camera was placed in the campus parking lot, video sequences of about 20 minutes were recorded, each frame was annotated individually, and a text repository of all the annotations was compiled. It was observed that the temporal properties of video can be exploited to evaluate the annotator's performance, and the case of a pedestrian crossing is presented for analysis.
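    One way the temporal redundancy of video can be exploited, as the abstract suggests, is to compare annotations of consecutive frames: adjacent frames usually depict near-identical scenes, so a sudden drop in annotation overlap flags either a real scene change (such as the pedestrian crossing) or an unstable annotator. A minimal hedged sketch, with made-up annotations:

    ```python
    # Minimal sketch (hypothetical annotations): using temporal redundancy of
    # video to flag frames whose annotation diverges from the previous frame.
    def jaccard(a, b):
        """Word-set overlap between two annotations, in [0, 1]."""
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

    frames = [
        "empty parking lot with parked cars",
        "empty parking lot with parked cars",
        "a pedestrian crossing the parking lot",   # scene change: pedestrian enters
        "a pedestrian crossing the parking lot",
    ]

    scores = [jaccard(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    print([round(s, 2) for s in scores])  # -> [1.0, 0.2, 1.0]
    ```

    Low scores isolated between runs of high scores are consistent with genuine events, while persistently low scores would point to annotator noise.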

    Interactive air pollution mapping tool for experts

    Air pollution, especially in urban areas, has been a major concern for decades. Real-time air quality monitoring stations have become the standard for measuring air pollution because of their accuracy and reliability. As a result, extensive pollution maps are nowadays created mainly from information gathered by these stations. Two types of pollution-mapping solutions are most prominent: maps that display the locations of sparse monitoring stations and the respective time-varying air pollution data they gather; and estimated dense pollution heatmaps produced by combining air quality sensor data with additional data, such as meteorological and traffic information. As an alternative, this dissertation proposes the use of expert knowledge as a complementary means of generating air quality maps. The goal is to allow experts to express their knowledge about how air pollution is emitted and diffused as a function of the presence of key topological elements, such as buildings and roads. To this end, a tool was developed and validated with a set of 30 participants. The obtained results confirm the tool's usability and highlight key future research directions to bring the proposed concept closer to a fully functional solution.
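    The concept of turning expert-marked topological elements into an estimated pollution map can be sketched on a grid. This is an assumption-laden illustration, not the dissertation's actual model: the grid layout, the unit emissions, and the Gaussian diffusion kernel are all placeholders for whatever emission and diffusion rules an expert would express in the tool.

    ```python
    # Hedged sketch of the concept (kernel and parameters are assumptions):
    # an expert marks road cells on a grid, and pollution is diffused around
    # them with a Gaussian kernel to form an estimated heatmap.
    import numpy as np

    def diffuse(sources, sigma=2.0):
        """Spread unit emissions from source cells over the whole grid."""
        h, w = sources.shape
        yy, xx = np.mgrid[0:h, 0:w]
        field = np.zeros((h, w))
        for y, x in zip(*np.nonzero(sources)):
            field += np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        return field

    grid = np.zeros((20, 20))
    grid[10, :] = 1                  # a horizontal road across the map
    heatmap = diffuse(grid)

    # Pollution peaks on the road and decays with distance from it.
    print(heatmap[10, 10] > heatmap[5, 10] > heatmap[0, 10])  # -> True
    ```

    In a real tool the expert would instead adjust the emission strength and decay per element type (road, building), but the grid-plus-kernel structure captures the mapping from marked topology to a dense estimated field.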