
    Development of 3D city model using videogrammetry technique

    A 3D city model is a digital representation of an urban area that contains buildings and other information. Current approaches use photogrammetry and laser scanning to develop 3D city models; however, these techniques are time-consuming and costly, and both require professional skill and expertise to handle the hardware and tools. In this study, videogrammetry is proposed as a technique for developing a 3D city model. The technique uses video frame sequences to generate a point cloud. Videos are processed using EyesCloud3D by eCapture, which allows users to upload raw video data to generate point clouds. The study comprises five main phases to generate the 3D city model: calibration, video recording, point cloud extraction, 3D modeling, and 3D city model representation. A 3D city model at Level of Detail 2 is produced, and a simple query is performed against the database to retrieve the attributes of the 3D city model.
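The final step the abstract describes, retrieving the model's attributes with a simple database query, could look like the sketch below. The abstract does not give a schema, so the table and column names (`building`, `height_m`, `roof_type`, `lod`) are purely illustrative.

```python
import sqlite3

# Hypothetical schema: the study does not publish one, so the table and
# column names here are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE building (id INTEGER PRIMARY KEY, name TEXT, "
    "height_m REAL, roof_type TEXT, lod INTEGER)"
)
conn.executemany(
    "INSERT INTO building (name, height_m, roof_type, lod) VALUES (?, ?, ?, ?)",
    [("Block A", 12.5, "gabled", 2), ("Block B", 31.0, "flat", 2)],
)

# Simple query: retrieve the attributes of every LoD2 building.
rows = conn.execute(
    "SELECT name, height_m, roof_type FROM building WHERE lod = 2 ORDER BY name"
).fetchall()
for name, height, roof in rows:
    print(f"{name}: {height} m, {roof} roof")
```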

    Active key frame selection for 3D model reconstruction from crowdsourced geo-tagged videos


    Applying Crowdsourcing to the Synchronization of User-Generated Videos

    Crowdsourcing is a problem-solving strategy based on collecting partial results from the contributions of individuals and aggregating them into a unified result. Building on this strategy, this thesis shows how the crowd can synchronize a set of videos produced by arbitrary users and related to the same social event. Each user films the event from their own point of view and subject to their own limitations (viewing angle, occlusions, camera quality, etc.). In this scenario, there is no guarantee that the generated content has homogeneous characteristics (capture start time and duration, resolution, quality, etc.), which makes a purely automatic synchronization process difficult. Moreover, user-generated videos are made available in a distributed fashion across several independent content servers. The hypothesis of this thesis is that the adaptability of human intelligence can be used to process a group of videos, produced in an uncoordinated and distributed way and related to the same social event, and to synchronize them. To test this hypothesis, the following steps were carried out: (i) the development of a synchronization method for multiple videos from independent sources; (ii) a systematic mapping study on the use of crowdsourcing for video processing; (iii) the development of techniques for using the crowd to synchronize videos; (iv) the development of a functional model for building crowdsourced synchronization applications, which can be extended to video applications in general; and (v) experiments demonstrating the crowd's ability to perform the synchronization.
The results obtained after these steps show that the crowd is able to take part in the synchronization process and that several factors can influence the precision of the results obtained.
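One way the crowd's partial results could be aggregated into a unified result, as the abstract describes, is to collect several noisy per-video offset estimates and combine them robustly. This is a minimal sketch of that idea, not the thesis's actual method; the function name, data layout, and use of the median are all assumptions.

```python
from statistics import median

def aggregate_offsets(contributions):
    """Aggregate noisy offset estimates (seconds, relative to a reference
    video) contributed by crowd workers; the median damps outliers."""
    return {vid: median(estimates) for vid, estimates in contributions.items()}

# Illustrative data: three workers estimate where each clip starts
# relative to a reference clip "v0".
contributions = {
    "v1": [2.1, 2.0, 2.3],    # consistent estimates for clip v1
    "v2": [-1.0, -0.9, 5.0],  # one outlier estimate for clip v2
}
offsets = aggregate_offsets(contributions)
print(offsets)  # the median suppresses the 5.0 outlier for v2
```

The median rather than the mean reflects the abstract's observation that individual contributions vary in quality, so the aggregate should not be dragged by a single bad estimate.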

    Individual and group dynamic behaviour patterns in bound spaces

    The behaviour analysis of individual and group dynamics in closed spaces is a subject of extensive research in both academia and industry. However, despite recent technological advances, implementing the existing methods for visual behaviour analysis in production systems remains difficult, and applications are available only in special cases where resourcing is not a problem. Most approaches concentrate on extracting and classifying visual features directly from the video footage in order to recognise dynamic behaviour from the source. This approach allows the elementary actions of moving objects to be recognised directly, which is a difficult task on its own. The major factor limiting the performance of video-analytics methods is the need to combine the processing of enormous volumes of video data with complex analysis of that data using computationally resource-demanding analytical algorithms. This is not feasible for many applications, which must work in real time. In this research, an alternative simulation-based approach to behaviour analysis has been adopted. It can potentially reduce the amount of information that must be extracted from real video footage for the purpose of analysing dynamic behaviour. This is achieved by combining only limited data extracted from the original footage with symbolic data about the events registered on the scene, generated by a 3D simulation synchronised with the original footage. Additionally, by incorporating some physical laws and the logic of dynamic behaviour directly in the 3D model of the visual scene, the framework allows behavioural patterns to be captured using simple syntactic pattern recognition methods.
Extensive experiments with the prototype implementation convincingly demonstrate that the 3D simulation generates sufficiently rich data to analyse dynamic behaviour in real time with sufficient adequacy, without precise physical data, using only limited information about the objects on the scene, their locations and their dynamic characteristics. This research has wide applicability in areas where video analytics is needed, ranging from public safety and video surveillance to marketing research, computer games and animation. Its limitations stem from its dependence on some preliminary processing of the video footage, which is nevertheless less detailed and less computationally demanding than methods that operate directly on the video frames of the original footage.
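The "simple syntactic pattern recognition" over symbolic event data that the abstract describes could be sketched as matching a pattern against a string of event symbols. The event vocabulary, symbol encoding, and the "loitering" pattern below are hypothetical illustrations, not the thesis's actual grammar.

```python
import re

# Hypothetical symbolic events emitted by the synchronised 3D simulation:
# E = object enters a zone, S = object stays for one tick, X = object exits.
SYMBOLS = {"enter": "E", "stay": "S", "exit": "X"}

def detect_loitering(events, min_stays=3):
    """Syntactic pattern recognition on a symbolic event stream: an object
    'loiters' if it enters a zone and then stays for at least `min_stays`
    consecutive ticks."""
    stream = "".join(SYMBOLS[e] for e in events)
    return re.search(rf"ES{{{min_stays},}}", stream) is not None

events = ["enter", "stay", "stay", "stay", "stay", "exit"]
print(detect_loitering(events))  # four consecutive stay ticks -> True
```

Because the matching operates on short symbol strings rather than raw frames, it illustrates why this style of analysis can stay cheap enough for real time.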