
    Doctor of Philosophy

    Scene labeling is the problem of assigning an object label to each pixel of a given image. It is the primary step towards image understanding and unifies object recognition and image segmentation in a single framework. A perfect scene labeling framework detects and densely labels every region and every object that exists in an image. This task is of substantial importance in a wide range of applications in computer vision. Contextual information plays an important role in scene labeling frameworks. A contextual model utilizes the relationships among the objects in a scene to facilitate object detection and image segmentation. Using contextual information in an effective way is one of the main questions that should be answered in any scene labeling framework. In this dissertation, we develop two scene labeling frameworks that rely heavily on contextual information to improve the performance over state-of-the-art methods. The first model, called the multiclass multiscale contextual model (MCMS), uses contextual information from multiple objects and at different scales for learning discriminative models in a supervised setting. The MCMS model incorporates cross-object and inter-object information into one probabilistic framework, and thus is able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. The second model, called the contextual hierarchical model (CHM), learns contextual information in a hierarchy for scene labeling. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. The CHM then incorporates the resulting multiresolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy.
We demonstrate the performance of CHM on different challenging tasks such as outdoor scene labeling and edge detection in natural images and membrane detection in electron microscopy images. We also introduce two novel classification methods. WNS-AdaBoost speeds up the training of AdaBoost by providing a compact representation of a training set. Disjunctive normal random forest (DNRF) is an ensemble method that is able to learn complex decision boundaries and achieves low generalization error by optimizing a single objective function for each weak classifier in the ensemble. Finally, a segmentation framework is introduced that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy images.
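The coarse-to-fine training strategy described above can be sketched in miniature. This is an illustrative toy, not the dissertation's CHM: a synthetic binary labeling task, nearest-neighbour resampling, and logistic regression stand in for the real features and classifiers, and the scale factors (4, 2, 1) are arbitrary.

```python
# Hierarchical contextual training sketch: each level trains on downsampled
# pixel features plus the upsampled probability map from the previous level.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def downsample(img, factor):
    return img[::factor, ::factor]

def upsample(img, shape):
    # Nearest-neighbour upsampling back to the target shape.
    ys = np.arange(shape[0]) * img.shape[0] // shape[0]
    xs = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(ys, xs)]

# Toy image: a bright square is the "object" on a noisy background.
H = W = 32
labels = np.zeros((H, W), dtype=int)
labels[8:24, 8:24] = 1
image = labels + rng.normal(0.0, 0.3, (H, W))

context = np.zeros((H, W))          # no contextual input at the first level
for factor in (4, 2, 1):            # coarse-to-fine hierarchy
    img_l = downsample(image, factor)
    ctx_l = downsample(context, factor)
    lab_l = downsample(labels, factor)
    X = np.stack([img_l.ravel(), ctx_l.ravel()], axis=1)
    clf = LogisticRegression().fit(X, lab_l.ravel())
    prob_l = clf.predict_proba(X)[:, 1].reshape(img_l.shape)
    context = upsample(prob_l, (H, W))   # context fed to the next level

pred = (context > 0.5).astype(int)
accuracy = (pred == labels).mean()
print(f"pixel accuracy: {accuracy:.2f}")
```

Each level only ever sees its own resolution plus the output of the level below, which is the essence of the multiresolution context idea.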

    Going beyond semantic image segmentation, towards holistic scene understanding, with associative hierarchical random fields

    In this thesis we exploit the generality and expressive power of the Associative Hierarchical Random Field (AHRF) graphical model to take its use beyond semantic image segmentation into object classes, towards a framework for holistic scene understanding. We provide a working definition for the holistic approach to scene understanding, which allows for the integration of existing, disparate applications into a unifying ensemble. We believe that modelling such an ensemble as an AHRF is both a principled and pragmatic solution. We present a hierarchy that shows several methods for fusing applications together with the AHRF graphical model. Each of the three layers (feature, potential, and energy) subsumes its predecessor in generality, and together they give rise to many options for integration. With applications on street scenes we demonstrate an implementation of each layer. The first layer application joins appearance and geometric features. For our second layer we implement a 'things' and 'stuff' conjunction using higher-order AHRF potentials for object detectors, with the goal of answering the classic questions: What? Where? And how many? A holistic approach to recognition-and-reconstruction is realised within our third layer by linking the energy-based formulations of the two applications. Each application is evaluated qualitatively and quantitatively. In all cases our holistic approach shows improvement over baseline methods.

    Online Structured Learning for Real-Time Computer Vision Gaming Applications

    In recent years computer vision has played an increasingly important role in the development of computer games, and it now features as one of the core technologies for many gaming platforms. The work in this thesis addresses three problems in real-time computer vision, all of which are motivated by their potential application to computer games. We first present an approach for real-time 2D tracking of arbitrary objects. In common with recent research in this area we incorporate online learning to provide an appearance model which is able to adapt to the target object and its surrounding background during tracking. However, our approach moves beyond the standard framework of tracking using binary classification and instead integrates tracking and learning in a more principled way through the use of structured learning. As well as providing a more powerful framework for adaptive visual object tracking, our approach also outperforms state-of-the-art tracking algorithms on standard datasets. Next we consider the task of keypoint-based object tracking. We take the traditional pipeline of matching keypoints followed by geometric verification and show how this can be embedded into a structured learning framework in order to provide principled adaptivity to a given environment. We also propose an approximation method allowing us to take advantage of recently developed binary image descriptors, meaning our approach is suitable for real-time application even on low-powered portable devices. Experimentally, we clearly see the benefit that online adaptation using structured learning can bring to this problem. Finally, we present an approach for approximately recovering the dense 3D structure of a scene which has been mapped by a simultaneous localisation and mapping system. Our approach is guided by the constraints of the low-powered portable hardware we are targeting, and we develop a system which coarsely models the scene using a small number of planes.
To achieve this, we frame the task as a structured prediction problem and introduce online learning into our approach to provide adaptivity to a given scene. This allows us to use relatively simple multi-view information coupled with online learning of appearance to efficiently produce coarse reconstructions of a scene.
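The idea of integrating tracking and learning through structured prediction can be illustrated with a much-simplified sketch: a linear model scores candidate displacements, and a structured-perceptron update keeps the true displacement ranked above the alternatives. This is an assumption-laden stand-in on a synthetic bright-square target, not the thesis' actual structured-SVM tracker.

```python
# Online structured learning for tracking, heavily simplified.
import numpy as np

rng = np.random.default_rng(1)
SIZE, PATCH = 24, 6
MAX_POS = SIZE - PATCH

def make_frame(pos):
    # Noisy frame with a bright PATCH x PATCH target at `pos`.
    frame = rng.normal(0.0, 0.2, (SIZE, SIZE))
    frame[pos[0]:pos[0] + PATCH, pos[1]:pos[1] + PATCH] += 1.0
    return frame

def feat(frame, pos):
    # Joint feature of (frame, candidate position): simple patch statistics.
    patch = frame[pos[0]:pos[0] + PATCH, pos[1]:pos[1] + PATCH]
    return np.array([patch.mean(), patch.max(), 1.0])

OFFSETS = [(dy, dx) for dy in (-2, 0, 2) for dx in (-2, 0, 2)]

w = np.zeros(3)
pos = np.array([9, 9])
correct, n_frames = 0, 60
for _ in range(n_frames):
    off = OFFSETS[rng.integers(len(OFFSETS))]
    new_pos = np.clip(pos + off, 0, MAX_POS)       # where the target really went
    frame = make_frame(new_pos)
    cands = [np.clip(pos + np.array(o), 0, MAX_POS) for o in OFFSETS]
    scores = [w @ feat(frame, c) for c in cands]
    pred = cands[int(np.argmax(scores))]
    if tuple(pred) == tuple(new_pos):
        correct += 1
    else:
        # Structured update: pull the true configuration's score above the prediction's.
        w += feat(frame, new_pos) - feat(frame, pred)
    pos = new_pos

print(f"correctly tracked frames: {correct}/{n_frames}")
```

The output space here is the set of displacements rather than a binary object/background label, which is the structural difference the abstract emphasises.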

    A random forest approach to segmenting and classifying gestures

    This thesis investigates a gesture segmentation and recognition scheme that employs a random forest classification model. A complete gesture recognition system should localize and classify each gesture from a given gesture vocabulary, within a continuous video stream. Thus, the system must determine the start and end points of each gesture in time, as well as accurately recognize the class label of each gesture. We propose a unified approach that performs the tasks of temporal segmentation and classification simultaneously. Our method trains a random forest classification model to recognize gestures from a given vocabulary, as presented in a training dataset of video plus 3D body joint locations, as well as out-of-vocabulary (non-gesture) instances. Given an input video stream, our trained model is applied to candidate gestures using sliding windows at multiple temporal scales. The class label with the highest classifier confidence is selected, and its corresponding scale is used to determine the segmentation boundaries in time. We evaluated our formulation in segmenting and recognizing gestures from two different benchmark datasets: the NATOPS dataset of 9,600 gesture instances from a vocabulary of 24 aircraft handling signals, and the CHALEARN dataset of 7,754 gesture instances from a vocabulary of 20 Italian communication gestures. The performance of our method compares favorably with state-of-the-art methods that employ Hidden Markov Models or Hidden Conditional Random Fields on the NATOPS dataset. We conclude with a discussion of the advantages of using our model.
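The multi-scale sliding-window scheme described above can be sketched as follows, with synthetic 1-D signals standing in for video plus body-joint features. The three sinusoid "gesture" classes, the non-gesture class, the window scales, and all parameters below are illustrative only, not the thesis' setup.

```python
# Joint temporal segmentation and classification with a random forest over
# sliding windows at multiple temporal scales.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
N_FEAT = 16
SCALES = (20, 30, 40)

def resample(x, n=N_FEAT):
    # Resample any window length to a fixed-size feature vector.
    idx = np.linspace(0, len(x) - 1, n)
    return np.interp(idx, np.arange(len(x)), x)

def gesture(cls, length):
    # Class `cls` is a sinusoid with cls+1 cycles, plus noise.
    t = np.linspace(0, 1, length)
    return np.sin(2 * np.pi * (cls + 1) * t) + rng.normal(0, 0.1, length)

# Training set: three gesture classes plus an out-of-vocabulary noise class 3.
X, y = [], []
for c in range(3):
    for _ in range(60):
        X.append(resample(gesture(c, SCALES[rng.integers(3)])))
        y.append(c)
for _ in range(60):
    X.append(resample(rng.normal(0, 0.5, SCALES[rng.integers(3)])))
    y.append(3)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Test stream: noise, one class-1 gesture of length 30 at [25, 55), noise.
stream = np.concatenate([rng.normal(0, 0.5, 25), gesture(1, 30),
                         rng.normal(0, 0.5, 25)])
best = (0.0, None, None)               # (confidence, class, (start, end))
for L in SCALES:
    for s in range(len(stream) - L + 1):
        p = clf.predict_proba([resample(stream[s:s + L])])[0]
        c = int(np.argmax(p))
        if c != 3 and p[c] > best[0]:  # ignore non-gesture windows
            best = (p[c], c, (s, s + L))
conf, cls, (start, end) = best
print(f"class {cls} at [{start}, {end}) with confidence {conf:.2f}")
```

The winning window's scale directly yields the temporal boundaries, mirroring the unified segmentation-plus-classification idea in the abstract.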

    Introducing Geometry in Active Learning for Image Segmentation

    We propose an Active Learning approach to training a segmentation classifier that exploits geometric priors to streamline the annotation process in 3D image volumes. To this end, we use these priors not only to select the voxels most in need of annotation, but also to guarantee that they lie on a 2D planar patch, which makes them much easier to annotate than if they were randomly distributed in the volume. A simplified version of this approach is effective in natural 2D images. We evaluated our approach on Electron Microscopy and Magnetic Resonance image volumes, as well as on natural images. Comparing our approach against several accepted baselines demonstrates a marked performance increase.
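A minimal sketch of the plane-constrained selection idea, under strong simplifications that are all assumptions: a synthetic volume, a single per-voxel intensity feature, logistic regression as the segmentation classifier, and whole axis-aligned z-slices standing in for the 2D planar patches.

```python
# Uncertainty-driven active learning where each annotation batch is a plane.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
D = H = W = 12
labels = np.zeros((D, H, W), dtype=int)
labels[:, :, W // 2:] = 1                      # "object" fills half the volume
intensity = labels + rng.normal(0, 0.4, (D, H, W))
feats = intensity.reshape(-1, 1)               # one intensity feature per voxel
flat_labels = labels.ravel()

annotated = np.zeros(D * H * W, dtype=bool)
annotated[rng.choice(D * H * W, size=40, replace=False)] = True  # seed labels

for _ in range(3):                             # three active-learning rounds
    clf = LogisticRegression().fit(feats[annotated], flat_labels[annotated])
    prob = clf.predict_proba(feats)[:, 1]
    uncertainty = 1.0 - 2.0 * np.abs(prob - 0.5)   # 1 at p=0.5, 0 at p in {0,1}
    uncertainty[annotated] = 0.0
    u = uncertainty.reshape(D, H, W)
    z = int(np.argmax(u.sum(axis=(1, 2))))     # most uncertain axial slice
    plane = np.zeros((D, H, W), dtype=bool)
    plane[z] = True                            # annotate that whole 2D plane
    annotated |= plane.ravel()

clf = LogisticRegression().fit(feats[annotated], flat_labels[annotated])
acc = (clf.predict(feats) == flat_labels).mean()
print(f"accuracy after plane-constrained active learning: {acc:.2f}")
```

Selecting a whole slice rather than scattered voxels is the point: a human can label a plane in one pass, whereas isolated voxels deep in a volume are slow to annotate.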

    Multiclass semantic segmentation for digitisation of movable heritage using deep learning techniques

    Digitisation processes of movable heritage are becoming increasingly popular to document the artworks stored in our museums. An increasing number of strategies for the three-dimensional (3D) acquisition and modelling of these invaluable assets have been developed in the last few years, to respond efficiently to this documentation need and to deepen knowledge of the masterpieces constantly investigated by researchers across many fields. Nowadays, one of the most effective solutions is image-based techniques, usually connected to a Structure-from-Motion (SfM) photogrammetric approach. However, while image acquisition is relatively rapid, data processing is very time-consuming and requires substantial manual involvement from the operator. Deep learning-based strategies can be an effective way to raise the level of automation. This research, carried out in the framework of the digitisation of a collection of wooden maquettes stored in the ‘Museo Egizio di Torino’ using a photogrammetric approach, proposes an automatic masking strategy based on deep learning techniques to increase the level of automation and thereby optimise the photogrammetric pipeline. Starting from a manually annotated dataset, a neural network was trained to perform a semantic classification that isolates the maquettes from the background. The proposed methodology produced automatically segmented masks with a high degree of accuracy. The workflow (acquisition strategies, dataset processing, and neural network training) is described, and the accuracy of the results is evaluated and discussed.
In addition, the possibility of performing a multiclass segmentation on the digital images to recognise different categories of objects in the images and define a semantic hierarchy is proposed to perform automatic classification of different elements in the acquired images.
Patrucco, G., & Setragno, F. (2021). Multiclass semantic segmentation for digitisation of movable heritage using deep learning techniques. Virtual Archaeology Review, 12(25), 85-98. https://doi.org/10.4995/var.2021.15329
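The masking step described above can be sketched in miniature. The following is a hedged illustration, not the paper's network: a tiny MLP classifies per-pixel colour on synthetic images, with a wood-toned square standing in for the maquette, and the predicted mask is scored by intersection-over-union.

```python
# Learning a foreground mask from a few annotated images.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)

def make_image():
    # Wood-toned square (the "maquette") on a darker green-grey background.
    mask = np.zeros((16, 16), dtype=int)
    y0, x0 = rng.integers(2, 8, size=2)
    mask[y0:y0 + 6, x0:x0 + 6] = 1
    img = np.where(mask[..., None] == 1, [0.7, 0.5, 0.3], [0.2, 0.3, 0.2])
    return img + rng.normal(0, 0.05, img.shape), mask

# A handful of "manually annotated" training images.
X_tr, y_tr = [], []
for _ in range(8):
    img, mask = make_image()
    X_tr.append(img.reshape(-1, 3))
    y_tr.append(mask.ravel())
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
clf.fit(np.vstack(X_tr), np.concatenate(y_tr))

# Predict the mask on a held-out image and score it with IoU.
img, mask = make_image()
pred = clf.predict(img.reshape(-1, 3)).reshape(16, 16)
inter = np.logical_and(pred == 1, mask == 1).sum()
union = np.logical_or(pred == 1, mask == 1).sum()
iou = inter / union
print(f"mask IoU on a held-out image: {iou:.2f}")
```

In the photogrammetric pipeline, such predicted masks would replace the manual masking step before dense matching, which is where the time saving the abstract claims comes from.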

    Dense Semantic Image Segmentation with Objects and Attributes

    The concepts of objects and attributes are both important for describing images precisely, since verbal descriptions often contain both adjectives and nouns (e.g. ‘I see a shiny red chair’). In this paper, we formulate the problem of joint visual attribute and object class image segmentation as a dense multi-labelling problem, where each pixel in an image can be associated with both an object class and a set of visual attribute labels. In order to learn the label correlations, we adopt a boosting-based piecewise training approach with respect to the visual appearance and co-occurrence cues. We use a filtering-based mean-field approximation approach for efficient joint inference. Further, we develop a hierarchical model to incorporate region-level object and attribute information. Experiments on the aPASCAL, CORE and attribute-augmented NYU indoor scenes datasets show that the proposed approach is able to achieve state-of-the-art results.
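The filtering-based mean-field idea can be sketched for a single binary label, a strong simplification of the joint object-and-attribute model: here a box filter replaces the Gaussian/bilateral filtering of dense-CRF inference, and the pairwise weight 4.0 is an arbitrary choice.

```python
# Mean-field inference where each message pass is a cheap filtering step.
import numpy as np

rng = np.random.default_rng(5)
H = W = 24
labels = np.zeros((H, W), dtype=int)
labels[6:18, 6:18] = 1
# Noisy per-pixel unary evidence: 20% of pixels observe the wrong label.
observed = np.where(rng.random((H, W)) < 0.8, labels, 1 - labels)

def box_filter(p, r=2):
    # Crude stand-in for the Gaussian/bilateral filtering of the messages.
    out = np.zeros_like(p)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(p, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def softmax(z):
    e = np.exp(z - z.max(axis=0))
    return e / e.sum(axis=0)

# Unary scores: +1 for the observed label's channel, -1 for the other.
unary = np.stack([observed == 0, observed == 1]).astype(float) * 2 - 1
q = softmax(unary)
for _ in range(5):
    msg = np.stack([box_filter(q[c]) for c in range(2)])
    q = softmax(unary + 4.0 * msg)   # pairwise weight 4.0 is an assumption

pred = q.argmax(axis=0)
acc_unary = (observed == labels).mean()
acc_mf = (pred == labels).mean()
print(f"accuracy: unary {acc_unary:.2f} -> mean-field {acc_mf:.2f}")
```

Because each mean-field update is just a filter over the current marginals, inference stays linear in the number of pixels, which is what makes the dense joint labelling tractable.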

    Classifying Cinematographic Shot Types

    In film-making, the distance from the camera to the subject greatly affects the narrative power of a shot. By alternating long shots, medium shots, and close-ups, the director is able to place emphasis on key passages of the filmed scene. In this work we investigate five different inherent characteristics of single shots which carry indirect information about camera distance, without the need to recover the 3D structure of the scene. Specifically, 2D scene geometric composition, frame colour intensity properties, motion distribution, spectral amplitude, and shot content are considered for classifying shots into three main categories. In the experimental phase, we demonstrate the validity of the framework and the effectiveness of the proposed descriptors by classifying a significant dataset of movie shots using C4.5 Decision Trees and Support Vector Machines. After comparing the performance of the statistical classifiers using the combined descriptor set, we test the ability of each single feature in distinguishing shot types. Published online Nov. 2011; print publication Jan. 2013. Canini, Luca; Benini, Sergio; Leonardi, Riccardo.
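The classification experiment can be sketched with synthetic stand-ins for the shot descriptors. The three features and their per-class prototypes below are invented for illustration, and scikit-learn's DecisionTreeClassifier substitutes for C4.5.

```python
# Shot-type classification from hand-crafted features: tree vs. SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)

# Hypothetical per-class prototypes for
# (subject_area_ratio, edge_density, motion_magnitude).
prototypes = {0: (0.05, 0.8, 0.2),   # long shot: small subject, busy frame
              1: (0.25, 0.5, 0.4),   # medium shot
              2: (0.60, 0.2, 0.6)}   # close-up: large subject, flatter frame
X, y = [], []
for c, proto in prototypes.items():
    for _ in range(80):
        X.append(np.array(proto) + rng.normal(0, 0.07, 3))
        y.append(c)
X, y = np.array(X), np.array(y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
tree_acc = tree.score(X_te, y_te)
svm_acc = svm.score(X_te, y_te)
print(f"decision tree: {tree_acc:.2f}  svm: {svm_acc:.2f}")
```

Dropping one feature column at a time from X before training would reproduce, in miniature, the per-feature ablation the abstract describes.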