
    Hi4D: 4D Instance Segmentation of Close Human Interaction

    We propose Hi4D, a method and dataset for the automatic analysis of physically close human-human interaction under prolonged contact. Robustly disentangling several in-contact subjects is a challenging task due to occlusions and complex shapes. Hence, existing multi-view systems typically fuse 3D surfaces of close subjects into a single, connected mesh. To address this issue we leverage i) individually fitted neural implicit avatars and ii) an alternating optimization scheme that refines pose and surface through periods of close proximity, and thus segment the fused raw scans into individual instances. From these instances we compile the Hi4D dataset of 4D textured scans of 20 subject pairs, 100 sequences, and a total of more than 11K frames. Hi4D contains rich interaction-centric annotations in 2D and 3D alongside accurately registered parametric body models. We define varied human pose and shape estimation tasks on this dataset and provide results from state-of-the-art methods on these benchmarks. Project page: https://yifeiyin04.github.io/Hi4D

    Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends

    Aiming at obtaining structural information and 3D motion of dynamic scenes, scene flow estimation has long been a research interest in computer vision and computer graphics. It is also a fundamental task for applications such as autonomous driving. Compared to previous methods that rely on image representations, much recent research builds on the power of deep learning and focuses on point cloud representations to conduct 3D flow estimation. This paper comprehensively reviews the pioneering literature on scene flow estimation from point clouds. It delves into the details of the learning paradigms involved and presents insightful comparisons between state-of-the-art deep learning methods for scene flow estimation. Furthermore, the paper investigates various higher-level scene understanding tasks, including object tracking and motion segmentation, and concludes with an overview of foreseeable research trends for scene flow estimation.
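    As a concrete reference point for how such point-cloud methods are typically compared, below is a minimal sketch of two commonly reported evaluation metrics: the mean end-point error (EPE3D) and a strict accuracy score. The exact thresholds vary between benchmarks; the values used here are assumptions for illustration only.

```python
import numpy as np

def epe_3d(pred_flow, gt_flow):
    """Mean end-point error: average Euclidean distance between predicted
    and ground-truth 3D flow vectors (arrays of shape (N, 3))."""
    return np.linalg.norm(pred_flow - gt_flow, axis=1).mean()

def accuracy_strict(pred_flow, gt_flow, abs_tol=0.05, rel_tol=0.05):
    """Fraction of points whose error is below abs_tol (metres) or below
    rel_tol of the ground-truth flow magnitude (an 'Acc3D strict'-style score)."""
    err = np.linalg.norm(pred_flow - gt_flow, axis=1)
    mag = np.linalg.norm(gt_flow, axis=1) + 1e-8   # avoid division by zero
    return np.mean((err < abs_tol) | (err / mag < rel_tol))

# Toy usage with random (N, 3) flow fields.
gt = np.random.randn(1024, 3).astype(np.float32)
pred = gt + 0.01 * np.random.randn(1024, 3).astype(np.float32)
print(epe_3d(pred, gt), accuracy_strict(pred, gt))
```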

    Annals [...].

    Pedometrics: innovation in the tropics; Legacy data: how to make it useful?; Advances in soil sensing; Pedometric guidelines for systematic soil surveys. Online event. Coordinated by: Waldir de Carvalho Junior, Helena Saraiva Koenow Pinheiro, Ricardo Simão Diniz Dalmolin.

    Proceedings of the Third Annual Colloquium of the Department of Anthropology, Université de Montréal 2021

    Regards croisés sur l’esprit is the third volume of the Proceedings of the Annual Colloquium of the Department of Anthropology of the Université de Montréal (CADA). The volume brings together five of the thirteen papers presented at the colloquium, which was exceptionally held online from 22 to 25 March 2021 because of the Covid-19 pandemic. The event showed that virtual conferences are an alternative that keeps student research dynamic. The spirit world among the Maseual-Nahua of the Sierra Norte de Puebla, Mexico: the treatment of fright (nemoujtil) as a revealer / Pierre Beaucage; When the researcher sees spirits / Deirdre Meintel; The Holy Spirit: from a space of intersubjectivity to a social actor / Guillaume Boucher; Awareness of death: symbolic thought, funerary rites and the quest for immortality / Émilie Lessard and Mélissa Bernard; Aesthetics Before Art / Thomas Wyn

    Robustness against adversarial attacks on deep neural networks

    While deep neural networks have been successfully applied in several different domains, they exhibit vulnerabilities to artificially crafted perturbations in data. Moreover, these perturbations have been shown to be transferable: the same perturbations can be transferred between different models. In response to this problem, many robust learning approaches have emerged. Adversarial training is regarded as the mainstream approach to enhancing the robustness of deep neural networks with respect to norm-constrained perturbations. However, adversarial training requires a large number of perturbed examples (e.g., over 100,000 examples for the MNIST dataset) before robustness is considerably enhanced, which is problematic due to the large computational cost of obtaining attacks. Developing computationally effective approaches that retain robustness against norm-constrained perturbations remains a challenge in the literature.

    In this research we present two novel robust training algorithms based on Monte-Carlo Tree Search (MCTS) [1] to enhance robustness under norm-constrained perturbations [2, 3]. The first algorithm searches potential candidates with the Scale Invariant Feature Transform method and makes decisions with Monte-Carlo Tree Search [2]. The second algorithm adopts a Decision Tree Search (DTS) method to accelerate the search process while maintaining efficiency [3]. Our overarching objective is to provide computationally effective approaches that can be deployed to train deep neural networks that are robust against perturbations in data. We illustrate the robustness of these algorithms by studying their resistance to adversarial examples in the context of the MNIST and CIFAR10 datasets. For MNIST, the results showed an average saving in training effort of 21.1% compared to Projected Gradient Descent (PGD) and 28.3% compared to the Fast Gradient Sign Method (FGSM). For CIFAR10, we obtained an average efficiency improvement of 9.8% compared to PGD and 13.8% compared to FGSM. The results suggest that the two methods introduced here are not only robust to norm-constrained perturbations but also efficient during training.

    Regarding transferability of defences, our experiments [4] reveal that across different network architectures, across a variety of attack methods from white-box to black-box, and across various datasets including MNIST and CIFAR10, our algorithms outperform other state-of-the-art methods, e.g., PGD and FGSM. Furthermore, the derived attacks and robust models obtained with our framework are reusable, in the sense that the same norm-constrained perturbations can facilitate robust training across different networks. Lastly, we investigate the robustness of intra-technique and cross-technique transferability and its relation to different impact factors, from adversarial strength to network capacity. The results suggest that known attacks on the resulting models are less transferable than those on models trained by other state-of-the-art attack algorithms. Our results suggest that exploiting these tree search frameworks can yield significant improvements in the robustness of deep neural networks while saving computational cost on robust training. This paves the way for several future directions, both algorithmic and theoretical, as well as numerous applications to establish the robustness of deep neural networks with increasing trust and safety.
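    For context, the PGD and FGSM baselines mentioned above craft norm-constrained perturbations via gradient steps. Below is a minimal PyTorch sketch of the FGSM baseline; it illustrates the kind of L-infinity perturbation being defended against, not the thesis's MCTS- or DTS-based algorithms, and the model, data, and eps are assumed.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """Craft an L-infinity norm-constrained adversarial example:
    step the input by eps in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # single signed gradient step
    return x_adv.clamp(0.0, 1.0).detach()    # keep inputs in a valid pixel range
```

    Adversarial training then mixes such perturbed examples into every training batch, which is what makes it computationally expensive and motivates the cheaper tree-search-based training explored in this work.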

    Joint optimization of depth and ego-motion for intelligent autonomous vehicles

    The three-dimensional (3D) perception of autonomous vehicles is crucial for localization and analysis of the driving environment, yet it demands massive computing resources for deep learning that cannot be provided by vehicle-mounted devices. This calls for the seamless, reliable, and efficient massive connections provided by the 6G network for computing in the cloud. In this paper, we propose a novel deep learning framework with a 6G-enabled transport system for the joint optimization of depth and ego-motion estimation, an important task in 3D perception for autonomous driving. We propose a novel loss based on feature maps and a quadtree, which uses a feature-value loss with quadtree coding instead of a photometric loss to merge feature information in texture-less regions. Besides, we also propose a novel multi-level V-shaped residual network to estimate the depth of the image, which combines the advantages of a V-shaped network and a residual network and addresses the poor feature extraction that can result from a simple fusion of low-level and high-level features. Lastly, to alleviate the influence of image noise on pose estimation, we propose a number of parallel sub-networks that take the RGB image and its feature map as input. Experimental results show that our method significantly improves the quality of the depth map and the localization accuracy, achieving state-of-the-art performance.
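    The abstract does not spell out the quadtree coding behind the feature-value loss, but the underlying idea of aggregating texture-less regions into coarser cells can be illustrated with a generic variance-based quadtree decomposition. The variance threshold, minimum block size, and square power-of-two image shape below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def quadtree_cells(img, thresh=0.01, min_size=8):
    """Recursively split a square (power-of-two) image into blocks; stop splitting
    once a block's intensity variance falls below `thresh` (a texture-less region)
    or the block reaches `min_size`. Returns (row, col, size) leaf cells."""
    cells = []

    def split(r, c, s):
        block = img[r:r + s, c:c + s]
        if s <= min_size or block.var() < thresh:
            cells.append((r, c, s))
            return
        h = s // 2
        for dr in (0, h):
            for dc in (0, h):
                split(r + dr, c + dc, h)

    split(0, 0, img.shape[0])
    return cells

# A flat (texture-less) image collapses to one coarse cell, while a noisy,
# textured image is decomposed down to fine cells.
flat, noisy = np.zeros((64, 64)), np.random.rand(64, 64)
print(len(quadtree_cells(flat)), len(quadtree_cells(noisy)))
```

    A loss aggregated over such cells pools information across an entire texture-less block instead of relying on uninformative per-pixel photometric differences.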

    Regularized interior point methods for convex programming

    Interior point methods (IPMs) constitute one of the most important classes of optimization methods, due to their unparalleled robustness as well as their generality. It is well known that a very large class of convex optimization problems can be solved by means of IPMs in a polynomial number of iterations. As a result, IPMs are used to solve problems arising in a plethora of fields, ranging from physics, engineering, and mathematics to the social sciences, to name just a few. Nevertheless, certain numerical issues remain that have not yet been addressed. More specifically, the main drawback of IPMs is that the linear algebra task involved is inherently ill-conditioned. At every iteration of the method, one has to solve a (possibly large-scale) linear system of equations (also known as the Newton system), the conditioning of which deteriorates as the IPM converges to an optimal solution. If these linear systems are of very large dimension, prohibiting the use of direct factorization, then iterative schemes may have to be employed. Such schemes are significantly affected by the inherent ill-conditioning within IPMs. One common approach for alleviating these numerical issues is to employ regularized IPM variants, which tend to be more robust and numerically stable in practice. Over the last two decades, the theory behind regularization has advanced significantly. In particular, it is well known that regularized IPM variants can be interpreted as hybrid approaches combining IPMs with the proximal point method. However, it remained unknown whether regularized IPMs retain the polynomial complexity of their non-regularized counterparts. Furthermore, the very important issue of tuning the regularization parameters appropriately, which is also crucial in augmented Lagrangian methods, had not been addressed. In this thesis, we focus on addressing these open questions, as well as on creating robust implementations that solve various convex optimization problems. We discuss in detail the effect of regularization and derive two different regularization strategies: one based on the proximal method of multipliers, and another based on a Bregman proximal point method. The latter tends to be more efficient, while the former is more robust and has better convergence guarantees. In addition, we discuss the use of iterative linear algebra within the presented algorithms, proposing some general-purpose preconditioning strategies (used to accelerate the iterative schemes) that take advantage of the regularized nature of the systems being solved.

    In Chapter 2 we present a dynamic non-diagonal regularization for IPMs. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system which do not contribute important information to the computation of the Newton direction. Such a regularization, which can be interpreted as the application of a Bregman proximal point method, has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each IPM iteration. On the other hand, the regularization matrices introduce sparsity to the aforementioned linear system, allowing for more efficient factorizations. We propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly. This alleviates the need to find specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix and then discuss the spectral properties of the regularized matrix. Finally, we demonstrate the efficiency of the method on standard small- and medium-scale linear and convex quadratic programming test problems.

    In Chapter 3 we combine an IPM with the proximal method of multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strong convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM and hence are well-tuned and do not depend on the problem being solved. Furthermore, we study the behaviour of the method when it is applied to an infeasible problem and identify a necessary condition for infeasibility, which is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it over a set of small- to large-scale linear and convex quadratic programming problems, demonstrating the benefits of using regularization in IPMs as well as the reliability of the approach.

    In Chapter 4 we extend IP-PMM to the case of linear semi-definite programming (SDP) problems. In particular, we prove polynomial complexity of the algorithm under mild assumptions and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.

    In Chapter 5 we present general-purpose preconditioners for regularized Newton systems arising within regularized interior point methods. We discuss positive definite preconditioners suitable for iterative schemes like the conjugate gradient (CG) or the minimal residual (MINRES) method. We study the spectral properties of the preconditioned systems and discuss the use of each presented approach depending on the properties of the problem under consideration. All preconditioning strategies are numerically tested on various medium- to large-scale problems coming from standard test sets, as well as problems arising from partial differential equation (PDE) optimization.

    In Chapter 6 we apply specialized regularized IPM variants to problems arising from portfolio optimization, machine learning, image processing, and statistics. Such problems are usually solved by specialized first-order approaches. The efficiency of the proposed regularized IPM variants is confirmed by comparing them against problem-specific state-of-the-art first-order alternatives given in the literature.

    Finally, in Chapter 7 we present some conclusions, open questions, and possible future research directions.
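    For readers less familiar with the linear algebra referred to above, the following is a standard schematic (in notation assumed here, not necessarily that of the thesis) of the augmented Newton system for a linearly constrained convex QP and its primal-dual regularized counterpart:

```latex
% QP: minimize c^T x + (1/2) x^T Q x  subject to  A x = b,  x >= 0.
% Unregularized augmented system, with Theta = X S^{-1} built from the current iterate:
\[
\begin{bmatrix} -(Q + \Theta^{-1}) & A^{\top} \\ A & 0 \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} \xi_d \\ \xi_p \end{bmatrix},
\qquad \Theta = X S^{-1}.
\]
% Primal-dual regularized system, with rho, delta > 0:
\[
\begin{bmatrix} -(Q + \Theta^{-1} + \rho I) & A^{\top} \\ A & \delta I \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
=
\begin{bmatrix} \hat{\xi}_d \\ \hat{\xi}_p \end{bmatrix}.
\]
```

    Entries of \(\Theta^{-1}\) blow up as the iterates approach an optimal solution, which is the source of the ill-conditioning; the added \(\rho I\) and \(\delta I\) blocks keep the eigenvalues of both diagonal blocks bounded away from zero, which is the usual rationale for regularized variants of this kind.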

    Resilience of power grids and other supply networks: structural stability, cascading failures and optimal topologies

    The consequences of the climate crisis are already present and can be expected to become more severe in the future. To mitigate the long-term consequences, most of the world's countries have committed to limiting the temperature rise via the Paris Agreement of 2015. To achieve this goal, energy production needs to decarbonise, which entails fundamental changes in many aspects of society. In particular, electrical power production is shifting from fossil fuels to renewable energy sources to limit greenhouse gas emissions. The electrical power transmission grid plays a crucial role in this transformation. Notably, the storage and long-distance transport of electrical power become increasingly important, since variable renewable energy sources (VRES) are subject to external factors such as weather conditions, and their power production is therefore regionally and temporally diverse. As a result, the transmission grid experiences higher loadings and bottlenecks appear. In a highly loaded grid, a single transmission line or generator outage can trigger overloads on other components via flow rerouting. These may in turn trigger additional rerouting and overloads until, finally, parts of the grid become disconnected. Such cascading failures can result in large-scale power blackouts, which bear enormous risks, as almost all infrastructure and economic activity depend on a reliable supply of electric power. Thus, it is essential to understand how networks react to local failures, how flow is rerouted after failures, and how cascades emerge and spread in different power transmission grids, in order to ensure stable power grid operation.

    In this thesis, I examine how the network topology shapes the resilience of power grids and other supply networks. First, I analyse how flow is rerouted after the failure of a single link or a few links, and derive mathematically rigorous results on the decay of flow changes with different network-based distance measures. Furthermore, I demonstrate that the impact of single link failures follows universal statistics across different topologies and introduce a stochastic model for cascading failures that incorporates crucial aspects of flow redistribution. Based on this improved understanding of link failures, I propose network modifications that attenuate or completely suppress the impact of link failures in parts of the network and thereby significantly reduce the risk of cascading failures. Next, I compare the topological characteristics of different kinds of supply networks to analyse how the trade-off between efficiency and resilience determines the structure of optimal supply networks. Finally, I examine what shapes the risk of incurring large-scale cascading failures in a realistic power system model, in order to assess the effects of the energy transition in Europe.
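    Flow rerouting after a link failure, as studied above, is commonly analysed in the DC (linearized) power-flow approximation. The sketch below uses a made-up 4-bus network to recompute line flows before and after removing one line, showing how the lost flow is redistributed over the remaining lines; the topology, susceptances, and injections are illustrative assumptions only.

```python
import numpy as np

# Hypothetical 4-bus example: lines as (from, to, susceptance),
# nodal power injections (generation positive, load negative) summing to zero.
lines = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
injections = np.array([2.0, -1.0, -2.0, 1.0])

def dc_flows(lines, injections, slack=0):
    """Solve the DC power flow B * theta = P (slack angle fixed to zero) and
    return the flow on each line, F_ij = b_ij * (theta_i - theta_j)."""
    n = len(injections)
    B = np.zeros((n, n))                          # nodal susceptance (Laplacian) matrix
    for i, j, b in lines:
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    keep = [k for k in range(n) if k != slack]
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], injections[keep])
    return {(i, j): b * (theta[i] - theta[j]) for i, j, b in lines}

base = dc_flows(lines, injections)
# Remove the line between buses 0 and 2 and recompute: its flow is rerouted.
outage = dc_flows([l for l in lines if l[:2] != (0, 2)], injections)
for line in outage:
    print(line, round(base[line], 3), "->", round(outage[line], 3))
```

    Comparing the two flow patterns line by line gives the kind of flow-change data that decay-with-distance analyses of single link failures start from.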