587 research outputs found
Solving a variational image restoration model which involves L∞ constraints
In this paper, we seek a solution to linear inverse problems arising in image restoration, in terms of a recently posed optimization problem which combines total variation minimization and wavelet-thresholding ideas. The resulting nonlinear programming task is solved via a dual Uzawa method in its general form, leading to an efficient and general algorithm which allows for very good structure-preserving reconstructions. Along with a theoretical study of the algorithm, the paper details some aspects of the implementation, discusses numerical convergence, and finally displays a few images obtained for some difficult restoration tasks.
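The two building blocks such models combine are cheap to apply in isolation. As a minimal sketch (function names are mine, not the paper's): the L∞ constraint is enforced by a component-wise projection, and the wavelet-thresholding idea corresponds to soft-shrinkage of coefficients.

```python
import numpy as np

def project_linf_ball(x, radius):
    """Project x onto the L-infinity ball {y : |y_i| <= radius}.

    This is just a component-wise clip, which is why L-infinity
    constraints are cheap to enforce inside iterative solvers."""
    return np.clip(x, -radius, radius)

def soft_threshold(x, t):
    """Wavelet-style soft-thresholding: shrink each coefficient toward 0."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 4.0])
print(project_linf_ball(coeffs, 1.0))  # clipped to [-1, 1]
print(soft_threshold(coeffs, 1.0))     # shrunk by 1, small entries zeroed
```

Both operations are the proximal/projection steps that a dual method like Uzawa's alternates with gradient updates; the paper's actual iteration is more involved.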
Computing the Fundamental Group in Digital Space
As with its analogue in the continuous framework, the digital fundamental group carries major information about the topology of discrete objects. However, the fundamental group is an abstract object and cannot be directly encoded in a computer from its definition. A classical mathematical way to encode a discrete group is to find a presentation of this group. In this paper, we construct a presentation for the fundamental group of an arbitrary graph, and a finite presentation (hence encodable in the memory of a computer) of any subset of Z^3. This presentation can be computed by an efficient algorithm.
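For the graph case, a classical construction makes the idea concrete: the fundamental group of a connected graph is free, with one generator per edge outside a spanning tree (so its rank is E - V + 1). A minimal sketch, not the paper's algorithm:

```python
from collections import deque

def free_generators(vertices, edges):
    """Generators of the fundamental group of a connected graph.

    Build a BFS spanning tree; each edge outside the tree yields one
    generator of the (free) fundamental group."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = next(iter(vertices))
    seen = {root}
    tree = set()
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree.add(frozenset((u, v)))
                queue.append(v)
    return [e for e in edges if frozenset(e) not in tree]

# A square with one diagonal: 4 vertices, 5 edges -> rank 5 - 4 + 1 = 2.
gens = free_generators({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
print(len(gens))  # 2
```

For subsets of Z^3 the situation is harder precisely because relations appear, which is why a finite presentation (generators *and* relators) is the object the paper constructs.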
Non-heuristic reduction of the graph in graph-cut optimization
During the last ten years, graph cuts have had a growing impact in shape optimization. In particular, they are commonly used in applications of shape optimization such as image processing, computer vision and computer graphics. Their success is due to their ability to efficiently solve (apparently) difficult shape optimization problems which typically involve the perimeter of the shape. Nevertheless, solving problems with a large number of variables remains computationally expensive and requires high memory usage, since the underlying graphs sometimes involve billions of nodes and even more edges. Several strategies have been proposed in the literature to improve graph cuts in this regard. In this paper, we give a formal statement showing that a simple, local test performed on every node before its construction makes it possible to avoid building useless nodes for the graphs typically encountered in image processing and vision. A node is useless if the value of the maximum flow in the graph does not change when the node is removed from the graph. Such a test therefore limits the construction of the graph to a band of useful nodes surrounding the final cut.
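To illustrate the flavor of such a local test (this is a simplified stand-in, not the paper's exact criterion): if one terminal capacity of a pixel exceeds the other terminal capacity plus everything its neighbor edges could carry, every minimum cut puts that pixel on the dominant side, so the node never needs to be constructed.

```python
def node_is_forced(cap_source, cap_sink, neighbor_caps):
    """Local test deciding a pixel's side without building its node.

    If cap_source > cap_sink + sum(neighbor capacities), placing the
    node on the source side costs strictly less in every cut, so the
    node is 'useless' for the max-flow computation (and symmetrically
    for the sink side). Simplified illustration only."""
    slack = sum(neighbor_caps)
    if cap_source > cap_sink + slack:
        return "source"
    if cap_sink > cap_source + slack:
        return "sink"
    return None  # ambiguous: the node must actually be built

print(node_is_forced(10.0, 1.0, [2.0, 2.0, 2.0, 2.0]))  # source
print(node_is_forced(3.0, 2.0, [2.0, 2.0, 2.0, 2.0]))   # None
```

In image-processing graphs the ambiguous region is typically a thin band around the final contour, which is where the memory savings come from.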
Normals estimation for digital surfaces based on convolutions
In this paper, we present a method that we call on-surface convolution, which extends the classical notion of a 2D digital filter to the case of digital surfaces (following the cuberille model). We also define an averaging mask with local support which, when applied with the iterated convolution operator, behaves like an averaging with large support. The interesting property of the latter averaging is the way the resulting weights are distributed: given a digital surface obtained by discretization of a differentiable surface of R^3, the mask's isocurves are close to the Riemannian isodistance curves from the center of the mask. Finally, we use the iterated averaging followed by convolutions with differentiation masks to estimate partial derivatives and then normal vectors over the surface. The number of iterations required to achieve a good estimate is determined experimentally on digitized spheres and tori. The precision of the normal estimation is also investigated as a function of the digitization step.
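The smoothing-then-differentiation pipeline can be sketched in a 1D analogue (a digitized height profile instead of a digital surface; function names and masks are illustrative, not the paper's):

```python
import numpy as np

def iterated_average(field, iterations):
    """Iterate a small averaging mask; repeated application behaves
    like a single averaging with a much larger support."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    for _ in range(iterations):
        field = np.convolve(field, kernel, mode="same")
    return field

def estimate_normals(heights, iterations=5):
    """Estimate 2D unit normals of a digitized height profile by
    smoothing, then convolving with a central-difference mask."""
    smooth = iterated_average(heights, iterations)
    deriv = np.convolve(smooth, [0.5, 0.0, -0.5], mode="same")
    normals = np.stack([-deriv, np.ones_like(deriv)], axis=1)
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

heights = np.round(0.5 * np.arange(50))  # staircase digitization, slope 1/2
normals = estimate_normals(heights)
print(normals[25])  # roughly (-0.45, 0.89), the normal of the slope-1/2 line
```

The point of the staircase example is the same as in the paper: the raw differences of a digitized surface are useless, but after a few averaging iterations the differentiation mask recovers the underlying slope.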
A Predual Proximal Point Algorithm solving a Non Negative Basis Pursuit Denoising model
This paper develops an implementation of a Predual Proximal Point Algorithm (PPPA) solving a Non-Negative Basis Pursuit Denoising model. The model imposes a constraint on the l2 norm of the residual, instead of penalizing it. The PPPA solves the predual of the problem with a Proximal Point Algorithm (PPA). Moreover, the minimization that needs to be performed at each iteration of the PPA is solved with a dual method. We prove that these dual variables converge to a solution of the initial problem. Our analysis shows that we turn a constrained non-differentiable convex problem into a short sequence of nice concave maximization problems. By nice, we mean that the functions which are maximized are differentiable and their gradient is Lipschitz. The algorithm is easy to implement, easier to tune, and more general than the algorithms found in the literature. In particular, it can be applied to Basis Pursuit Denoising (BPDN) and Non-Negative Basis Pursuit Denoising (NNBPDN), and it does not make any assumption on the dictionary. We prove its convergence to the set of solutions of the model and provide some convergence rates. Experiments on image approximation show that the performance of the PPPA is at the current state of the art for BPDN.
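For orientation, here is the *penalized* analogue of the non-negative model solved by plain proximal gradient (a much simpler method than the PPPA, shown only to fix the objective; all names are mine): minimize 0.5||Dx - y||^2 + lam * sum(x) over x >= 0, whose prox step is a one-sided shrinkage.

```python
import numpy as np

def nn_basis_pursuit_denoise(D, y, lam, steps=500):
    """Proximal gradient for the penalized non-negative sparse model
        min_{x >= 0}  0.5 * ||D x - y||^2 + lam * sum(x).
    The prox of the nonnegative l1 penalty is a one-sided shrinkage:
    shift by the step times lam, then clip at zero."""
    x = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of grad
    for _ in range(steps):
        grad = D.T @ (D @ x - y)
        x = np.maximum(x - step * (grad + lam), 0.0)
    return x

D = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
x = nn_basis_pursuit_denoise(D, np.array([1.0, 0.0]), lam=0.1)
print(x)  # close to [0.9, 0, 0]: one atom selected, shrunk by lam
```

The paper's model instead constrains ||Dx - y||_2 directly, which removes the need to tune lam against the noise level; that constrained form is what the predual PPA machinery handles.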
Estimating the probability law of the codelength as a function of the approximation error in image compression
After a review of compression through a projection onto a polyhedral set (which generalizes compression by coordinate quantization), we express, in this framework, the probability that an image is coded with coefficients as an explicit function of the approximation error.
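The special case the framework generalizes is easy to demonstrate: coordinate quantization is the nearest-point projection onto the lattice step * Z^n, and the resulting codelength shrinks as the allowed approximation error grows. A small empirical illustration (not the paper's explicit formula):

```python
import numpy as np

def quantize(x, step):
    """Coordinate-wise quantization: nearest point of the lattice
    step * Z^n, i.e. a projection in the sense discussed above."""
    return step * np.round(x / step)

def empirical_codelength(symbols):
    """Empirical Shannon codelength (total bits) of a symbol sequence."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -counts @ np.log2(p)

rng = np.random.default_rng(0)
x = rng.laplace(size=10_000)  # Laplacian: a common model for image coefficients
for step in (0.5, 1.0, 2.0):
    q = quantize(x, step)
    rmse = np.sqrt(np.mean((x - q) ** 2))
    print(f"step={step}: rmse={rmse:.3f}, bits={empirical_codelength(q):.0f}")
```

Coarser steps give larger approximation error and fewer bits; the paper makes this trade-off explicit as a probability law rather than an empirical curve.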
Trade Shocks and Local Employment Multipliers: Evidence from France
In this paper, I develop a simple model of spatial equilibrium to investigate theoretically what determines the sign and magnitude of "local multipliers" (defined as the elasticity of employment in the non-tradable sector with respect to an increase in employment in the tradable sector). I estimate the local multiplier with data for France and focus on the heterogeneity of local multipliers across local labor markets and across sectors. To cope with possible endogeneity issues, I use a shift-share instrument (Bartik, 1991), already used in the literature, and build another instrument based on trade shocks (in the spirit of Autor et al. (2012), referred to as the "import-per-worker" index (IPW) below). I find an average elasticity of 0.36 (a 1.25 job-to-job effect), a result considerably higher than previous studies of European economies and similar to previous findings with US data (Moretti and Thulin, 2012). The two instruments give broadly comparable results. I find that the local multiplier is larger for local labor markets with a high initial unemployment rate. Similar results hold when using hours supplied instead of mere headcount, suggesting that "tight" labor markets fail to adjust through the intensive margin. On the other hand, when using overall payroll, the difference between slack and tight labor markets ceases to be significant. This suggests that increases in labor demand push up wages in tight labor markets, while the adjustment occurs purely through quantities in slack labor markets. Hence the incidence of shifts in local labor demand depends on initial labor market conditions. Finally, I find evidence of higher local job multipliers associated with the high-tech sector, and show that when estimating the local earnings multiplier the difference becomes insignificant, suggesting that a more intensive pecuniary externality, rather than some sort of technological externality, drives that sectoral heterogeneity.
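The shift-share instrument mentioned above has a simple standard construction (illustrated here with made-up numbers, not the paper's data): predicted local employment growth is national sector-level growth weighted by each area's initial sector shares, so it varies across areas only through their initial industry mix.

```python
import numpy as np

def bartik_instrument(local_shares, national_growth):
    """Shift-share (Bartik, 1991) instrument for local labor demand.

    local_shares:    (areas, sectors) initial employment shares, rows sum to 1
    national_growth: (sectors,) national growth rate of each sector
    Returns the predicted growth rate for each area."""
    return local_shares @ national_growth

shares = np.array([[0.7, 0.3],    # area A: mostly manufacturing
                   [0.2, 0.8]])   # area B: mostly services
growth = np.array([-0.10, 0.05])  # manufacturing shrinks, services grow
print(bartik_instrument(shares, growth))  # [-0.055  0.02 ]
```

Because national growth rates are plausibly unrelated to any single area's local shocks, the predicted series is used as an instrument for actual local tradable-employment growth.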
A Discrete Radiosity Method
We present a completely new principle for computing radiosity values in a 3D scene. The method is based on a voxel approximation of the objects, and all occlusion calculations involve only integer arithmetic operations. The method is proved to converge. Some experimental results are presented.
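For context, the system being solved is the classical radiosity equation B = E + rho * (F B), where F holds patch-to-patch form factors; a plain gathering iteration converges because reflectances are below 1. A minimal floating-point sketch (the paper's contribution, voxel occlusion in integer arithmetic, is not shown here):

```python
import numpy as np

def solve_radiosity(emission, reflectance, form_factors, iters=100):
    """Jacobi-style gathering iteration for the radiosity system
        B = E + rho * (F @ B),
    where F[i, j] is the form factor from patch i to patch j."""
    B = emission.copy()
    for _ in range(iters):
        B = emission + reflectance * (form_factors @ B)
    return B

E = np.array([1.0, 0.0, 0.0])    # only patch 0 emits light
rho = np.array([0.5, 0.5, 0.5])  # patch reflectances
F = np.array([[0.0, 0.5, 0.5],   # a tiny symmetric 3-patch scene
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(solve_radiosity(E, rho, F))  # fixed point [1.2, 0.4, 0.4]
```

Computing the entries of F is where visibility (occlusion) enters, and that is the part the paper reformulates over a voxel approximation with integer-only operations.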
A Theory for Integer Only Numerical Analysis (Draft)
This draft is to be presented at the 14th International Conference on p-adic Analysis, in Aurillac, France, July 2016. The final version of this draft is to be submitted soon afterwards. The motivation for this work, as well as a basic 1D version, can be found in: Henri Alex Esbelin and Remy Malgouyres, Sparse convolution-based digital derivatives, fast estimation for noisy signals and approximation results, Theoretical Computer Science 624: 2-24 (2016).
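The cited 1D version rests on a simple idea that can be sketched generically (the paper derives specific sparse masks with proven error bounds; the masks below are only illustrative): smooth noisy samples with a binomial mask, an integer-friendly stand-in for a Gaussian, then apply a central-difference mask.

```python
import numpy as np

def binomial_mask(m):
    """Normalized binomial smoothing mask of size 2m+1, a standard
    integer-friendly approximation of a Gaussian."""
    row = np.array([1.0])
    for _ in range(2 * m):
        row = np.convolve(row, [1.0, 1.0])
    return row / row.sum()

def convolution_derivative(samples, h, m=12):
    """Estimate f' from uniformly spaced noisy samples: smooth with a
    binomial mask, then convolve with a central-difference mask."""
    smooth = np.convolve(samples, binomial_mask(m), mode="same")
    return np.convolve(smooth, [1.0, 0.0, -1.0], mode="same") / (2.0 * h)

h = 0.01
x = np.arange(0.0, 2.0 * np.pi, h)
rng = np.random.default_rng(1)
noisy = np.sin(x) + rng.normal(scale=0.005, size=x.size)
est = convolution_derivative(noisy, h)  # interior values track cos(x)
```

The raw central difference of the noisy samples would amplify the noise by 1/(2h); the smoothing step is what makes the estimator usable, at the cost of a small bias controlled by the mask size.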
