
    Motion detection in astronomical and ice floe images

    Two approaches are presented for establishing correspondence between small areas in pairs of successive images for motion detection. The first, based on local correlation, is used on a pair of successive Voyager images of Jupiter which differ mainly in locally variable translations. This algorithm is implemented on a sequential machine (VAX 780) as well as on the Massively Parallel Processor (MPP). In the sequential algorithm, the pixel correspondence, or match, is computed on a sparse grid of points using nonoverlapping windows (typically 11 x 11) by local correlation over a predetermined search area. The displacement of the corresponding pixels in the two images is called the disparity. The computed disparities are smoothed by fitting them to cubic surfaces; disparities at points where the error between the computed values and the surface values exceeds a particular threshold are replaced by the surface values. A bilinear interpolation is then used to estimate disparities at all other pixels between the grid points. When this algorithm was applied to the red spot in the Jupiter image, the rotating velocity field of the storm was determined. The second method of motion detection is applicable to pairs of images in which corresponding areas can experience considerable translation as well as rotation.
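    The window-matching step described above can be sketched roughly as follows. This is a minimal Python illustration, not the paper's VAX or MPP implementation: the window size and search radius are toy values, and normalized cross-correlation stands in for the local correlation measure.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length flattened patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def patch(img, r, c, size):
    """Flatten a size x size window of a 2-D list, top-left at (r, c)."""
    return [img[r + i][c + j] for i in range(size) for j in range(size)]

def best_disparity(img_a, img_b, r, c, size=3, search=2):
    """Return the (dr, dc) offset maximizing correlation of the window at (r, c)."""
    ref = patch(img_a, r, c, size)
    best, best_score = (0, 0), -2.0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= len(img_b) - size and 0 <= cc <= len(img_b[0]) - size:
                s = ncc(ref, patch(img_b, rr, cc, size))
                if s > best_score:
                    best_score, best = s, (dr, dc)
    return best
```

    Computing this offset on a sparse grid of nonoverlapping windows yields the disparity field that the surface fitting and interpolation steps then densify.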

    Motion Estimation and Compensation Techniques in FGS (Fine Granularity Scalability) Video Coding

    Motion estimation is the process of determining the movement of objects across a video sequence. The movement is represented by a motion vector, which indicates the displacement of a point between the current frame and the reference frame. Once the motion vectors are obtained, the movement of points between the observed frames can be tracked. This study uses the SAD (Sum of Absolute Differences) block-matching algorithm, with the search performed per pixel. The quality of the object motion in each interpolated frame is assessed by computing the PSNR, whose values range from 35 to 40 dB. Over the 90 interpolated frames used in the experiments, the PSNR value decreases.
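    The SAD search and the PSNR measure mentioned above can be sketched as follows, assuming grayscale frames stored as 2-D lists; the block size and search range are illustrative, not taken from the study.

```python
import math

def sad(block_a, block_b):
    """Sum of Absolute Differences between two equal-size blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def block(frame, r, c, n):
    """Extract an n x n block with top-left corner at (r, c)."""
    return [row[c:c + n] for row in frame[r:r + n]]

def motion_vector(cur, ref, r, c, n=4, search=2):
    """Find the (dr, dc) offset minimizing SAD for the block of `cur` at (r, c)."""
    target = block(cur, r, c, n)
    best, best_cost = (0, 0), float("inf")
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= len(ref) - n and 0 <= cc <= len(ref[0]) - n:
                cost = sad(target, block(ref, rr, cc, n))
                if cost < best_cost:
                    best_cost, best = cost, (dr, dc)
    return best

def psnr(frame_a, frame_b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    n = len(frame_a) * len(frame_a[0])
    mse = sum((a - b) ** 2 for ra, rb in zip(frame_a, frame_b)
              for a, b in zip(ra, rb)) / n
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

    A full encoder would run `motion_vector` over every block of the current frame and report `psnr` between each interpolated frame and the original.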

    The ADS general-purpose optimization program

    The mathematical statement of the general nonlinear optimization problem is given as follows: find the vector of design variables, X, that will minimize F(X) subject to g_j(X) <= 0, j = 1, ..., m; h_k(X) = 0, k = 1, ..., l; and X_i^L <= X_i <= X_i^U, i = 1, ..., N. The vector of design variables, X, includes all those variables which may be changed by the ADS program in order to arrive at the optimum design. The objective function F(X) to be minimized may be weight, cost, or some other performance measure. If the objective is to be maximized, this is accomplished by minimizing -F(X). The inequality constraints g_j(X) include limits on stress, deformation, aeroelastic response, or controllability, as examples, and may be nonlinear implicit functions of the design variables, X. The equality constraints h_k(X) represent conditions that must be satisfied precisely for the design to be acceptable. Equality constraints are not fully operational in version 1.0 of the ADS program, although they are available in the Augmented Lagrange Multiplier method. The side constraints given by the last set of inequalities are used to directly limit the region of search for the optimum; the ADS program will never consider a design which is not within these limits.
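    The problem form above can be illustrated on a toy instance. The sketch below uses a simple exterior quadratic penalty with forward-difference gradients, which is not one of ADS's actual strategies, to minimize F(X) = x1^2 + x2^2 subject to g(X) = 1 - x1 - x2 <= 0 and side constraints 0 <= x_i <= 10; the constrained optimum is x1 = x2 = 0.5.

```python
def penalized(x, rho):
    """Objective plus quadratic exterior penalty for the violated constraint."""
    f = x[0] ** 2 + x[1] ** 2          # objective F(X)
    g = 1.0 - x[0] - x[1]              # inequality constraint g(X) <= 0
    return f + rho * max(0.0, g) ** 2

def solve(rho=1000.0, lr=1e-4, steps=20000):
    """Gradient descent on the penalized objective, clamped to side constraints."""
    x = [5.0, 5.0]                      # feasible start within the side bounds
    h = 1e-6
    for _ in range(steps):
        grad = []
        for i in range(2):              # forward-difference gradient estimate
            xp = list(x)
            xp[i] += h
            grad.append((penalized(xp, rho) - penalized(x, rho)) / h)
        # Descent step, projected onto the side constraints 0 <= x_i <= 10.
        x = [min(10.0, max(0.0, xi - lr * gi)) for xi, gi in zip(x, grad)]
    return x
```

    The penalty parameter `rho` biases the solution slightly inside the constraint; the converged point sits near (0.5, 0.5) with x1 + x2 just under 1.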

    Parallel processing can be harmful: The unusual behavior of interpolation search

    Several articles have noted the usefulness of a retrieval algorithm called sequential interpolation search, and Yao and Yao have proven a lower bound of log log N - O(1), showing this algorithm is actually optimal up to an additive constant on unindexed files of size N generated by the uniform probability distribution. We generalize the latter to show that log log N - log log P - O(1) lower bounds the complexity of any retrieval algorithm with P parallel processors for searching an unindexed file of size N. This result is surprising because we also show how to obtain an upper bound that matches the lower bound up to an additive constant with a procedure that actually uses no parallel processing outside its last iteration (at which time our proposal turns on P processors in parallel). Our first theorem therefore states that parallel processing before the literally last iteration in the search of an unindexed ordered file has nearly no usefulness. Two further surprising facts are that the preceding result holds even when communication between the parallel processing units involves no delay, and that the parallel algorithms are actually inherently slower than their sequential counterparts when each invocation of the SIMD machine invokes a communication step with any type of nonzero delay. The presentation in the first two chapters of this paper is quite informal, so that the reader can quickly grasp the underlying intuition.
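    For reference, sequential interpolation search itself can be sketched in a few lines. This is the textbook sequential form on a sorted array, with its expected O(log log N) probes on uniform keys, not the paper's parallel variant.

```python
def interpolation_search(a, key):
    """Return the index of `key` in sorted list `a`, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:                 # constant run: avoid dividing by zero
            pos = lo
        else:                              # probe at the key's interpolated position
            pos = lo + int((hi - lo) * (key - a[lo]) / (a[hi] - a[lo]))
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

    On uniformly distributed keys the interpolated probe lands near the target, which is exactly the regime the lower and upper bounds above address.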

    A Method to Guarantee Local Convergence for Sequential Quadratic Programming with Poor Hessian Approximation

    Sequential Quadratic Programming (SQP) is a powerful class of algorithms for solving nonlinear optimization problems. Local convergence of SQP algorithms is guaranteed when the Hessian approximation used in each Quadratic Programming subproblem is close to the true Hessian. However, a good Hessian approximation can be expensive to compute. Low-cost Hessian approximations only guarantee local convergence under assumptions that are not always satisfied in practice. To address this problem, this paper proposes a simple method to guarantee local convergence for SQP with a poor Hessian approximation. The effectiveness of the proposed algorithm is demonstrated in a numerical example.
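    The QP-subproblem step that SQP iterates can be sketched on a toy equality-constrained problem: minimize x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0 (optimum (0.5, 0.5)). The sketch deliberately uses the poor Hessian approximation B = I in place of the true Hessian; the closed-form KKT solution below applies only to a single linear constraint, and this is generic full-step SQP, not the paper's proposed method.

```python
def sqp_identity_hessian(x, iters=50):
    """Full-step SQP with B = I: each step solves
    min_d 0.5*d.Bd + grad_f.d  s.t.  h + grad_h.d = 0."""
    for _ in range(iters):
        grad_f = [2 * x[0], 2 * x[1]]          # gradient of x1^2 + x2^2
        h = x[0] + x[1] - 1.0                  # equality constraint value
        grad_h = [1.0, 1.0]                    # constraint gradient
        gh2 = grad_h[0] ** 2 + grad_h[1] ** 2
        # With B = I, stationarity d = -(grad_f + lam*grad_h) and the
        # linearized constraint give lam in closed form.
        lam = (h - (grad_h[0] * grad_f[0] + grad_h[1] * grad_f[1])) / gh2
        d = [-(grad_f[i] + lam * grad_h[i]) for i in range(2)]
        x = [x[i] + d[i] for i in range(2)]
    return x
```

    From the start (0, 0) this reaches the optimum (0.5, 0.5), but from (1, 0) the full-step iterates oscillate between (1, 0) and (0, 1): a small illustration of how a poor Hessian approximation can defeat local convergence, the failure mode the paper targets.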

    Neural Network Memory Architectures for Autonomous Robot Navigation

    This paper highlights the significance of including memory structures in neural networks when the latter are used to learn perception-action loops for autonomous robot navigation. Traditional navigation approaches rely on global maps of the environment to overcome cul-de-sacs and plan feasible motions. Yet, maintaining an accurate global map may be challenging in real-world settings. A possible way to mitigate this limitation is to use learning techniques that forgo hand-engineered map representations and infer appropriate control responses directly from sensed information. An important but unexplored aspect of such approaches is the effect of memory on their performance. This work is a first thorough study of memory structures for deep-neural-network-based robot navigation, and offers novel tools to train such networks from supervision and quantify their ability to generalize to unseen scenarios. We analyze the separation and generalization abilities of feedforward, long short-term memory, and differentiable neural computer networks. We introduce a new method to evaluate the generalization ability by estimating the VC-dimension of networks with a final linear readout layer. We validate that the VC estimates are good predictors of actual test performance. The reported method can be applied to deep learning problems beyond robotics.