
    Stationary viscoelastic wave fields generated by scalar wave functions

    The usual Helmholtz decomposition expresses any vector-valued function, under mild conditions, as the sum of the gradient of a scalar function and the rotation of a vector-valued function. In this paper we show that the second term, i.e. the divergence-free part of this decomposition, can be further decomposed into the sum of a vector-valued function polarized in one component and the rotation of a vector-valued function polarized in the same component. Hence the divergence-free part depends on only two scalar functions. Further, we prove the so-called completeness of representation associated with this decomposition for the stationary wave field of a homogeneous, isotropic viscoelastic medium: applying the decomposition to this wave field, each of the three scalar functions satisfies a Helmholtz equation. This completeness of representation is useful for solving boundary value problems in a cylindrical domain for several systems of partial differential equations of mathematical physics, such as the stationary isotropic homogeneous elastic/viscoelastic system and the stationary isotropic homogeneous Maxwell system. As an example, using this completeness of representation, we give a solution formula for the torsional deformation of a cylindrical pendulum made of a homogeneous, isotropic viscoelastic medium.
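    One standard realization of the two-scalar splitting described above is the toroidal–poloidal decomposition along a fixed axis; the sketch below is illustrative only, and the paper's precise polarization direction, boundary conditions, and wavenumbers (here written as assumed compressional/shear values \(k_p, k_s\)) may differ.

```latex
% Helmholtz decomposition: gradient part plus divergence-free part
\mathbf{u} = \nabla\varphi + \nabla\times\mathbf{A},
\qquad \nabla\cdot(\nabla\times\mathbf{A}) = 0.

% Splitting of the divergence-free part with two scalars \chi, \eta
% polarized along a fixed direction \mathbf{e}_3:
\nabla\times\mathbf{A}
  = \nabla\times(\chi\,\mathbf{e}_3)
  + \nabla\times\nabla\times(\eta\,\mathbf{e}_3).

% Completeness of representation: each scalar satisfies a Helmholtz
% equation (wavenumbers k_p, k_s are an assumption of this sketch):
\Delta\varphi + k_p^2\,\varphi = 0, \qquad
\Delta\chi + k_s^2\,\chi = 0, \qquad
\Delta\eta + k_s^2\,\eta = 0.
```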

    Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training

    We consider convex quadratic problems with linear constraints and bounded variables whose Hessian matrix is huge and dense, as arises in many applications such as the training problem of bias support vector machines. We propose a decomposition algorithmic scheme suitable for parallel implementation and prove its global convergence under suitable conditions. Focusing on support vector machine training, we outline how these assumptions can be satisfied in practice and suggest several specific implementations. Extensions of the theoretical results to general linearly constrained problems are provided. We include numerical results on support vector machines showing the viability and effectiveness of the proposed scheme.
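    The flavor of decomposition method described here can be illustrated, in its simplest sequential form, by a most-violating-pair (SMO-style) scheme for the SVM dual: at each step only a working set of two variables is optimized while the rest are fixed. This is a minimal sketch under that assumption, not the paper's parallel algorithm.

```python
import numpy as np

def svm_dual_decomposition(Q, y, C, tol=1e-5, max_iter=10000):
    """Most-violating-pair decomposition for the SVM dual:
        min 1/2 a'Qa - e'a   s.t.  y'a = 0,  0 <= a <= C.
    A minimal sequential sketch; the paper's scheme generalizes this
    idea to larger working sets solved in parallel."""
    n = len(y)
    a = np.zeros(n)
    grad = -np.ones(n)  # gradient of the objective at a = 0
    for _ in range(max_iter):
        # index sets where y_k * a_k may increase / decrease
        I_up = [k for k in range(n)
                if (y[k] > 0 and a[k] < C) or (y[k] < 0 and a[k] > 0)]
        I_low = [k for k in range(n)
                 if (y[k] > 0 and a[k] > 0) or (y[k] < 0 and a[k] < C)]
        if not I_up or not I_low:
            break
        i = max(I_up, key=lambda k: -y[k] * grad[k])
        j = min(I_low, key=lambda k: -y[k] * grad[k])
        if -y[i] * grad[i] + y[j] * grad[j] < tol:
            break  # approximate KKT conditions hold
        # feasible direction d_i = y_i, d_j = -y_j keeps y'a constant;
        # exact line search along it, then clip to the box [0, C]
        denom = Q[i, i] + Q[j, j] - 2 * y[i] * y[j] * Q[i, j]
        t = (-y[i] * grad[i] + y[j] * grad[j]) / max(denom, 1e-12)
        t = min(t,
                C - a[i] if y[i] > 0 else a[i],
                a[j] if y[j] > 0 else C - a[j])
        a[i] += t * y[i]
        a[j] -= t * y[j]
        grad += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a
```

    Because each subproblem touches only a small working set, the dense Hessian never needs to be held in memory at once, which is the property the decomposition approach exploits.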

    Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Node localization is a core problem in wireless sensor networks. It can be solved with powerful beacons equipped with global positioning system devices that know their own locations. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition preprocessing is adopted to handle the fact that the measurement matrix does not satisfy the restricted isometry property. Next, the 1-sparse vector is recovered by compressive sensing. Finally, since the recovered vector is only approximately 1-sparse, a weighted centroid scheme is introduced to locate the node accurately. Simulation and analysis show that our scheme has better localization performance and lower requirements on the mobile beacon than the MAP+GC, MAP-M, and MAP-M&N schemes. In addition, obstacles and DOI have little effect on the scheme, and it retains good localization performance under low SNR; thus the proposed scheme is robust.
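    The core recovery step, finding a 1-sparse vector from linear measurements, can be sketched as a single matching-pursuit iteration: the best-correlated (normalized) column of the measurement matrix identifies the support exactly whenever no two columns are parallel. This minimal sketch omits the paper's LU preprocessing and weighted-centroid refinement.

```python
import numpy as np

def recover_one_sparse(Phi, y):
    """Recover a 1-sparse x from measurements y = Phi @ x via one
    matching-pursuit step (illustrative sketch, not the paper's
    full pipeline)."""
    norms = np.linalg.norm(Phi, axis=0)
    idx = np.argmax(np.abs(Phi.T @ y) / norms)    # support detection
    x = np.zeros(Phi.shape[1])
    # least-squares coefficient on the detected column
    x[idx] = Phi[:, idx] @ y / (norms[idx] ** 2)
    return x
```

    In the localization setting, the index of the nonzero entry would correspond to the grid cell containing the node; since real measurements make the recovered vector only approximately 1-sparse, a weighting step over the largest entries refines the estimate.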

    Parallel growing and training of neural networks using output parallelism

    In order to find an appropriate architecture for a large-scale real-world application automatically and efficiently, a natural approach is to divide the original problem into a set of sub-problems. In this paper, we propose a simple neural-network task decomposition method based on output parallelism. With this method, a problem can be divided flexibly into several sub-problems, each composed of the whole input vector and a fraction of the output vector. Each module (one per sub-problem) is responsible for producing its fraction of the output vector of the original problem, so the hidden structures for the original problem's output units are decoupled. These modules can be grown and trained in parallel on parallel processing elements. Incorporated with a constructive learning algorithm, our method requires neither excessive computation nor any prior knowledge concerning decomposition. The feasibility of output parallelism is analyzed and proved, and several benchmarks are implemented to test the method's validity. The results show that it can reduce computational time, increase learning speed, and improve generalization accuracy for both classification and regression problems.
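    The decoupling idea above can be sketched with a deliberately simple stand-in: one independent "module" per output component, each trained on the full input and one slice of the output. Here each module is plain linear least squares rather than the constructive neural network the paper grows, so this is an illustration of the decomposition, not the authors' algorithm.

```python
import numpy as np

def train_output_modules(X, Y):
    """Output-parallelism sketch: train one independent module per
    output column. The modules share no parameters, so they could
    run concurrently, e.g. one per processing element."""
    return [np.linalg.lstsq(X, Y[:, j], rcond=None)[0]
            for j in range(Y.shape[1])]
```

    Because the per-output fits share no hidden structure, stacking the module solutions reproduces exactly what a joint multi-output fit would give in this linear setting, which is the sense in which the output units are decoupled.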