
    Cournot Competition Yields Spatial Avoiding Competition in Groups

    This paper characterizes the properties of equilibrium location patterns in an Anderson-Neven-Pal model and uses these characteristics to give a comprehensive account of the subgame perfect Nash equilibria, most of which have not previously been reported in the literature. Because the external competition effect may be exactly canceled out, internal competition may strictly dominate external competition, or the internal competition effect may coincide with the external competition effect, a competitive group structure can form endogenously in equilibrium without any externality or prior collusion, and firms tend to avoid competition inside each group. The analysis of the Anderson-Neven-Pal model is instructive for studying the conditions under which a "Nash combination" can be implemented.
    Keywords: Cournot competition; spatial competition; Nash equilibrium

    Secondary Caries


    A discontinuity and cusp capturing PINN for Stokes interface problems with discontinuous viscosity and singular forces

    In this paper, we present a discontinuity and cusp capturing physics-informed neural network (PINN) to solve Stokes equations with a piecewise-constant viscosity and a singular force along an interface. We first reformulate the governing equations in each fluid domain separately and replace the singular force effect with the traction balance equation between the solutions on the two sides of the interface. Since the pressure is discontinuous and the velocity has discontinuous derivatives across the interface, we use a network consisting of two fully connected sub-networks that approximate the pressure and velocity, respectively. The two sub-networks share the same primary coordinate inputs but take different augmented feature inputs. These augmented inputs provide the interface information, so we assume that a level set function is given and that its zero level set indicates the position of the interface. The pressure sub-network uses an indicator function as an augmented input to capture the function discontinuity, while the velocity sub-network uses a cusp-enforced level set function to capture the derivative discontinuities via the traction balance equation. We perform a series of numerical experiments to solve two- and three-dimensional Stokes interface problems and compare the accuracy with augmented immersed interface methods in the literature. Our results indicate that even a shallow network with a moderate number of neurons and sufficient training data points can achieve prediction accuracy comparable to that of immersed interface methods.
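    The two-sub-network design described in this abstract can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical wiring of the idea, assuming a given level-set function phi evaluated at the sampled points; the layer sizes, the cusp feature |phi|, and the indicator feature sign(phi) are illustrative choices rather than the authors' implementation, and the PDE and traction-balance losses are omitted.

```python
# Minimal sketch of a two-sub-network PINN for a Stokes interface problem.
# Assumes a level-set function phi whose zero level set marks the interface;
# architecture details are illustrative, not the authors' code.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Shallow fully connected network with tanh activations."""
    def __init__(self, in_dim, out_dim, width=40, depth=3):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class StokesInterfacePINN(nn.Module):
    """Velocity and pressure sub-networks share the coordinate input x
    but take different interface-aware augmented features built from phi."""
    def __init__(self, dim=2):
        super().__init__()
        self.velocity = MLP(dim + 1, dim)  # input: (x, |phi|)     -> velocity
        self.pressure = MLP(dim + 1, 1)    # input: (x, sign(phi)) -> pressure

    def forward(self, x, phi):
        # x: (N, dim) collocation points, phi: (N, 1) level-set values at x
        cusp = phi.abs()              # continuous but kinked across the interface
        indicator = torch.sign(phi)   # jumps across the interface
        u = self.velocity(torch.cat([x, cusp], dim=1))
        p = self.pressure(torch.cat([x, indicator], dim=1))
        return u, p

# Training would add PDE residual losses in each subdomain and the traction
# balance condition on interface points, e.g. via torch.autograd.grad.
```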

    Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

    This paper studies the long-standing idea of adding a nice smooth function to "smooth" a non-differentiable objective in the context of sparse optimization, in particular the minimization of ||x||_1 + 1/(2α)||x||_2^2, where x is a vector, as well as the minimization of ||X||_* + 1/(2α)||X||_F^2, where X is a matrix and ||X||_* and ||X||_F are the nuclear and Frobenius norms of X, respectively. We show that these models can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing ||x||_1 and ||X||_* under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector x^0, minimizing ||x||_1 + 1/(2α)||x||_2^2 returns (nearly) the same solution as minimizing ||x||_1 almost whenever α ≥ 10||x^0||_∞. The same relation also holds between minimizing ||X||_* + 1/(2α)||X||_F^2 and minimizing ||X||_* for recovering a (nearly) low-rank matrix X^0, if α ≥ 10||X^0||_2. Furthermore, we show that the linearized Bregman algorithm for minimizing ||x||_1 + 1/(2α)||x||_2^2 subject to Ax = b enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require additional assumptions on the solution or any properties of A. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
    Comment: arXiv admin note: text overlap with arXiv:1207.5326 by other authors
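    For concreteness, the linearized Bregman iteration referenced in this abstract can be sketched in a few lines of NumPy. This is a hedged illustration rather than the paper's code: the update order, the step size tau, and the stopping rule are assumptions chosen for stability.

```python
# Sketch of the linearized Bregman iteration for
#   minimize ||x||_1 + 1/(2*alpha)*||x||_2^2   subject to  Ax = b.
# The step size tau and the stopping test are illustrative choices.
import numpy as np

def shrink(v, t):
    """Soft-thresholding: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, tau=None, max_iter=20000, tol=1e-8):
    m, n = A.shape
    if tau is None:
        # conservative step size: the dual gradient is Lipschitz with
        # constant alpha * ||A||_2^2, so this guarantees convergence
        tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    v = np.zeros(n)   # accumulated (dual) variable
    x = np.zeros(n)   # primal iterate
    for _ in range(max_iter):
        v = v + tau * (A.T @ (b - A @ x))   # gradient step on the dual
        x = alpha * shrink(v, 1.0)          # primal update via soft-thresholding
        if np.linalg.norm(A @ x - b) <= tol * max(np.linalg.norm(b), 1.0):
            break
    return x

# Usage example: recover a sparse vector from random Gaussian measurements,
# with alpha set to 10 * ||x0||_inf as suggested by the recovery guarantee.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x_hat = linearized_bregman(A, b, alpha=10 * np.abs(x0).max())
```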

    Depth-aware neural style transfer

    Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by Johnson et al. trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss function contains two components: a style loss and a content loss. However, these pre-trained networks were originally designed for object recognition, so the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects, potentially at different depths, the resulting images are often unsatisfactory: the image layout is destroyed, and the boundaries between foreground and background, as well as between different objects, become obscured. We observe that the depth map effectively reflects the spatial distribution in an image, and that preserving the depth map of the content image after stylization helps produce a result that preserves its semantic content. In this paper, we introduce a novel approach to neural style transfer that integrates depth preservation as an additional loss, preserving the overall image layout while performing style transfer.
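    The loss structure described in this abstract, perceptual content and style terms plus a depth-preservation term, can be written compactly. The sketch below is a schematic PyTorch formulation under assumed interfaces: features(x) returns a list of feature maps from a pre-trained network and depth_net(x) returns a depth map; the loss weights and layer choices are illustrative, not the paper's exact configuration.

```python
# Schematic perceptual loss extended with a depth-preservation term.
# depth_net, features, and the weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a feature map (B, C, H, W), used for the style loss."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(stylized, content_img, style_feats, features, depth_net,
               w_content=1.0, w_style=1e5, w_depth=1e2):
    out_feats = features(stylized)
    content_feats = features(content_img)

    # content loss: match high-level features of the content image
    loss_c = F.mse_loss(out_feats[-1], content_feats[-1])

    # style loss: match Gram matrices of the style image's features
    loss_s = sum(F.mse_loss(gram(o), gram(s))
                 for o, s in zip(out_feats, style_feats))

    # depth loss: keep the stylized image's depth map close to that of the
    # content image, preserving layout and object boundaries
    loss_d = F.mse_loss(depth_net(stylized), depth_net(content_img))

    return w_content * loss_c + w_style * loss_s + w_depth * loss_d
```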

    Development of malignancy after treatment of idiopathic membranous nephropathy
