
    Two-hole ground state wavefunction: Non-BCS pairing in a t-J two-leg ladder system

    Superconductivity is usually described in the framework of the Bardeen-Cooper-Schrieffer (BCS) wavefunction, which even includes the resonating-valence-bond (RVB) wavefunction proposed for high-temperature superconductivity in the cuprates. A natural question is whether any fundamental physics could be missed by applying such a scheme to strongly correlated systems. Here we study the pairing wavefunction of two holes injected into a Mott insulator/antiferromagnet in a two-leg ladder using a variational Monte Carlo (VMC) approach. By comparing with density matrix renormalization group (DMRG) calculations, we show that a conventional BCS or RVB pairing of the doped holes makes qualitatively wrong predictions and is incompatible with the fundamental pairing force in the t-J model, which is kinetic-energy-driven by nature. By contrast, a non-BCS-like wavefunction incorporating this novel effect results in a substantially enhanced pairing strength and an improved ground-state energy as compared to the DMRG results. We argue that the non-BCS form of this new ground-state wavefunction is essential for describing a doped Mott antiferromagnet at finite doping.
    Comment: 11 pages, 5 figures
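    For orientation, the t-J model referred to above and the conventional Gutzwiller-projected BCS/RVB ansatz it is contrasted with have the standard textbook forms (included here for context, not reproduced from the paper):

        H_{t\text{-}J} = -t \sum_{\langle ij \rangle, \sigma} P_G \left( c^\dagger_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) P_G + J \sum_{\langle ij \rangle} \left( \mathbf{S}_i \cdot \mathbf{S}_j - \tfrac{1}{4} n_i n_j \right),

        |\Psi_{\mathrm{RVB}}\rangle = P_G \prod_{\mathbf{k}} \left( u_{\mathbf{k}} + v_{\mathbf{k}} \, c^\dagger_{\mathbf{k}\uparrow} c^\dagger_{-\mathbf{k}\downarrow} \right) |0\rangle,

    where P_G projects out double occupancy. The abstract's claim is that pairing of this BCS form is incompatible with the kinetic-energy-driven pairing mechanism found in DMRG.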

    Accelerating Deep Learning with Shrinkage and Recall

    Deep Learning is a very powerful machine learning model, but training its large number of parameters across multiple layers is very slow when both the data scale and the architecture size are large. Inspired by the shrinking technique used to accelerate the Support Vector Machine (SVM) algorithm and the screening technique used in LASSO, we propose a shrinking Deep Learning with recall (sDLr) approach to speed up deep learning computation. We evaluate sDLr with a Deep Neural Network (DNN), a Deep Belief Network (DBN), and a Convolutional Neural Network (CNN) on 4 data sets. Results show that the speedup from sDLr can reach more than 2.0 while still giving competitive classification performance.
    Comment: The 22nd IEEE International Conference on Parallel and Distributed Systems (ICPADS 2016)
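    The abstract does not detail the mechanism; the following is a minimal sketch of the general shrink-and-recall idea under our own assumptions (the function sgd_with_shrink_recall, the gradient-magnitude criterion, and all thresholds are hypothetical illustrations, not the authors' algorithm):

        import numpy as np

        def sgd_with_shrink_recall(grad_fn, w, lr=0.01, n_steps=1000,
                                   shrink_every=100, tau=1e-4):
            # Hypothetical sketch: freeze ("shrink") coordinates whose current
            # gradients are tiny, update only the active set, and periodically
            # "recall" all coordinates so nothing important stays frozen.
            active = np.ones_like(w, dtype=bool)      # all parameters start active
            for t in range(n_steps):
                g = grad_fn(w)
                if t % shrink_every == shrink_every - 1:
                    active[:] = True                  # recall: reactivate everything
                else:
                    active &= np.abs(g) > tau         # shrink: drop near-zero gradients
                w[active] -= lr * g[active]           # update only the active set
            return w

    A real implementation would compute gradients only for the active coordinates rather than the full gradient each step; that restriction is where the speedup would come from.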

    Asynchronous Distributed Semi-Stochastic Gradient Optimization

    With the recent proliferation of large-scale learning problems, there has been a lot of interest in distributed machine learning algorithms, particularly those based on stochastic gradient descent (SGD) and its variants. However, existing algorithms either suffer from slow convergence due to the inherent variance of stochastic gradients, or achieve a fast linear convergence rate at the expense of poorer solution quality. In this paper, we combine their merits by proposing a fast distributed asynchronous SGD-based algorithm with variance reduction. A constant learning rate can be used, and the algorithm is guaranteed to converge linearly to the optimal solution. Experiments on the Google Cloud Computing Platform demonstrate that the proposed algorithm outperforms state-of-the-art distributed asynchronous algorithms in terms of both wall-clock time and solution quality.
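    For context, the core variance-reduction trick in this family of methods (shown here as serial SVRG, the generic technique, not the paper's distributed asynchronous implementation) corrects each stochastic gradient with a periodically refreshed full gradient:

        import numpy as np

        def svrg(grad_i, n, w, lr=0.1, n_epochs=10):
            # grad_i(w, i): gradient of the i-th component function at w.
            rng = np.random.default_rng(0)
            for _ in range(n_epochs):
                w_snap = w.copy()
                # Full gradient at the snapshot, refreshed once per epoch.
                mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
                for _ in range(n):
                    i = rng.integers(n)
                    # Variance-reduced gradient: unbiased, with variance that
                    # vanishes as w and w_snap approach the optimum, so a
                    # constant learning rate gives linear convergence on
                    # strongly convex problems.
                    v = grad_i(w, i) - grad_i(w_snap, i) + mu
                    w = w - lr * v
            return w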

    Financial Instruments for Attracting Investments in the Real Estate Market

    Subject of the study: the REIT (Real Estate Investment Trust) as an equity investment instrument issued by management companies that are involved in the purchase, maintenance, and construction of new real estate, and that can also buy mortgage-backed securities from banks. The object of the study is the analysis of the REIT real estate market in China. The purpose of the work is to draw a conclusion about the attractiveness of investment in real estate under modern conditions relative to other instruments, and to identify the development prospects of REITs in China. Research methods: descriptive-analytical, comparative-contrastive, statistical, formant, and contextual analysis methods.
    This thesis finds that equity REITs are still very immature and are not widely accepted by mass investors in China. The thesis puts forward three suggestions for apartment REITs in China: to streamline the REIT structure and avoid excessive complexity and opacity; to promote information transparency and a regulated disclosure mechanism; and to encourage and protect equity REIT investors by constraining REITs from taking on too much debt. Finally, the thesis concludes that REITs, as US experience shows, should be regarded as a great opportunity to cultivate Chinese investors' confidence in the stock market through their very simple, plain-vanilla structure. With the Chinese government's strong ambition to increase housing affordability and to deleverage the economy, we foresee continuous legislative breakthroughs and more systematic improvements in the REIT field.
    Novelty elements: current apartment operators are using REITs as debt-financing channels instead of real equity financing. As a result, the leverage level of the real estate sector may not actually be decreased at all, but rather increased. With regard to this, the overall leverage level of the emerging apartment REIT companies should be a key factor to monitor if REITs are to serve as genuine equity investment vehicles; a minimal leverage check is sketched below. This work is dedicated to solving a number of difficulties in the relevant fields of the economy.
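    As a worked illustration of the leverage monitoring proposed above (all figures and the 60% ceiling are invented for the example, not taken from the thesis or any regulation):

        # Hypothetical leverage screen for an apartment REIT; all figures invented.
        total_debt = 7.2e9     # e.g. bonds + bank loans, in CNY
        total_assets = 10.0e9  # property portfolio at book value, in CNY
        leverage = total_debt / total_assets
        print(f"debt-to-asset ratio: {leverage:.0%}")   # -> 72%
        if leverage > 0.60:    # illustrative ceiling only
            print("debt financing in an equity wrapper: flag for review")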

    Fast Nonsmooth Regularized Risk Minimization with Continuation

    In regularized risk minimization, the associated optimization problem becomes particularly difficult when both the loss and the regularizer are nonsmooth. Existing approaches either have slow or unclear convergence properties, are restricted to limited problem subclasses, or require careful setting of a smoothing parameter. In this paper, we propose a continuation algorithm that is applicable to a large class of nonsmooth regularized risk minimization problems, can be flexibly used with a number of existing solvers for the underlying smoothed subproblem, and comes with convergence results for the whole algorithm rather than just one of its subproblems. In particular, when accelerated solvers are used, the proposed algorithm achieves the fastest known rates of O(1/T^2) on strongly convex problems and O(1/T) on general convex problems. Experiments on nonsmooth classification and regression tasks demonstrate that the proposed algorithm outperforms the state-of-the-art.
    Comment: AAAI-201
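    A generic sketch of the continuation strategy (smooth the nonsmooth terms, solve each smoothed subproblem with any off-the-shelf solver, then tighten the smoothing; the geometric schedule and all names here are illustrative assumptions, not the paper's specification):

        def continuation(solve_smoothed, w0, mu0=1.0, rho=0.5, n_stages=10):
            # solve_smoothed(w, mu): any solver (e.g. an accelerated gradient
            # method) applied to a mu-smoothed surrogate of the objective,
            # such as a Huber function in place of |.|.
            w, mu = w0, mu0
            for _ in range(n_stages):
                w = solve_smoothed(w, mu)   # warm-started from the previous stage
                mu *= rho                   # tighten the smoothing
            return w

    Warm-starting each stage from the previous solution is the standard reason continuation stays cheap: every subproblem begins close to its own solution.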