
    The three happinesses and the happiness formula : lessons from the Hong Kong evidence

    The paper draws on results of surveys conducted in Hong Kong since 2005 to test the "three happinesses" theory and the happiness formula. The theory suggests that whether the feelings associated with an activity are positive or negative, as well as the strength of those affective feelings, depends on the state of mind of the individual engaging in the activity, which in turn depends on prospective as well as retrospective considerations. The three happinesses are based on a whole-life experience, the capability for which can be switched on using the happiness formula consisting of Love, Insight, Fortitude, and Engagement

    $K^0\Lambda$ Photoproduction with Nucleon Resonances

    We investigate the reaction mechanism of $K^0\Lambda$ photoproduction off the neutron target, i.e., $\gamma n \to K^0\Lambda$, in the range of $W \approx 1.6\text{--}2.2$ GeV. We employ an effective Lagrangian method at the tree-level Born approximation combined with a Regge approach. As a background, the $K^*$-Reggeon trajectory is taken into account in the $t$ channel, and the $\Lambda$ and $\Sigma$ hyperons in the $u$-channel Feynman diagram. In addition, the role of various nucleon resonances listed in the Particle Data Group (PDG) is carefully scrutinized in the $s$ channel, where the resonance parameters are extracted from experimental data and the constituent quark model. We present our numerical results for the total and differential cross sections and compare them with the recent CLAS data. The effect of the narrow nucleon resonance $N(1685, 1/2^+)$ on the cross sections is studied in detail, and it turns out that its existence is essential for reproducing the CLAS data in $K^0\Lambda$ photoproduction.
    Comment: 4 pages, 3 figures, Proceedings of "8th International Conference on Quarks and Nuclear Physics (QNP2018)", November 13-17, 2018, Tsukuba, Japan

    URNet : User-Resizable Residual Networks with Conditional Gating Module

    Convolutional Neural Networks are widely used to process spatial scenes, but their computational cost is fixed and depends on the structure of the network used. There are methods to reduce the cost by compressing a network or varying its computational path dynamically according to the input image. However, since a user cannot control the size of the learned model, it is difficult to respond dynamically if the amount of service requests suddenly increases. We propose User-Resizable Residual Networks (URNet), which allow users to adjust the scale of the network as needed during evaluation. URNet includes a Conditional Gating Module (CGM) that determines the use of each residual block according to the input image and the desired scale. CGM is trained in a supervised manner using the newly proposed scale loss and its corresponding training methods. URNet can control the amount of computation according to a user's demand without significantly degrading accuracy. It can also be used as a general compression method by fixing the scale during training. In experiments on ImageNet, URNet based on ResNet-101 maintains the accuracy of the baseline even when resized to approximately 80% of the original network, and shows only about 1% accuracy degradation when using about 65% of the computation.
    Comment: 12 pages
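    The gating idea in the abstract above can be illustrated with a minimal sketch: each residual block carries a tiny gate that looks at the input features and the user-chosen scale, and either executes the block or falls back to the identity shortcut. This is only a toy illustration under assumed details (the gate form, names, and the hard threshold are ours, not the authors' implementation; the paper's CGM is trained with a scale loss, which is omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                    # toy feature vector for one block
w_gate = rng.standard_normal(8) * 0.01        # gate weights (assumed tiny head)
w_block = rng.standard_normal((8, 8)) * 0.1   # residual-branch weights

def gate(pooled, scale, w, b=0.0):
    """Toy gate: feature logit biased by the user scale in [0, 1].

    A larger scale pushes the logit up, so more blocks switch on.
    The hard threshold stands in for an inference-time on/off decision.
    """
    logit = float(pooled @ w) + b + scale
    return logit > 0.5

def residual_block(x, weight, use_block):
    """When gated off, only the identity shortcut survives."""
    if not use_block:
        return x
    return x + np.tanh(x @ weight)            # shortcut + residual branch

pooled = x                                    # stand-in for pooled input features
y_small = residual_block(x, w_block, gate(pooled, 0.1, w_gate))  # block skipped
y_full = residual_block(x, w_block, gate(pooled, 1.0, w_gate))   # block executed
```

    With a low scale the gate turns the block off and the input passes through unchanged; with the full scale the residual branch is added, so computation grows with the requested scale.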

    The housing ladder and Hong Kong housing market's boom and bust cycle

    This paper presents evidence, based on the recent Hong Kong experience, for the existence of a "housing ladder effect": an increase of housing equity at the bottom of the ladder tends to translate into trading-up activity that both increases housing market turnover and buoys up the entire housing market. Based on a natural experiment through the introduction of a public housing privatization scheme, this paper presents evidence supporting this story using a logit model and a price-volume causality test