
    Measurement of the production branching ratios following nuclear muon capture for palladium isotopes using the in-beam activation method

    Background: The energy distribution of excited states populated by the nuclear muon capture reaction can facilitate an understanding of the reaction mechanism; however, experimental data are fairly sparse. Purpose: We developed a new methodology, called the in-beam activation method, to measure the production probabilities of residual nuclei from muon capture. As a first application of the new method, we measured muon-induced activation of five isotopically enriched palladium targets. Methods: The experiment was conducted at the RIKEN-RAL muon facility of the Rutherford Appleton Laboratory in the UK. The pulsed muon beam impinged on the palladium targets, and gamma rays from the beta and isomeric decays of the reaction residues were measured using high-purity germanium detectors in both in-beam and offline setups. Results: The production branching ratios of the residual nuclei of muon capture were obtained for five palladium isotopes with mass numbers A = 104, 105, 106, 108, and 110. The results were compared with a model calculation using the Particle and Heavy Ion Transport code System (PHITS). The model calculation reproduces the experimental data well. Conclusion: For the first time, this study provides experimental data on the distribution of production branching ratios without any theoretical estimates or assumptions in the interpretation of the data analysis. Comment: 20 pages, 11 figures
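    The branching ratios described above are, in essence, observed decay gamma counts normalized by detector efficiency, gamma emission intensity, and the number of muon captures. A minimal sketch of that generic activation-analysis arithmetic (not the paper's actual analysis chain; all names and numbers below are hypothetical, and real data would also need decay-timing and background corrections):

```python
def branching_ratio(n_gamma, efficiency, intensity, n_captures):
    """Generic activation-analysis normalization: observed decay
    gamma counts, corrected for detector photopeak efficiency and
    the gamma's emission intensity per decay, divided by the total
    number of muon captures in the target."""
    return n_gamma / (efficiency * intensity * n_captures)

# Illustrative numbers only (not from the paper):
br = branching_ratio(n_gamma=1.2e4, efficiency=0.02,
                     intensity=0.90, n_captures=5.0e7)
```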

    On the Impact of Financial Inclusion on Financial Stability and Inequality: The Role of Macroprudential Policies

    Financial inclusion - access to financial products by households and firms - is a key albeit challenging priority for both Advanced Economies (AEs) and Emerging Markets (EMs), even more so for the latter. Financial inclusion facilitates consumption smoothing, lowers income inequality, enables risk diversification, and tends to positively affect economic growth. Financial stability is another rising priority among policy makers, evident in the re-emergence after the global financial crisis of macroprudential policies that aim to minimize systemic risk, particularly risks associated with rapid credit growth. However, significant policy trade-offs may exist between financial inclusion and financial stability, and the evidence on the link between the two objectives is mixed. Given the importance of macroprudential policies as a toolbox for achieving financial stability, we examine their impact on financial inclusion - a potential cause of financial instability if not carefully implemented. Using panel regressions for 67 countries over the period 2000-2014, our results point to mixed effects of macroprudential policies. The usage (and tightening) of some tools, such as the debt-to-income ratio, appears to reduce financial inclusion, whereas others, such as the required reserve ratio (RRR), increase it. Both institutional quality and financial development appear to strengthen the effect of macroprudential policies on financial inclusion: institutional quality helps macroprudential policies boost financial inclusion, financial development has mixed effects, and the results are more significant when either institutional quality or financial development is included. This leads us to believe that macroprudential policies, conditional on better institutional quality and financial development, improve financial inclusion. This has important policy implications for financial stability.

    Adaptive Temporal Compressive Sensing for Video

    This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm that adapts the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is realized by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated into a diverse set of existing hardware systems. Comment: IEEE International Conference on Image Processing (ICIP), 201
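    The adaptation loop described above (estimate temporal complexity from the compressed data, then set the compression ratio via the camera's integration time) can be sketched as follows. This is a toy illustration with assumed names, metrics, and thresholds, not the paper's algorithm:

```python
import numpy as np

def temporal_complexity(measurements):
    """Proxy for scene motion: mean absolute difference between
    consecutive compressed measurement vectors (hypothetical metric,
    not the paper's exact estimator)."""
    return float(np.abs(np.diff(measurements, axis=0)).mean())

def choose_compression_ratio(complexity, low=8, high=32, threshold=0.1):
    """Adapt the temporal compression ratio: fast-changing scenes
    get less compression (shorter integration time), static scenes
    more (longer integration time)."""
    return low if complexity > threshold else high

# Toy usage: a static scene vs. a rapidly changing one.
rng = np.random.default_rng(0)
static = np.tile(rng.random(64), (10, 1))   # identical frames
moving = rng.random((10, 64))               # uncorrelated frames
ratio_static = choose_compression_ratio(temporal_complexity(static))
ratio_moving = choose_compression_ratio(temporal_complexity(moving))
```

A static scene yields zero complexity and the high ratio; a rapidly changing scene triggers the low ratio.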

    Unsupervised 3D Pose Estimation with Geometric Self-Supervision

    We present an unsupervised learning approach to recover 3D human pose from 2D skeletal joints extracted from a single image. Our method does not require any multi-view image data, 3D skeletons, correspondences between 2D-3D points, or previously learned 3D priors during training. A lifting network accepts 2D landmarks as inputs and generates a corresponding 3D skeleton estimate. During training, the recovered 3D skeleton is reprojected onto random camera viewpoints to generate new "synthetic" 2D poses. By lifting the synthetic 2D poses back to 3D and re-projecting them in the original camera view, we can define a self-consistency loss both in 3D and in 2D. The training can thus be self-supervised by exploiting the geometric self-consistency of the lift-reproject-lift process. We show that self-consistency alone is not sufficient to generate realistic skeletons; however, adding a 2D pose discriminator enables the lifter to output valid 3D poses. Additionally, to learn from 2D poses "in the wild", we train an unsupervised 2D domain adapter network to allow for an expansion of 2D data. This improves results and demonstrates the usefulness of 2D pose data for unsupervised 3D lifting. Results on the Human3.6M dataset for 3D human pose estimation demonstrate that our approach improves upon the previous unsupervised methods by 30% and outperforms many weakly supervised approaches that explicitly use 3D data.
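    The lift-reproject-lift cycle described above can be sketched with a stand-in linear "lifter" in place of the learned network. Everything below (the joint count, the toy lifter, the orthographic projection) is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

def lift(pose2d, W):
    """Toy stand-in for the lifting network: predict a per-joint
    depth from the 2D joints and append it as the z coordinate."""
    z = pose2d @ W                               # (J, 2) @ (2, 1) -> (J, 1)
    return np.concatenate([pose2d, z], axis=1)   # (J, 3)

def project(pose3d, R):
    """Rotate the skeleton to a new camera view and drop depth
    (orthographic projection)."""
    return (pose3d @ R.T)[:, :2]

J = 16                                  # number of joints (assumed)
pose2d = rng.standard_normal((J, 2))
W = rng.standard_normal((2, 1))

# Random new viewpoint: rotation about the vertical axis.
theta = rng.uniform(0.0, 2.0 * np.pi)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

pose3d = lift(pose2d, W)        # lift
synth2d = project(pose3d, R)    # reproject to a synthetic view
pose3d_hat = lift(synth2d, W)   # lift the synthetic 2D pose again

# Self-consistency losses: compare in 3D after rotating back to the
# original frame, and in 2D after dropping depth there.
back = pose3d_hat @ R           # undo the rotation (R is orthogonal)
loss3d = float(np.mean((back - pose3d) ** 2))
loss2d = float(np.mean((back[:, :2] - pose2d) ** 2))
```

In the paper these losses train the lifter end to end; here they are just evaluated once to show the geometry of the cycle.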

    Return of the features. Efficient feature selection and interpretation for photometric redshifts

    The explosion of data in recent years has generated an increasing need for new analysis techniques to extract knowledge from massive datasets. Machine learning has proved particularly useful for this task. Fully automated methods have recently gained great popularity, even though they often lack physical interpretability. In contrast, feature-based approaches can provide both well-performing models and interpretable relations between features and physical processes. Efficient feature selection is an essential tool for boosting the performance of machine learning models. In this work, we propose a forward selection method to compute, evaluate, and characterize better-performing features for regression and classification problems. Given the importance of photometric redshift estimation, we adopt it as our case study. We synthetically created 4,520 features by combining magnitudes, errors, radii, and ellipticities of quasars taken from the SDSS. We apply a forward selection process, a recursive method in which a large number of feature sets is tested with a kNN algorithm, leading to a tree of feature sets. The branches of the tree are then used to perform experiments with a random forest, in order to validate the best set with an alternative model. We demonstrate that the feature sets determined with our approach significantly improve the performance of the regression models compared with the classic features from the literature. The selected features are unexpected and surprising, being very different from the classic features. Therefore, a method to interpret some of them in a physical context is presented. The methodology described here is very general and can be used to improve the performance of machine learning models for any regression or classification task. Comment: 21 pages, 11 figures, accepted for publication in A&A, final version after language revision
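    The forward selection loop described above (repeatedly add the feature whose inclusion most improves a kNN score) can be sketched as follows. The leave-one-out kNN scorer and the toy data are assumptions for illustration, not the authors' exact setup:

```python
import numpy as np

def knn_score(X, y, k=3):
    """Leave-one-out k-NN regression score (negative MSE), used as a
    cheap proxy for the validation metric in the selection loop."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)             # exclude each point itself
    idx = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours
    pred = y[idx].mean(axis=1)
    return -float(np.mean((pred - y) ** 2))

def forward_select(X, y, max_features=3):
    """Greedy forward selection: at each step, add the feature whose
    inclusion most improves the k-NN score."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        best = max(remaining,
                   key=lambda j: knn_score(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: y depends only on columns 0 and 2 (hypothetical features).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
y = X[:, 0] + 2.0 * X[:, 2]
chosen = forward_select(X, y)   # typically recovers features 2 and 0
```

The paper's tree of feature sets comes from branching this loop rather than keeping only the single greedy path; the greedy version above shows the core step.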