354 research outputs found

    The Macroeconomics of the Declining U.S. Labor Share: a Debt-led Explanation

    Get PDF
    This paper aims to answer two major conundrums in macroeconomic theory with regard to the U.S. economy. First, standard macroeconomic models such as Harrod-Domar and Solow theorize that factor shares are constant; however, actual measures of the U.S. labor share have been on a downward trend since the early 1980s. The second conundrum relates to the Post-Kaleckian wage-led or profit-led view of economic growth. It indicates that a fall in the labor share in a wage-led economy will result in a fall in aggregate demand (due to decreases in consumption), and an increase in aggregate demand in a profit-led economy (due to increases in investment). However, the consumption share of GDP in the U.S. has been increasing and the investment share has been stable in spite of the falling labor share. We argue that resolving these conundrums requires reexamining the standard Keynesian consumption function, both theoretically and empirically. Thus, we propose an original theory of consumption based on the principles of Duesenberry's (1949) Relative Income Hypothesis. We find that the economic consequence of a falling labor share in the United States is that aggregate demand growth, despite remaining wage-led, has become increasingly dependent on the accumulation of household debt. Furthermore, we conclude that there are four ominous outcomes associated with this dependence on household debt: unstable growth, sluggish growth, stagnation, and economic contraction.
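The contrast at the heart of the argument can be made explicit with two standard textbook consumption functions (these are illustrative textbook forms, not equations taken from the paper itself):

```latex
% Standard Keynesian consumption function: consumption depends only
% on current income, with a constant marginal propensity c_1.
C_t = c_0 + c_1 Y_t, \qquad 0 < c_1 < 1

% One common rendering of Duesenberry's Relative Income Hypothesis:
% the average propensity to consume depends on current income
% relative to the household's previous peak income Y^{\max}.
\frac{C_t}{Y_t} = a - b\,\frac{Y_t}{Y^{\max}}, \qquad a,\, b > 0
```

Under the relative income view, households whose incomes stagnate relative to past peaks (or relative to reference groups) defend their consumption standards, which is consistent with the paper's claim that a falling labor share pushes wage-led demand growth onto household debt accumulation.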

    General q-series transformations based on Abel's lemma on summation by parts and their applications

    Full text link
    In this paper, we establish three new and general transformations with sixteen parameters and bases via Abel's lemma on summation by parts. As applications, we derive numerous new transformations of basic hypergeometric series, including new results related to Gasper and Rahman's quadratic, cubic, and quartic transformations. Furthermore, we put forward the so-called (R,S)-type transformation of arbitrary degree to unify such multibasic transformations. Some special (R,S)-type transformations are presented.
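The engine behind such transformations is the classical lemma of Abel on summation by parts. In its standard finite form (the usual textbook statement, not necessarily the paper's specific variant) it reads:

```latex
% Abel's lemma (summation by parts): for sequences {a_k}, {b_k},
\sum_{k=m}^{n} a_k \,(b_{k+1} - b_k)
  \;=\; a_{n+1} b_{n+1} \;-\; a_m b_m
  \;-\; \sum_{k=m}^{n} b_{k+1}\,(a_{k+1} - a_k)
```

Choosing $a_k$ and $b_k$ to be suitable products of q-shifted factorials and letting $n \to \infty$ is the standard route by which identities of this kind yield transformations of basic hypergeometric series.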

    Perceptual Generative Adversarial Networks for Small Object Detection

    Full text link
    Detecting small objects is notoriously challenging due to their low resolution and noisy representation. Existing object detection pipelines usually detect small objects by learning representations of all the objects at multiple scales. However, the performance gain of such ad hoc architectures is usually limited and does not pay off the computational cost. In this work, we address the small object detection problem by developing a single architecture that internally lifts representations of small objects to "super-resolved" ones, achieving characteristics similar to those of large objects and thus becoming more discriminative for detection. For this purpose, we propose a new Perceptual Generative Adversarial Network (Perceptual GAN) model that improves small object detection by narrowing the representation difference between small objects and large ones. Specifically, its generator learns to transfer the perceived poor representations of small objects to super-resolved ones that are similar enough to real large objects to fool a competing discriminator. Meanwhile, its discriminator competes with the generator to identify the generated representations and imposes an additional perceptual requirement on the generator: generated representations of small objects must be beneficial for detection. Extensive evaluations on the challenging Tsinghua-Tencent 100K and Caltech benchmarks demonstrate the superiority of Perceptual GAN in detecting small objects, including traffic signs and pedestrians, over well-established state-of-the-art methods.
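The two-part objective described in the abstract (fool the discriminator, while keeping the generated features useful for detection) can be sketched as a weighted loss. This is a minimal illustrative sketch, not the paper's implementation: the function names and the weight `lam` are hypothetical, and a real model would compute these terms inside a deep learning framework.

```python
import numpy as np

def generator_loss(d_scores_fake, det_loss, lam=1.0):
    """Schematic Perceptual GAN generator objective (illustrative only).

    d_scores_fake: discriminator probabilities, in (0, 1), assigned to
                   super-resolved (generated) small-object features.
    det_loss:      detection loss on the generated features; this is the
                   'perceptual requirement' term from the abstract.
    lam:           hypothetical weight balancing the two terms.
    """
    # Adversarial term: the generator wants the discriminator to output 1
    # on generated representations (non-saturating GAN loss).
    adv = -np.mean(np.log(np.clip(d_scores_fake, 1e-8, 1.0)))
    # Perceptual term: generated features must remain useful for detection.
    return adv + lam * det_loss

def discriminator_loss(d_scores_real, d_scores_fake):
    """The discriminator separates real large-object features (label 1)
    from super-resolved small-object features (label 0)."""
    real = -np.mean(np.log(np.clip(d_scores_real, 1e-8, 1.0)))
    fake = -np.mean(np.log(np.clip(1.0 - d_scores_fake, 1e-8, 1.0)))
    return real + fake
```

The key design point is that the detection loss acts on the generator's output, so a generator that fools the discriminator with features useless for detection is still penalized.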

    FusionRCNN: LiDAR-Camera Fusion for Two-stage 3D Object Detection

    Full text link
    3D object detection with multiple sensors is essential for an accurate and reliable perception system in autonomous driving and robotics. Existing 3D detectors significantly improve accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. Though impressive, the sparsity of point clouds, especially for faraway points, makes it difficult for the LiDAR-only refinement module to accurately recognize and locate objects. To address this problem, we propose a novel multi-modality two-stage approach named FusionRCNN, which effectively and efficiently fuses point clouds and camera images in the Regions of Interest (RoI). FusionRCNN adaptively integrates both sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, in the RoI extraction step it first utilizes RoIPooling to obtain an image set of unified size and samples raw points within proposals to obtain the point set; it then leverages intra-modality self-attention to enhance the domain-specific features, followed by a well-designed cross-attention to fuse the information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches. Code will be released soon at https://github.com/xxlbigbrother/Fusion-RCNN. Comment: 7 pages, 3 figures
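The cross-attention fusion step described above can be sketched in a simplified, single-head form: LiDAR point features act as queries and attend to image features within the same RoI. This is an illustrative sketch under stated assumptions, not the paper's architecture; the real model would add learned query/key/value projections, multiple heads, and normalization layers.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(point_feats, image_feats):
    """Single-head scaled dot-product cross-attention: LiDAR point
    features (queries) attend to image features (keys/values).

    Shapes: point_feats (N, d), image_feats (M, d). Returns (N, d).
    """
    d = point_feats.shape[-1]
    scores = point_feats @ image_feats.T / np.sqrt(d)  # (N, M)
    attn = softmax(scores, axis=-1)                    # each row sums to 1
    fused = attn @ image_feats                         # (N, d)
    # Residual connection preserves the original geometric information,
    # so dense image texture augments rather than replaces LiDAR features.
    return point_feats + fused
```

If the image features carry no signal (all zeros), the residual form returns the point features unchanged, which mirrors the plug-and-play intent: fusion adds texture cues without discarding the LiDAR geometry.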

    Inhibition of SPRY2 expression protects sevoflurane-induced nerve injury via ERK signaling pathway

    Get PDF
    Purpose: To investigate the effect of Sprouty2 (SPRY2) on sevoflurane (SEV)-induced nerve injury in rats and its potential signaling pathway. Methods: Male Sprague-Dawley rats were divided into sham and SEV groups containing six rats per group. Neurological injury assessment and H & E staining were performed to evaluate the degree of nerve injury in the rats, while quantitative polymerase chain reaction (qPCR) and immunoblot assays were performed to confirm the expression levels of SPRY2 in hippocampal tissues. Morris water maze tests were performed to determine the degree of cognitive deficit in the rats. TUNEL and immunoblot assays were performed to evaluate the effects of SPRY2 on the apoptosis of hippocampal tissues. Results: SPRY2 expression was elevated in sevoflurane-induced hippocampal injury (p < 0.001). Ablation of SPRY2 inhibited sevoflurane-induced hippocampal neuron apoptosis (p < 0.001). In addition, depletion of SPRY2 promoted hippocampal neuron activity and decreased apoptosis (p < 0.001). Knockdown of SPRY2 promoted the ERK signaling pathway, thereby protecting against sevoflurane-induced nerve injury and cognitive deficit in the rats (p < 0.001). Conclusion: Sevoflurane induces cognitive dysfunction and upregulates SPRY2 expression in rat brain tissues. SPRY2 knockdown improves SEV-induced neural injuries and cognitive deficits, inhibits hippocampal neuron apoptosis, and enhances neuron activity. Meanwhile, SPRY2 depletion protects against SEV-induced nerve injury via the ERK pathway. Thus, Sprouty2 could serve as a promising drug target for the treatment of SEV-induced cognitive dysfunction.

    Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes

    Full text link
    Humans have long been recorded in a variety of forms since antiquity. For example, sculptures and paintings were the primary media for depicting human beings before the invention of cameras. However, most current human-centric computer vision tasks, such as human pose estimation and human image generation, focus exclusively on natural images in the real world. Artificial humans, such as those in sculptures, paintings, and cartoons, are commonly neglected, causing existing models to fail in these scenarios. As an abstraction of life, art incorporates humans in both natural and artificial scenes. We take advantage of this and introduce the Human-Art dataset to bridge related tasks in natural and artificial scenarios. Specifically, Human-Art contains 50k high-quality images with over 123k person instances from 5 natural and 15 artificial scenarios, which are annotated with bounding boxes, keypoints, self-contact points, and text information for humans represented in both 2D and 3D. It is, therefore, comprehensive and versatile for various downstream tasks. We also provide a rich set of baseline results and detailed analyses for related tasks, including human detection, 2D and 3D human pose estimation, image generation, and motion transfer. As a challenging dataset, we hope Human-Art can provide insights for relevant research and open up new research questions. Comment: CVPR202