
    Anti-monopoly supervision model of platform economy based on big data and sentiment

    With the advent of the cloud computing era, big data technology has developed rapidly. Because of the huge volume, variety, fast processing speed, and low value density of big data, traditional technologies for data storage, extraction, transformation, and analysis are no longer suitable, so new big data application technologies are needed. At the same time, with the development of economic theory and the practice of the market economy, some links in the industrial chain of natural monopoly industries have already acquired a certain degree of competitiveness. In this context, this article studies an anti-monopoly supervision model for the platform economy based on big data and sentiment analysis. The paper introduces the main idea of MapReduce: the user specifies a Map function that maps a set of key-value pairs to a set of intermediate key-value pairs, and a Reduce function that merges all intermediate values associated with the same key, with Map and Reduce invocations running concurrently. It then establishes a vector space model and implements the extraction of emotional elements from text. It also reviews the theoretical controversy over antitrust regulation of predatory pricing by third-party payment platforms and conducts model experiments. The experimental results show that the throughput of 40 test users over a 1 h test is determined by two factors, QPS and the number of concurrent users, where QPS = 40/(60*60) transactions/second. Each test user is logged in to the system for 10 min, so the average response time is 10*60 = 600 s, and the number of concurrent users = QPS * average response time = 40/(60*60) * 10*60 ≈ 6.66. This paper thus completes the research on an anti-monopoly supervision model for the platform economy based on big data and sentiment analysis.
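    To make the two technical ingredients concrete, here is a minimal Python sketch (illustrative only; the word-count example and all names are assumptions, not from the paper) of the MapReduce idea described above, followed by the QPS and concurrency arithmetic reported in the experiment.

```python
from itertools import groupby
from operator import itemgetter

# MapReduce in miniature: a Map function emits key-value pairs, and a
# Reduce function merges all values that share the same key. Word
# counting is the classic illustration.
def map_fn(document):
    for word in document.split():
        yield (word, 1)

def reduce_fn(key, values):
    return (key, sum(values))

def map_reduce(documents):
    pairs = sorted((p for doc in documents for p in map_fn(doc)),
                   key=itemgetter(0))  # group intermediate pairs by key
    return [reduce_fn(k, [v for _, v in group])
            for k, group in groupby(pairs, key=itemgetter(0))]

print(map_reduce(["big data platform", "platform economy"]))
# [('big', 1), ('data', 1), ('economy', 1), ('platform', 2)]

# The concurrency arithmetic from the experiment:
qps = 40 / (60 * 60)          # 40 users in 1 h => ~0.0111 transactions/second
avg_response_time = 10 * 60   # 10 min per user session = 600 s
print(qps * avg_response_time)  # concurrency = QPS * response time, ~6.67
                                # (truncated to 6.66 in the text)
```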

    Testing Closeness of Multivariate Distributions via Ramsey Theory

    We investigate the statistical task of closeness (or equivalence) testing for multidimensional distributions. Specifically, given sample access to two unknown distributions $\mathbf p, \mathbf q$ on $\mathbb R^d$, we want to distinguish between the case that $\mathbf p = \mathbf q$ versus $\|\mathbf p - \mathbf q\|_{A_k} > \epsilon$, where $\|\mathbf p - \mathbf q\|_{A_k}$ denotes the generalized $A_k$ distance between $\mathbf p$ and $\mathbf q$, measuring the maximum discrepancy between the distributions over any collection of $k$ disjoint, axis-aligned rectangles. Our main result is the first closeness tester for this problem with sub-learning sample complexity in any fixed dimension, together with a nearly-matching sample complexity lower bound. In more detail, we provide a computationally efficient closeness tester with sample complexity $O((k^{6/7}/\mathrm{poly}_d(\epsilon)) \log^d(k))$. On the lower bound side, we establish a qualitatively matching sample complexity lower bound of $\Omega(k^{6/7}/\mathrm{poly}(\epsilon))$, even for $d = 2$. These sample complexity bounds are surprising because the sample complexity of the problem in the univariate setting is $\Theta(k^{4/5}/\mathrm{poly}(\epsilon))$. This has the interesting consequence that the jump from one to two dimensions leads to a substantial increase in sample complexity, while increases beyond that do not. As a corollary of our general $A_k$ tester, we obtain $d_{\mathrm{TV}}$-closeness testers for pairs of $k$-histograms on $\mathbb R^d$ over a common unknown partition, and for pairs of uniform distributions supported on the union of $k$ unknown disjoint axis-aligned rectangles. Both our algorithm and our lower bound make essential use of tools from Ramsey theory.
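    For intuition about the distance being tested, the sketch below (an illustration, not from the paper) computes the empirical discrepancy between two samples over one fixed collection of disjoint axis-aligned rectangles; the $A_k$ distance is the maximum of this quantity over all collections of $k$ such rectangles, which is what makes the testing problem hard.

```python
import numpy as np

def empirical_mass(sample, rect):
    """Fraction of sample points inside an axis-aligned rectangle.
    rect is a (lo, hi) pair of length-d coordinate arrays."""
    lo, hi = rect
    inside = np.all((sample >= lo) & (sample <= hi), axis=1)
    return inside.mean()

def ak_discrepancy(sample_p, sample_q, rectangles):
    """Discrepancy between two samples over ONE fixed collection of
    disjoint rectangles. The A_k distance takes the maximum over all
    collections of k disjoint axis-aligned rectangles."""
    return sum(abs(empirical_mass(sample_p, r) - empirical_mass(sample_q, r))
               for r in rectangles)

rng = np.random.default_rng(0)
p_sample = rng.uniform(0, 1, size=(5000, 2))
q_sample = rng.uniform(0, 1, size=(5000, 2)) ** 1.2   # slightly skewed toward 0
rects = [(np.array([0.0, 0.0]), np.array([0.5, 0.5])),
         (np.array([0.5, 0.5]), np.array([1.0, 1.0]))]
print(ak_discrepancy(p_sample, q_sample, rects))
```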

    Sounding Video Generator: A Unified Framework for Text-guided Sounding Video Generation

    As a combination of visual and audio signals, video is inherently multi-modal. However, existing video generation methods are primarily intended for the synthesis of visual frames, while the audio signals of realistic videos are disregarded. In this work, we concentrate on the rarely investigated problem of text-guided sounding video generation and propose the Sounding Video Generator (SVG), a unified framework for generating realistic videos along with audio signals. Specifically, we present SVG-VQGAN to transform visual frames and audio mel-spectrograms into discrete tokens. SVG-VQGAN applies a novel hybrid contrastive learning method to model inter-modal and intra-modal consistency and to improve the quantized representations. A cross-modal attention module is employed to extract associated features of visual frames and audio signals for contrastive learning. Then, a Transformer-based decoder is used to model associations between texts, visual frames, and audio signals at the token level for auto-regressive sounding video generation. AudioSetCap, a human-annotated text-video-audio paired dataset, is produced for training SVG. Experimental results demonstrate the superiority of our method compared with existing text-to-video generation methods as well as audio generation methods on the Kinetics and VAS datasets.
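    The inter-modal consistency term of such a hybrid contrastive objective can be pictured as a standard symmetric InfoNCE loss on paired video and audio features; the PyTorch sketch below is a generic illustration under that assumption, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def infonce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss: matched (video, audio) pairs in a batch
    are positives; every other pairing in the batch is a negative."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(a.size(0))         # i-th video matches i-th audio
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy batch of pooled per-clip features for each modality.
video_feats = torch.randn(8, 256)   # e.g. pooled visual-token features
audio_feats = torch.randn(8, 256)   # e.g. pooled mel-spectrogram-token features
loss = infonce(video_feats, audio_feats)   # inter-modal consistency term
print(loss.item())
```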

    Online Robust Mean Estimation

    We study the problem of high-dimensional robust mean estimation in an online setting. Specifically, we consider a scenario where $n$ sensors are measuring some common, ongoing phenomenon. At each time step $t = 1, 2, \ldots, T$, the $i^{th}$ sensor reports its readings $x^{(i)}_t$ for that time step. The algorithm must then commit to its estimate $\mu_t$ for the true mean value of the process at time $t$. We assume that most of the sensors observe independent samples from some common distribution $X$, but an $\epsilon$-fraction of them may instead behave maliciously. The algorithm wishes to compute a good approximation $\mu$ to the true mean $\mu^\ast := \mathbf{E}[X]$. We note that if the algorithm is allowed to wait until time $T$ to report its estimate, this reduces to the well-studied problem of robust mean estimation. However, the requirement that our algorithm produce partial estimates as the data is coming in substantially complicates the situation. We prove two main results about online robust mean estimation in this model. First, if the uncorrupted samples satisfy the standard condition of $(\epsilon,\delta)$-stability, we give an efficient online algorithm that outputs estimates $\mu_t$, $t \in [T]$, such that with high probability it holds that $\|\mu - \mu^\ast\|_2 = O(\delta \log(T))$, where $\mu = (\mu_t)_{t \in [T]}$. We note that this error bound is nearly competitive with the best offline algorithms, which would achieve $\ell_2$-error of $O(\delta)$. Our second main result shows that with additional assumptions on the input (most notably that $X$ is a product distribution) there are inefficient algorithms whose error does not depend on $T$ at all.
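    As a simple baseline that illustrates the setting (it is not the paper's $(\epsilon,\delta)$-stability-based algorithm and does not achieve its $O(\delta \log T)$ guarantee), the coordinate-wise median across sensors yields an online estimate that an $\epsilon$-fraction of malicious sensors cannot move far in any coordinate:

```python
import numpy as np

def online_coordinatewise_median(readings_stream):
    """At each time step, commit to the coordinate-wise median of the
    n sensor readings. A naive baseline for online robust mean
    estimation: a minority of malicious sensors cannot shift any
    coordinate's median arbitrarily."""
    estimates = []
    for x_t in readings_stream:       # x_t has shape (n_sensors, d)
        estimates.append(np.median(x_t, axis=0))
    return np.stack(estimates)        # shape (T, d), one estimate per step

rng = np.random.default_rng(1)
T, n, d, eps = 50, 100, 5, 0.1
stream = rng.normal(0.0, 1.0, size=(T, n, d))   # honest sensors: mean 0
stream[:, : int(eps * n), :] += 100.0           # epsilon-fraction corrupted
mu_hat = online_coordinatewise_median(stream)
print(np.linalg.norm(mu_hat.mean(axis=0)))      # stays near the true mean 0
```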

    Building a Cultural and Creative Industry Platform to Serve the Economic Development

    With the rise of global integration of science and technology, the economy and the cultural and creative industries are developing rapidly. Under these circumstances, how to cultivate talent for the cultural and creative industries has become a key problem for a prosperous society. Institutions of higher learning undertake the four functions of talent training, scientific research, social service, and cultural inheritance and innovation. Therefore, it is necessary to build a research platform for the cultural and creative industry at the college. This platform not only helps graduates find their future employment direction, but also effectively helps them obtain employment and start businesses. At the same time, the platform is used to enhance integration with local industry development and to promote local economic development, which meets the development needs of the college as well as those of local governments and enterprises. This mode of training talent through government-industry-university-research cooperation meets the interests of the government, industry, and schools, and serves the development of the local economy. Keywords: Cultural Creative Product; Talent Cultivation; Local Economic; Economic Development. eISSN: 2398-4287 © 2022. The Authors. Published for AMER ABRA cE-Bs by e-International Publishing House, Ltd., UK. This is an open-access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under the responsibility of AMER (Association of Malaysian Environment-Behaviour Researchers), ABRA (Association of Behavioural Researchers on Asians), and cE-Bs (Centre for Environment-Behaviour Studies), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia. DOI: https://doi.org/10.21834/ebpj.v7iSI7.377

    Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments

    Super-resolution of images captured in ultra-dark environments is a practical yet challenging problem that has received little attention. Due to uneven illumination and low signal-to-noise ratios in dark environments, problems such as lack of detail and color distortion may be magnified during super-resolution compared to normal-lighting environments. Consequently, conventional low-light enhancement and super-resolution methods, whether applied individually or in a cascaded manner, often encounter limitations in recovering luminance, color fidelity, and intricate details. To conquer these issues, this paper proposes a specialized dual-modulated learning framework that, for the first time, attempts to deeply dissect the nature of the low-light super-resolution task. Leveraging natural image color characteristics, we introduce a self-regularized luminance constraint as a prior for addressing uneven lighting. Expanding on this, we develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details. Moreover, instead of deploying naive up-sampling strategies, we design the Resolution-Sensitive Merging Up-sampler (RSMU) module, which brings together different sampling modalities as substrates, effectively mitigating the presence of artifacts and halos. Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions, outperforming state-of-the-art methods with a notable improvement (i.e., a 5% gain in PSNR and a 43% gain in LPIPS). Especially noteworthy is the 19-fold increase in the RMSE score, underscoring our method's exceptional generalization across different darkness levels. The code will be available online upon publication of the paper.
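    As a rough sketch of the merging idea behind an up-sampler that combines several sampling modalities (the structure and names below are illustrative guesses, not the actual RSMU architecture), one can fuse a smooth interpolation branch with a learned pixel-shuffle branch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MergingUpsampler(nn.Module):
    """Illustrative fusion of two up-sampling modalities: a smooth
    bilinear-interpolation branch and a detail-oriented learned
    pixel-shuffle branch, merged by a 1x1 convolution. This mirrors
    the stated motivation (suppressing the artifacts and halos of any
    single naive up-sampler), not the paper's actual module."""
    def __init__(self, channels, scale=2):
        super().__init__()
        self.scale = scale
        self.shuffle = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        smooth = F.interpolate(x, scale_factor=self.scale,
                               mode="bilinear", align_corners=False)
        detail = self.shuffle(x)
        return self.merge(torch.cat([smooth, detail], dim=1))

feats = torch.randn(1, 64, 32, 32)
out = MergingUpsampler(64)(feats)
print(out.shape)   # torch.Size([1, 64, 64, 64])
```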