The Establishment and Development of the Customs Information Technology Data Platform
In recent years, although China’s foreign trade has developed rapidly, Customs operations still face challenges. Drawing on the Nolan, Synnott, and Mische models and on the innovative technologies of RFID and ASN, this paper studies the issue and suggests that Customs establish an information technology data platform to address the difficulties of current customs management. The authors also propose policies and applications for law-enforcement, finance, tax, and legislative departments and for related industries.
Anti-monopoly supervision model of platform economy based on big data and sentiment analysis
With the advent of the cloud computing era, big data technology has developed rapidly. Because of big data's huge volume, variety, fast processing speed, and low value density, traditional data storage, extraction, transformation, and analysis technologies are not suitable, so new big data application technologies are needed. Meanwhile, with the development of economic theory and the practice of the market economy, some links in the industrial chains of naturally monopolistic industries have already become competitive to some degree. Against this background, the article studies an anti-monopoly supervision model for the platform economy based on big data and sentiment analysis. The paper introduces the main idea of MapReduce: the user specifies a Map function that transforms a set of key-value pairs into intermediate key-value pairs, and a Reduce function that merges all intermediate values associated with the same key. It then establishes a vector space model that realizes the extraction of emotional elements from text, reviews the theoretical controversy over antitrust regulation of predatory pricing by third-party payment platforms, and conducts model experiments. The experimental results show that the throughput of 40 test users over a 1 h test is determined by two factors, QPS and the number of concurrent users, where QPS = 40/(60*60) transactions per second. Each test user is logged in to the system for 10 min, so the average response time is 10*60 s, and the number of concurrent users = QPS * average response time = 40/(60*60) * 10*60 ≈ 6.66. The paper thus completes the research on the anti-monopoly supervision model of the platform economy based on big data and sentiment analysis.
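The concurrency figure quoted above is an instance of Little's Law (concurrency = arrival rate * average time in system). A minimal Python check of the arithmetic, using only the numbers given in the abstract:

```python
# Little's Law check of the load-test figures quoted above:
# concurrency = QPS (arrival rate) * average response time.

transactions = 40          # 40 test users over the test window
window_s = 60 * 60         # 1 h test window, in seconds
avg_response_s = 10 * 60   # each user is logged in for 10 min

qps = transactions / window_s        # ~0.0111 transactions/second
concurrency = qps * avg_response_s   # Little's Law

print(f"QPS = {qps:.4f} tx/s")               # 0.0111
print(f"concurrency = {concurrency:.2f}")    # 6.67 (the abstract truncates to 6.66)
```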
Testing Closeness of Multivariate Distributions via Ramsey Theory
We investigate the statistical task of closeness (or equivalence) testing for
multidimensional distributions. Specifically, given sample access to two
unknown distributions $\mathbf{p}, \mathbf{q}$ on $\mathbb{R}^d$, we want to
distinguish between the case that $\mathbf{p} = \mathbf{q}$ versus
$\|\mathbf{p} - \mathbf{q}\|_{A_k} > \epsilon$, where
$\|\mathbf{p} - \mathbf{q}\|_{A_k}$ denotes the generalized $A_k$ distance
between $\mathbf{p}$ and $\mathbf{q}$ --
measuring the maximum discrepancy between the distributions over any collection
of $k$ disjoint, axis-aligned rectangles. Our main result is the first
closeness tester for this problem with {\em sub-learning} sample complexity in
any fixed dimension and a nearly-matching sample complexity lower bound.
In more detail, we provide a computationally efficient closeness tester with
sample complexity $O\big((k^{6/7}/\mathrm{poly}_d(\epsilon)) \log^d(k)\big)$.
On the lower bound side, we establish a qualitatively
matching sample complexity lower bound of
$\Omega(k^{6/7}/\mathrm{poly}(\epsilon))$, even for $d = 2$. These sample
complexity bounds are surprising because the sample complexity of the problem
in the univariate setting is $\Theta(k^{4/5}/\mathrm{poly}(\epsilon))$. This
has the interesting consequence that the jump from one to two dimensions leads
to a substantial increase in sample complexity, while increases beyond that do
not.
As a corollary of our general tester, we obtain $d_{\mathrm{TV}}$-closeness
testers for pairs of $k$-histograms on $\mathbb{R}^d$ over a
common unknown partition, and pairs of uniform distributions supported on the
union of $k$ unknown disjoint axis-aligned rectangles.
Both our algorithm and our lower bound make essential use of tools from
Ramsey theory.
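For reference, the $A_k$ distance invoked above admits an explicit formulation; the following is a standard definition from the distribution-testing literature, reconstructed by us from the abstract's description rather than quoted from the paper:

```latex
% Generalized A_k distance: the largest discrepancy p and q exhibit on
% any union of k disjoint, axis-aligned rectangles R_1, ..., R_k in R^d.
\[
  \|\mathbf{p} - \mathbf{q}\|_{A_k}
    \;=\; \sup_{\substack{R_1, \dots, R_k \ \text{disjoint} \\ \text{axis-aligned rectangles}}}
      \left|\, \mathbf{p}\Big(\bigcup_{i=1}^{k} R_i\Big)
             - \mathbf{q}\Big(\bigcup_{i=1}^{k} R_i\Big) \right|.
\]
```

In one dimension this recovers the classical $A_k$ distance over unions of $k$ intervals, which is what makes the comparison to the univariate $\Theta(k^{4/5})$ bound meaningful.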
Sounding Video Generator: A Unified Framework for Text-guided Sounding Video Generation
As a combination of visual and audio signals, video is inherently
multi-modal. However, existing video generation methods are primarily intended
for the synthesis of visual frames, whereas audio signals in realistic videos
are disregarded. In this work, we concentrate on the rarely investigated problem
of text-guided sounding video generation and propose the Sounding Video
Generator (SVG), a unified framework for generating realistic videos along with
audio signals. Specifically, we present the SVG-VQGAN to transform visual
frames and audio mel-spectrograms into discrete tokens. SVG-VQGAN applies a
novel hybrid contrastive learning method to model inter-modal and intra-modal
consistency and improve the quantized representations. A cross-modal attention
module is employed to extract associated features of visual frames and audio
signals for contrastive learning. Then, a Transformer-based decoder is used to
model associations among texts, visual frames, and audio signals at the token
level for auto-regressive sounding video generation. AudioSetCap, a
human-annotated text-video-audio paired dataset, is produced for training SVG.
Experimental results demonstrate the superiority of our method when compared
with existing text-to-video generation methods as well as audio generation
methods on the Kinetics and VAS datasets.
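The abstract does not spell out the hybrid contrastive objective, but the general recipe it names (inter-modal plus intra-modal consistency) is standard. A minimal PyTorch sketch of that idea follows; all function names, shapes, and the temperature are our assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched rows of `a` and `b` are positives,
    every other pairing in the batch is a negative."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def hybrid_contrastive_loss(vis, aud, vis_aug, aud_aug):
    """Hypothetical hybrid loss: inter-modal (video <-> audio from the same
    clip) plus intra-modal (two views of the same modality)."""
    inter = info_nce(vis, aud)                    # align the two modalities
    intra = info_nce(vis, vis_aug) + info_nce(aud, aud_aug)
    return inter + intra

# Usage with random stand-in embeddings (batch of 8, 256-d features):
vis, aud = torch.randn(8, 256), torch.randn(8, 256)
loss = hybrid_contrastive_loss(vis, aud,
                               vis + 0.1 * torch.randn_like(vis),
                               aud + 0.1 * torch.randn_like(aud))
```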
Online Robust Mean Estimation
We study the problem of high-dimensional robust mean estimation in an online
setting. Specifically, we consider a scenario where $N$ sensors are measuring
some common, ongoing phenomenon. At each time step $t = 1, 2, \ldots, T$, the
$i$-th sensor reports its readings $x^{(i)}_t$ for that time step. The
algorithm must then commit to its estimate $\mu_t$ for the true mean value of
the process at time $t$. We assume that most of the sensors observe independent
samples from some common distribution $X$, but an $\epsilon$-fraction of them
may instead behave maliciously. The algorithm wishes to compute a good
approximation to the true mean $\mu^\ast := \mathbf{E}[X]$. We note that
if the algorithm is allowed to wait until time $T$ to report its estimate, this
reduces to the well-studied problem of robust mean estimation. However, the
requirement that our algorithm produces partial estimates as the data is coming
in substantially complicates the situation.
We prove two main results about online robust mean estimation in this model.
First, if the uncorrupted samples satisfy the standard condition of
$(\epsilon, \delta)$-stability, we give an efficient online algorithm that
outputs estimates $\mu_t$, $t \in [T]$, such that with high probability it
holds that $\max_{t \in [T]} \|\mu_t - \mu^\ast\|_2 = O(\delta \log(T))$, where
$\delta$ is the stability parameter. We note that this error bound is nearly
competitive with the best offline algorithms, which would achieve $\ell_2$-error
of $O(\delta)$. Our second main result shows that with additional assumptions
on the input (most notably that $X$ is a product distribution) there are
inefficient algorithms whose error does not depend on $T$ at all.
Comment: To appear in SODA 2024.
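The abstract does not describe the algorithm itself, but the sensor model is easy to simulate. A minimal sketch of the setting (our construction, not the paper's algorithm) that commits to a per-step estimate using the coordinate-wise median, a classical robust aggregator, under an $\epsilon$-fraction of adversarial sensors:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, d, eps = 100, 50, 5, 0.1          # sensors, steps, dimension, corruption
true_mean = np.zeros(d)
bad = rng.choice(N, size=int(eps * N), replace=False)  # malicious sensors

estimates = []
for t in range(T):
    # Honest sensors draw i.i.d. samples around the true mean;
    # malicious sensors report an arbitrary (here: large constant) vector.
    readings = true_mean + rng.standard_normal((N, d))
    readings[bad] = 100.0
    # Commit to the step-t estimate before seeing any future data.
    # The coordinate-wise median is a simple robust aggregator; the
    # paper's algorithm is more sophisticated, with stronger guarantees.
    estimates.append(np.median(readings, axis=0))

err = max(np.linalg.norm(m - true_mean) for m in estimates)
print(f"worst per-step error over T={T} steps: {err:.3f}")
```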
Building a Cultural and Creative Industry Platform to Serve the Economic Development
With the rise of the global integration of science, technology, and the economy, cultural and creative industries are developing rapidly. Under these circumstances, how to cultivate talent for the cultural and creative industries has become a key problem for a prosperous society. Institutions of higher learning undertake the four functions of talent training, scientific research, social service, and cultural inheritance and innovation. It is therefore necessary to build a research platform for the cultural and creative industry at the college. This platform not only helps graduates identify their future employment direction but also effectively helps them find employment and start businesses. At the same time, the platform strengthens integration with local industry and promotes local economic development, which meets the needs of the college as well as those of local governments and enterprises. This mode of training talent through government-industry-university-research cooperation meets the interest demands of the government, industry, and schools, and jointly serves the development of the local economy.
Keywords: Cultural Creative Product; Talent Cultivation; Local Economy; Economic Development.
eISSN: 2398-4287 © 2022. The Authors. Published for AMER ABRA cE-Bs by e-International Publishing House, Ltd., UK. This is an open-access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under the responsibility of AMER (Association of Malaysian Environment-Behaviour Researchers), ABRA (Association of Behavioural Researchers on Asians), and cE-Bs (Centre for Environment-Behaviour Studies), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia.
DOI: https://doi.org/10.21834/ebpj.v7iSI7.377
Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments
Super-resolution of images captured in ultra-dark environments
is a practical yet challenging problem that has received little attention. Due
to uneven illumination and low signal-to-noise ratio in dark environments, a
multitude of problems such as lack of detail and color distortion may be
magnified in the super-resolution process compared to normal-lighting
environments. Consequently, conventional low-light enhancement or
super-resolution methods, whether applied individually or in a cascaded manner
for such problems, often encounter limitations in recovering luminance, color
fidelity, and intricate details. To overcome these issues, this paper proposes a
specialized dual-modulated learning framework that, for the first time,
attempts to deeply dissect the nature of the low-light super-resolution task.
Leveraging natural image color characteristics, we introduce a self-regularized
luminance constraint as a prior for addressing uneven lighting. Expanding on
this, we develop Illuminance-Semantic Dual Modulation (ISDM) components to
enhance feature-level preservation of illumination and color details. Besides,
instead of deploying naive up-sampling strategies, we design the
Resolution-Sensitive Merging Up-sampler (RSMU) module that brings together
different sampling modalities as substrates, effectively mitigating the
presence of artifacts and halos. Comprehensive experiments showcase the
applicability and generalizability of our approach to diverse and challenging
ultra-low-light conditions, outperforming state-of-the-art methods with a
notable improvement (i.e., 5% in PSNR and 43% in LPIPS).
Especially noteworthy is the 19-fold increase in the RMSE score, underscoring
our method's exceptional generalization across different darkness levels. The
code will be available online upon publication of the paper.
Comment: 9 pages.
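The abstract only names the ISDM component without detailing it; as an illustrative guess at the general mechanism, here is a FiLM-style feature modulation conditioned on an estimated illuminance map. All module names and shapes are our assumptions, not the paper's code:

```python
import torch
import torch.nn as nn

class IlluminanceModulation(nn.Module):
    """Hypothetical FiLM-style block: an illuminance map predicts per-channel
    scale/shift parameters that modulate the super-resolution features."""
    def __init__(self, feat_ch: int, cond_ch: int = 1):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Conv2d(cond_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 2 * feat_ch, 3, padding=1),
        )

    def forward(self, feats: torch.Tensor, illum: torch.Tensor) -> torch.Tensor:
        scale, shift = self.to_scale_shift(illum).chunk(2, dim=1)
        return feats * (1 + scale) + shift   # modulate, keep an identity path

# Usage: 64-channel features and a single-channel illuminance prior.
block = IlluminanceModulation(feat_ch=64)
feats = torch.randn(2, 64, 32, 32)
illum = torch.rand(2, 1, 32, 32)
out = block(feats, illum)                    # -> (2, 64, 32, 32)
```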