A Novel High-Fidelity Simulation for Finishing Operations: Hybrid Image Mosaic and Wavelet Decomposition.
In finishing simulations, achieving accurate results is challenging because very little material is removed and surface micro-topography instruments have a limited measurement range. To overcome these limitations, this paper proposes a novel high-fidelity modeling method that combines image mosaic and wavelet decomposition technologies. Narrow-field, high-resolution micro-morphology images are stitched in four steps: image feature extraction, overlapped feature matching, feature fusion, and evaluation of the stitching quality. On this basis, wavelet decomposition is employed to separate the measured signal by frequency, allowing a datum plane and a roughness surface to be established. The point cloud model is then transformed into a continuous geometric model via the Poisson reconstruction algorithm. In the case study, four sample images of an aluminum alloy sheet after barrel finishing were collected using the ZeGage Plus optical profiler; each image covers an actual area of 834.37 μm × 834.37 μm. A comparison was then carried out between the physical and simulation experiments. The results indicate that the proposed method can improve the accuracy of the finishing simulation by more than 30%, and the error between the resulting model and the actual surface of the part can be kept within 1 μm.
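The frequency separation step described above (splitting a measured height map into a low-frequency datum plane and a high-frequency roughness surface) can be illustrated with a minimal sketch using the PyWavelets library. The synthetic height map, the 'db4' wavelet, and the decomposition level below are assumptions for illustration only, not the parameters used in the paper.

import numpy as np
import pywt

# Synthetic height map: a gently curved datum plus fine roughness (illustrative only).
x = np.linspace(0.0, 834.37e-6, 256)          # field of view comparable to one image
xx, yy = np.meshgrid(x, x)
datum_true = 2e-6 * np.sin(2 * np.pi * xx / 834.37e-6)
rng = np.random.default_rng(0)
surface = datum_true + 0.2e-6 * rng.standard_normal(xx.shape)

# Multi-level 2-D wavelet decomposition of the measured surface.
level = 3
coeffs = pywt.wavedec2(surface, wavelet="db4", level=level)

# Datum plane: keep only the coarse approximation, zero all detail bands.
coeffs_datum = [coeffs[0]] + [
    tuple(np.zeros_like(band) for band in detail) for detail in coeffs[1:]
]
datum = pywt.waverec2(coeffs_datum, wavelet="db4")

# Roughness surface: what remains after removing the datum.
roughness = surface - datum[: surface.shape[0], : surface.shape[1]]
print("RMS roughness estimate (m):", np.sqrt(np.mean(roughness ** 2)))

For converting the resulting point cloud into a continuous mesh, Poisson surface reconstruction is available in, for example, Open3D (TriangleMesh.create_from_point_cloud_poisson), although the paper does not state which implementation it uses.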
Bethe ansatz for an AdS/CFT open spin chain with non-diagonal boundaries
We consider the integrable open-chain transfer matrix corresponding to a Y=0
brane at one boundary, and a Y_theta=0 brane (rotated with respect to the
former by an angle theta) at the other boundary. We determine the exact
eigenvalues of this transfer matrix in terms of solutions of a corresponding
set of Bethe equations.
Comment: 25 pages; v2: reference added; v3: minor revisions, form accepted by journal
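For orientation, the way transfer-matrix eigenvalues are encoded in Bethe roots can be written schematically; the functions a(u), d(u), the shift eta, and the form of Q(u) below are generic placeholders, not the specific expressions derived in the paper for the Y=0 and Y_theta=0 brane boundaries:

\Lambda(u)\,Q(u) = a(u)\,Q(u-\eta) + d(u)\,Q(u+\eta), \qquad Q(u) = \prod_{j=1}^{M}(u-u_j),

and demanding that \Lambda(u) be free of poles at each Bethe root u_j yields the Bethe equations

\frac{a(u_j)}{d(u_j)} = -\,\frac{Q(u_j+\eta)}{Q(u_j-\eta)}, \qquad j = 1,\dots,M.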
A Study of Wolf Pack Algorithm for Test Suite Reduction
Modern smart meter programs are iterating at an ever-increasing rate, which places higher demands on the software testing of smart meters, and reducing the cost of that testing has become a focus of current research. Cutting test overhead is the most direct way to lower this cost, and test suite reduction is one of the necessary means of doing so. This paper proposes a smart meter test suite reduction technique based on the Wolf Pack Algorithm. First, the test suite reduction problem for the smart meter program is represented as a binary-optimization set-cover problem; then, the Wolf Pack Algorithm is adapted by encoding the positions of individual wolves as a 0/1 matrix; finally, the optimal subset of test cases is obtained by iteration. Experiments simulating different smart meter programs and test suites of different sizes show that the Wolf Pack Algorithm achieves better results than similar algorithms, both in how often it obtains the optimal solution and in the test overhead of the resulting subset.
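The binary encoding described above (each wolf's position as a 0/1 vector selecting test cases, evaluated against a set-cover objective) can be sketched minimally as follows. The coverage matrix, pack size, fitness weighting, and the simplified "copy bits from the leader, then mutate" move rule are assumptions for illustration, not the paper's exact Wolf Pack Algorithm variant or benchmark data.

import numpy as np

rng = np.random.default_rng(1)

# Assumed coverage matrix: cover[i, j] = 1 if test case j covers requirement i.
n_req, n_tests = 12, 20
cover = (rng.random((n_req, n_tests)) < 0.3).astype(int)
cover[np.arange(n_req), rng.integers(0, n_tests, n_req)] = 1  # every requirement coverable

def fitness(wolf):
    """Smaller is better: uncovered requirements dominate, then subset size."""
    covered = (cover @ wolf) > 0
    uncovered = n_req - covered.sum()
    return uncovered * n_tests + wolf.sum()

# Each wolf is a 0/1 vector (one row of the 0/1 position matrix).
n_wolves, n_iter = 30, 200
pack = (rng.random((n_wolves, n_tests)) < 0.5).astype(int)

for _ in range(n_iter):
    scores = np.array([fitness(w) for w in pack])
    leader = pack[scores.argmin()].copy()
    for k in range(n_wolves):
        if np.array_equal(pack[k], leader):
            continue
        # Move toward the leader: copy some of its bits, then flip a few at random.
        w = pack[k].copy()
        copy_mask = rng.random(n_tests) < 0.3
        w[copy_mask] = leader[copy_mask]
        flip_mask = rng.random(n_tests) < 0.05
        w[flip_mask] ^= 1
        if fitness(w) <= fitness(pack[k]):
            pack[k] = w

best = pack[np.array([fitness(w) for w in pack]).argmin()]
print("reduced suite size:", int(best.sum()),
      "covers all:", bool(((cover @ best) > 0).all()))

Weighting uncovered requirements far more heavily than subset size keeps the search inside the fully covering region once it is reached, so later iterations only shrink the selected suite.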
Bethe states of the XXZ spin-1/2 chain with arbitrary boundary fields
Based on the inhomogeneous T-Q relation constructed via the off-diagonal
Bethe Ansatz, the Bethe-type eigenstates of the XXZ spin-1/2 chain with
arbitrary boundary fields are constructed. It is found that, by employing two
sets of gauge transformations, the proper generators and the reference state for
constructing the Bethe vectors can be obtained, respectively. Given an inhomogeneous
T-Q relation for an eigenvalue, it is proven that the resulting Bethe state is
an eigenstate of the transfer matrix, provided that the parameters of the
generators satisfy the associated Bethe Ansatz equations.
Comment: 24 pages, no figure, published version
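Schematically, the inhomogeneous T-Q relation referred to above differs from the ordinary one by a single extra term; the concrete functions depend on the boundary fields and are given in the paper, so the following is only a structural sketch:

\Lambda(u)\,Q(u) = a(u)\,Q(u-\eta) + d(u)\,Q(u+\eta) + c\,F(u),

where Q(u) is built from the Bethe roots, a(u) and d(u) are fixed model functions, and the inhomogeneous term c\,F(u), which vanishes for diagonal boundary fields, is what allows arbitrary non-diagonal boundary fields to be accommodated; requiring \Lambda(u) to be regular at the Bethe roots then yields the associated Bethe Ansatz equations that the parameters of the generators must satisfy.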
Scaling Law of Large Sequential Recommendation Models
Scaling of neural networks has recently shown great potential to improve the
model capacity in various fields. Specifically, model performance has a
power-law relationship with model size or data size, which provides important
guidance for the development of large-scale models. However, there is still
limited understanding of the scaling effect of user behavior models in
recommender systems, where the unique data characteristics (e.g. data scarcity
and sparsity) pose new challenges to explore the scaling effect in
recommendation tasks. In this work, we focus on investigating the scaling laws
in large sequential recommendation models. Specifically, we consider a pure
ID-based task formulation, where the interaction history of a user is formatted
as a chronological sequence of item IDs. We don't incorporate any side
information (e.g. item text), because we would like to explore how the scaling law
holds from the perspective of user behavior. With specially improved
strategies, we scale up the model size to 0.8B parameters, making it feasible
to explore the scaling effect in a diverse range of model sizes. As the major
findings, we empirically show that the scaling law still holds for these trained
models, even in data-constrained scenarios. We then fit the curve for scaling
law, and successfully predict the test loss of the two largest tested model
scales. Furthermore, we examine the performance advantage of the scaling effect on
five challenging recommendation tasks, considering the unique issues (e.g. cold
start, robustness, long-term preference) in recommender systems. We find that
scaling up the model size can greatly boost the performance on these
challenging tasks, which again verifies the benefits of large recommendation
models.
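The curve-fitting step described above (fitting a power law to losses measured at smaller scales and extrapolating to the largest models, e.g. 0.8B parameters) can be sketched as follows. The functional form loss(N) = a * N^(-alpha) + L_inf, the synthetic size/loss values, and the SciPy-based fit are illustrative assumptions, not the paper's data or exact parameterization.

import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form: test loss as a power law in parameter count plus an irreducible term.
def power_law(n_params, a, alpha, l_inf):
    return a * n_params ** (-alpha) + l_inf

# Hypothetical (synthetic) model sizes and test losses measured at smaller scales.
sizes = np.array([1e6, 5e6, 2e7, 1e8])          # parameters
losses = np.array([1.32, 1.18, 1.07, 0.98])     # made-up test losses

popt, _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.1, 0.5), maxfev=10000)
a, alpha, l_inf = popt

# Extrapolate to a larger scale before training a model of that size.
print("fitted alpha:", round(alpha, 3))
print("predicted loss at 0.8B params:", round(power_law(8e8, *popt), 3))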