Controlling the Intrinsic Josephson Junction Number in a Mesa
In fabricating intrinsic Josephson
junctions in 4-terminal mesa structures, we modify the conventional fabrication
process by markedly reducing the etching rate of argon ion milling. As a
result, the junction number in a stack can be controlled quite satisfactorily,
provided we carefully adjust factors such as the etching time and the
thickness of the evaporated layers. The error in the junction number is within
. With additional ion etching, if necessary, we can controllably decrease
the junction number to a rather small value, and even a single intrinsic
Josephson junction can be produced. Comment: to be published in Jpn. J. Appl. Phys., 43(7A) 200
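As a rough illustration of why a slow, well-calibrated etching rate matters, the junction number in such a mesa scales as the etched depth divided by the c-axis spacing of one intrinsic junction (about 1.5 nm in Bi-2212-like materials); the rate and time values below are made-up numbers, not the paper's parameters:

```python
# Toy estimate of the junction number controlled by ion milling.
# Assumption: one intrinsic Josephson junction per c-axis unit cell
# of ~1.5 nm (Bi-2212-like stack); rate/time values are illustrative.
CELL_NM = 1.5

def junction_count(etch_rate_nm_per_s: float, etch_time_s: float) -> float:
    """Number of junctions etched for a given milling rate and time."""
    return etch_rate_nm_per_s * etch_time_s / CELL_NM

# A slow milling rate makes the count insensitive to small timing errors:
print(junction_count(0.05, 300))  # -> 10.0
```

Halving the rate doubles the time budget per junction, which is the sense in which reduced etching rates give finer control.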
Confirming the 115.5-day periodicity in the X-ray light curve of ULX NGC 5408 X-1
The Swift/XRT light curve of the ultraluminous X-ray (ULX) source NGC 5408
X-1 was re-analyzed with two new numerical approaches, the Weighted Wavelet
Z-transform (WWZ) and CLEANest, that are different from those used in previous
studies. Both techniques detected a prominent periodicity on a time scale of
~115.5 days, in excellent agreement with the detection of the same
periodicity first reported by Strohmayer (2009). Monte Carlo simulations were
employed to test the statistical confidence of the 115.5-day periodicity,
yielding a statistical significance of (or ). The robust
detection of the 115.5-day quasi-periodic oscillations (QPOs), if due to
the orbital motion of the binary, would imply a mass of a few thousand
solar masses for the central black hole, implying an intermediate-mass black
hole in NGC 5408 X-1. Comment: 6 pages, 2 figures, submitted to Research in
Astronomy and Astrophysics (RAA)
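The period search plus Monte Carlo significance test can be sketched on synthetic data. This is a minimal stand-in for WWZ/CLEANest, using a simple DFT-style periodogram for unevenly sampled data and a permutation test against a white-noise null; the sampling cadence, amplitude, and noise level are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, unevenly sampled light curve with a 115.5-day modulation
t = np.sort(rng.uniform(0, 1500, 300))
flux = np.sin(2 * np.pi * t / 115.5) + 0.5 * rng.standard_normal(t.size)

def periodogram(t, x, periods):
    """DFT-style power at trial periods for unevenly sampled data."""
    x = x - x.mean()
    power = np.empty(periods.size)
    for i, p in enumerate(periods):
        phase = 2 * np.pi * t / p
        power[i] = (np.dot(x, np.cos(phase)) ** 2
                    + np.dot(x, np.sin(phase)) ** 2) / t.size
    return power

periods = np.linspace(50, 300, 2000)
power = periodogram(t, flux, periods)
best = periods[np.argmax(power)]       # close to 115.5

# Monte Carlo: permuting fluxes destroys any coherent periodicity,
# so the fraction of permuted peaks exceeding the real one is a p-value.
peak, n_sim, exceed = power.max(), 100, 0
for _ in range(n_sim):
    exceed += periodogram(t, rng.permutation(flux), periods).max() >= peak
print(best, exceed / n_sim)
```

A real analysis would also account for red noise in the null model, which a simple permutation test does not capture.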
Secure and Effective Data Appraisal for Machine Learning
Essential for an unfettered data market is the ability to discreetly select
and evaluate training data before finalizing a transaction between the data
owner and model owner. To safeguard the privacy of both data and model, this
process involves scrutinizing the target model through Multi-Party Computation
(MPC). While prior research has posited that the MPC-based evaluation of
Transformer models is excessively resource-intensive, this paper introduces an
innovative approach that renders data selection practical. The contributions of
this study encompass three pivotal elements: (1) a groundbreaking pipeline for
confidential data selection using MPC, (2) replicating intricate
high-dimensional operations with simplified low-dimensional MLPs trained on a
limited subset of pertinent data, and (3) implementing MPC in a concurrent,
multi-phase manner. The proposed method is assessed across an array of
Transformer models and NLP/CV benchmarks. In comparison to the direct MPC-based
evaluation of the target model, our approach substantially reduces the time
required, from thousands of hours to mere tens of hours, with only a nominal
0.20% dip in accuracy when training with the selected data.
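The MPC primitive underlying such private evaluation can be illustrated with additive secret sharing, where each party holds a random-looking share and linear operations are computed share-wise. This is a minimal two-party sketch of the building block, not the paper's protocol; the modulus is an arbitrary choice:

```python
import secrets

P = 2**61 - 1  # field-sized modulus (an arbitrary choice for this sketch)

def share(x, n=2):
    """Additively secret-share integer x into n shares mod P.
    Any n-1 shares are uniformly random and reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 42, 100
sa, sb = share(a), share(b)
# Each party adds its own shares locally -- no communication, no leakage
sc = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(sc))  # -> 142
```

Additions are free in this scheme; multiplications (as in Transformer layers) require interaction between parties, which is what makes MPC evaluation of large models expensive and motivates the low-dimensional MLP proxies above.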
SiMaN: Sign-to-Magnitude Network Binarization
Binary neural networks (BNNs) have attracted broad research interest due to
their efficient storage and computational ability. Nevertheless, a significant
challenge of BNNs lies in handling discrete constraints while ensuring bit
entropy maximization, which typically makes their weight optimization very
difficult. Existing methods relax the learning using the sign function, which
simply encodes positive weights as +1 and negative weights as -1.
Alternatively, to solve this challenge, we formulate an angle alignment
objective that constrains the weight binarization to {0,+1}. In this paper, we
show that our weight binarization admits an analytical solution that encodes
high-magnitude weights as +1 and the rest as 0. Therefore, a high-quality discrete solution is
established in a computationally efficient manner without the sign function. We
prove that the learned weights of binarized networks roughly follow a Laplacian
distribution that does not allow entropy maximization, and further demonstrate
that it can be effectively solved by simply removing the
regularization during network training. Our method, dubbed sign-to-magnitude
network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet,
demonstrating its superiority over sign-based state-of-the-art methods. Our source
code, experimental settings, training logs and binary models are available at
https://github.com/lmbxmu/SiMaN
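The magnitude-based encoding can be sketched in a few lines: keep the highest-magnitude half of the weights as 1 and zero the rest, which also maximizes the bit entropy of the binary codes. This is a toy illustration using a median threshold; the paper's actual analytical solution and training procedure are in the repository above:

```python
import numpy as np

def magnitude_binarize(w):
    """Toy {0,+1} binarization: encode the highest-magnitude half of
    the weights as 1 and the rest as 0. The median threshold is an
    assumption chosen so half the bits are 1 (maximum bit entropy)."""
    thresh = np.median(np.abs(w))
    return (np.abs(w) > thresh).astype(np.float32)

w = np.array([0.9, -0.05, 0.4, -1.2, 0.01, 0.3])
print(magnitude_binarize(w))  # -> [1. 0. 1. 1. 0. 0.]
```

Note how, unlike the sign function, the sign of a weight is irrelevant here; only its magnitude decides whether it survives binarization.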
GA-Par: Dependable Microservice Orchestration Framework for Geo-Distributed Clouds
Recent advances in composing Cloud applications have been driven by deployments of inter-networking heterogeneous microservices across multiple Cloud datacenters. System dependability is of the utmost importance and criticality to both service vendors and customers. Security, a measurable attribute, is increasingly regarded as the representative example of dependability. With the growing variety and dynamicity of microservices, applications are exposed to aggravated internal security threats and external environmental uncertainties. Existing work mainly focuses on the QoS-aware composition of native VM-based Cloud application components, while ignoring uncertainties and security risks among interactive and interdependent container-based microservices. Moreover, orchestrating a set of microservices across datacenters under these constraints remains computationally intractable. This paper describes GA-Par, a new dependable microservice orchestration framework that effectively selects and deploys microservices whilst reducing the discrepancy between user security requirements and actual service provision. We adopt a hybrid (both whitebox- and blackbox-based) approach to measure the satisfaction of security requirements and the environmental impact of network QoS on system dependability. Owing to the exponential growth of the solution space, we develop a parallel Genetic Algorithm framework based on Spark to accelerate the computation of optimal or near-optimal solutions. Large-scale real-world datasets are used to validate our models and orchestration approach. Experiments show that our solution outperforms the greedy-based security-aware method by 42.34 percent. GA-Par is roughly 4× faster than a Hadoop-based genetic algorithm solver, and its effectiveness is consistently maintained under different application scales.
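The genetic-algorithm core of such an orchestrator can be sketched in miniature: a chromosome assigns each microservice to a datacenter, and selection, crossover, and mutation drive down a cost function. This is a toy single-process sketch, not GA-Par's Spark-parallel implementation; the security-risk matrix, population size, and rates below are made-up values:

```python
import random

random.seed(1)

# Toy setting: place 6 microservices onto 3 datacenters, minimizing a
# made-up per-placement security-risk cost (an assumption for this sketch).
N_SERVICES, N_DCS = 6, 3
risk = [[random.random() for _ in range(N_DCS)] for _ in range(N_SERVICES)]

def cost(plan):
    # plan[i] = datacenter index chosen for microservice i
    return sum(risk[i][dc] for i, dc in enumerate(plan))

def evolve(pop_size=30, gens=40):
    pop = [[random.randrange(N_DCS) for _ in range(N_SERVICES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_SERVICES)
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.2:        # mutation
                child[random.randrange(N_SERVICES)] = random.randrange(N_DCS)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
optimal = sum(min(r) for r in risk)  # lower bound (placements independent here)
print(cost(best), optimal)
```

In GA-Par the fitness evaluation is the expensive part, which is why distributing it over Spark partitions pays off; this sketch keeps everything in one process.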