Fast and Accurate Reduced-Order Modeling of a MOOSE-based Additive Manufacturing Model with Operator Learning
One predominant challenge in additive manufacturing (AM) is to achieve
specific material properties by manipulating manufacturing process parameters
at runtime. Such manipulation tends to increase the computational load
imposed on existing simulation tools employed in AM. The goal of the present
work is to construct a fast and accurate reduced-order model (ROM) for an AM
model developed within the Multiphysics Object-Oriented Simulation Environment
(MOOSE) framework, ultimately reducing the time/cost of AM control and
optimization processes. Our adoption of the operator learning (OL) approach
enabled us to learn a family of differential equations produced by altering
process variables in the laser's Gaussian point heat source. More specifically,
we used the Fourier neural operator (FNO) and deep operator network (DeepONet)
to develop ROMs for time-dependent responses. Furthermore, we benchmarked the
performance of these OL methods against a conventional deep neural network
(DNN)-based ROM. Ultimately, we found that OL methods offer comparable
performance and, in terms of accuracy and generalizability, even outperform DNN
at predicting scalar model responses. The DNN-based ROM afforded the fastest
training time. Furthermore, all the ROMs were faster than the original MOOSE
model yet still provided accurate predictions. FNO had a smaller mean
prediction error than DeepONet, with a larger variance for time-dependent
responses. Unlike DNN, both FNO and DeepONet were able to simulate time series
data without the need for dimensionality reduction techniques. The present work
can help facilitate the AM optimization process by enabling faster execution of
simulation tools while still preserving evaluation accuracy.
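As a loose illustration of the operator-learning setup described above, the sketch below builds a minimal DeepONet-style surrogate in PyTorch: a branch network encodes the heat-source process parameters and a trunk network encodes query times, with their inner product giving a time-dependent response. The parameter names, dimensions, and three-parameter input are assumptions for illustration, not the authors' MOOSE-coupled implementation.

```python
# Minimal DeepONet-style surrogate (illustrative sketch, not the authors' code).
# Branch net encodes process parameters of the Gaussian point heat source;
# trunk net encodes query times; their inner product gives the response.
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    def __init__(self, n_params=3, latent=64):
        super().__init__()
        self.branch = mlp([n_params, 128, 128, latent])  # process parameters
        self.trunk = mlp([1, 128, 128, latent])          # query time t
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, params, t):
        # params: (batch, n_params); t: (batch, n_times, 1)
        b = self.branch(params)                 # (batch, latent)
        k = self.trunk(t)                       # (batch, n_times, latent)
        return torch.einsum("bl,btl->bt", b, k) + self.bias  # (batch, n_times)

model = DeepONet()
params = torch.rand(8, 3)   # e.g., laser power, scan speed, spot radius (assumed)
t = torch.linspace(0, 1, 50).view(1, 50, 1).expand(8, 50, 1)
pred = model(params, t)     # time-dependent response for each parameter set
print(pred.shape)           # torch.Size([8, 50])
```

An FNO-based ROM would instead learn a mapping between discretized functions using spectral convolutions; the same training data could drive either surrogate.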
RayMVSNet++: Learning Ray-based 1D Implicit Fields for Accurate Multi-View Stereo
Learning-based multi-view stereo (MVS) has thus far centered on 3D
convolution on cost volumes. Due to the high computation and memory consumption
of 3D CNN, the resolution of output depth is often considerably limited.
Different from most existing works dedicated to adaptive refinement of cost
volumes, we opt to directly optimize the depth value along each camera ray,
mimicking the range finding of a laser scanner. This reduces the MVS problem to
ray-based depth optimization, which is much more lightweight than full cost
volume optimization. In particular, we propose RayMVSNet which learns
sequential prediction of a 1D implicit field along each camera ray with the
zero-crossing point indicating scene depth. This sequential modeling, conducted
based on transformer features, essentially learns the epipolar line search in
traditional multi-view stereo. We devise a multi-task learning scheme for better
optimization convergence and depth accuracy. We found the monotonicity property
of the SDFs along each ray greatly benefits the depth estimation. Our method
ranks top on both the DTU and the Tanks & Temples datasets over all previous
learning-based methods, achieving an overall reconstruction score of 0.33mm on
DTU and an F-score of 59.48% on Tanks & Temples. It is able to produce
high-quality depth estimation and point cloud reconstruction in challenging
scenarios such as objects/scenes with non-textured surface, severe occlusion,
and highly varying depth range. Further, we propose RayMVSNet++ to enhance
contextual feature aggregation for each ray by designing an attentional
gating unit to select semantically relevant neighboring rays within the local
frustum around that ray. RayMVSNet++ achieves state-of-the-art performance on
the ScanNet dataset. In particular, it attains an AbsRel of 0.058m and produces
accurate results on the two subsets of textureless regions and large depth
variation.
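The core depth read-out described above, taking the zero-crossing of a 1D implicit field along a camera ray as the scene depth, can be sketched as below; the sampled field values here are placeholders, not RayMVSNet's learned predictions.

```python
# Locate the zero-crossing of a 1D implicit field sampled along a camera ray
# and linearly interpolate the depth (illustrative sketch; the field values
# are placeholders, not RayMVSNet outputs).
import torch

def zero_crossing_depth(depths, sdf):
    # depths: (n_samples,) monotonically increasing depth hypotheses along the ray
    # sdf:    (n_samples,) signed field values; positive before the surface, non-positive after
    sign_change = (sdf[:-1] > 0) & (sdf[1:] <= 0)
    idx = torch.nonzero(sign_change, as_tuple=False)
    if idx.numel() == 0:
        return None  # no surface hit along this ray
    i = idx[0, 0]
    # linear interpolation between the two bracketing samples
    w = sdf[i] / (sdf[i] - sdf[i + 1])
    return depths[i] + w * (depths[i + 1] - depths[i])

depths = torch.linspace(0.5, 2.0, 16)
sdf = 1.2 - depths                        # toy monotone field with a zero at depth 1.2
print(zero_crossing_depth(depths, sdf))   # ~1.2
```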
Connectivity Contribution to Urban Hub Network Based on Super Network Theory – Case Study of Beijing
With the rapid urbanization of China, the number of travel modes and urban passenger transportation hubs has been increasing, gradually forming multi-level, multi-attribute transport hub networks in cities. Super Network Theory (SNT) is well suited to representing such multi-layer transport hubs. The aim of this paper is to provide a new perspective for studying the connectivity contribution of potential hubs: urban transport hubs are ranked by topological features of a Hub Super Network (HSN). The paper proposes two Super-Edge (SE)-based indexes, the Zero Hub Degree of an SE (ZHDoSE) and the Number of Shared SEs (NSSE). A case study was then conducted in Beijing, considering four combinations to examine the influence of transport modes and subway lines on connectivity. The results show that, without normalization, the contributions of transport modes and subway lines to connectivity are both strengthened, with transport modes contributing the most; under element normalization with the ZHDoSE reciprocal, the contribution of subway lines is strengthened instead. In addition, different weightings of ZHDoSE and NSSE lead to different recognition results for SEs in the HSN.
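As a rough illustration of the super-edge bookkeeping behind these indexes (the abstract does not give the formal definitions of ZHDoSE and NSSE, so the interpretation below is an assumption), a hub super network can be modeled as a collection of super-edges, each a set of hubs, from which hub degrees and shared-super-edge counts are read off:

```python
# Toy hub super network: each super-edge is a frozenset of hubs (e.g., a subway
# line or transport-mode grouping). The counts below illustrate the kind of
# shared-super-edge bookkeeping an NSSE-style index relies on; the paper's exact
# definitions of ZHDoSE/NSSE may differ (this is an assumed interpretation).
from itertools import combinations

super_edges = [
    frozenset({"HubA", "HubB", "HubC"}),   # hypothetical subway line 1
    frozenset({"HubB", "HubC", "HubD"}),   # hypothetical subway line 2
    frozenset({"HubA", "HubD"}),           # hypothetical bus corridor
]

def hub_degree(hub):
    """Number of super-edges containing a hub."""
    return sum(hub in se for se in super_edges)

def shared_super_edges(h1, h2):
    """Number of super-edges containing both hubs (assumed NSSE-style count)."""
    return sum(h1 in se and h2 in se for se in super_edges)

hubs = sorted(set().union(*super_edges))
for h in hubs:
    print(h, "degree:", hub_degree(h))
for h1, h2 in combinations(hubs, 2):
    print(h1, h2, "shared super-edges:", shared_super_edges(h1, h2))
```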
POWER SAVING METHOD FOR PLUGGABLE OPTICAL TRANSCEIVERS
Today, when optical transceivers are plugged into the ports of network equipment and a user administratively shuts down a specific port, the equipment does not remove power from the transceiver; instead, only the laser is shut down. Shutting down an individual port therefore reduces the transceiver's power consumption only marginally, and significant power is still consumed. Techniques described herein provide for switching products that are always ready, not always on: ports remain ready to be powered up but are powered down when not in use to save power.
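The policy can be pictured as a small port power-state model, sketched below; the class and method names are hypothetical and do not correspond to any vendor CLI or API.

```python
# Schematic model of the "always ready, not always on" policy: a port powers its
# transceiver down entirely on administrative shutdown and restores power on
# re-enable. Class and method names are hypothetical, not a vendor API.
from dataclasses import dataclass

@dataclass
class TransceiverPort:
    name: str
    admin_up: bool = True
    transceiver_powered: bool = True
    laser_on: bool = True

    def admin_shutdown(self):
        # Conventional behavior only turns the laser off; here the whole
        # transceiver is powered down to save the remaining power draw.
        self.admin_up = False
        self.laser_on = False
        self.transceiver_powered = False

    def admin_enable(self):
        # Power is restored first so the port is "ready", then the laser comes up.
        self.admin_up = True
        self.transceiver_powered = True
        self.laser_on = True

port = TransceiverPort("Ethernet1/1")
port.admin_shutdown()
print(port)   # powered down, not just laser off
port.admin_enable()
print(port)   # fully powered again
```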
Additional Positive Enables Better Representation Learning for Medical Images
This paper presents a new way to identify additional positive pairs for BYOL,
a state-of-the-art (SOTA) self-supervised learning framework, to improve its
representation learning ability. Unlike conventional BYOL, which relies on only
one positive pair generated by two augmented views of the same image, we argue
that information from different images with the same label can bring more
diversity and variations to the target features, thus benefiting representation
learning. To identify such pairs without any label, we investigate TracIn, an
instance-based and computationally efficient influence function, for BYOL
training. Specifically, TracIn is a gradient-based method that reveals the
impact of a training sample on a test sample in supervised learning. We extend
it to the self-supervised learning setting and propose an efficient batch-wise
per-sample gradient computation method to estimate the pairwise TracIn to
represent the similarity of samples in the mini-batch during training. For each
image, we select the most similar sample from other images as the additional
positive and pull their features together with BYOL loss. Experimental results
on two public medical datasets (i.e., ISIC 2019 and ChestX-ray) demonstrate
that the proposed method can improve the classification performance compared to
other competitive baselines in both semi-supervised and transfer learning
settings.
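The pair-selection step can be sketched as follows, assuming per-sample gradient vectors for the mini-batch have already been computed (e.g., for a small head only, as a simplification): pairwise TracIn is approximated by gradient dot products, the diagonal is masked out, and each sample's most similar other sample becomes its additional positive, pulled in with the standard BYOL regression loss. The names and shapes are illustrative, not the paper's exact procedure.

```python
# Select an additional positive per sample from TracIn-style pairwise
# similarities (gradient dot products within the mini-batch). The per-sample
# gradients are assumed to be precomputed; this is an illustrative
# simplification of the paper's procedure.
import torch

def select_additional_positives(per_sample_grads):
    # per_sample_grads: (batch, n_params) flattened per-sample gradients
    sim = per_sample_grads @ per_sample_grads.t()   # pairwise TracIn estimate
    sim.fill_diagonal_(float("-inf"))               # exclude the sample itself
    return sim.argmax(dim=1)                        # index of the extra positive

def byol_loss(p, z):
    # Standard BYOL regression loss between predictions p and targets z.
    p = torch.nn.functional.normalize(p, dim=-1)
    z = torch.nn.functional.normalize(z, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

grads = torch.randn(16, 128)              # placeholder per-sample gradients
pos_idx = select_additional_positives(grads)

preds = torch.randn(16, 256)              # online-network predictions
targets = torch.randn(16, 256)            # target-network projections (stop-grad)
extra_loss = byol_loss(preds, targets[pos_idx])  # pull extra positives together
```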
Swainsonine Activates Mitochondria-mediated Apoptotic Pathway in Human Lung Cancer A549 Cells and Retards the Growth of Lung Cancer Xenografts
Swainsonine (1,2,8-trihydroxyindolizidine, SW), a natural alkaloid, has been reported to exhibit anti-cancer activity in several mouse models of human cancer and in human cancers in vivo. However, the mechanisms of SW-mediated tumor regression are not clear. In this study, we investigated the effects of SW on several human lung cancer cell lines in vitro. The results showed that SW significantly inhibited the growth of these cells, to different extents, through the induction of apoptosis. Further studies showed that SW treatment up-regulated Bax, down-regulated Bcl-2 expression, promoted Bax translocation to mitochondria, and activated the mitochondria-mediated apoptotic pathway, which in turn caused the release of cytochrome c, the activation of caspase-9 and caspase-3, and the cleavage of poly(ADP-ribose) polymerase (PARP), resulting in A549 cell apoptosis. However, the expression of Fas and Fas ligand (FasL) and the activity of caspase-8 did not change significantly during SW-induced apoptosis. Moreover, SW treatment inhibited Bcl-2 expression and promoted Bax translocation, cytochrome c release, and caspase-3 activity in xenograft tumor cells, resulting in a significant decrease in tumor volume and tumor weight in the SW-treated xenograft mouse groups compared with the control group. Taken together, this study demonstrated for the first time that SW inhibits the growth of A549 cancer cells through a mitochondria-mediated, caspase-dependent apoptotic pathway in vitro and in vivo.
Learning to Skip for Language Modeling
Overparameterized large-scale language models show impressive generalization
performance in in-context few-shot learning. However, most language models
allocate the same parameters and amount of computation to each token,
disregarding the complexity or importance of the input data. We argue that in
language model pretraining, a variable amount of computation should be assigned
to different tokens, and this can be efficiently achieved via a simple routing
mechanism. Unlike conventional early-stopping techniques, in which tokens
can exit only at early layers, we propose a more general method that
dynamically skips the execution of a layer (or module) for any input token with
a binary router. In our extensive evaluation across 24 NLP tasks, we
demonstrate that the proposed method can significantly improve the 1-shot
performance compared to other competitive baselines at only a mild extra cost
for inference.
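A minimal sketch of per-token layer skipping with a binary router is given below; the sigmoid gate with a straight-through estimator and the wrapped feed-forward block are common choices assumed for illustration, not necessarily the paper's exact design.

```python
# Per-token layer skipping with a binary router (illustrative sketch).
# A hard gate decides, per token, whether to run the wrapped layer; a
# straight-through estimator keeps the router trainable. This is an assumed
# construction, not the paper's exact design.
import torch
import torch.nn as nn

class SkipRouter(nn.Module):
    def __init__(self, layer, d_model):
        super().__init__()
        self.layer = layer
        self.router = nn.Linear(d_model, 1)

    def forward(self, x):
        # x: (batch, seq, d_model)
        p = torch.sigmoid(self.router(x))               # (batch, seq, 1), keep-probability
        hard = (p > 0.5).float()
        gate = hard + p - p.detach()                    # straight-through estimator
        return gate * self.layer(x) + (1.0 - gate) * x  # skipped tokens pass through unchanged

d_model = 64
block = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                      nn.Linear(4 * d_model, d_model))
skip_block = SkipRouter(block, d_model)
tokens = torch.randn(2, 10, d_model)
print(skip_block(tokens).shape)  # torch.Size([2, 10, 64])
```

To actually save compute at inference, one would gather only the kept tokens before running the wrapped layer; the sketch runs the layer densely for clarity.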
INTERCONNECT STRUCTURE TO IMPROVE CHIP SIGNAL INTEGRITY AND MECHANICAL RELIABILITY
Techniques are presented herein that support a novel chip interconnect structure, encompassing convex- and concave-shaped copper joint pillars, for connecting a chip (that follows the Optical Internetworking Forum (OIF) next generation (NG) common electrical input/output (CEI)-224 gigabit per second (G) framework) to a printed circuit board (PCB). Aspects of the presented techniques provide excellent signal integrity (SI) performance (including return loss, insertion loss, and impedance discontinuity) in support of, for example, a 102.4 terabit (T) per second switch comprising, among other things, an application-specific integrated circuit (ASIC) having 512 lanes of 224G Serializer/Deserializer (SerDes) capacity. Under further aspects of the techniques, mechanical performance and long-term reliability are significantly improved.
GeoTransformer: Fast and Robust Point Cloud Registration with Geometric Transformer
We study the problem of extracting accurate correspondences for point cloud
registration. Recent keypoint-free methods have shown great potential by
bypassing the detection of repeatable keypoints, which is difficult,
especially in low-overlap scenarios. They seek correspondences over downsampled
superpoints, which are then propagated to dense points. Superpoints are matched
based on whether their neighboring patches overlap. Such sparse and loose
matching requires contextual features capturing the geometric structure of the
point clouds. We propose Geometric Transformer, or GeoTransformer for short, to
learn geometric features for robust superpoint matching. It encodes pair-wise
distances and triplet-wise angles, making it invariant to rigid transformation
and robust in low-overlap cases. The simplistic design attains surprisingly
high matching accuracy such that no RANSAC is required in the estimation of
alignment transformation, leading to a significant acceleration. Extensive
experiments on rich benchmarks encompassing indoor, outdoor, synthetic,
multiway, and non-rigid settings demonstrate the efficacy of GeoTransformer.
Notably, our method improves both the inlier ratio and the registration recall
by clear margins on the challenging 3DLoMatch benchmark.
Our code and models are available at
https://github.com/qinzheng93/GeoTransformer.
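One ingredient of the geometric embedding, the pair-wise distance term, can be sketched as a sinusoidal encoding of superpoint distances, which is invariant to rigid transformations of the cloud; the dimensions and scaling below are assumptions, and the triplet-wise angle term is omitted.

```python
# Sinusoidal embedding of pair-wise superpoint distances (illustrative sketch of
# one ingredient of geometric self-attention; dimensions and scaling are assumed
# and the triplet-wise angle term is omitted for brevity).
import math
import torch

def pairwise_distance_embedding(points, d_embed=64, sigma=0.2):
    # points: (n, 3) superpoint coordinates
    dist = torch.cdist(points, points)                        # (n, n) pair-wise distances
    idx = torch.arange(d_embed // 2, dtype=torch.float32)
    freq = 1.0 / (10000.0 ** (2.0 * idx / d_embed))           # transformer-style frequencies
    angles = (dist / sigma).unsqueeze(-1) * freq              # (n, n, d_embed/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (n, n, d_embed)

pts = torch.randn(32, 3)
emb = pairwise_distance_embedding(pts)

# Distances are preserved by rigid motions, so the embedding is invariant to them.
c, s = math.cos(0.7), math.sin(0.7)
R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
pts_moved = pts @ R.t() + torch.tensor([1.0, -2.0, 0.5])
print(torch.allclose(pairwise_distance_embedding(pts_moved), emb, atol=1e-3))  # True
```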