198 research outputs found
TBFV-M : Testing-Based Formal Verification for SysML Activity Diagrams
SysML activity diagrams are often used as models for software systems, and their correctness is likely to significantly affect the reliability of the implementation. However, effectively verifying the correctness of SysML activity diagrams remains a challenge. In this paper, we propose TBFV-M, a testing-based formal verification (TBFV) approach for SysML activity diagrams, obtained by creatively adapting the existing TBFV approach for code verification. We describe the principle of TBFV-M and present a case study to demonstrate its feasibility and usability. Finally, we conclude the paper and point out future research directions.
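As a rough illustration of the path-based idea behind testing-based formal verification, the following Python sketch enumerates execution paths of an activity diagram modeled as a directed graph and emits one test obligation per path. The diagram encoding and node names are illustrative assumptions, not the paper's notation.

def enumerate_paths(graph, start, end, path=None):
    """Depth-first enumeration of all simple paths from start to end."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for succ in graph.get(start, []):
        if succ not in path:  # keep paths simple (no node revisits)
            yield from enumerate_paths(graph, succ, end, path)

# Toy activity diagram: initial -> decision -> (approve | reject) -> final
activity = {
    "initial": ["decision"],
    "decision": ["approve", "reject"],
    "approve": ["final"],
    "reject": ["final"],
}

for p in enumerate_paths(activity, "initial", "final"):
    # Each path becomes a test obligation: drive the implementation along
    # the path and check the pre/post condition derived for that path.
    print("test path:", " -> ".join(p))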
An iterative data-driven turbulence modeling framework based on Reynolds stress representation
Data-driven turbulence modeling studies have reached a stage where the fundamental framework is basically settled, but several essential issues that strongly affect performance remain, including accuracy, smoothness, and generalization capacity. Two problems are studied in the current research: (1) the processing of the Reynolds stress tensor and (2) the coupling method between the machine learning turbulence model and the CFD solver. The first determines the form of the prediction targets and the resulting physical completeness and interpretability. The second determines the training process and the intrinsic relevance between the mean flow features and the Reynolds stress. For the Reynolds stress processing issue, we perform a theoretical derivation to extend the relevant tensor arguments of the Reynolds stress beyond the strain rate and rotation rate. The tensor representation theorem is then employed to give the complete irreducible invariants and integrity basis. In addition, an adaptive regularization term is employed to enhance the representation performance. For the CFD coupling issue, an iteratively coupled data-driven turbulence modeling framework with consistent convergence is proposed. The training data preparation, prediction target selection, and computation platform are illustrated. The framework is then applied to a canonical separated flow for verification. The mean flow results obtained by the coupled computation of the trained machine learning model and the CFD solver agree closely with the DNS ground truth, which demonstrates the validity of the current approach.
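For concreteness, the following Python sketch builds the first few integrity-basis tensors and invariants from the normalized strain-rate tensor S and rotation-rate tensor R, following Pope's classic two-tensor expansion. The paper extends the tensor argument list beyond S and R; this sketch covers only the classic case for illustration.

import numpy as np

def basis_and_invariants(S, R):
    I = np.eye(3)
    S2, R2 = S @ S, R @ R
    T = [
        S,                                   # T1
        S @ R - R @ S,                       # T2
        S2 - np.trace(S2) / 3.0 * I,         # T3 (deviatoric S^2)
        R2 - np.trace(R2) / 3.0 * I,         # T4 (deviatoric R^2)
    ]
    lam = [
        np.trace(S2),        # lambda_1
        np.trace(R2),        # lambda_2
        np.trace(S2 @ S),    # lambda_3
        np.trace(R2 @ S),    # lambda_4
        np.trace(R2 @ S2),   # lambda_5
    ]
    return T, lam

The Reynolds stress anisotropy is then represented as b = sum_n g_n(lambda) T_n, where a machine learning model predicts the scalar coefficients g_n from the invariants.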
Enabling Full-Stack Quantum Computing with Changeable Error-Corrected Qubits
Executing quantum applications with quantum error correction (QEC) faces the gate non-universality problem imposed by the Eastin-Knill theorem. As one resource-time-efficient solution, code switching changes the encoding of logical qubits to implement universal logical gates. Unfortunately, it is still unclear how to perform full-stack fault-tolerant quantum computing (FTQC) based on such changeable logical qubits. Specifically, three critical problems remain unsolved: a) how to implement the dynamic logical qubit on hardware; b) how to determine the appropriate timing for switching the logical qubit; c) how to improve overall system performance for programs with different features. To overcome these design problems, we propose CECQ, which explores the large design space for FTQC based on changeable logical qubits. Experiments on various quantum programs demonstrate the effectiveness of CECQ.
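A minimal sketch of the switch-timing question, under illustrative assumptions: walk the logical circuit and switch encodings only when the next gate is not transversal in the current code. The gate sets below (a Steane-style code for Cliffords, a Reed-Muller-style code for T) and the greedy policy are simplifications; CECQ's actual policy and cost model are more involved.

TRANSVERSAL = {
    "steane": {"H", "S", "CNOT"},        # Clifford-friendly code
    "reed_muller": {"T", "CNOT"},        # T-friendly code
}

def schedule_switches(circuit, start_code="steane"):
    code, ops = start_code, []
    for gate in circuit:
        if gate not in TRANSVERSAL[code]:
            # Switch to any code that implements the gate transversally.
            code = next(c for c, g in TRANSVERSAL.items() if gate in g)
            ops.append(("SWITCH", code))
        ops.append(("APPLY", gate))
    return ops

for op in schedule_switches(["H", "CNOT", "T", "T", "H"]):
    print(op)  # consecutive T gates amortize one switch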
Optimal Synthesis of Stabilizer Codes via MaxSAT
Quantum Error Correction (QEC) codes are crucial for achieving fault-tolerant quantum computing in the long term. However, efficiently implementing these codes on hardware poses significant challenges, including hardware connectivity matching, efficient circuit scheduling, and fault-tolerance enforcement. In this study, we present an optimal synthesizer that stitches generic stabilizer codes onto diverse hardware structures via MaxSAT. Our evaluation demonstrates (1) that our approach applies to various codes and devices and (2) that it is consistently more efficient than the best prior heuristic approaches, which target only specific QEC codes. By bridging the gap between high-level QEC code design and low-level hardware constraints, this work paves the way toward long-term fault-tolerant quantum computing.
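To make the MaxSAT flavor of such a synthesis problem concrete, here is a toy placement encoding using the real python-sat package (pip install python-sat). Hard clauses force a valid placement of two data qubits onto three device sites; soft clauses prefer an adjacent pair so their stabilizer check is cheap. The full synthesizer also encodes scheduling and fault-tolerance constraints, which this toy omits.

from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Variable x[q, s]: qubit q sits at site s. Sites 0 and 1 are adjacent.
var = {(q, s): 1 + q * 3 + s for q in range(2) for s in range(3)}

wcnf = WCNF()
for q in range(2):
    wcnf.append([var[q, s] for s in range(3)])      # q gets some site...
    for s in range(3):
        for t in range(s + 1, 3):
            wcnf.append([-var[q, s], -var[q, t]])   # ...but only one
for s in range(3):
    wcnf.append([-var[0, s], -var[1, s]])           # no site sharing

# Soft: prefer qubit 0 at site 0 and qubit 1 at site 1 (an adjacent pair).
wcnf.append([var[0, 0]], weight=1)
wcnf.append([var[1, 1]], weight=1)

with RC2(wcnf) as solver:
    model = solver.compute()
    placement = {q: s for (q, s), v in var.items() if v in model}
    print("placement:", placement)  # e.g., {0: 0, 1: 1}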
Recurrent Contour-based Instance Segmentation with Progressive Learning
Contour-based instance segmentation has been actively studied, thanks to its flexibility and elegance in processing visual objects within complex backgrounds. In this work, we propose a novel deep network architecture, PolySnake, for contour-based instance segmentation. Motivated by the classic Snake algorithm, the proposed PolySnake achieves superior and robust segmentation performance via an iterative and progressive contour refinement strategy. Technically, PolySnake introduces a recurrent update operator to estimate the object contour iteratively. It maintains a single estimate of the contour that is progressively deformed toward the object boundary. At each iteration, PolySnake builds a semantically rich representation of the current contour and feeds it to the recurrent operator for further adjustment. Through these iterative refinements, the contour progressively converges to a stable state that tightly encloses the object instance. Moreover, the compact design of the recurrent architecture ensures running efficiency across multiple iterations. Extensive experiments validate the merits of our method, and the results demonstrate that the proposed PolySnake outperforms existing contour-based instance segmentation methods on several prevalent instance segmentation benchmarks. The code and models are available at https://github.com/fh2019ustc/PolySnake
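A minimal Python sketch of the recurrent refinement loop: keep a single contour estimate and repeatedly deform it by predicted per-vertex offsets. Here sample_features and predict_offsets are hypothetical stand-ins for the learned feature extractor and recurrent update operator, not PolySnake's actual layers.

import numpy as np

def sample_features(feature_map, contour):
    """Toy sampler: nearest-pixel features per contour vertex."""
    h, w, _ = feature_map.shape
    ij = np.clip(np.round(contour).astype(int), 0, [w - 1, h - 1])
    return feature_map[ij[:, 1], ij[:, 0]]            # (N, C)

def predict_offsets(feats, contour):
    """Toy 'update operator': shrink toward the contour centroid."""
    return 0.1 * (contour.mean(axis=0) - contour)     # (N, 2)

def refine_contour(contour, feature_map, n_iters=6):
    for _ in range(n_iters):
        feats = sample_features(feature_map, contour)  # per-vertex features
        contour = contour + predict_offsets(feats, contour)  # one running estimate
    return contour

init = np.array([[10.0, 10.0], [50.0, 10.0], [50.0, 50.0], [10.0, 50.0]])
print(refine_contour(init, np.zeros((64, 64, 8))))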
Masked Collaborative Contrast for Weakly Supervised Semantic Segmentation
This study introduces an efficacious approach, Masked Collaborative Contrast (MCC), to emphasize semantic regions in weakly supervised semantic segmentation. MCC adroitly incorporates concepts from masked image modeling and contrastive learning to devise Transformer blocks that induce keys to contract toward semantically pertinent regions. Unlike prevalent techniques that directly erase patch regions in the input image when generating masks, we scrutinize the neighborhood relations of patch tokens by deriving masks from the keys' affinity matrix. Moreover, we generate positive and negative samples for contrastive learning by utilizing the masked local output and contrasting it with the global output. Elaborate experiments on commonly employed datasets demonstrate that the proposed MCC mechanism effectively aligns global and local perspectives within the image, attaining impressive performance. The source code is available at https://github.com/fwu11/MCC
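A minimal Python sketch of the masking idea: instead of erasing input patches, derive a token mask from the key affinity matrix, then compare the masked (local) representation against the unmasked (global) one. The pooling and similarity choices here are illustrative assumptions, not MCC's exact formulation; in practice, negatives would come from other images in the batch.

import numpy as np

rng = np.random.default_rng(0)
keys = rng.normal(size=(16, 64))                  # 16 patch tokens, dim 64

affinity = keys @ keys.T                          # token-token affinity
np.fill_diagonal(affinity, -np.inf)               # ignore self-affinity
# Build each token's mask from the affinity matrix: keep top-8 neighbors.
keep = affinity >= np.sort(affinity, axis=1)[:, [-8]]

global_repr = keys.mean(axis=0)                   # global pooled output
local_repr = (keys[None] * keep[..., None]).sum(1) / keep.sum(1, keepdims=True)

# Contrastive signal: each masked-local view should match the global view.
sim = local_repr @ global_repr / (
    np.linalg.norm(local_repr, axis=1) * np.linalg.norm(global_repr) + 1e-8)
print("local-global cosine similarities:", np.round(sim, 3))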
Compilation for Quantum Computing on Chiplets
Chiplet architecture is an emerging architecture for quantum computing that could significantly increase qubit resources thanks to its scalability and modularity. However, as the computing scale increases, communication between qubits becomes a more severe bottleneck due to long routing distances. In this paper, we trade ancillary qubits for program concurrency by proposing a multi-entry communication highway mechanism and building a compilation framework to efficiently manage and utilize the highway resources. Our evaluation shows that this framework significantly outperforms the baseline approach in both circuit depth and number of operations on typical quantum benchmarks, leading to more efficient and less error-prone compilation of quantum programs.
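A minimal sketch of the multi-entry idea, under illustrative assumptions: a cross-chiplet gate attaches to the communication highway at the nearest free entry point instead of routing to a single fixed endpoint. Coordinates, costs, and the notion of "entry" below are assumptions for illustration, not the paper's exact model.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def pick_entry(qubit_pos, entries, busy):
    """Choose the closest highway entry point that is not already in use."""
    free = [e for e in entries if e not in busy]
    return min(free, key=lambda e: manhattan(qubit_pos, e))

entries = [(0, 4), (4, 4), (8, 4)]        # three entries along the highway
busy = {(4, 4)}                           # middle entry held by another gate
print(pick_entry((5, 1), entries, busy))  # -> (8, 4), the nearest free entry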
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Deep Neural Networks (DNNs) are widely used for computer vision tasks. However, deep models have been shown to be vulnerable to adversarial attacks: their performance drops when imperceptible perturbations are made to the original inputs, which may further degrade downstream visual tasks or introduce new problems such as data and privacy security. Hence, metrics for evaluating the robustness of deep models against adversarial attacks are desired. However, previous metrics are mainly proposed for evaluating the adversarial robustness of shallow networks on small-scale datasets. Although the Cross Lipschitz Extreme Value for nEtwork Robustness (CLEVER) metric has been proposed for large-scale datasets (e.g., the ImageNet dataset), it is computationally expensive and its performance relies on a tractable number of samples. In this paper, we propose the Adversarial Converging Time Score (ACTS), an attack-dependent metric that quantifies the adversarial robustness of a DNN on a specific input. Our key observation is that local neighborhoods on a DNN's output surface have different shapes for different inputs; hence, different inputs require different amounts of time to converge to an adversarial sample. Based on this geometric insight, ACTS measures the converging time as an adversarial robustness metric. We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset using state-of-the-art deep networks. Extensive experiments show that our ACTS metric is an efficient and effective adversarial robustness metric compared with the previous CLEVER metric.
Comment: ACM Transactions on Multimedia Computing, Communications, and Applications (ACM TOMM)
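A minimal Python sketch of the converging-time idea: run an iterative attack and report how many steps it takes to flip the model's decision; fewer steps suggest lower robustness at that input. The linear model and sign-gradient step below are illustrative stand-ins for a DNN and a real attack, not the paper's exact procedure.

import numpy as np

def converging_time(model_w, x, step=0.01, max_steps=1000):
    y0 = np.sign(model_w @ x)             # original decision
    x_adv = x.copy()
    for t in range(1, max_steps + 1):
        x_adv -= step * y0 * np.sign(model_w)   # push across the boundary
        if np.sign(model_w @ x_adv) != y0:
            return t                      # steps needed = converging time
    return max_steps                      # did not converge within budget

rng = np.random.default_rng(0)
w = rng.normal(size=16)
x_far = rng.normal(size=16) * 3.0         # far from the decision boundary
x_near = rng.normal(size=16) * 0.1        # near the decision boundary
print("far input steps: ", converging_time(w, x_far))
print("near input steps:", converging_time(w, x_near))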
Biologically Inspired Dynamic Thresholds for Spiking Neural Networks
The dynamic membrane potential threshold, one of the essential properties of a biological neuron, is a spontaneous regulation mechanism that maintains neuronal homeostasis, i.e., a constant overall spiking firing rate. The neuron firing rate is thus regulated by a dynamic spiking threshold, which has been extensively studied in biology, yet existing work in the machine learning community does not employ bioinspired spiking threshold schemes. This work aims to bridge this gap by introducing a novel bioinspired dynamic energy-temporal threshold (BDETT) scheme for spiking neural networks (SNNs). The proposed BDETT scheme mirrors two bioplausible observations: a dynamic threshold has 1) a positive correlation with the average membrane potential and 2) a negative correlation with the preceding rate of depolarization. We validate the effectiveness of the proposed BDETT on robot obstacle avoidance and continuous control tasks under both normal conditions and various degraded conditions, including noisy observations, noisy weights, and dynamic environments. We find that BDETT outperforms existing static and heuristic threshold approaches by significant margins in all tested conditions, and we confirm that the proposed bioinspired dynamic threshold scheme offers homeostasis to SNNs in complex real-world tasks.
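A minimal Python sketch of a leaky integrate-and-fire neuron whose threshold follows the two cited correlations: it rises with the recent average membrane potential and falls with the preceding rate of depolarization. The functional form and coefficients (a, b, win) are illustrative assumptions; BDETT's energy-temporal formulation is more specific.

import numpy as np

def lif_dynamic_threshold(inputs, tau=0.9, theta0=1.0, a=0.3, b=0.2, win=5):
    v, spikes, v_hist = 0.0, [], []
    for x in inputs:
        v_prev = v
        v = tau * v + x                       # leaky integration
        v_hist.append(v)
        mean_v = np.mean(v_hist[-win:])       # recent average potential
        depol = v - v_prev                    # preceding depolarization rate
        theta = theta0 + a * mean_v - b * depol   # dynamic threshold
        if v >= theta:
            spikes.append(1)
            v = 0.0                           # reset after spiking
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(1)
print(lif_dynamic_threshold(rng.uniform(0.0, 0.8, size=20)))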