Sparse Attention-Based Neural Networks for Code Classification
Categorizing source code accurately and efficiently is a challenging problem
in real-world programming education platform management. In recent years,
model-based approaches utilizing abstract syntax trees (ASTs) have been widely
applied to code classification tasks. In this paper, we introduce the Sparse
Attention-based neural network for Code Classification (SACC).
The approach involves two main steps: In the first step, source code undergoes
syntax parsing and preprocessing. The generated abstract syntax tree is split
into sequences of subtrees and then encoded using a recursive neural network to
obtain a high-dimensional representation. This step simultaneously considers
both the logical structure and lexical-level information contained within the
code. In the second step, the encoded sequences of subtrees are fed into a
Transformer model that incorporates sparse attention mechanisms for
classification. Sparse attention reduces the computational cost of
self-attention, improving training speed while preserving effectiveness. Our
work introduces a sparse attention pattern designed specifically for code
classification tasks; this design reduces the influence of redundant
information and enhances the overall performance of the model. Finally, we
address problems in previous related research, such as incomplete
classification labels and small dataset sizes, by annotating the large-scale
CodeNet dataset with algorithm-related label categories. Extensive comparative
experimental results demonstrate the effectiveness and efficiency of SACC on
code classification tasks.
Comment: 2023 3rd International Conference on Digital Society and Intelligent Systems (DSInS 2023)
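As a rough illustration of the second step, the sketch below applies a generic sparse attention mask (local window plus a few global positions) in NumPy. The actual pattern SACC uses is specific to the paper; the `window` and `n_global` parameters here are illustrative assumptions, not the authors' design.

```python
import numpy as np

def sparse_attention(q, k, v, window=4, n_global=1):
    """Scaled dot-product attention restricted by a sparse mask.

    Generic local-window + global-token pattern for a single head;
    the pattern SACC actually uses is specific to the paper, and
    `window` / `n_global` are illustrative assumptions.
    q, k, v: (seq_len, d) arrays.
    """
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)

    # Sparse mask: each position attends to a local window around
    # itself, plus a few designated "global" positions.
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, max(0, i - window):i + window + 1] = True
    mask[:, :n_global] = True   # every position attends to global tokens
    mask[:n_global, :] = True   # global tokens attend to every position

    scores = np.where(mask, scores, -np.inf)   # drop masked-out pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With a fixed window, only O(seq_len * window) score entries matter instead of O(seq_len^2), which is where the training-speed gain comes from; a production implementation would avoid materializing the full score matrix at all.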
Field-based branch prediction for packet processing engines
Network processors have exploited many aspects of architecture design, such as multiple cores, multi-threading and hardware accelerators, to support both ever-increasing line rates and the growing complexity of network applications. Micro-architectural techniques like superscalar execution, deep pipelines and speculative execution provide an excellent means of improving performance without limiting either scalability or flexibility, provided that the branch penalty is well controlled. However, it is difficult for traditional branch predictors to keep increasing accuracy by using larger tables, because branch patterns in packet processing exhibit fewer variations. To improve prediction efficiency, we propose a flow-based prediction mechanism that caches the branch histories of packets with similar header fields, since such packets normally follow the same execution path. For packets that cannot find a matching entry in the history table, a fallback gshare predictor provides the branch direction. Simulation results show that our scheme achieves an average hit rate in excess of 97.5% on a selected set of network applications and real-life packet traces, with a chip area similar to the branch prediction architectures used in modern microprocessors
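The caching idea lends itself to a compact sketch. Below is a minimal Python model of the mechanism: a per-flow history table keyed by header fields, replayed on a hit, with gshare as the fallback on a miss. Table sizes, the hashing, and the `flow_key`/`branch_index` interface are assumptions for illustration, not the paper's exact organization.

```python
class Gshare:
    """Classic gshare: global history XORed with the PC indexes a
    table of 2-bit saturating counters."""
    def __init__(self, bits=10):
        self.mask = (1 << bits) - 1
        self.counters = [1] * (1 << bits)
        self.history = 0

    def predict(self, pc):
        return self.counters[(pc ^ self.history) & self.mask] >= 2

    def update(self, pc, taken):
        i = (pc ^ self.history) & self.mask
        self.counters[i] = min(3, self.counters[i] + 1) if taken \
                           else max(0, self.counters[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask

class FlowBasedPredictor:
    """Cache the branch history (sequence of outcomes along a packet's
    execution path) per flow, keyed by selected header fields."""
    def __init__(self):
        self.flow_table = {}      # flow_key -> list of branch outcomes
        self.fallback = Gshare()

    def predict(self, flow_key, branch_index, pc):
        hist = self.flow_table.get(flow_key)
        if hist is not None and branch_index < len(hist):
            return hist[branch_index]          # hit: replay cached path
        return self.fallback.predict(pc)       # miss: gshare fallback

    def update(self, flow_key, branch_index, pc, taken):
        hist = self.flow_table.setdefault(flow_key, [])
        if branch_index == len(hist):
            hist.append(taken)                 # extend recorded history
        self.fallback.update(pc, taken)
```

Here `flow_key` would typically be a tuple of header fields such as (source address, destination address, protocol), so packets of the same flow replay the same cached branch directions.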
Computational aerodynamics and artificial intelligence
The general principles of artificial intelligence are reviewed and speculations are made concerning how knowledge based systems can accelerate the process of acquiring new knowledge in aerodynamics, how computational fluid dynamics may use expert systems, and how expert systems may speed the design and development process. In addition, the anatomy of an idealized expert system called AERODYNAMICIST is discussed. Resource requirements for using artificial intelligence in computational fluid dynamics and aerodynamics are examined. Three main conclusions are presented. First, there are two related aspects of computational aerodynamics: reasoning and calculating. Second, a substantial portion of reasoning can be achieved with artificial intelligence. It offers the opportunity of using computers as reasoning machines to set the stage for efficient calculating. Third, expert systems are likely to be new assets of institutions involved in aeronautics for various tasks of computational aerodynamics
Does OO sync with the way we think?
Given that corrective-maintenance costs already dominate the software life cycle and look set to increase significantly, reliability in the form of reducing such costs should be the most important software improvement goal. Yet the results are not promising when we review recent corrective-maintenance data for big systems in general and for OO in particular, possibly because of mismatches between the OO paradigm and how we think
An Efficient Monte Carlo-based Probabilistic Time-Dependent Routing Calculation Targeting a Server-Side Car Navigation System
Incorporating speed probability distributions into route planning computations
in car navigation systems yields more accurate and precise responses. In this
paper, we propose a novel approach for dynamically selecting the number of
samples used in the Monte Carlo simulation that solves the Probabilistic
Time-Dependent Routing (PTDR) problem, thus improving computational
efficiency. The proposed method proactively determines the number of
simulations needed to extract the travel-time estimate for each specific
request while respecting an error threshold as the output quality level. The
methodology requires little effort on the application development side. We
adopted an aspect-oriented programming language (LARA) to instrument the code
and a flexible dynamic autotuning library (mARGOt) to take tuning decisions on
the number of samples, improving execution efficiency. Experimental results
demonstrate that the proposed adaptive approach saves a large fraction of
simulations (between 36% and 81%) with respect to a static approach across
different traffic situations, paths and error requirements. Given the
negligible runtime overhead of the proposed approach, it yields an
execution-time speedup between 1.5x and 5.1x. This speedup is reflected at the
infrastructure level as a reduction of around 36% in the computing resources
needed to support the whole navigation pipeline
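A minimal sketch of the adaptive idea in Python: draw Monte Carlo samples in batches and stop once the estimated relative error of the travel-time mean falls below the requested threshold. The stopping rule shown is a standard standard-error criterion, not necessarily the paper's exact quality model, and `sample_travel_time` is a hypothetical callable.

```python
import numpy as np

def adaptive_travel_time(sample_travel_time, err_threshold=0.02,
                         batch=100, max_samples=10_000):
    """Estimate expected travel time, growing the sample count until
    the relative standard error of the mean drops below err_threshold.

    sample_travel_time: hypothetical callable returning one Monte Carlo
    draw of the path travel time (e.g., from per-segment speed
    distributions).
    """
    samples = []
    while len(samples) < max_samples:
        samples.extend(sample_travel_time() for _ in range(batch))
        arr = np.asarray(samples)
        rel_err = arr.std(ddof=1) / (np.sqrt(arr.size) * arr.mean())
        if rel_err < err_threshold:
            break                      # requested quality level reached
    return arr.mean(), arr.size

# Example with a dummy lognormal travel-time sampler.
rng = np.random.default_rng(0)
mean_t, n_used = adaptive_travel_time(lambda: rng.lognormal(3.0, 0.5))
```

Requests over low-variance routes terminate after a few batches while hard cases draw more, which is the source of the reported simulation savings relative to a fixed sample count.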
A Domain-Specific Language and Editor for Parallel Particle Methods
Domain-specific languages (DSLs) are of increasing importance in scientific
high-performance computing to reduce development costs, raise the level of
abstraction and, thus, ease scientific programming. However, designing and
implementing DSLs is not an easy task, as it requires knowledge of the
application domain and experience in language engineering and compilers.
Consequently, many DSLs follow a weak approach using macros or text generators,
which lack many of the features that make a DSL comfortable for programmers.
Some of these features (e.g., syntax highlighting, type inference, error
reporting, and code completion) are easily provided by language workbenches,
which combine language engineering techniques and tools in a common ecosystem.
In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL
and development environment for numerical simulations based on particle methods
and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS),
a projectional language workbench. PPME is the successor of the Parallel
Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional
implementation strategies. We analyze and compare both languages and
demonstrate how the programmer's experience can be improved using static
analyses and projectional editing. Furthermore, we present an explicit domain
model for particle abstractions and the first formal type system for particle
methods.
Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201
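For context, the sketch below shows the kind of computation such DSLs abstract away: a naive O(n^2) particle neighbor interaction within a cutoff radius, written in plain Python. This illustrates particle methods generally and is not PPME or PPML syntax; the function and kernel are hypothetical.

```python
import numpy as np

def particle_loop(positions, values, cutoff, kernel):
    """Naive O(n^2) neighbor interaction: accumulate kernel-weighted
    neighbor values within a cutoff radius. Real frameworks replace
    this with cell lists and distribute particles over MPI ranks,
    which is the boilerplate a particle-method DSL hides."""
    n = len(positions)
    out = np.zeros_like(values)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                out[i] += kernel(r) * values[j]
    return out

# Example: a smoothed density-like quantity with a simple hat kernel.
pos = np.random.rand(100, 3)
vals = np.ones(100)
rho = particle_loop(pos, vals, cutoff=0.2, kernel=lambda r: 1.0 - r / 0.2)
```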