Declarative Ajax Web Applications through SQL++ on a Unified Application State
Implementing even a conceptually simple web application requires an
inordinate amount of time. FORWARD addresses three problems that reduce
developer productivity: (a) Impedance mismatch across the multiple languages
used at different tiers of the application architecture. (b) Distributed data
access across the multiple data sources of the application (SQL database, user
input on the browser page, session data in the application server, etc.). (c)
Asynchronous, incremental modification of the pages, as performed by Ajax
actions.
FORWARD belongs to a novel family of web application frameworks that attack
impedance mismatch by offering a single unifying language. FORWARD's language
is SQL++, a minimally extended SQL. FORWARD's architecture is based on two
novel cornerstones: (a) A Unified Application State (UAS), which is a virtual
database over the multiple data sources. The UAS is accessed via distributed
SQL++ queries, therefore resolving the distributed data access problem. (b)
Declarative page specifications, which treat the data displayed by pages as
rendered SQL++ page queries. The resulting pages are automatically
incrementally modified by FORWARD. User input on the page becomes part of the
UAS.
We show that SQL++ captures the semi-structured nature of web pages and
subsumes the data models of two important data sources of the UAS: SQL
databases and JavaScript components. We show that simple markup is sufficient
for creating Ajax displays and for modeling user input on the page as UAS data
sources. Finally, we discuss the page specification syntax and semantics that
are needed in order to avoid race conditions and conflicts between the user
input and the automated Ajax page modifications.
FORWARD has been used in the development of eight commercial and academic
applications. An alpha-release web-based IDE (itself built in FORWARD) enables
development in the cloud.
Comment: Proceedings of the 14th International Symposium on Database Programming Languages (DBPL 2013), August 30, 2013, Riva del Garda, Trento, Italy
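The core idea of the Unified Application State is that one declarative query can range over several sources at once (database rows, session data, page input). FORWARD expresses this with distributed SQL++ queries; the sketch below is only a plain-Python analogy of that idea, not the framework's API, and the table, column, and session names are invented for illustration.

```python
import sqlite3

# Two "sources" of a hypothetical unified application state:
# a SQL database and in-memory application-server session data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (user_id INTEGER, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, "book"), (2, "pen"), (1, "lamp")])

session = {"user_id": 1}  # session state living outside the database

# A single query filters database rows by a value held in session
# state -- the kind of cross-source access a UAS query unifies.
items = [item for (item,) in db.execute(
    "SELECT item FROM orders WHERE user_id = ?", (session["user_id"],))]
assert sorted(items) == ["book", "lamp"]
```

In FORWARD the join across sources is written once in SQL++ and the system handles where each piece of data lives; here the boundary crossing is explicit in the host language.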
Making an Embedded DBMS JIT-friendly
While database management systems (DBMSs) are highly optimized, interactions
across the boundary between the programming language (PL) and the DBMS are
costly, even for in-process embedded DBMSs. In this paper, we show that
programs that interact with the popular embedded DBMS SQLite can be
significantly optimized - by a factor of 3.4 in our benchmarks - by inlining
across the PL / DBMS boundary. We achieved this speed-up by replacing parts of
SQLite's C interpreter with RPython code and composing the resulting
meta-tracing virtual machine (VM) - called SQPyte - with the PyPy VM. SQPyte
does not compromise stand-alone SQL performance and is 2.2% faster than SQLite
on the widely used TPC-H benchmark suite.
Comment: 24 pages, 18 figures
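The cost SQPyte attacks is the per-call overhead of crossing the PL/DBMS boundary. The sketch below only illustrates that boundary in plain CPython with the standard `sqlite3` module (it does not involve SQPyte or RPython): fetching rows one by one pays the crossing per row, while pushing the aggregate into SQL crosses once.

```python
import sqlite3

# In-memory table with 1000 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Crossing the PL/DBMS boundary once per row: every fetched tuple
# pays interface overhead on the Python side.
total_rows = sum(v for (v,) in conn.execute("SELECT v FROM t"))

# Crossing the boundary once: the summation stays inside SQLite's VM.
(total_sql,) = conn.execute("SELECT SUM(v) FROM t").fetchone()

assert total_rows == total_sql == 499500
```

SQPyte goes further than either variant: by replacing parts of SQLite's interpreter with RPython and composing it with PyPy, the tracing JIT can inline query evaluation into the surrounding program, so even row-at-a-time access avoids the boundary cost.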
Contrastive Prompt Learning-based Code Search based on Interaction Matrix
Code search aims to retrieve the code snippet that highly matches the given
query described in natural language. Recently, many code pre-training
approaches have demonstrated impressive performance on code search. However,
existing code search methods still suffer from two performance constraints:
inadequate semantic representation and the semantic gap between natural
language (NL) and programming language (PL). In this paper, we propose CPLCS, a
contrastive prompt learning-based code search method built on a cross-modal
interaction mechanism. CPLCS comprises: (1) PL-NL contrastive learning, which
learns the semantic matching relationship between PL and NL representations;
(2) a prompt learning design for a dual-encoder structure that can alleviate
the problem of inadequate semantic representation; (3) a cross-modal
interaction mechanism to enhance the fine-grained mapping between NL and PL. We
conduct extensive experiments to evaluate the effectiveness of our approach on
a real-world dataset across six programming languages. The experiment results
demonstrate the efficacy of our approach in improving semantic representation
quality and mapping ability between PL and NL.
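The PL-NL contrastive objective described above can be sketched with a standard symmetric InfoNCE loss over paired code and query embeddings: matched pairs sit on the diagonal of a similarity matrix and are pulled together, everything off-diagonal is pushed apart. This is a generic sketch with NumPy; CPLCS's exact loss, temperature, and prompt design are the paper's, not reproduced here.

```python
import numpy as np

def info_nce(pl_emb, nl_emb, temperature=0.05):
    """Symmetric contrastive loss between row-aligned PL and NL embeddings.

    Row i of pl_emb is assumed to be paired with row i of nl_emb.
    Illustrative only -- not CPLCS's exact objective.
    """
    pl = pl_emb / np.linalg.norm(pl_emb, axis=1, keepdims=True)
    nl = nl_emb / np.linalg.norm(nl_emb, axis=1, keepdims=True)
    logits = pl @ nl.T / temperature          # scaled cosine similarities
    idx = np.arange(logits.shape[0])          # matched pairs on the diagonal
    # log-softmax in both directions: code->query and query->code
    lse = np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_c2q = -(logits - lse)[idx, idx].mean()
    lse_t = np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_q2c = -(logits.T - lse_t)[idx, idx].mean()
    return (loss_c2q + loss_q2c) / 2

rng = np.random.default_rng(0)
code_vecs = rng.normal(size=(8, 16))                    # stand-in PL encoder outputs
query_vecs = code_vecs + 0.1 * rng.normal(size=(8, 16)) # noisy paired NL vectors
loss = info_nce(code_vecs, query_vecs)
```

A dual-encoder setup plugs its two towers' outputs straight into a loss of this shape; the prompt-learning and cross-modal interaction components of CPLCS refine how those embeddings are produced in the first place.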
Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation
Code search aims to retrieve the most semantically relevant code snippet for
a given natural language query. Recently, large-scale code pre-trained models
such as CodeBERT and GraphCodeBERT learn generic representations of source code
and have achieved substantial improvement on code search task. However, the
high-quality sequence-level representations of code snippets have not been
sufficiently explored. In this paper, we propose a new approach with multimodal
contrastive learning and soft data augmentation for code search. Multimodal
contrastive learning is used to pull together the representations of code-query
pairs and push apart the unpaired code snippets and queries. Moreover, data
augmentation is critical in contrastive learning for learning high-quality
representations. However, only semantic-preserving augmentations for source
code are considered in existing work. In this work, we propose to do soft data
augmentation by dynamically masking and replacing some tokens in code sequences
to generate code snippets that are similar but not necessarily
semantic-preserving as positive samples for paired queries. We conduct
extensive experiments to evaluate the effectiveness of our approach on a
large-scale dataset with six programming languages. The experimental results
show that our approach significantly outperforms the state-of-the-art methods.
We also adapt our techniques to several pre-trained models such as RoBERTa and
CodeBERT, and significantly boost their performance on the code search task.
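The soft data augmentation described above can be sketched as a token-level transform: with some probability each token is masked or swapped for a random vocabulary token, yielding a positive sample that is similar but not necessarily semantics-preserving. The function name, probabilities, and mask token below are illustrative, not the paper's.

```python
import random

def soft_augment(tokens, vocab, mask_prob=0.15, mask_token="<mask>", seed=None):
    """Randomly mask or replace tokens to build a 'soft' positive sample.

    Unlike semantic-preserving transforms (e.g. variable renaming), the
    result may not compile -- by design, per the abstract's description.
    """
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if rng.random() < mask_prob:
            # Half of the selected positions are masked, half replaced
            # with a random vocabulary token.
            out.append(mask_token if rng.random() < 0.5 else rng.choice(vocab))
        else:
            out.append(tok)
    return out

code = "def add ( a , b ) : return a + b".split()
vocab = ["x", "y", "sub", "mul"]
augmented = soft_augment(code, vocab, seed=0)
```

The augmented sequence is then treated as a positive for the same paired query in the contrastive objective, which forces the encoder to tolerate small surface perturbations of the code.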