User's guide to SFTRAN/1100
Extensions and improvements were made to SFTRAN, a structured programming language. The language was implemented as a precompiler that translates from SFTRAN to FORTRAN and was available to batch and conversational users of the UNIVAC 1100 computer system. The SFTRAN language and its use are described. In addition, conversational time-sharing system command subroutines were implemented that eliminate the complications of dealing with extra files and processing steps that the use of a precompiler would otherwise require. These command subroutines are described, and their use is illustrated by examples.
User's guide for SFTRAN/360
Extensions and improvements made to SFTRAN, a structured-programming language, are discussed. The improved language is implemented as a precompiler that translates from SFTRAN to FORTRAN. The SFTRAN language and its use are described. Time-Sharing System (TSS) command procedures were implemented that eliminate the complications of dealing with extra files and processing steps which the use of a precompiler would otherwise require. These command procedures are described, and their use is illustrated by examples.
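As a rough illustration of what such command procedures hide from the user, the following Python sketch shows a hypothetical build driver that runs a precompile step, compiles the generated FORTRAN, and removes the intermediate file. The command names sftran and ftn are placeholders, not the actual SFTRAN/360 or TSS tooling.

    import pathlib
    import subprocess
    import sys

    def build(source: str) -> None:
        """Precompile an SFTRAN file to FORTRAN, compile it, and tidy up."""
        src = pathlib.Path(source)
        intermediate = src.with_suffix(".f")  # generated FORTRAN file
        # Hypothetical precompiler and compiler commands (placeholders).
        subprocess.run(["sftran", str(src), "-o", str(intermediate)], check=True)
        subprocess.run(["ftn", str(intermediate)], check=True)
        intermediate.unlink()  # hide the extra file the precompiler step creates

    if __name__ == "__main__":
        build(sys.argv[1])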
Adaptive Processing of Spatial-Keyword Data Over a Distributed Streaming Cluster
The widespread use of GPS-enabled smartphones, along with the popularity of micro-blogging and social networking applications such as Twitter and Facebook, has resulted in the generation of huge streams of geo-tagged textual data. Many applications require real-time processing of these streams. For example, location-based e-coupon and ad-targeting systems enable advertisers to register millions of ads to millions of users. The number of users is typically very high, the users are continuously moving, and the ads change frequently as well, so sending the right ad to the matching users is very challenging. Existing streaming systems are either centralized or not spatial-keyword aware, and cannot efficiently support the processing of rapidly arriving spatial-keyword data streams. This paper presents Tornado, a distributed spatial-keyword stream processing system. Tornado features routing units that fairly distribute the workload and co-locate data objects with the corresponding queries at the same processing units. The routing units use the Augmented-Grid, a novel structure equipped with an efficient search algorithm for distributing the data objects and queries. Tornado uses evaluators to process the data objects against the queries. The routing units minimize redundant communication by not sending data updates for processing when these updates do not match any query. By applying dynamically evaluated cost formulae that continuously represent the processing overhead at each evaluator, Tornado adapts to changes in the workload. Extensive experimental evaluation using spatio-textual range queries over real Twitter data indicates that Tornado outperforms non-spatio-textually-aware approaches by up to two orders of magnitude in overall system throughput.
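To make the routing idea concrete, here is a minimal Python sketch of grid-based spatial-keyword routing under simplifying assumptions: a uniform grid rather than Tornado's Augmented-Grid, rectangular range queries with keyword sets, and a router that forwards an object only to evaluators holding a query it could match, dropping updates that match no query. The class and field names are hypothetical, not Tornado's actual interfaces.

    from collections import defaultdict

    class GridRouter:
        """Toy spatial-keyword router over a uniform grid (not Tornado's Augmented-Grid)."""

        def __init__(self, cell_size: float = 1.0):
            self.cell_size = cell_size
            # cell -> list of (bounding box, keyword set, evaluator id)
            self.queries = defaultdict(list)

        def _cell(self, lon: float, lat: float):
            return (int(lon // self.cell_size), int(lat // self.cell_size))

        def register_query(self, box, keywords, evaluator_id):
            # Register the query in every grid cell its spatial range overlaps.
            min_lon, min_lat, max_lon, max_lat = box
            x0, y0 = self._cell(min_lon, min_lat)
            x1, y1 = self._cell(max_lon, max_lat)
            for x in range(x0, x1 + 1):
                for y in range(y0, y1 + 1):
                    self.queries[(x, y)].append((box, set(keywords), evaluator_id))

        def route(self, lon: float, lat: float, terms):
            # Forward a geo-tagged object only to evaluators holding a matching query;
            # updates that match no registered query are dropped at the router.
            targets = set()
            for (min_lon, min_lat, max_lon, max_lat), kw, ev in self.queries.get(self._cell(lon, lat), []):
                if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat and kw & set(terms):
                    targets.add(ev)
            return targets

    router = GridRouter(cell_size=0.5)
    router.register_query((-88.0, 41.0, -87.0, 42.0), {"coffee", "deal"}, evaluator_id=3)
    router.route(-87.6, 41.9, {"coffee", "morning"})   # -> {3}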
An intelligent component database for behavioral synthesis
This paper describes an intelligent component database system that delivers components to synthesis tools when given a set of attributes and constraints. Requirements of a component server are defined and an implementation is described. Our experiments demonstrate that such a component server can replace component libraries and component catalogs containing hundreds of pages.
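As a purely illustrative sketch of attribute-and-constraint lookup (the catalog entries, field names, and constraints below are invented, not taken from the paper), a component server query might look like this in Python:

    def find_components(catalog, required, constraints):
        """Return catalog entries matching all required attributes and constraint predicates."""
        hits = [comp for comp in catalog
                if all(comp.get(k) == v for k, v in required.items())
                and all(pred(comp) for pred in constraints)]
        return sorted(hits, key=lambda c: c.get("area", 0))  # prefer smaller components

    catalog = [
        {"name": "add16_ripple", "function": "adder", "width": 16, "delay_ns": 9.0, "area": 120},
        {"name": "add16_cla",    "function": "adder", "width": 16, "delay_ns": 3.5, "area": 310},
    ]

    fast_adders = find_components(
        catalog,
        required={"function": "adder", "width": 16},
        constraints=[lambda c: c["delay_ns"] <= 5.0],
    )
    # -> the carry-lookahead adder entry only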
Generating Synthetic Data for Neural Keyword-to-Question Models
Search typically relies on keyword queries, but these are often semantically ambiguous. We propose to overcome this by offering users natural language questions, based on their keyword queries, to disambiguate their intent. This keyword-to-question task may be addressed using neural machine translation techniques. Neural translation models, however, require massive amounts of training data (keyword-question pairs), which is unavailable for this task. The main idea of this paper is to generate large amounts of synthetic training data from a small seed set of hand-labeled keyword-question pairs. Since natural language questions are available in large quantities, we develop models to automatically generate the corresponding keyword queries. Further, we introduce various filtering mechanisms to ensure that the synthetic training data is of high quality. We demonstrate the feasibility of our approach using both automatic and manual evaluation. This is an extended version (11 pages) of the article published with the same title in the Proceedings of ICTIR'18.
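A minimal sketch of the synthetic-data idea follows, with the caveat that the paper uses learned models to produce keyword queries, while this example substitutes a crude stopword heuristic and a length-based quality filter; the stopword list, function names, and thresholds are assumptions.

    STOPWORDS = {"what", "is", "the", "a", "an", "of", "in", "how", "do", "does",
                 "to", "for", "are", "was", "which", "who", "when", "where", "why"}

    def question_to_keywords(question):
        """Heuristically derive a keyword query from a natural-language question."""
        terms = [t.strip("?.,").lower() for t in question.split()]
        return [t for t in terms if t and t not in STOPWORDS]

    def build_synthetic_pairs(questions, min_keywords=2, max_keywords=6):
        """Create (keyword query, question) training pairs, filtering out low-quality ones."""
        pairs = []
        for q in questions:
            kw = question_to_keywords(q)
            if min_keywords <= len(kw) <= max_keywords:  # crude quality filter
                pairs.append((" ".join(kw), q))
        return pairs

    pairs = build_synthetic_pairs([
        "What is the capital of Norway?",
        "How do neural translation models work?",
    ])
    # -> [("capital norway", "What is the capital of Norway?"),
    #     ("neural translation models work", "How do neural translation models work?")]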