Control versus Data Flow in Parallel Database Machines
The execution of a query in a parallel database machine can be controlled either in a control flow way or in a data flow way. In the former case, a single system node controls the entire query execution. In the latter case, the processes that execute the query, although possibly running on different nodes of the system, trigger each other. Lately, many database research projects have focused on data flow control, since it should improve response times and throughput. The authors study control versus data flow with regard to controlling the execution of database queries. An analytical model is used to compare control and data flow in order to gain insight into which mechanism is better under which circumstances. Also, some systems using data flow techniques are described, and the authors investigate to what degree they are really data flow. The results show that for particular types of queries data flow is very attractive, since it reduces the number of control messages and balances these messages over the nodes.
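The trade-off described here can be illustrated with a toy count of control messages. This is a deliberately simplified sketch, not the paper's analytical model; the functions and numbers below are made up for illustration:

```python
# Illustrative sketch (not the paper's analytical model): compare the number
# of control messages handled by the busiest node when a query of S pipelined
# operators runs on P nodes.
#
# Control flow: one coordinator sends a "start" and receives a "done" message
# for every operator instance on every node, so it handles 2*S*P messages.
# Data flow: each process triggers its successor directly, so a node only
# exchanges trigger messages with its pipeline neighbours.

def control_flow_messages_at_coordinator(num_operators: int, num_nodes: int) -> int:
    """Messages handled by the single coordinating node."""
    return 2 * num_operators * num_nodes

def data_flow_messages_per_node(num_operators: int) -> int:
    """Trigger messages a node exchanges with its pipeline neighbours
    (one incoming and one outgoing per operator it hosts)."""
    return 2 * num_operators

if __name__ == "__main__":
    S, P = 4, 16  # 4 pipelined operators spread over 16 nodes
    print(control_flow_messages_at_coordinator(S, P))  # 128
    print(data_flow_messages_per_node(S))              # 8
```

The point of the sketch is only that the coordinator's load grows with both query size and machine size under control flow, while data flow spreads a constant per-node load.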
DIANet: Dense-and-Implicit Attention Network
Attention networks have successfully boosted performance in various vision
problems. Previous works lay emphasis on designing a new attention module and
plugging it individually into networks. Our paper proposes a novel and simple
framework that shares an attention module across different network layers to
encourage the integration of layer-wise information; this parameter-sharing
module is referred to as the Dense-and-Implicit-Attention (DIA) unit. Many
choices of module can be used in the DIA unit. Since Long Short-Term Memory
(LSTM) is capable of capturing long-distance dependencies, we focus on the case
where the DIA unit is a modified LSTM (referred to as DIA-LSTM). Experiments on
benchmark datasets show that the DIA-LSTM unit is capable of emphasizing
layer-wise feature interrelation and leads to significant improvement in image
classification accuracy. We further show empirically that DIA-LSTM has a strong
regularization effect, stabilizing the training of deep networks in experiments
where skip connections or Batch Normalization are removed from the whole
residual network. The code is released at
https://github.com/gbup-group/DIANet
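The parameter-sharing idea can be illustrated with a back-of-the-envelope count. This is not the released DIANet code; the module shape and sizes below are hypothetical, chosen only to show that a shared module's parameter cost is constant in depth:

```python
# Hypothetical sizes, purely for illustration: a toy two-layer gating module
# (channels -> hidden -> channels, with biases) stands in for an attention
# module. Instantiating one per layer costs depth * module_params; sharing a
# single DIA-style module across all layers costs module_params once.

def attention_module_params(channels: int, hidden: int) -> int:
    """Parameters of the toy gating module (two weight matrices + biases)."""
    return channels * hidden + hidden + hidden * channels + channels

def separate_modules_params(depth: int, channels: int, hidden: int) -> int:
    """One attention module per layer, as in prior work."""
    return depth * attention_module_params(channels, hidden)

def shared_module_params(depth: int, channels: int, hidden: int) -> int:
    """A single shared module reused by every layer."""
    return attention_module_params(channels, hidden)

if __name__ == "__main__":
    print(separate_modules_params(50, 64, 16))  # grows linearly with depth
    print(shared_module_params(50, 64, 16))     # constant in depth
```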
Digital implementation of the cellular sensor-computers
Two different kinds of cellular sensor-processor architectures are used nowadays in various
applications. The first is the traditional sensor-processor architecture, where the sensor and
processor arrays are mapped onto each other. The second is the foveal architecture, in which a
small active fovea navigates in a large sensor array. This second architecture is introduced
and compared here. Both of these architectures can be implemented with analog and digital
processor arrays. The efficiency of the different implementation types, depending on the
CMOS technology used, is analyzed. It turns out that the finer the technology, the more
advantageous a digital implementation is over an analog one.
Communicating Java Threads
The incorporation of multithreading in Java may be considered a significant part of the Java language, because it provides rudimentary facilities for concurrent programming. However, we believe that the use of channels is a fundamental concept for concurrent programming. The channel approach as described in this paper is a realization of a systematic design method for concurrent programming in Java based on the CSP paradigm. CSP requires the availability of a Channel class and the addition of composition constructs for sequential, parallel and alternative processes. The Channel class and the constructs have been implemented in Java in compliance with the definitions in CSP. As a result, implementing communication between processes is facilitated, enabling the programmer to avoid deadlock more easily, and freeing the programmer from synchronization and scheduling constructs. The use of the Channel class and the additional constructs is illustrated in a simple application.
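To illustrate the channel concept, here is a minimal rendezvous-style channel sketched in Python rather than Java. This is not the paper's Channel class or its API; the names and the synchronous (CSP-style) semantics below are a simplified assumption:

```python
# A sketch of a CSP-style rendezvous channel: write() and read() both block
# until the two partners meet, mirroring CSP's synchronous communication.
# Illustrative only; the paper's Java Channel class is not reproduced here.
import threading

class Channel:
    def __init__(self):
        self._cond = threading.Condition()
        self._item = None
        self._full = False    # an item is waiting to be read
        self._taken = False   # the reader has accepted the item

    def write(self, item):
        with self._cond:
            while self._full:            # wait out any previous exchange
                self._cond.wait()
            self._item, self._full, self._taken = item, True, False
            self._cond.notify_all()
            while not self._taken:       # rendezvous: wait for the reader
                self._cond.wait()

    def read(self):
        with self._cond:
            while not self._full:
                self._cond.wait()
            item = self._item
            self._full, self._taken = False, True
            self._cond.notify_all()      # release the blocked writer
            return item

if __name__ == "__main__":
    ch = Channel()
    producer = threading.Thread(target=lambda: [ch.write(i) for i in range(3)])
    producer.start()
    received = [ch.read() for _ in range(3)]
    producer.join()
    print(received)  # [0, 1, 2]
```

Because the writer does not return until the reader has taken the item, communication doubles as synchronization, which is what lets a CSP-style design dispense with explicit locks in application code.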
FACT -- Operation of the First G-APD Cherenkov Telescope
For more than two years, the First G-APD Cherenkov Telescope (FACT) has been
operating successfully on the Canary Island of La Palma. Apart from its purpose
of serving as a monitoring facility for the brightest TeV blazars, it was built
as a major step towards establishing solid-state photon counters as detectors in
Cherenkov astronomy.
The camera of the First G-APD Cherenkov Telescope comprises 1440 Geiger-mode
avalanche photodiodes (G-APDs, also known as MPPCs or SiPMs) for photon
detection. Since properties such as the gain of G-APDs depend on temperature
and the applied voltage, a real-time feedback system has been developed and
implemented. To correct for the change introduced by temperature, several
sensors have been placed close to the photon detectors. Their readout is used
to calculate a corresponding voltage offset. In addition to temperature
changes, a changing current introduces a voltage drop in the supporting
resistor network. To correct for changes in the voltage drop introduced by the
varying photon flux from the night-sky background, the current is measured and
the voltage drop calculated. To check the stability of the G-APD properties,
dark-count spectra with high statistics have been taken under different
environmental conditions and evaluated.
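The two corrections described above can be sketched as a single formula: the applied voltage is the nominal setting plus a temperature-dependent offset plus compensation for the ohmic drop. The coefficients below are made up for illustration and are not FACT's calibration values:

```python
# Hedged sketch of the kind of bias-voltage correction such a feedback system
# performs. All coefficients are illustrative assumptions, not FACT's values.

def corrected_bias_voltage(v_nominal, temp_c, current_a,
                           temp_ref_c=25.0, dv_dt=0.055, r_series=1.0e3):
    """v_nominal [V], temp_c [deg C], current_a [A].
    dv_dt: assumed breakdown-voltage temperature coefficient [V/K].
    r_series: assumed series resistance of the supply network [Ohm]."""
    temp_offset = dv_dt * (temp_c - temp_ref_c)   # track breakdown voltage
    drop = r_series * current_a                   # compensate ohmic drop
    return v_nominal + temp_offset + drop

if __name__ == "__main__":
    # 5 K above reference, 0.5 mA of night-sky-background current
    print(corrected_bias_voltage(71.0, 30.0, 0.5e-3))  # 71.775
```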
The maximum data rate delivered by the camera is about 240 MB/s. The recorded
data, which can exceed 1 TB in a moonless night, is compressed in real-time
with a proprietary lossless algorithm. Its performance is better than gzip by
almost a factor of two in both compression ratio and speed. In total, two to
three CPU cores are needed for data taking. In parallel, a quick-look analysis
of the recently recorded data is executed on a second machine. Its result is
publicly available within a few minutes after the data were taken.
Comment: 19th IEEE Real-Time Conference, Nara, Japan (2014)
A Compilation Target for Probabilistic Programming Languages
Forward inference techniques such as sequential Monte Carlo and particle
Markov chain Monte Carlo for probabilistic programming can be implemented in
any programming language by creative use of standardized operating system
functionality including processes, forking, mutexes, and shared memory.
Exploiting this, we have defined, developed, and tested a probabilistic
programming intermediate representation language, which we call Probabilistic
C; it can be compiled to machine code by standard compilers and linked to
operating system libraries, yielding an efficient, scalable, and portable
probabilistic programming compilation target. This opens up a new hardware and
systems research path for optimizing probabilistic programming systems.
Comment: In Proceedings of the 31st International Conference on Machine
Learning (ICML), 2014
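The forking trick the abstract alludes to can be sketched in miniature: at a branching point, fork one OS process per candidate latent value, let each child compute a likelihood weight, and collect the weights in the parent over a pipe. This is a simplified illustration (in Python, POSIX-only), not Probabilistic C itself; the toy model and weight function are invented:

```python
# Simplified sketch of fork-based exploration of a probabilistic program's
# branches (illustrative; not Probabilistic C). POSIX-only due to os.fork.
import os
import struct

RECORD = "id"  # one (int latent, double weight) record per child

def weight(latent):
    # Hypothetical likelihood of an observation given the latent value.
    return 1.0 / (1.0 + abs(latent - 2))

def explore(candidates):
    read_fd, write_fd = os.pipe()
    for latent in candidates:
        if os.fork() == 0:               # child: score one branch and exit
            os.write(write_fd, struct.pack(RECORD, latent, weight(latent)))
            os._exit(0)
    os.close(write_fd)                   # parent keeps only the read end
    results = {}
    with os.fdopen(read_fd, "rb") as f:  # EOF once every child has exited
        size = struct.calcsize(RECORD)
        while chunk := f.read(size):
            latent, wt = struct.unpack(RECORD, chunk)
            results[latent] = wt
    for _ in candidates:
        os.wait()                        # reap the children
    return results

if __name__ == "__main__":
    print(sorted(explore([0, 1, 2, 3]).items()))
```

Each record is smaller than PIPE_BUF, so the children's pipe writes do not interleave; real systems along these lines also use shared memory and mutexes, as the abstract notes, which this sketch omits.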
The Use of HepRep in GLAST
HepRep is a generic, hierarchical format for description of graphics
representables that can be augmented by physics information and relational
properties. It was developed for high energy physics event display applications
and is especially suited to client/server or component frameworks. The GLAST
experiment, an international effort led by NASA for a gamma-ray telescope to
launch in 2006, chose HepRep to provide a flexible, extensible and maintainable
framework for their event display without tying their users to any one graphics
application. To support HepRep in their GAUDI infrastructure, GLAST developed a
HepRep filler and builder architecture. The architecture hides the details of
XML and CORBA in a set of base and helper classes allowing physics experts to
focus on what data they want to represent. GLAST has two GAUDI services:
HepRepSvc, which registers HepRep fillers in a global registry and allows the
HepRep to be exported to XML, and CorbaSvc, which allows the HepRep to be
published through a CORBA interface and which allows the client application to
feed commands back to GAUDI (such as start next event, or run some GAUDI
algorithm). GLAST's HepRep solution gives users a choice of client
applications, WIRED (written in Java) or FRED (written in C++ and Ruby), and
leaves them free to move to any future HepRep-compliant event display.
Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003, 9 pages PDF, 15 figures. PSN THLT00
Cherenkov Telescope Array Data Management
Very High Energy gamma-ray astronomy with the Cherenkov Telescope Array (CTA)
is evolving towards the model of a public observatory. Handling, processing and
archiving the large amount of data generated by the CTA instruments and
delivering scientific products are some of the challenges in designing the CTA
Data Management. The participation of scientists from within the CTA Consortium
and from the greater worldwide scientific community necessitates a
sophisticated scientific analysis system capable of providing unified and
efficient user access to data, software and computing resources. Data
Management is designed to respond to three main issues: (i) the treatment and
flow of data from remote telescopes; (ii) "big-data" archiving and processing;
and (iii) open data access. In this communication, the overall technical design
of the CTA Data Management, current major developments, and prototypes are
presented.
Comment: 8 pages, 2 figures, In Proceedings of the 34th International Cosmic
Ray Conference (ICRC2015), The Hague, The Netherlands. All CTA contributions
at arXiv:1508.0589