22,352 research outputs found
Heterogeneous component interactions: Sensors integration into multimedia applications
Resource-constrained embedded and mobile devices are becoming increasingly
common. In recent years, mobile and ubiquitous devices such as wireless
sensors, which are aware of their physical environment, have appeared. Such
devices make it possible to build applications that adapt to the user's needs
as the context evolves. This implies the collaboration of sensors and software
components which differ in their nature and their communication mechanisms.
This paper proposes a unified component model in order to easily design
applications based on software components and sensors without regard to their
nature. It then presents a state of the art of the communication problems
linked to heterogeneous components and proposes an interaction mechanism which
ensures information exchange between wireless sensors and software components
A Power-Aware Framework for Executing Streaming Programs on Networks-on-Chip
Nilesh Karavadara, Simon Folie, Michael Zolda, Vu Thien Nga Nguyen, Raimund Kirner, 'A Power-Aware Framework for Executing Streaming Programs on Networks-on-Chip'. Paper presented at the Int'l Workshop on Performance, Power and Predictability of Many-Core Embedded Systems (3PMCES'14), Dresden, Germany, 24-28 March 2014.
Software developers are discovering that practices which have successfully served single-core platforms for decades no longer work for multi-cores. Stream processing is a parallel execution model that is well-suited for architectures with multiple computational elements connected by a network. We propose a power-aware streaming execution layer for network-on-chip architectures that addresses the energy constraints of embedded devices. Our proof-of-concept implementation targets the Intel SCC processor, which connects 48 cores via a network-on-chip. We motivate our design decisions and describe the status of our implementation
Will SDN be part of 5G?
For many, this is no longer a valid question and the case is considered
settled, with SDN/NFV (Software Defined Networking/Network Function
Virtualization) providing the inevitable innovation enablers that solve many
outstanding management issues regarding 5G. However, given the monumental task
of softwarizing the radio access network (RAN) while 5G is just around the
corner, and with some companies having already started unveiling their 5G
equipment, the concern is very realistic that we may only see some point
solutions involving SDN technology instead of a fully SDN-enabled RAN. This
survey paper identifies the important obstacles in the way and looks at the
state of the art of the relevant solutions. This survey differs from previous
surveys on SDN-based RAN in that it focuses on the salient problems and
discusses solutions proposed both within and outside the SDN literature. Our
main focus is on fronthaul, backward compatibility, the supposedly disruptive
nature of SDN deployment, business cases and monetization of SDN-related
upgrades, the latency of general purpose processors (GPP), and the additional
security vulnerabilities that softwarization brings to the RAN. We have also
provided a summary of the architectural developments in the SDN-based RAN
landscape, as not all work can be covered under the focused issues. This paper
provides a comprehensive survey of the state of the art of SDN-based RAN and
clearly points out the gaps in the technology.
Comment: 33 pages, 10 figures
pony - The occam-pi Network Environment
Although concurrency is generally perceived to be a `hard' subject, it can in fact be very simple --- provided that the underlying model is simple. The occam-pi parallel processing language provides such a simple yet powerful concurrency model that is based on CSP and the pi-calculus. This paper presents pony, the occam-pi Network Environment. occam-pi and pony provide a new, unified, concurrency model that bridges inter- and intra-processor concurrency. This enables the development of distributed applications in a transparent, dynamic and highly scalable way. The first part of this paper discusses the philosophy behind pony, explains how it is used, and gives a brief overview of its implementation. The second part evaluates pony's performance by presenting a number of benchmarks
UNICS - An Unified Instrument Control System for Small/Medium Sized Astronomical Observatories
Although the astronomy community is witnessing an era of large telescopes,
small and medium sized telescopes still maintain their utility, being larger
in number. In order to obtain better scientific output it is necessary to
incorporate modern and advanced technologies into the back-end instruments and
into their interfaces with the telescopes through various control processes.
However, tight financial constraints on small and medium sized observatories
often limit the scope and utility of these systems. Most of the time, for
every new development on the telescope the back-end control systems must be
built from scratch, leading to high costs and effort. Therefore a simple,
low-cost control system for small and medium sized observatories needs to be
developed to minimize the cost and effort of expanding the observatory. Here
we report on the development of a modern, multipurpose instrument control
system, UNICS (Unified Instrument Control System), to integrate the controls
of the various instruments and devices mounted on the telescope. UNICS
consists of an embedded hardware unit called the Common Control Unit (CCU) and
a Linux-based data acquisition and user interface. The hardware of the CCU is
built around the Atmel ATmega128 micro-controller and is designed with a
back-plane, master-slave architecture. The graphical user interface (GUI) has
been developed with Qt, and the back-end application software is written in
C/C++. UNICS provides feedback mechanisms which give the operator good
visibility and a quick-look display of the status and modes of the
instruments. UNICS has been used for regular science observations since March
2008 on the 2 m, f/10 IUCAA Telescope located at Girawali, Pune, India.
Comment: Submitted to PASP, 10 pages, 5 figures
A Taxonomy of Workflow Management Systems for Grid Computing
With the advent of Grid and application technologies, scientists and
engineers are building more and more complex applications to manage and process
large data sets, and execute scientific experiments on distributed resources.
Such application scenarios require means for composing and executing complex
workflows. Therefore, many efforts have been made towards the development of
workflow management systems for Grid computing. In this paper, we propose a
taxonomy that characterizes and classifies various approaches for building and
executing workflows on Grids. We also survey several representative Grid
workflow systems developed by various projects world-wide to demonstrate the
comprehensiveness of the taxonomy. The taxonomy not only highlights the design
and engineering similarities and differences of state-of-the-art in Grid
workflow systems, but also identifies the areas that need further research.
Comment: 29 pages, 15 figures
Partial information decomposition as a unified approach to the specification of neural goal functions
In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of the six-layered neocortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a "goal function", of the information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. "edge filtering", "working memory"). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID makes it possible to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called "coding with synergy", which builds on combining external input and prior knowledge in a synergistic manner.
We suggest that this novel goal function may be highly useful in neural information processing
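For two inputs, the decomposition the abstract describes can be written out explicitly. In the standard two-input PID (following Williams and Beer's formulation, which the abstract's description matches), the joint mutual information between an output $Y$ and inputs $X_1, X_2$ splits into four non-negative parts:

```latex
I(Y; X_1, X_2) = \mathrm{Unq}(Y; X_1 \setminus X_2)
               + \mathrm{Unq}(Y; X_2 \setminus X_1)
               + \mathrm{Shd}(Y; X_1, X_2)
               + \mathrm{Syn}(Y; X_1, X_2)
```

with the single-input informations constrained by

```latex
I(Y; X_1) = \mathrm{Unq}(Y; X_1 \setminus X_2) + \mathrm{Shd}(Y; X_1, X_2)
```

Here $\mathrm{Unq}$ is the unique, $\mathrm{Shd}$ the shared (redundant), and $\mathrm{Syn}$ the synergistic information. The two constraints leave one degree of freedom, which is why a separate measure (e.g. of shared information) is needed to pin the decomposition down.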
Real-time and fault tolerance in distributed control software
Closed-loop control systems typically contain a multitude of spatially distributed sensors and actuators operated simultaneously, so such systems are parallel and distributed in their essence. But mapping this parallelism onto a given distributed hardware architecture brings in some additional requirements: safe multithreading, optimal process allocation, and real-time scheduling of bus and network resources. Nowadays, fault-tolerance methods and fast, even online, reconfiguration are becoming increasingly important. All these often conflicting requirements make the design and implementation of real-time distributed control systems an extremely difficult task that requires substantial knowledge in several areas of control and computer science. Although many design methods have been proposed so far, none of them has succeeded in covering all important aspects of the problem at hand. [1] The continuous increase of production in the embedded market makes a simple and natural design methodology for real-time systems needed more than ever