Microgrid - The microthreaded many-core architecture
Traditional processors use the von Neumann execution model, while some
processors in the past have used the dataflow execution model. Combinations of
the von Neumann and dataflow models have also been tried; the resulting
model is referred to as the hybrid dataflow execution model. We describe a hybrid
dataflow model known as microthreading. It provides constructs for the
creation of, synchronization among, and communication between threads in an intermediate
language. The microthreading model is an abstract programming and machine model
for many-core architectures. A particular instance of this model is called the
microthreaded architecture, or the Microgrid. This architecture implements all
the concurrency constructs of the microthreading model, together with their
management, in hardware.
Comment: 30 pages, 16 figures
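The create/sync/communicate constructs described in this abstract can be illustrated, purely as a hypothetical analogy using ordinary Python threads (this is not the Microgrid's actual intermediate language; the `create_family` and `sync` names are invented for illustration):

```python
import threading

# Sketch of the microthreading idea: a "family" of threads is created
# over an index range, and the parent later blocks on a sync handle
# until the whole family has completed.

def create_family(start, stop, body):
    """Create one thread per index in [start, stop) and return a sync handle."""
    threads = [threading.Thread(target=body, args=(i,)) for i in range(start, stop)]
    for t in threads:
        t.start()
    def sync():
        for t in threads:
            t.join()  # parent waits for every member of the family
    return sync

results = [0] * 8

def body(i):
    results[i] = i * i  # each thread works on its own index

sync = create_family(0, 8, body)
sync()  # analogous to the model's sync construct
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In the real architecture these operations are hardware-managed and far cheaper than OS threads; the sketch only conveys the programming-model shape.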
Destination directed packet switch architecture for a 30/20 GHz FDMA/TDM geostationary communication satellite network
Emphasis is on a destination directed packet switching architecture for a 30/20 GHz frequency division multiple access/time division multiplex (FDMA/TDM) geostationary satellite communication network. Critical subsystems and problem areas are identified and addressed. Efforts have concentrated heavily on the space segment; however, the ground segment was considered concurrently to ensure cost efficiency and realistic operational constraints.
Time division radio relay synchronizing system using different sync code words for in sync and out of sync conditions Patent
Time division relay synchronizer with master sync pulse for activating binary counter to produce signal identifying time slot for station.
Destination-directed, packet-switching architecture for 30/20-GHz FDMA/TDM geostationary communications satellite network
A destination-directed packet switching architecture for a 30/20-GHz frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary satellite communications network is discussed. Critical subsystems and problem areas are identified and addressed. Efforts have concentrated heavily on the space segment; however, the ground segment has been considered concurrently to ensure cost efficiency and realistic operational constraints.
Vulnerability of LTE to Hostile Interference
LTE is well on its way to becoming the primary cellular standard, due to its
performance and low cost. Over the next decade we will become dependent on LTE,
which is why we must ensure it is secure and available when we need it.
Unfortunately, like any wireless technology, disruption through radio jamming
is possible. This paper investigates the extent to which LTE is vulnerable to
intentional jamming, by analyzing the components of the LTE downlink and uplink
signals. The LTE physical layer consists of several physical channels and
signals, most of which are vital to the operation of the link. By taking into
account the density of these physical channels and signals with respect to the
entire frame, as well as the modulation and coding schemes involved, we derive
a series of vulnerability metrics in the form of jammer-to-signal ratios.
The "weakest links" of the LTE signals are then identified and used to
establish the overall vulnerability of LTE to hostile interference.
Comment: 4 pages, see below for citation. M. Lichtman, J. Reed, M. Norton, T.
Clancy, "Vulnerability of LTE to Hostile Interference", IEEE Global
Conference on Signal and Information Processing (GlobalSIP), Dec 201
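The intuition behind such jammer-to-signal metrics can be sketched with an invented toy model (the function, thresholds, and densities below are illustrative assumptions, not the paper's actual figures): a channel occupying only a fraction of the frame's resource elements lets a jammer concentrate power on just those elements, gaining 10·log10(1/density) dB over barrage jamming.

```python
import math

# Toy model (not from the paper): J/S required to deny a physical channel
# when the jammer targets only the fraction `density` of resource elements
# that the channel occupies.

def required_js_db(sinr_threshold_db, density):
    """J/S (dB) needed to push the channel below its decoding threshold."""
    barrage_js = -sinr_threshold_db        # J/S to deny the channel by jamming everything
    gain = 10 * math.log10(1.0 / density)  # concentration gain from sparsity
    return barrage_js - gain

# Hypothetical values: a sparse control signal vs. a dense data channel.
print(round(required_js_db(sinr_threshold_db=-6.0, density=0.01), 1))  # -14.0
print(round(required_js_db(sinr_threshold_db=3.0, density=0.80), 1))   # -4.0
```

The sparse channel in this toy example can be denied at a far lower jammer power, which is the sense in which low-density channels become the "weakest links."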
Controllable radio interference for experimental and testing purposes in wireless sensor networks
We address the problem of generating customized, controlled interference for experimental and testing purposes in wireless sensor networks. The known coexistence problems between electronic devices sharing the same ISM radio band drive the design of new solutions to minimize interference. The validation of these techniques and the assessment of protocols under external interference require the creation of reproducible and well-controlled interference patterns on real nodes, a nontrivial and time-consuming task. In this paper, we study methods to generate a precisely adjustable level of interference on a specific channel, with low-cost equipment and rapid calibration. We focus our work on platforms carrying the CC2420 radio chip and show that, by setting this transceiver in a special mode, we can quickly and easily generate repeatable and precise patterns of interference. We show how this tool can be extremely useful for researchers wishing to quickly investigate the behaviour of sensor network protocols and applications under different patterns of interference, and we further evaluate its performance.
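The reproducibility requirement stated above can be sketched in host-side code (this is an invented illustration, not the paper's tool or the CC2420 driver): a seeded generator turns a target duty cycle into the same on/off burst schedule on every run, so an experiment can be replayed exactly.

```python
import random

# Hypothetical sketch: a repeatable, adjustable interference pattern as a
# schedule of (start_ms, burst_ms) pairs. Seeding the RNG makes the
# pattern identical across experiment runs.

def interference_schedule(duty_cycle, period_ms, n_periods, seed=42):
    """Each period carries one burst sized to the target duty cycle,
    jittered in position within the period."""
    rng = random.Random(seed)
    burst = duty_cycle * period_ms
    sched = []
    for k in range(n_periods):
        start = k * period_ms + rng.uniform(0, period_ms - burst)
        sched.append((start, burst))
    return sched

sched = interference_schedule(duty_cycle=0.3, period_ms=100, n_periods=5)
total_on = sum(burst for _, burst in sched)
print(round(total_on / (5 * 100), 6))  # 0.3 -- the realized duty cycle
```

On real nodes the schedule would drive the radio's transmit test mode; here only the pattern-generation logic is shown.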
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed
Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
Temporal information is important both for the applications of sensor
networks and for their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and we also present and analyze
algorithms for clock synchronization. Finally, we turn to the gathering of
relevant information, which is what sensor networks are designed to do. One
needs to study optimal strategies for in-network aggregation of data, in order
to reliably compute a composite function of sensor measurements, as well as the
complexity of doing so. We address how such computation can be performed
efficiently in a sensor network, and the algorithms for doing so, for some
classes of functions.
Comment: 10 pages, 3 figures, Submitted to the Proceedings of the IEEE
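The critical-range result mentioned in this abstract can be checked empirically with a small simulation (an illustrative sketch, not the paper's analysis): for n nodes uniform in the unit square, the well-known Gupta-Kumar threshold for asymptotic connectivity is a radio range on the order of sqrt(log(n)/(pi*n)).

```python
import math
import random

# Simulation sketch: build the geometric graph over random node positions
# and test connectivity at ranges above and below the critical threshold.

def is_connected(points, r):
    """DFS over the geometric graph with connection radius r."""
    n = len(points)
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        xi, yi = points[i]
        for j in range(n):
            if j not in seen and (points[j][0] - xi) ** 2 + (points[j][1] - yi) ** 2 <= r * r:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

random.seed(1)
n = 200
pts = [(random.random(), random.random()) for _ in range(n)]
r_crit = math.sqrt(math.log(n) / (math.pi * n))

# Well above the threshold the network should connect; well below it,
# nodes are overwhelmingly likely to be isolated.
print(is_connected(pts, 3.0 * r_crit))
print(is_connected(pts, 0.2 * r_crit))
```

At 3x the critical range each node has dozens of expected neighbors, while at 0.2x the expected degree is well below one, which is why the two cases separate so sharply.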