The Topological Processor for the future ATLAS Level-1 Trigger: from design to commissioning
The ATLAS detector at the LHC will require a trigger system to efficiently select
events down to a manageable event storage rate of about 400 Hz. By 2015 the LHC
instantaneous luminosity will be increased up to 3 x 10^34 cm^-2 s^-1, which
represents an unprecedented challenge for the ATLAS trigger system. To
cope with the higher event rate and efficiently select events relevant from a
physics point of view, a new element will be included in the Level-1 trigger
scheme after 2015: the Topological Processor (L1Topo). The L1Topo system,
currently under development at CERN, will initially consist of an ATCA crate and two
L1Topo modules. A high-density opto-electrical converter (AVAGO miniPOD) drives up
to 1.6 Tb/s of data from the calorimeter and muon detectors into two high-end
FPGAs (Virtex-7 690), to be processed in about 200 ns. The design has been
optimized to guarantee excellent signal integrity of the high-speed links and
low-latency data transmission on the Real Time Data Path (RTDP). L1Topo
receives data in a standalone protocol from the calorimeters and muon detectors
and processes them with several topological algorithms implemented in VHDL. These
algorithms perform geometrical cuts and correlations, and calculate complex
observables such as the invariant mass. The output of these topological cuts is
sent to the Central Trigger Processor. This talk focuses on the high-density
design characteristics of L1Topo, which allow several hundred optical links
(up to 13 Gb/s each) to be processed using ordinary PCB material. Test
results obtained on the L1Topo prototypes to characterize the high-speed links
(eye diagrams, bit error rate, margin analysis) and the logic resource
utilization of the algorithms are discussed.
Comments: 5 pages, 6 figures
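For two massless trigger objects, the invariant mass computed by such topological algorithms reduces to m^2 = 2 pT1 pT2 (cosh(Δη) − cos(Δφ)). A minimal numerical sketch of that standard formula (the L1Topo implementation itself is VHDL firmware on the FPGAs, not Python):

```python
import math

def invariant_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless objects from transverse momentum
    and angular coordinates:
        m^2 = 2 * pT1 * pT2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))
    """
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(max(m2, 0.0))

# Two back-to-back 50 GeV objects at the same pseudorapidity:
# cosh(0) - cos(pi) = 2, so m = sqrt(2 * 50 * 50 * 2) = 100 GeV
print(invariant_mass(50.0, 0.0, 0.0, 50.0, 0.0, math.pi))  # → 100.0
```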
Conceptual design of an on-board optical processor with components
The specification of components for a spacecraft on-board optical processor was investigated, covering a space-oriented application of optical data processing and certain aspects of optical correlators. The investigation confirmed that real-time optical processing has made significant advances over the past few years, but that critical components still require further development for use in an on-board optical processor. The devices evaluated were the coherent light valve, the readout optical modulator, the liquid crystal modulator, and the image-forming light modulator.
Acousto-optic signal processors for transmission and reception of phased-array antenna signals
Novel acousto-optic processors for control and signal processing in phased-array antennas are presented. These processors can operate in both the antenna transmit and receive modes. An experimental acousto-optic processor is demonstrated in the laboratory. This optical technique replaces all the phase-shifting devices required in electronically controlled phased-array antennas.
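The phase-shifting devices such a processor replaces implement, for a uniform linear array, the progressive phase profile phi_n = 2*pi*n*d*sin(theta)/lambda. A quick numerical sketch of that profile (the array geometry and function name here are illustrative assumptions, not from the paper):

```python
import math

def steering_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Per-element phase shifts (radians) steering a uniform linear
    array's main beam to steer_deg off broadside:
        phi_n = 2*pi * n * d * sin(theta) / wavelength
    """
    theta = math.radians(steer_deg)
    return [2.0 * math.pi * n * spacing_m * math.sin(theta) / wavelength_m
            for n in range(n_elements)]

# Half-wavelength spacing, beam steered 30 degrees off broadside:
# the element-to-element phase step is pi * sin(30°) = pi/2
phases = steering_phases(4, 0.5, 1.0, 30.0)
```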
Adaptive acoustooptic filter
A new adaptive filter utilizing acoustooptic devices in a space-integrating architecture is described. Two configurations are presented; one of them, suitable for signal estimation, is shown to approximate the Wiener filter, while the other, suitable for detection, is shown to approximate the matched filter.
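The Wiener filter that the signal-estimation configuration approximates has a simple frequency-domain form, H(f) = S(f) / (S(f) + N(f)). A digital sketch of that reference behavior (assuming the signal and noise power spectra are known, as the Wiener filter requires; this illustrates the ideal filter, not the acoustooptic hardware):

```python
import numpy as np

def wiener_filter(observed, signal_psd, noise_psd):
    """Frequency-domain Wiener estimate of a signal in additive noise:
    apply H(f) = S(f) / (S(f) + N(f)) to the FFT of the observation.
    """
    H = signal_psd / (signal_psd + noise_psd)
    return np.real(np.fft.ifft(H * np.fft.fft(observed)))

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 8 * t / n)          # narrowband signal
noisy = clean + 0.5 * rng.standard_normal(n)   # additive white noise

# Idealized spectra, assumed known a priori
signal_psd = np.abs(np.fft.fft(clean)) ** 2 / n
noise_psd = np.full(n, 0.25)                   # white noise variance 0.25

estimate = wiener_filter(noisy, signal_psd, noise_psd)
```

Because the signal energy sits in two FFT bins, the filter passes those bins nearly unchanged and suppresses everything else, so the estimate's error is far below that of the raw observation.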
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms, and compare their performance metrics. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing.
Comments: 28 pages, 19 figures
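The photonic neurons described realize, in the optical domain, the familiar weighted-sum-and-nonlinearity neuron model, with tuned optical weights and a nonlinear electro-optic element. A plain numerical sketch of that abstract model (an illustration of the mathematics, not of any specific photonic device):

```python
import numpy as np

def neuron(inputs, weights, bias=0.0):
    """Weighted sum followed by a saturating nonlinearity -- the
    abstract operation a photonic neuron implements physically
    (optical weighting, then a nonlinear transfer function).
    """
    return np.tanh(np.dot(weights, inputs) + bias)

out = neuron(np.array([0.5, -0.2, 0.8]), np.array([1.0, 0.5, -0.3]))
```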
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
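Each event carries a timestamp, pixel location, and polarity, as described above; a common first processing step is to accumulate the events in a time window into a signed frame. A minimal sketch (the tuple layout and window semantics here are illustrative assumptions, not a standard event-camera API):

```python
import numpy as np

def events_to_frame(events, width, height, t_start, t_end):
    """Accumulate events (t, x, y, polarity) falling in [t_start, t_end)
    into a signed 2D histogram: +1 per ON event, -1 per OFF event.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, pol in events:
        if t_start <= t < t_end:
            frame[y, x] += 1 if pol > 0 else -1
    return frame

# Two ON events at pixel (2, 1), one OFF at (0, 0); window [0, 10)
events = [(1.0, 2, 1, +1), (2.0, 2, 1, +1), (3.0, 0, 0, -1), (12.0, 0, 0, +1)]
frame = events_to_frame(events, width=4, height=3, t_start=0.0, t_end=10.0)
# frame[1, 2] == 2 and frame[0, 0] == -1; the t = 12 event falls outside
```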
Hardware for digitally controlled scanned probe microscopes
The design and implementation of a flexible and modular digital control and data acquisition system for scanned probe microscopes (SPMs) is presented. The measured performance of the system shows it to be capable of 14-bit data acquisition at a 100-kHz rate and a full 18-bit output resolution, resulting in less than 0.02-Å rms position noise while maintaining a scan range in excess of 1 µm in both the X and Y dimensions. This level of performance achieves the goal of making the noise of the microscope control system an insignificant factor for most experiments. The adaptation of the system to various types of SPM experiments is discussed. Advances in audio electronics and digital signal processors have made the construction of such high-performance systems possible at low cost.
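The quoted figures can be cross-checked with a line of arithmetic: an 18-bit output spanning a 1-µm scan range gives a step size of about 0.038 Å, which places the quoted 0.02-Å rms noise below a single DAC step. As a quick sketch:

```python
# Step size (LSB) of an 18-bit DAC spanning a 1-micrometre scan range
scan_range_angstrom = 1e-6 * 1e10   # 1 um = 10,000 Angstrom
lsb = scan_range_angstrom / 2**18   # Angstrom per output code
print(round(lsb, 4))  # → 0.0381
```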
Synthetic Aperture Radar (SAR) data processing
The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and technology necessary to implement the SAR data processors used in both general-purpose and dedicated imaging systems was addressed.
A parallel implementation of a multisensor feature-based range-estimation method
There are many proposed vision-based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, will require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten frames per second to thirty or more frames per second, depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.
Waveform considerations in space-variant optical processors
The use of coded waveforms in space-variant optical signal processors using coordinate transformations is considered. It is shown that nonlinear transmitted coded signals must be used with such a processor and that this results in novel waveform design and system approaches for radar and communications.