A review of advances in pixel detectors for experiments with high rate and radiation
The Large Hadron Collider (LHC) experiments ATLAS and CMS have established
hybrid pixel detectors as the instrument of choice for particle tracking and
vertexing in high rate and radiation environments, as they operate close to the
LHC interaction points. With the High Luminosity-LHC upgrade now in sight, for
which the tracking detectors will be completely replaced, new generations of
pixel detectors are being devised. They have to address enormous challenges in
terms of data throughput and radiation levels, ionizing and non-ionizing, that
harm the sensing and readout parts of pixel detectors alike. Advances in
microelectronics and microprocessing technologies now enable large scale
detector designs with unprecedented performance in measurement precision (space
and time), radiation hard sensors and readout chips, hybridization techniques,
lightweight supports, and fully monolithic approaches to meet these challenges.
This paper reviews the world-wide effort on these developments.
Comment: 84 pages with 46 figures. Review article. For submission to Rep. Prog. Phys.
Trends in Pixel Detectors: Tracking and Imaging
For large scale applications, hybrid pixel detectors, in which sensor and
read-out IC are separate entities, constitute the state of the art in pixel
detector technology to date. They have been developed and are starting to be used as
tracking detectors and also imaging devices in radiography, autoradiography,
protein crystallography and in X-ray astronomy. A number of trends and
possibilities for future applications in these fields, with improved
performance, less material, higher read-out speed, larger radiation tolerance,
and potential off-the-shelf availability, have appeared and are currently maturing.
Among them are monolithic or semi-monolithic approaches which do not require
complicated hybridization but come as single sensor/IC entities. Most of these
are presently still in the development phase waiting to be used as detectors in
experiments. The present state in pixel detector development including hybrid
and (semi-)monolithic pixel techniques and their suitability for particle
detection and for imaging, is reviewed.
Comment: 10 pages, 15 figures. Invited Review given at IEEE2003, Portland, Oct. 2003.
Development of CMOS pixel sensors for tracking and vertexing in high energy physics experiments
CMOS pixel sensors (CPS) represent a novel technological approach to building
charged particle detectors. CMOS processes make it possible to integrate the
sensing volume and the readout electronics in a single silicon die, enabling
sensors with a small pixel pitch and a low material budget per layer. These
characteristics make CPS an attractive option for
vertexing and tracking systems of high energy physics experiments. Moreover,
because CPS are manufactured in mass-production industrial CMOS processes,
their fabrication cost can be significantly reduced in comparison with more
standard semiconductor technologies. However, the
attainable performance level of the CPS in terms of radiation hardness and
readout speed is mostly determined by the fabrication parameters of the CMOS
processes available on the market rather than by the CPS intrinsic potential.
The continuous evolution of commercial CMOS processes towards smaller feature
sizes and high-resistivity epitaxial layers leads to better radiation
hardness and allows the implementation of faster readout circuits. The
TowerJazz CMOS process, one of the most relevant examples, recently
became of interest for several future detector projects. The most
imminent of these projects is an upgrade of the Inner Tracking System (ITS) of
the ALICE detector at LHC. It will be followed by the Micro-Vertex Detector
(MVD) of the CBM experiment at FAIR. Other experiments, such as ILD, consider
CPS as one of the viable options for flavour tagging and tracking sub-systems.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those demanding low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
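The per-pixel event model described above (each event encodes a timestamp, a pixel location, and the sign of a brightness change) can be illustrated with a toy simulator. The contrast threshold C and all names here are illustrative assumptions, not taken from any specific sensor:

```python
import numpy as np

def generate_events(prev_log, curr_log, t, C=0.2):
    """Emit events (t, x, y, polarity) wherever the per-pixel
    log-brightness change crosses the contrast threshold C.
    Simplified model: at most one event per pixel per frame pair."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= C)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(t, int(x), int(y), int(p)) for x, y, p in zip(xs, ys, polarity)]

# Toy usage: a single pixel brightens past the threshold.
prev = np.log(np.full((2, 2), 100.0))
curr = prev.copy()
curr[0, 1] = np.log(150.0)  # log-change = ln(1.5) ~ 0.405 > C
events = generate_events(prev, curr, t=1e-6)  # -> [(1e-06, 1, 0, 1)]
```

A real sensor emits events asynchronously per pixel rather than by comparing frame pairs; this frame-differencing sketch only captures the thresholded log-brightness logic.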
Advances on CMOS image sensors
This paper offers an introduction to the technological advances of image sensors designed using
complementary metal-oxide-semiconductor (CMOS) processes over the last decades. We review
some of those technological advances, examine potential disruptive growth directions for CMOS
image sensors, and propose ways to achieve them. Those advances include breakthroughs in
image quality such as resolution, capture speed, light sensitivity and color detection, and advances in
computational imaging. The current trend is to push the innovation efforts even further, as the
market requires higher-resolution, higher-speed, lower-power and, above all, lower-cost
sensors. Although CMOS image sensors are currently used in many different applications, from
consumer electronics to defense to medical diagnosis, product differentiation is becoming both a requirement and
a difficult goal for any image sensor manufacturer. The unique properties of the CMOS process allow the
integration of several signal processing techniques and are driving the impressive advancement of
computational imaging. With this paper, we offer a comprehensive review of methods,
techniques, designs and fabrication of CMOS image sensors that have impacted or might impact
image sensor applications and markets.
Smart-Pixel Cellular Neural Networks in Analog Current-Mode CMOS Technology
This paper presents a systematic approach to design CMOS chips with concurrent picture acquisition and processing capabilities. These chips consist of regular arrangements of elementary units, called smart pixels. Light detection is made with vertical CMOS BJTs connected in a Darlington structure. Pixel smartness is achieved by exploiting the Cellular Neural Network paradigm [1], [2], incorporating at each pixel location an analog computing cell which interacts with those of nearby pixels. We propose a current-mode implementation technique and give measurements from two 16 x 16 prototypes in a single-poly double-metal CMOS n-well 1.6-µm technology. In addition to the sensory and processing circuitry, both chips incorporate light-adaptation circuitry for automatic contrast adjustment. They obtain smart-pixel densities up to 89 units/mm², with a power consumption down to 105 µW/unit and image processing times below 2 µs.
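The Cellular Neural Network paradigm referenced above couples each cell to its neighbors through the standard Chua-Yang state equation, dx/dt = -x + A*y + B*u + z, where A and B are 3x3 feedback and control templates and y is a piecewise-linear output. A minimal discrete-time sketch (forward Euler; the self-feedback thresholding template is an illustrative assumption, not the templates used on the chips):

```python
import numpy as np

def output(x):
    # Chua-Yang piecewise-linear output nonlinearity, saturating at +/-1
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def correlate3x3(img, k):
    # 3x3 neighborhood correlation with zero padding (boundary cells see 0)
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    # One forward-Euler step of dx/dt = -x + A*y + B*u + z
    y = output(x)
    return x + dt * (-x + correlate3x3(y, A) + correlate3x3(u, B) + z)

# Toy run: a self-feedback thresholding template drives pixel
# states to bipolar +/-1 outputs depending on their initial sign.
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], dtype=float)
B = np.zeros((3, 3))
x = np.array([[0.5, -0.5]])
u = np.zeros_like(x)
for _ in range(200):
    x = cnn_step(x, u, A, B, z=0.0)
y = output(x)  # -> approximately [[1, -1]]
```

On the chips described above this dynamics is realized in analog current-mode circuitry per pixel, so the whole array settles in parallel rather than by iterated digital steps.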
Infrastructure for Detector Research and Development towards the International Linear Collider
The EUDET-project was launched to create an infrastructure for developing and
testing new and advanced detector technologies to be used at a future linear
collider. The aim was to make experimentation and data analysis possible for
institutes that otherwise could not realize them due to lack of resources. The
infrastructure comprised an analysis and software network, and instrumentation
infrastructures for tracking detectors as well as for calorimetry.
Comment: 54 pages, 48 pictures.
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Fully autonomous miniaturized robots (e.g., drones) with artificial
intelligence (AI)-based visual navigation capabilities are extremely
challenging drivers of Internet-of-Things edge intelligence.
Visual navigation based on AI approaches such as deep neural networks (DNNs)
is becoming pervasive for standard-size drones, but is considered out of
reach for nano-drones with a size of a few centimeters. In this work, we
present the first (to the best of our knowledge) demonstration of a navigation
engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based
visual navigation. To achieve this goal we developed a complete methodology for
parallel execution of complex DNNs directly on-board resource-constrained
milliwatt-scale nodes. Our system is based on GAP8, a novel parallel
ultra-low-power computing platform, and a 27 g commercial, open-source
CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the
software mapping techniques that enable the state-of-the-art deep convolutional
neural network presented in [1] to be fully executed on-board within a strict 6
fps real-time constraint with no compromise in terms of flight results, while
all processing is done with only 64 mW on average. Our navigation engine is
flexible and can be used to span a wide performance range: at its peak
performance corner it achieves 18 fps while still consuming on average just
3.5% of the power envelope of the deployed nano-aircraft.Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication
in the IEEE Internet of Things Journal (IEEE IOTJ
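The headline figures in the abstract above (64 mW average at the 6 fps real-time constraint, 18 fps at the peak-performance corner) imply a per-frame energy budget. This back-of-envelope arithmetic is ours, not the paper's:

```python
# Back-of-envelope check of the numbers quoted in the abstract above.
# Derived quantities (energy per frame, speedup) are our own arithmetic.
avg_power_mw = 64.0   # average processing power (mW)
realtime_fps = 6.0    # real-time frame-rate constraint (fps)
peak_fps = 18.0       # peak-performance corner (fps)

# mW divided by frames/s gives mJ per processed frame.
energy_per_frame_mj = avg_power_mw / realtime_fps  # ~10.7 mJ/frame
speedup_at_peak = peak_fps / realtime_fps          # 3x faster at peak
```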