
    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, one that hosts a vast range of vital space-based systems supporting commercial industries, civil organisations, and national defence alike. The availability of this space resource is rapidly depleting due to the ever-growing presence of space debris and overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic, and essential to this task is the collection of accurate and timely observation data. Solving this problem is critical to ensuring that we can continue to utilise the space environment in a sustainable way, yet the associated tasks pose significant engineering challenges involving the detection and characterisation of faint, highly distant, and high-speed targets. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance space situational awareness tasks. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or 'Resident Space Objects', at low data rates, with high temporal resolution, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to build event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and high dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes demonstrate the viability of event-based cameras for tracking and space imaging tasks and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.
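    The thesis abstract does not include code, but the essence of event-based processing, reacting to each asynchronous (x, y, timestamp, polarity) event rather than to whole frames, can be sketched briefly. The sensor resolution, decay constant, detection threshold, and synthetic event stream below are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# Minimal, illustrative sketch of event-based target detection (not the
# thesis's algorithm): each event is (x, y, t_us, polarity). A per-pixel
# exponentially decaying activity map highlights pixels that keep firing,
# e.g. a star or a slowly drifting satellite, at very low latency.

WIDTH, HEIGHT = 640, 480           # assumed sensor resolution
TAU_US = 20_000.0                  # decay constant (20 ms), an assumption
DETECT_THRESHOLD = 5.0             # activity level that flags a candidate

activity = np.zeros((HEIGHT, WIDTH))        # decaying per-pixel activity
last_update_us = np.zeros((HEIGHT, WIDTH))  # time of last update per pixel

def process_event(x, y, t_us, polarity):
    """Update the activity map for one event; polarity is ignored here."""
    dt = t_us - last_update_us[y, x]
    # Decay the old activity, then add the new event's contribution.
    activity[y, x] = activity[y, x] * np.exp(-dt / TAU_US) + 1.0
    last_update_us[y, x] = t_us
    return activity[y, x] > DETECT_THRESHOLD

# Example: a synthetic burst of events from a slowly drifting point target.
rng = np.random.default_rng(0)
t = 0.0
for k in range(200):
    t += rng.exponential(500.0)               # ~2 kHz event rate
    x, y = 320 + k // 50, 240                 # target drifts one pixel / 50 events
    if process_event(x, y, t, polarity=1):
        print(f"candidate detection at ({x}, {y}) after {t:.0f} us")
        break
```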

    Efficient Monte Carlo Based Methods for Variability Aware Analysis and Optimization of Digital Circuits.

    Process variability is of increasing concern in modern nanometer-scale CMOS. This work explores the suitability of Monte Carlo based algorithms for efficient analysis and optimization of digital circuits under variability. Random-sampling Monte Carlo techniques incur a high computational cost because of the large sample size required to achieve a target accuracy, which motivates intelligent sample-selection techniques that reduce the number of samples. As these techniques depend on information about the system under analysis, they must be tailored to the specific application context. We propose efficient smart-sampling techniques for timing and leakage power consumption analysis of digital circuits. For timing analysis, we show that the proposed method requires, on average, 23.8X fewer samples than a random sampling approach to achieve comparable accuracy for the benchmark circuits studied. We further illustrate that the parallelism available in such techniques can be exploited on parallel machines, especially graphics processing units: SH-QMC implemented on multiple GPUs is twice as fast as a single STA run on a CPU for the benchmark circuits considered. Next, we study the possibility of using such statistical analysis information to optimize digital circuits under variability, for example to achieve minimum silicon area through gate sizing while meeting a timing constraint. Although several circuit optimization techniques have been proposed in the literature, it is not clear how much of their gain comes specifically from the use of statistical information. We therefore propose an effective lower-bound computation technique to enable efficient comparison of statistical design optimization techniques, and show that even techniques which use only limited statistical information can achieve results within 10% of the proposed lower bound. We conclude that future optimization research should shift its focus from using more statistical information to achieving more efficiency and parallelism to obtain speed-ups.
    Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78936/1/tvvin_1.pd
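    The abstract names SH-QMC but does not spell out the algorithm; the sketch below merely contrasts plain random Monte Carlo with scrambled Sobol quasi-Monte Carlo sampling on a toy statistical timing problem, the general family of techniques involved. The eight-parameter linear delay model and its sensitivities are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm, qmc

# Illustrative comparison of plain Monte Carlo vs. scrambled Sobol
# quasi-Monte Carlo for statistical timing, in the spirit of (but not
# identical to) the thesis's SH-QMC approach. The 8-stage toy delay model
# and its sensitivities are assumptions for demonstration only.

N_PARAMS = 8                            # independent process parameters (one per stage)
NOMINAL = np.full(N_PARAMS, 10.0)       # nominal stage delays (ps)
SENS = np.linspace(0.5, 2.0, N_PARAMS)  # delay sensitivity to each parameter

def path_delay(z):
    """Toy critical-path delay for standard-normal parameter vector(s) z."""
    return np.sum(NOMINAL + SENS * z, axis=-1)

def mc_estimate(n, seed=0):
    z = np.random.default_rng(seed).standard_normal((n, N_PARAMS))
    return path_delay(z).mean()

def qmc_estimate(n, seed=0):
    sobol = qmc.Sobol(d=N_PARAMS, scramble=True, seed=seed)
    u = sobol.random(n)                 # low-discrepancy points in [0, 1)^d
    z = norm.ppf(u)                     # map to standard-normal variates
    return path_delay(z).mean()

true_mean = NOMINAL.sum()
for n in (64, 256, 1024):               # powers of two suit Sobol sequences
    print(f"n={n:5d}  MC error={abs(mc_estimate(n) - true_mean):.4f}  "
          f"QMC error={abs(qmc_estimate(n) - true_mean):.4f}")
```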

    Fabrication and Characterisation of 3D Diamond Pixel Detectors With Timing Capabilities

    Diamond sensors provide a promising radiation-hard solution to the challenges posed by future experiments at hadron machines. A 3D geometry with thin columnar resistive electrodes orthogonal to the diamond surface, obtained by laser nanofabrication, is expected to provide significantly better time resolution than the extensively studied planar diamond sensors. We report on the development, production, and characterisation of innovative 3D diamond sensors achieving a 30% improvement in both space and time resolution with respect to sensors from the previous generation. This is the first complete characterisation of the time resolution of 3D diamond sensors, combining results from tests with lasers, beta rays, and high-energy particle beams. Plans and strategies for further improvement of the fabrication technology and readout systems are also discussed.
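    The paper's analysis details are not reproduced here, but a common way to quote a sensor's time resolution, relevant to the characterisation described above, is to take the width of the time-difference distribution against a reference device and subtract the reference's known jitter in quadrature. The numbers in the sketch below are made up for illustration.

```python
import numpy as np

# Generic illustration (not taken from the paper) of how a single-sensor
# time resolution is often extracted: measure the spread of time differences
# between the device under test (DUT) and a reference counter, then subtract
# the reference's known jitter in quadrature. All numbers are assumptions.

rng = np.random.default_rng(1)
N_EVENTS = 10_000
SIGMA_DUT_PS = 95.0      # "true" DUT resolution used to fake the data
SIGMA_REF_PS = 40.0      # known resolution of the reference detector

# Synthetic arrival-time differences (DUT - reference) for coincident hits.
dt = rng.normal(0.0, SIGMA_DUT_PS, N_EVENTS) - rng.normal(0.0, SIGMA_REF_PS, N_EVENTS)

sigma_dt = dt.std(ddof=1)                            # width of the Δt spectrum
sigma_dut = np.sqrt(sigma_dt**2 - SIGMA_REF_PS**2)   # quadrature subtraction
print(f"measured sigma(Δt) = {sigma_dt:.1f} ps -> DUT resolution ≈ {sigma_dut:.1f} ps")
```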

    The Time Evolution of Hadronic Showers and the T3B Experiment (original title: Die Zeitentwicklung hadronischer Schauer und das T3B Experiment)

    The Compact Linear Collider (CLIC) is a future linear e+e- collider operating at a centre-of-mass energy of up to 3 TeV and with a particle-bunch collision rate of up to 2 GHz, which poses challenging requirements on the detector system. The accumulation of background events, such as gamma gamma -> hadrons interactions resulting from Beamstrahlung, must be minimised through precise time stamping in all subdetector systems. In the event reconstruction, the energy depositions within the calorimeters will be used to assign events precisely to a small set of consecutive bunch crossings. The finite time evolution of hadronic showers, on the other hand, requires an extended integration time to achieve a satisfactory energy resolution in the calorimeters, a resolution that is also deteriorated by the leakage of shower particles. Tungsten is foreseen as a dense absorber material to contain the showers within the calorimeter, but the time evolution of hadron showers in such a calorimeter has not yet been sufficiently explored. In the context of this thesis, the T3B experiment (short for Tungsten Timing Test Beam) was designed and constructed. It is optimised to measure the time development and the contribution of delayed energy depositions within hadronic cascades. The T3B experiment consists of 15 scintillator cells assembled in a strip; the scintillation light generated within the cells is detected by novel silicon photomultipliers (SiPMs) whose signals are read out with fast oscilloscopes providing a sampling rate of 1.25 GHz. This strip was positioned behind two different calorimeter prototypes of the CALICE collaboration, equipped with a tungsten and, for comparison, a steel absorber structure. T3B was part of the CALICE test beam campaign of 2010/2011 at the PS and SPS at CERN and acquired data on hadronic showers in an energy range of 2-300 GeV. A data acquisition software optimised for test beam operation was developed from scratch. With the development and application of a novel waveform decomposition algorithm, the arrival times of individual photons on the light sensor could be determined with sub-nanosecond precision; embedded in a custom calibration and analysis framework, this allows for a precise study of shower timing at the nanosecond level. Through a detailed study of the time distribution of energy depositions, the T3B experiment demonstrated an increased contribution of the delayed shower component in tungsten with respect to steel. In addition, it was observed that the relative importance of late energy depositions increases with radial distance from the shower axis, an increase that is substantially more pronounced in tungsten than in steel. It could further be shown that the standard hadronic shower model QGSP_BERT, used for shower simulations at the LHC as well as for most CLIC physics studies, systematically overestimates the delayed shower evolution, while high-precision extensions using precise neutron tracking models reproduce the shower timing adequately. No significant difference in the delayed shower contribution was observed for incident particle energies in the range between 60 GeV and 180 GeV.
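    The waveform decomposition algorithm itself is not given in the abstract; the sketch below shows one simple way such a decomposition can work, iteratively locating the largest remaining peak in the digitised SiPM trace and subtracting a single-photoelectron template. The template shape, noise level, amplitudes, and thresholds are assumptions, with only the 1.25 GHz sampling rate taken from the text.

```python
import numpy as np

# Hedged sketch of iterative single-photoelectron (p.e.) template subtraction,
# one simple form of SiPM waveform decomposition. The 1-p.e. pulse is modelled
# as a pure exponential decay from its peak, so the peak sample approximates
# the photon arrival time. Shapes and thresholds are illustrative assumptions.

DT_NS = 0.8                                       # sample spacing for 1.25 GS/s
TEMPLATE = np.exp(-np.arange(40) * DT_NS / 15.0)  # normalized 1-p.e. pulse, peak = 1 at index 0

def decompose(waveform, amp_1pe=1.0, threshold=0.5, max_photons=200):
    """Return estimated photon arrival times (ns) by template subtraction."""
    residual = waveform.astype(float).copy()
    times = []
    for _ in range(max_photons):
        i = int(np.argmax(residual))
        if residual[i] < threshold * amp_1pe:
            break                                 # nothing above ~0.5 p.e. left
        times.append(i * DT_NS)
        end = min(i + len(TEMPLATE), len(residual))
        residual[i:end] -= amp_1pe * TEMPLATE[:end - i]
    return sorted(times)

# Example: two single photons 12 ns apart on a 160 ns trace with a little noise.
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 0.03, 200)
for t0 in (50.0, 62.0):
    i0 = int(t0 / DT_NS)
    trace[i0:i0 + len(TEMPLATE)] += TEMPLATE
print("estimated photon times (ns):", decompose(trace))
```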

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process data within and across different abstraction levels via automated learning algorithms, which in turn improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    2.5D Chiplet Architecture for Embedded Processing of High Velocity Streaming Data

    This dissertation presents an energy-efficient 2.5D chiplet-based architecture for real-time probabilistic processing of high-velocity sensor data from an autonomous real-time ubiquitous surveillance imaging system. The work addresses problems at all levels of description. At the lowest, physical level, new standard cell libraries have been developed for ultra-low-voltage CMOS synthesis, together with custom SRAM memory blocks and mixed-signal physical true random number generators based on the perturbation of Sigma-Delta structures using random telegraph noise (RTN) in single-transistor devices. At the chip-level architecture, an innovative compact bufferless switched-circuit mesh network-on-chip (NoC) was designed for this chiplet-based solution, capable of reaching very high throughput (1.6 Tbps) with finite packet delivery delay and free from packet dropping, deadlocks, and livelocks. Additionally, a second NoC connecting the processors in the network was implemented based on token rings, allowing access to external DDR memory. Furthermore, a new clock tree distribution network and a wide-bandwidth DRAM physical interface have been designed to address the data flow requirements within and across chiplets. At the algorithm and representation levels, the online Change Point Detection (CPD) algorithm has been implemented for on-line learning of background-foreground segmentation. Instead of using a traditional binary representation of numbers, this architecture relies on unconventional signal processing using a bio-inspired (spike-based) unary representation, where numbers are represented as stochastic streams of Bernoulli random variables. This representation allows probabilistic algorithms to be executed in a native architecture with precision on demand: if more accuracy is required, more computational time and power can be allocated. The SoC chiplet architecture has been extensively simulated and validated using state-of-the-art CAD methodology and has been submitted for fabrication in a dedicated 55 nm GF CMOS technology wafer run. Experimental results from test chips fabricated in the same technology are also presented.
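    The unary, spike-based number representation mentioned above can be illustrated with a short sketch: a value is encoded as the rate of a Bernoulli bit stream, multiplication reduces to a bitwise AND of independent streams, and accuracy improves with stream length, which is what "precision on demand" refers to. The stream lengths and operand values below are arbitrary; this is an illustration of the general principle, not the dissertation's hardware.

```python
import numpy as np

# Minimal sketch of stochastic (unary, Bernoulli-stream) computing: a value
# p in [0, 1] is encoded as the rate of a random bit stream, multiplication of
# independent streams is a bitwise AND, and precision grows with stream length
# ("precision on demand"). Illustrative only; not the dissertation's design.

rng = np.random.default_rng(3)

def encode(p, n_bits):
    """Encode p in [0, 1] as a Bernoulli bit stream of length n_bits."""
    return rng.random(n_bits) < p

def decode(stream):
    """Estimate the encoded value as the fraction of ones in the stream."""
    return stream.mean()

a, b = 0.6, 0.3
for n_bits in (128, 1024, 16384):
    product = encode(a, n_bits) & encode(b, n_bits)   # AND multiplies the rates
    print(f"{n_bits:6d} bits: a*b ≈ {decode(product):.4f} (exact {a*b:.4f})")
```

    Longer streams trade computation time and energy for lower estimation variance, mirroring the accuracy-versus-resources trade-off described in the abstract.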

    Developing Clinically Orientated Diffusion-Weighted Magnetic Resonance Imaging of the Brachial Plexus in Adults

    Introduction (Part 1): The nerves of the brachial plexus control movement and feeling in the upper limb. The most common form of traumatic brachial plexus injury (BPI) is root avulsion. Morphological magnetic resonance imaging (MRI) is used clinically to diagnose root avulsion, but its accuracy remains unclear. Diffusion MRI (dMRI) techniques characterise tissue microstructure and generate proxy measures of nerve 'health' which are sensitive to myelination, axon diameter, fibre density, and organisation. Part 2: Chapter 1 describes a meta-analysis of 11 studies showing that conventional (morphological) MRI has modest diagnostic accuracy for diagnosing root avulsion in adults with BPI; this is the rationale for developing dMRI. Part 2: Chapter 2 presents a clinically viable dMRI sequence which is sensitive to established root avulsion in adults, highlighting uncertainties which warrant investigation before the technique is applied to acutely injured patients. Part 2: Chapter 3 is concerned with modelling the geometry of the brachial plexus in fixed cadavers to inform the step angle in dMRI processing. Part 2: Chapter 4 is a meta-analysis of 9 studies which defines the normal fractional anisotropy and mean diffusivity values in the healthy adult brachial plexus and how experimental factors influence dMRI parameter estimates. Part 2: Chapter 5 explores the effect of fractional anisotropy thresholding on deterministic tractography in the brachial plexus, identifying areas of uncertainty in the intrathecal and intraforaminal regions. Part 2: Chapter 6 shows that the two most common pre-processing pipelines worldwide generate important differences in dMRI parameters and tractograms. Part 2: Chapter 7 deploys high b-value multi-shell dMRI to show that up to 44% of the brachial plexus has multiple fibre orientations.
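    For readers unfamiliar with the two dMRI parameters pooled in Chapter 4, mean diffusivity and fractional anisotropy are standard functions of the diffusion-tensor eigenvalues; the short sketch below shows those textbook formulas. The example eigenvalues are illustrative and are not values reported in the thesis.

```python
import numpy as np

# Standard definitions of mean diffusivity (MD) and fractional anisotropy (FA)
# from the three diffusion-tensor eigenvalues. The example eigenvalues below
# are illustrative only, not values reported in the thesis.

def md_fa(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                       # MD = (λ1 + λ2 + λ3) / 3
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# A roughly nerve-like tensor: strong diffusion along the fibre direction.
md, fa = md_fa([1.7e-3, 0.4e-3, 0.4e-3])                  # units: mm^2/s
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")
```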

    ACS Without an Attitude

    The book ACS Without an Attitude is an introduction to spacecraft attitude control systems. It is based on a series of lectures that Dr. Hallock presented in the early 2000s to members of the GSFC flight software branch, the target audience being flight software engineers (developers and testers) who are fairly new to the field and desire an introductory understanding of spacecraft attitude determination and control.

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration in integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, converter specifications in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement, whether for certain applications or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, loom large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages, before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate; however, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it develops circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter and to enhance testing and debugging capabilities, so that errors can be detected dynamically, faults isolated and confined, and errors recovered from and compensated continuously. The power efficiency achievable at high resolution in a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters; consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process, and such defects can cause various types of malfunction depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs of testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT, and BIST methodologies.
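    As a companion to the description of multi-step conversion above, the following behavioural sketch models an ideal two-step ADC: a coarse stage quantizes the sample, the residue is amplified and quantized by a fine stage, and the two codes are combined digitally. The 6+6-bit split, 1 V full scale, and error-free stages are assumptions for illustration; real designs rely on calibration (as the thesis does) and typically on inter-stage redundancy to tolerate stage errors.

```python
import numpy as np

# Behavioral sketch of an ideal two-step ADC: a coarse stage quantizes the
# sample, a DAC reconstructs it, the residue is amplified and quantized by a
# fine stage, and the two codes are combined digitally. Resolutions and the
# 1 V full scale are illustrative, not the thesis prototype's design values.

N_COARSE, N_FINE = 6, 6             # 6 + 6 bits -> 12-bit overall resolution
VFS = 1.0                           # full-scale input range [0, VFS)

def two_step_adc(vin):
    """Convert one sample in [0, VFS) to a 12-bit code with ideal stages."""
    lsb_coarse = VFS / 2**N_COARSE
    coarse = min(int(vin / lsb_coarse), 2**N_COARSE - 1)   # coarse flash ADC
    residue = (vin - coarse * lsb_coarse) * 2**N_COARSE    # amplified residue in [0, VFS)
    fine = min(int(residue / (VFS / 2**N_FINE)), 2**N_FINE - 1)
    return (coarse << N_FINE) | fine                       # digital combination

# Ideal transfer check: codes step monotonically across the input range.
codes = [two_step_adc(v) for v in np.linspace(0, VFS, 4097, endpoint=False)]
print(min(codes), max(codes), len(set(codes)))   # 0 4095 4096 -> all 12-bit codes hit
```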
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is employed that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancement and calibration techniques reduces the need for expensive technologies with special fabrication steps; indeed, the extra cost of digital processing is normally affordable, as submicron mixed-signal technologies allow efficient use of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and needs neither correlation function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
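    The adaptive filtering algorithm used for error estimation is not named in the abstract, but the properties listed, a handful of operations per iteration and no correlation functions or matrix inversions, are characteristic of least-mean-squares (LMS) style updates. The sketch below shows a generic LMS filter identifying an assumed four-tap error model; it illustrates that class of algorithm, not the thesis's actual calibration loop.

```python
import numpy as np

# Generic least-mean-squares (LMS) adaptive filter sketch. The abstract's
# description (few operations per iteration, no correlation functions or
# matrix inversions) matches this class of algorithm, but the exact error
# model and filter used in the thesis are not specified here; this is only
# an illustration of the update rule.

rng = np.random.default_rng(4)
N_TAPS, MU, N_SAMPLES = 4, 0.05, 5000
true_w = np.array([0.9, -0.3, 0.15, -0.05])         # assumed "unknown" error model

w = np.zeros(N_TAPS)                                # filter estimate to adapt
x = rng.standard_normal(N_SAMPLES)                  # excitation signal
for n in range(N_TAPS, N_SAMPLES):
    x_vec = x[n - N_TAPS:n][::-1]                   # most recent samples first
    desired = true_w @ x_vec + rng.normal(0, 0.01)  # observed (noisy) output
    error = desired - w @ x_vec                     # instantaneous error
    w += MU * error * x_vec                         # LMS update: O(N_TAPS) work per step

print("estimated taps:", np.round(w, 3), " true taps:", true_w)
```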