
    Exploring space situational awareness using neuromorphic event-based cameras

    The orbits around Earth are a limited natural resource, and one that hosts a vast range of vital space-based systems used by commercial industry, civil organisations, and national defence internationally. The availability of this resource is rapidly depleting due to the ever-growing presence of space debris and overcrowding, especially in the limited and highly desirable slots in geosynchronous orbit. The field of Space Situational Awareness encompasses tasks aimed at mitigating these hazards to on-orbit systems through the monitoring of satellite traffic. Essential to this task is the collection of accurate and timely observation data. This thesis explores the use of a novel sensor paradigm to optically collect and process sensor data to enhance space situational awareness tasks. Solving this problem is critical to ensuring that we can continue to utilise the space environment sustainably. However, these tasks pose significant engineering challenges, involving the detection and characterisation of faint, highly distant, and high-speed targets. Recent advances in neuromorphic engineering have led to the availability of high-quality neuromorphic event-based cameras that provide a promising alternative to the conventional cameras used in space imaging. These cameras offer the potential to improve the capabilities of existing space tracking systems and have been shown to detect and track satellites, or 'Resident Space Objects', at low data rates, high temporal resolutions, and in conditions typically unsuitable for conventional optical cameras. This thesis presents a thorough exploration of neuromorphic event-based cameras for space situational awareness tasks and establishes a rigorous foundation for event-based space imaging. The work conducted in this project demonstrates how to enable event-based space imaging systems that serve the goals of space situational awareness by providing accurate and timely information on the space domain. By developing and implementing event-based processing techniques, the asynchronous operation, high temporal resolution, and high dynamic range of these novel sensors are leveraged to provide low-latency target acquisition and rapid reaction to challenging satellite tracking scenarios. The algorithms and experiments developed in this thesis study the properties and trade-offs of event-based space imaging and provide comparisons with traditional observing methods and conventional frame-based sensors. The outcomes demonstrate the viability of event-based cameras for tracking and space imaging tasks, and thereby contribute to the growing efforts of the international space situational awareness community and to the development of event-based technology in astronomy and space science applications.
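    The detection side of event-based space imaging can be pictured with a toy example. The sketch below is a minimal illustration of my own, not an algorithm from the thesis: events are accumulated over a short window and pixels whose counts stand far above the noise floor are flagged as point-target candidates. The event tuple format and the threshold rule are assumptions.

```python
# Minimal sketch of event-based point-target detection (illustration only).
# Events are assumed to be (t, x, y, polarity) tuples from an event camera.
import numpy as np

def accumulate_events(events, width, height, t_start, t_end):
    """Accumulate per-pixel event counts over a short time window."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, _pol in events:
        if t_start <= t < t_end:
            frame[y, x] += 1
    return frame

def detect_candidates(frame, k=5.0):
    """Flag pixels whose event count exceeds mean + k*std (noise floor)."""
    mu, sigma = frame.mean(), frame.std()
    ys, xs = np.nonzero(frame > mu + k * max(sigma, 1e-6))
    return list(zip(xs.tolist(), ys.tolist()))

# Toy usage: background noise events plus a persistent target at (40, 25).
rng = np.random.default_rng(0)
noise = [(rng.uniform(0, 1e5), rng.integers(0, 64), rng.integers(0, 48), 1)
         for _ in range(2000)]
target = [(rng.uniform(0, 1e5), 40, 25, 1) for _ in range(300)]
frame = accumulate_events(noise + target, 64, 48, 0, 1e5)
print(detect_candidates(frame))  # expect [(40, 25)]
```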

    Shift Estimation Algorithm for Dynamic Sensors With Frame-to-Frame Variation in Their Spectral Response

    This study is motivated by the emergence of a new class of tunable infrared spectral-imaging sensors that offer the ability to dynamically vary the sensor's intrinsic spectral response from frame to frame in an electronically controlled fashion. A manifestation of this is when a sequence of dissimilar spectral responses is periodically realized, whereby in every period of acquired imagery, each frame is associated with a distinct spectral band. Traditional scene-based global shift estimation algorithms are not applicable to such spectrally heterogeneous video sequences, as a pixel value may change from frame to frame as a result of both global motion and varying spectral response. In this paper, a novel algorithm is proposed and examined to fuse a series of coarse global shift estimates between periodically sampled pairs of nonadjacent frames to estimate motion between consecutive frames; each pair corresponds to two nonadjacent frames of the same spectral band. The proposed algorithm outperforms three alternative methods, with the average error being one half of that obtained by using an equal-weights version of the proposed algorithm, one fourth of that obtained by using a simple linear interpolation method, and one twentieth of that obtained by using a naïve correlation-based direct method.
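    Read as a recipe, the abstract implies the following structure, sketched here under assumptions of my own (1-D shifts, least-squares fusion with a mild smoothness prior; the paper's actual weighting scheme is not reproduced): each same-band measurement between frames k and k+P is the sum of P consecutive per-frame shifts, and the per-frame shifts are recovered by solving the resulting linear system, which approximately matches the true motion.

```python
# Hedged sketch: fuse coarse same-band pair shifts into per-frame shifts.
import numpy as np

def fuse_shifts(pair_shifts, period, n_frames, smooth=1e-2):
    """pair_shifts[k] ~ total shift between frames k and k+period.
    Returns per-frame shifts d[j] between frames j and j+1 (1-D case)."""
    m = np.asarray(pair_shifts, dtype=float)
    n_unknowns = n_frames - 1
    A = np.zeros((m.size, n_unknowns))
    for k in range(m.size):
        A[k, k:k + period] = 1.0   # m_k = d_k + d_{k+1} + ... + d_{k+P-1}
    # The plain system is underdetermined by (period - 1) degrees of
    # freedom, so a mild smoothness prior on consecutive shifts is added.
    D = np.diff(np.eye(n_unknowns), axis=0)    # first-difference operator
    A_aug = np.vstack([A, smooth * D])
    m_aug = np.concatenate([m, np.zeros(D.shape[0])])
    d, *_ = np.linalg.lstsq(A_aug, m_aug, rcond=None)
    return d

# Toy usage: 10 frames, 3 spectral bands, smoothly varying per-frame motion.
true_d = 0.5 + 0.3 * np.sin(np.arange(9) / 2.0)
P, N = 3, 10
measured = [true_d[k:k + P].sum() for k in range(N - P)]
print(np.round(fuse_shifts(measured, P, N), 2))   # close to true_d
print(np.round(true_d, 2))
```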

    Flat panel display signal processing

    Televisions (TVs) have shown considerable technological progress since their introduction almost a century ago. Starting out as small, dim, and monochrome screens in wooden cabinets, TVs have evolved into large, bright, and colorful displays in plastic boxes. It took until the turn of the century, however, for the TV to become like a 'picture on the wall'. This happened when the bulky Cathode Ray Tube (CRT) was replaced with thin and light-weight Flat Panel Displays (FPDs), such as Liquid Crystal Displays (LCDs) or Plasma Display Panels (PDPs). However, the TV system and transmission formats are still strongly coupled to CRT technology, whereas FPDs use very different principles to convert the electronic video signal into visible images. These differences result in image artifacts that the CRT never had, but at the same time provide opportunities to improve FPD image quality beyond that of the CRT.

    This thesis presents an analysis of the properties of flat panel displays, their relation to image quality, and video signal processing algorithms to improve the quality of the displayed images. To analyze different types of displays, the display signal chain is described using basic principles common to all displays. The main function of a display is to create visible images (light) from an electronic signal (video), requiring display chain functions such as the opto-electronic effect, spatial and temporal addressing and reconstruction, and color synthesis. The properties of these functions are used to describe CRTs, LCDs, and PDPs, showing that these displays perform the same functions using different implementations. These differences have a number of consequences that are further investigated in this thesis. Spatial and temporal aspects, corresponding to 'static' and 'dynamic' resolution respectively, are covered in detail. Moreover, video signal processing is an essential part of the display signal chain for FPDs, because the display format will in general no longer match the source format. This thesis investigates how specific FPD properties, especially those related to spatial and temporal addressing and reconstruction, affect the video signal processing chain.

    A model of the display signal chain is presented and applied to analyze FPD spatial properties in relation to static resolution. In particular, the effect of the color subpixels, which enable color image reproduction in FPDs, is analyzed. The perceived display resolution is strongly influenced by the color subpixel arrangement; when taken into account in the signal chain, this improves the perceived resolution on FPDs, which clearly outperform CRTs in this respect. The cause and effect of this improvement, also for alternative subpixel arrangements, is studied using the display signal model. However, the resolution increase cannot be achieved without video processing. This processing is efficiently combined with image scaling, which is always required in the FPD display signal chain, resulting in an algorithm called 'subpixel image scaling'. A comparison of the effects of subpixel scaling on several subpixel arrangements shows that the largest increase in perceived resolution is found for two-dimensional subpixel arrangements.
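    As a rough illustration of the idea behind subpixel image scaling (a minimal 1-D sketch of my own, not the thesis's algorithm): on an RGB-stripe panel, each colour channel is resampled at its subpixel's actual horizontal position rather than at the common pixel centre, which is what raises the perceived luminance resolution.

```python
# Sketch: per-channel resampling at subpixel positions on an RGB stripe.
import numpy as np

def resample(signal, positions):
    """Linearly interpolate `signal` at fractional sample `positions`."""
    return np.interp(positions, np.arange(signal.size), signal)

def subpixel_scale_row(row_rgb, out_width):
    """Scale one image row (shape [w, 3]) to out_width pixels."""
    in_w = row_rgb.shape[0]
    scale = in_w / out_width
    centres = (np.arange(out_width) + 0.5) * scale - 0.5
    offsets = {0: -1.0 / 3.0, 1: 0.0, 2: +1.0 / 3.0}  # R, G, B subpixels
    out = np.empty((out_width, 3))
    for c, off in offsets.items():
        # Sample each channel where its subpixel actually sits on screen.
        out[:, c] = resample(row_rgb[:, c], centres + off * scale)
    return out

row = np.tile(np.linspace(0.0, 1.0, 12)[:, None], (1, 3))  # grey ramp row
print(np.round(subpixel_scale_row(row, 8), 2))
```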
    FPDs outperform CRTs with respect to static resolution, but not with respect to 'dynamic resolution', i.e. the perceived resolution of moving images. Life-like reproduction of moving images is an important requirement for a TV display, but the temporal properties of FPDs cause artifacts in moving images ('motion artifacts') that are not found in CRTs. A model of the temporal aspects of the display signal chain is used to analyze dynamic resolution and motion artifacts on several display types, in particular LCDs and PDPs. Furthermore, video signal processing algorithms are developed that can reduce motion artifacts and increase dynamic resolution. The occurrence of motion artifacts is explained by the fact that the human visual system tracks moving objects. This converts temporal effects on the display into perceived spatial effects, which can appear in very different ways. The analysis shows how addressing mismatches in the chain cause motion-dependent misalignment of image data, resulting, for example, in the 'dynamic false contour' artifact in PDPs. Also, non-ideal temporal reconstruction results in 'motion blur', i.e. a loss of sharpness of moving images, which is typical for LCDs. The relation between motion blur, dynamic resolution, and the temporal properties of LCDs is analyzed using the display signal model in the temporal (frequency) domain. The concepts of temporal aperture, motion aperture, and temporal display bandwidth are introduced, which enable characterization of motion blur in a simple and direct way. This is applied to compare several motion blur reduction methods based on modified display design and driving.

    This thesis further describes the development of several video processing algorithms that can reduce motion artifacts. The motion of objects in the image plays an essential role in these algorithms, i.e. they require motion estimation and compensation techniques. In LCDs, video processing for motion artifact reduction involves a compensation for the temporal reconstruction characteristics of the display, leading to the 'motion compensated inverse filtering' algorithm. The display chain model is used to analyze this algorithm, and several methods to increase its performance are presented. In PDPs, motion artifact reduction can be achieved with 'motion compensated subfield generation', for which an advanced algorithm is presented.
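    The motion compensated inverse filtering idea can be sketched in one dimension, under assumptions of my own (a box-shaped hold blur along the motion direction, a Wiener-style regularized inverse; the thesis's algorithm additionally uses estimated motion vectors to orient and size the filter):

```python
# Sketch: pre-compensate a row for LCD sample-and-hold blur (1-D MCIF).
import numpy as np

def mcif_row(row, motion_px, eps=0.05):
    """Pre-filter one row for hold-type blur of length motion_px pixels."""
    n = row.size
    box = np.zeros(n)
    box[:motion_px] = 1.0 / motion_px          # hold blur kernel
    H = np.fft.rfft(box)
    # Regularized (Wiener-style) inverse; eps limits noise amplification.
    G = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(np.fft.rfft(row) * G, n=n)

row = np.zeros(64); row[24:40] = 1.0          # a moving white bar
pre = mcif_row(row, motion_px=4)
# Simulate the display's hold blur acting on the pre-compensated row:
shown = np.convolve(pre, np.full(4, 0.25), mode="same")
print(np.round(shown[20:44], 2))  # edges stay sharper than without MCIF
```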

    Highly efficient low-level feature extraction for video representation and retrieval.

    PhD thesis.
    Witnessing the omnipresence of digital video media, the research community has raised the question of its meaningful use and management. Stored in immense multimedia databases, digital videos need to be retrieved and structured in an intelligent way, relying on the content and the rich semantics involved. Current Content Based Video Indexing and Retrieval systems face the problem of the semantic gap between the simplicity of the available visual features and the richness of user semantics. This work focuses on the issues of efficiency and scalability in video indexing and retrieval, to facilitate a video representation model capable of semantic annotation. A highly efficient algorithm for temporal analysis and key-frame extraction is developed. It is based on prediction information extracted directly from compressed-domain features and on robust, scalable analysis in the temporal domain. Furthermore, a hierarchical quantisation of the colour features in the descriptor space is presented. Derived from the extracted set of low-level features, a video representation model that enables semantic annotation and contextual genre classification is designed. Results demonstrate the efficiency and robustness of the temporal analysis algorithm, which runs in real time while maintaining the high precision and recall of the detection task. Adaptive key-frame extraction and summarisation achieve a good overview of the visual content, while the colour quantisation algorithm efficiently creates a hierarchical set of descriptors. Finally, the video representation model, supported by the genre classification algorithm, achieves excellent results in an automatic annotation system by linking video clips with a limited lexicon of related keywords.
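    To make the hierarchical colour quantisation concrete, here is a median-cut-style illustration of my own (not the thesis's algorithm): colours are recursively split along their widest axis, and each tree level yields a progressively finer descriptor palette.

```python
# Sketch: hierarchical colour quantisation by recursive median-cut splits.
import numpy as np

def hierarchical_quantise(colours, depth):
    """Return one palette per tree level, from coarse to fine."""
    cells = [colours]
    levels = []
    for _ in range(depth):
        next_cells = []
        for cell in cells:
            if cell.shape[0] < 2:
                next_cells.append(cell)       # too small to split further
                continue
            axis = np.argmax(cell.max(axis=0) - cell.min(axis=0))
            order = np.argsort(cell[:, axis])
            mid = cell.shape[0] // 2
            next_cells += [cell[order[:mid]], cell[order[mid:]]]
        cells = next_cells
        levels.append(np.array([c.mean(axis=0) for c in cells]))
    return levels

rng = np.random.default_rng(1)
pixels = rng.random((1000, 3))                # toy RGB samples in [0, 1]
for lvl, palette in enumerate(hierarchical_quantise(pixels, 3), 1):
    print(f"level {lvl}: {len(palette)} representative colours")
```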

    Encoding and processing of sensory information in neuronal spike trains

    Recently, a statistical signal-processing technique has allowed the information carried by single spike trains of sensory neurons about time-varying stimuli to be characterized quantitatively in a variety of preparations. In weakly electric fish, its application to first-order sensory neurons encoding electric field amplitude (P-receptor afferents) showed that they convey accurate information on temporal modulations in a behaviorally relevant frequency range (<80 Hz). At the next stage of the electrosensory pathway (the electrosensory lateral line lobe, ELL), the information sampled by first-order neurons is used to extract upstrokes and downstrokes in the amplitude modulation waveform. By using signal-detection techniques, we determined that these temporal features are explicitly represented by short spike bursts of second-order neurons (ELL pyramidal cells). Our results suggest that the biophysical mechanism underlying this computation is of dendritic origin. We also investigated the accuracy with which upstrokes and downstrokes are encoded across two of the three somatotopic body maps of the ELL (centromedial and lateral). Pyramidal cells of the centromedial map, in particular I-cells, encode up- and downstrokes more reliably than those of the lateral map. This result correlates well with the significance of these temporal features for a particular behavior (the jamming avoidance response), as assessed by lesion experiments of the centromedial map.
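    A generic building block behind analyses like this is flagging short spike bursts in a spike train; the sketch below uses simple interspike-interval (ISI) thresholding, which is my own minimal stand-in for the study's signal-detection machinery (the threshold values are assumptions).

```python
# Sketch: group spikes separated by short ISIs into bursts (times in seconds).
def find_bursts(spike_times, max_isi=0.01, min_spikes=2):
    """Return (start, end) times of runs whose ISIs are all <= max_isi."""
    bursts, current = [], [spike_times[0]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= max_isi:
            current.append(t)
        else:
            if len(current) >= min_spikes:
                bursts.append((current[0], current[-1]))
            current = [t]
    if len(current) >= min_spikes:
        bursts.append((current[0], current[-1]))
    return bursts

spikes = [0.010, 0.012, 0.013, 0.100, 0.300, 0.304, 0.306, 0.500]
print(find_bursts(spikes))   # -> [(0.01, 0.013), (0.3, 0.306)]
```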

    Employing Feedback to Filter Caustic Waves in Underwater Scenes in Motion

    A real-time approach for removing sunlight flicker from underwater scenes captured on video is presented. To this end, a de-flickering filter is designed, taking an underwater scene in motion as its starting point. Essentially, the filtering approach is based on morphological characteristics of the caustic waves: it constructs an a priori de-flickered image which is afterwards enhanced. The algorithm employs feedback of optical flow fields and brightness in order to predict a one-step-ahead value of the brightness.
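    A hedged sketch of the feedback idea: warp the previous de-flickered frame along the optical flow to predict the current brightness, then blend the prediction with the incoming frame to suppress flicker. Farneback flow, the blend weight, and the input file name are my assumptions, not the authors' design.

```python
# Sketch: optical-flow feedback de-flickering of an underwater video.
import cv2
import numpy as np

ALPHA = 0.8   # weight of the motion-compensated prediction (assumed value)

def deflicker_step(prev_gray_u8, curr_gray_u8, prev_filtered):
    """Predict brightness by warping the previous filtered frame along
    the optical flow, then blend the prediction with the new frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray_u8, curr_gray_u8, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray_u8.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    predicted = cv2.remap(prev_filtered, map_x, map_y, cv2.INTER_LINEAR)
    return cv2.addWeighted(predicted, ALPHA,
                           curr_gray_u8.astype(np.float32), 1.0 - ALPHA, 0.0)

cap = cv2.VideoCapture("underwater.mp4")       # hypothetical input file
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
filtered = prev.astype(np.float32)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    filtered = deflicker_step(prev, curr, filtered)
    prev = curr
```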

    An approach to summarize video data in compressed domain

    Thesis (Master)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2007. Includes bibliographical references (leaves 54-56). Text in English; abstract in Turkish and English. x, 59 leaves.
    The requirements to represent digital video and images efficiently and feasibly have driven great efforts in research, development, and standardization over the past 20 years. These efforts have targeted a vast range of applications, such as video on demand, digital TV/HDTV broadcasting, multimedia video databases, and surveillance. Moreover, these applications demand ever more efficient collections of algorithms to enable lower bit rates with acceptable quality, depending on application requirements. Today, most video content, whether stored or transmitted, is in compressed form. The increase in the amount of video data being shared has attracted researchers' interest in the interrelated problems of video summarization, indexing, and abstraction. In this study, scene cut detection in the emerging ISO/ITU H.264/AVC coded bit stream is realized by extracting spatio-temporal prediction information directly in the compressed domain. The syntax and semantics, parsing, and decoding processes of the ISO/ITU H.264/AVC bit stream are analyzed to detect scene information. Various video test data are constructed using the Joint Video Team's test model (JM) encoder, and implementations are made on the JM decoder. The output of the study is scene information that addresses video summarization, skimming, and indexing applications using new-generation ISO/ITU H.264/AVC video.
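    The compressed-domain principle the thesis builds on can be illustrated simply: a frame whose macroblocks are mostly intra-coded cannot be predicted well from its neighbours, which signals a likely scene cut. The sketch below assumes per-frame macroblock statistics have already been parsed from the H.264/AVC stream; the input format and threshold are my assumptions, and the bitstream parser itself is out of scope.

```python
# Sketch: scene cut detection from the per-frame intra-macroblock ratio.
def detect_scene_cuts(frames, intra_ratio_thresh=0.6):
    """frames: list of dicts like {'intra': n_intra_mbs, 'total': n_mbs}.
    Returns indices of frames flagged as scene cuts."""
    cuts = []
    for i, f in enumerate(frames):
        if f["total"] and f["intra"] / f["total"] >= intra_ratio_thresh:
            cuts.append(i)
    return cuts

# Toy statistics: frame 3 is almost fully intra-coded (a likely cut).
stats = [{"intra": 40, "total": 396}, {"intra": 25, "total": 396},
         {"intra": 60, "total": 396}, {"intra": 380, "total": 396},
         {"intra": 30, "total": 396}]
print(detect_scene_cuts(stats))   # -> [3]
```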