16 research outputs found

    Beyond solid-state lighting: Miniaturization, hybrid integration, and applications of GaN nano- and micro-LEDs

    Gallium Nitride (GaN) light-emitting-diode (LED) technology has revolutionized modern lighting. In the last decade, a huge global market for efficient, long-lasting and ubiquitous white light sources has developed around the inception of the Nobel-prize-winning blue GaN LEDs. Today GaN optoelectronics is developing beyond lighting, leading to new and innovative devices, e.g. for micro-displays, the core technology for future augmented reality and visualization, as well as point light sources for optical excitation in communications, imaging, and sensing. This explosion of applications is driven by two main developments: the ability to produce very small GaN LEDs (microLEDs and nanoLEDs) with high efficiency and across large areas, combined with the possibility of merging optoelectronic-grade GaN microLEDs with silicon microelectronics in a fully hybrid approach. GaN LED technology is even spreading into the realm of display technology, which has been occupied by organic LED (OLED) and liquid crystal display (LCD) technologies for decades. In this review, the technological transition towards GaN micro- and nanodevices beyond lighting is discussed, including an up-to-date overview of the state of the art.

    Exploring information retrieval using image sparse representations: from circuit designs and acquisition processes to specific reconstruction algorithms

    New advances in the field of image sensors (especially in CMOS technology) call into question the conventional methods used to acquire images. Compressive Sensing (CS) plays a major role here, especially in unclogging the analog-to-digital converters that generally form the bottleneck of this type of sensor. In addition, CS eliminates the traditional compression stages performed by embedded digital signal processors dedicated to this purpose. The benefit is twofold: it both substantially reduces the amount of data to be converted and removes the digital processing performed off the sensor chip. For the moment, regarding the use of CS in image sensors, the main route of exploration as well as the intended applications aim at reducing the power consumption of these components (ADC and DSP represent 99% of the total power consumption). More broadly, the paradigm of CS makes it possible to question, or at least extend, the Nyquist-Shannon sampling theory. This thesis presents developments in the field of image sensors demonstrating that it is possible to consider alternative applications linked to CS. Indeed, advances are presented in the fields of hyperspectral imaging, super-resolution, high dynamic range, high speed and non-uniform sampling. In particular, three research axes have been pursued, aiming to design suitable architectures and acquisition processes, with their associated reconstruction techniques, that take advantage of sparse image representations. How can the on-chip implementation of Compressed Sensing relax sensor constraints, improving the acquisition characteristics (speed, dynamic range, power consumption)? How can CS be combined with simple analysis to provide useful image features for high-level applications (adding semantic information) and improve the reconstructed image quality at a given compression ratio? Finally, how can CS overcome physical limitations (i.e. spectral sensitivity and pixel pitch) of imaging systems without a major impact on either the sensing strategy or the optical elements involved?
    A CMOS image sensor was developed and manufactured during this Ph.D. to validate concepts such as high-dynamic-range CS. A new design approach was employed, resulting in innovative solutions for pixel addressing and conversion to perform specific acquisitions in a compressed mode. In addition, the principle of adaptive CS combined with non-uniform sampling was developed, and possible implementations of this type of acquisition are proposed. Finally, preliminary work is presented on the use of Liquid Crystal Devices to allow hyperspectral imaging combined with spatial super-resolution. The conclusion of this study can be summarized as follows: CS should now be considered a toolbox for more easily defining trade-offs between the different characteristics of a sensor: integration time, converter speed, dynamic range, resolution and digital processing resources. However, while CS relaxes some hardware constraints at the sensor level, the collected data may be difficult to interpret and process at the decoder side, requiring massive computational resources compared to so-called conventional techniques. The application field is wide, implying that for a targeted application, an accurate characterization of the constraints on both the sensor (encoder) and the decoder needs to be defined.
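    The sub-Nyquist acquisition idea at the heart of CS can be illustrated with a minimal sketch. This is purely illustrative (the thesis targets on-chip hardware implementations): it uses a generic random Gaussian sensing matrix and a basic Orthogonal Matching Pursuit solver, with a signal that is sparse in the canonical basis rather than in a wavelet or DCT basis as real images would be.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Sparse signal: n samples, only k nonzero.
    n, k, m = 256, 8, 96            # signal length, sparsity, number of measurements
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)

    # Random Gaussian sensing matrix: m << n measurements (sub-Nyquist).
    A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
    y = A @ x                        # compressed measurements

    def omp(A, y, k):
        """Orthogonal Matching Pursuit: greedily pick the column most
        correlated with the residual, then re-fit on the selected support."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    x_hat = omp(A, y, k)
    err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
    ```

    With noiseless measurements and m well above the sparsity level, recovery is essentially exact, despite using under half the Nyquist sample count.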

    Pixels for focal-plane scale space generation and for high dynamic range imaging

    Focal-plane processing allows for parallel processing throughout the entire pixel matrix, which can help increase the speed of vision systems. However, fabricating circuits inside the pixel matrix increases the pixel pitch and reduces the fill factor, which degrades image quality. To take advantage of focal-plane processing capabilities while minimizing the loss of image quality, we first consider the inclusion of only two extra transistors per pixel, allowing for scale-space generation at the focal plane. We assess the conditions in which the proposed circuitry is advantageous, performing a time and energy analysis of this approach in comparison to a digital solution. Considering a SAR ADC per column and a clock frequency of 5.6 MHz, the proposed analysis shows that the focal-plane approach is 26 times faster than a digital solution using 10 processing elements, and 49 times more energy-efficient. Another way of taking advantage of focal-plane signal processing is to use it to increase image quality itself, as in the case of high-dynamic-range imaging pixels. This work also presents the design and study of a pixel that captures high-dynamic-range images by sensing the average luminance of the matrix, and then adjusting the integration time of each pixel according to the global average and to the local value of the pixel. This pixel was implemented considering small structural variations, such as different photodiode sizes for global average luminance measurement. Schematic and post-layout simulations were performed with the implemented pixel using an input image with a dynamic range of 76 dB, producing results with details in both dark and bright image areas.
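    The scale-space idea — the same scene rendered at increasing blur levels by charge sharing between neighbouring pixels — can be sketched in software. This is a toy diffusion model of the effect, not the actual two-transistor circuit; the 8x8 matrix size and stencil weights are illustrative assumptions.

    ```python
    import numpy as np

    # Toy pixel matrix with random intensities.
    rng = np.random.default_rng(1)
    img = rng.uniform(0.0, 1.0, (8, 8))

    def diffuse(frame, steps):
        """Each step averages every pixel with its 4 neighbours (replicated
        borders), mimicking repeated charge sharing across the array."""
        f = frame.copy()
        for _ in range(steps):
            padded = np.pad(f, 1, mode="edge")
            f = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:] + 4 * f) / 8.0
        return f

    # Scale space: the same scene at increasing blur levels.
    scales = [diffuse(img, s) for s in (0, 2, 8)]
    means = [s.mean() for s in scales]          # DC level is preserved
    variances = [s.var() for s in scales]       # detail is progressively lost
    ```

    Diffusion conserves total charge (the mean stays constant) while attenuating spatial detail, which is exactly the behaviour a focal-plane scale-space generator needs.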

    Holistic Optimization of Embedded Computer Vision Systems

    Despite strong interest in embedded computer vision, the computational demands of Convolutional Neural Network (CNN) inference far exceed the resources available in embedded devices. Fortunately, the typical embedded device has a number of desirable properties that can be leveraged to significantly reduce the time and energy required for CNN inference. This thesis presents three independent and synergistic methods for optimizing embedded computer vision: 1) reducing the time and energy needed to capture and preprocess input images by optimizing the image capture pipeline for the needs of CNNs rather than humans; 2) exploiting temporal redundancy within incoming video streams to perform computationally cheap motion estimation and compensation in lieu of full CNN inference for the majority of frames; and 3) leveraging the sparsity of CNN activations within the frequency domain to significantly reduce the number of operations needed for inference. Collectively these techniques significantly reduce the time and energy needed for computer vision at the edge, enabling a wide variety of exciting new applications.
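    The temporal-redundancy idea in method 2 can be sketched as follows. The `expensive_inference` placeholder, the mean-absolute-difference trigger, and the threshold value are all illustrative assumptions; the thesis's actual approach uses motion estimation and compensation rather than simple result reuse.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def expensive_inference(frame):
        """Stand-in for full CNN inference (hypothetical placeholder)."""
        return float(frame.mean())

    def process_stream(frames, threshold=0.05):
        """Run full inference only when the frame changes enough; otherwise
        reuse the previous result, exploiting temporal redundancy."""
        results, last_frame, last_out, full_runs = [], None, None, 0
        for frame in frames:
            if last_frame is None or np.abs(frame - last_frame).mean() > threshold:
                last_out = expensive_inference(frame)
                full_runs += 1
                last_frame = frame
            results.append(last_out)
        return results, full_runs

    # Synthetic video: a near-static scene with one scene change halfway through.
    static = rng.uniform(0, 1, (16, 16))
    frames = [static + rng.normal(0, 0.005, static.shape) for _ in range(5)]
    frames += [1.0 - static + rng.normal(0, 0.005, static.shape) for _ in range(5)]
    results, full_runs = process_stream(frames)
    ```

    On this ten-frame stream only two full inference passes run (the first frame and the scene change), while every frame still receives a result.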

    Development of a Nano-Illumination Microscope

    [eng] This doctoral thesis proposes and explores a new approach to lensless microscopy, focused on making high-resolution imaging ubiquitous and low-cost. A short introduction to microscopy frames the state of current techniques: Abbe's law limits the resolving power of visible-light microscopes with lenses; techniques using UV, X-rays, or electrons are incompatible with live samples; and all of them, including super-resolution microscopy methods, are complex devices not suitable for use in the field as mobile devices. Some lensless microscopy methods try to solve these issues. The method presented here is named Nano-Illumination Microscopy (NIM) because it relies on nanometric light sources arranged in an ordered array to illuminate a sample placed in close proximity to them, with a photodetector on the other side measuring the amount of light arriving from each LED. In such a setup, the resolving power is provided by the nano-LEDs and their distribution rather than by the sensing devices, as is the case in other methods. Since the resolving power depends on the pitch of the LED array, this method also opens a path to super-resolution images, requiring only LED arrays with pitches smaller than Abbe's limit for the wavelength. After the introduction setting the context, the thesis describes the main components used to build the microscope: a SPAD camera, designed within the context of this work, and the electronics to control the nano-LED array. The third chapter provides an overview of the microscopy method and its fundamentals, exploring its requirements and capabilities. Image formation is first introduced with simulations, and this information is then used to build the very first prototype, a microscope capable of forming 8x8-pixel images (matching the format of the LED array used), with LEDs 5 μm in size and 10 μm in pitch.
    The first results from this technique are presented and compared with the simulations, showing the agreement between the two, validating the method, and offering insight for building the next prototypes, which use smaller LEDs in an attempt to study the technological limits. The thesis continues with the work done in search of the limits of the technique, building and testing new, improved versions of the microscope and confronting the limitations that arise. Some of these came from the structure of the LED arrays themselves: while nano-LEDs well below the sizes used here have been reported, those have been isolated structures or not individually addressable. Selecting exactly which LED emits is one of the main problems to solve, since with increasingly large arrays the required connections multiply until routing becomes impossible. The thesis also studies this problem, as the LED arrays were changed in search of the proper solution. This implied moving from a direct-addressing strategy, in which each LED is selected individually, towards a matrix-addressing format, in which LEDs are selected by biasing the appropriate row and column. The microscopy technique is validated and the more advanced prototypes presented. Images with a maximum resolving power of 800 nm are shown, and the results are discussed, since physical limitations in fabricating the chips keep the maximum resolving power below what was theoretically expected. The thesis closes with a short overview of the future of the Nano-Illumination Microscopy technique.
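    The NIM image-formation principle — one detector reading per LED, with resolution set by the LED pitch rather than by the sensor — can be sketched with a toy simulation. The shadowing model (each LED probes only the sample patch directly above it) and the noise level are simplifying assumptions, not the thesis's optical model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Object transmission map on an 8x8 grid (1 = transparent, 0 = opaque),
    # matching the 8x8 LED array of the first prototype.
    obj = np.ones((8, 8))
    obj[2:6, 3:5] = 0.2          # a partially absorbing feature

    def detector_reading(i, j, obj, led_power=1.0, noise=0.01):
        """Light from LED (i, j) passes through the sample patch directly above
        it; a single photodetector on the far side integrates what gets through."""
        return led_power * obj[i, j] + rng.normal(0, noise)

    # Scan the array one LED at a time; each reading becomes one image pixel.
    image = np.empty((8, 8))
    for i in range(8):
        for j in range(8):
            image[i, j] = detector_reading(i, j, obj)
    ```

    The reconstructed map mirrors the object: pixels over the absorbing feature read dark, and shrinking the LED pitch directly increases the resolving power.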

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements, such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, interactively displayed on a mobile capture-and-rendering platform.
    This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The main contribution of this dissertation is a three-tiered framework: 1) a novel programmable camera architecture that provides programmability down to low-level features, together with a visual programming interface; 2) new algorithms that analyze and decompose the scene photometrically; and 3) a previs interface that leverages the previous two tiers to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene containing multiple illuminants with respect to color, intensity and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and color of lights within it. We found that, since our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can show higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes containing lights whose colors are too similar can lead to degenerate cases in terms of relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged as a solution for multi-illuminant white balancing and light color estimation within a scene with multiple illuminants, without limits on the color range or number of lights. We compared our method to other white balance methods and show that ours is superior when at least one of the light colors is known a priori.
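    The property Symmetric lighting builds on — that under a Lambertian model the image is linear in its illuminants — can be sketched as follows. This is a generic two-light linear-recombination toy under the stated Lambertian assumption, not the dissertation's actual algorithm for estimating the per-light decomposition from camera captures; all array shapes and colour values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Under a Lambertian model a scene lit by two lights decomposes linearly:
    # I = B_A * c_A + B_B * c_B, where B_A, B_B are the per-light shading/albedo
    # patterns and c_A, c_B the light colours.
    h, w = 4, 4
    B_A = rng.uniform(0, 1, (h, w, 1))       # contribution pattern of light A
    B_B = rng.uniform(0, 1, (h, w, 1))       # contribution pattern of light B
    c_A = np.array([1.0, 0.9, 0.7])          # warm light colour
    c_B = np.array([0.6, 0.7, 1.0])          # cool light colour

    captured = B_A * c_A + B_B * c_B         # scene as originally photographed

    # Relighting: once the decomposition is known, swap in new light colours
    # (or intensities) without re-shooting the scene.
    new_A = np.array([1.0, 0.2, 0.2])
    new_B = np.array([0.2, 1.0, 0.2])
    relit = B_A * new_A + B_B * new_B
    ```

    The degenerate case mentioned above is visible here: if `c_A` and `c_B` are nearly equal, the two contribution patterns cannot be separated from `captured` alone, so relighting breaks down.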