
    ACE16K: A 128×128 focal plane analog processor with digital I/O

    This paper presents a new-generation 128×128 focal-plane analog programmable array processor (FPAPAP), from a system-level perspective, manufactured in a 0.35 μm standard digital 1P-5M CMOS technology. The chip has been designed to meet the high-speed and moderate-accuracy (8 b) requirements of most real-time early-vision processing applications. It is easily embedded in conventional digital hosting systems: external data interchange and control are completely digital. The chip contains close to four million transistors, 90% of them working in analog mode, and exhibits a relatively low power consumption of less than 4 W, i.e. less than 1 μW per transistor. Peak computing-per-power figures are on the order of 1 TeraOPS/W, while sustained VGA processing throughputs of 100 frames/s are possible with about 10-20 basic image processing tasks on each frame.
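As a sanity check, the per-transistor power claim follows directly from the quoted transistor count and total power; this is a back-of-envelope calculation using round numbers assumed from the abstract:

```python
transistors = 4_000_000          # "close to four million transistors"
power_w = 4.0                    # upper bound: "power consumption < 4 W"
per_transistor_w = power_w / transistors
# 4 W spread over 4e6 transistors is 1e-6 W each, consistent with
# the "less than 1 uW per transistor" figure in the abstract.
```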

    Reconstruction of shale image based on Wasserstein Generative Adversarial Networks with gradient penalty

     Generative Adversarial Networks (GANs), among the most popular artificial intelligence models in the current image generation field, have excellent image generation capabilities. Building on Wasserstein GANs with gradient penalty, this paper proposes a novel digital core reconstruction method. First, a convolutional neural network is used as a generative network to learn the distribution of real shale samples, and then a convolutional neural network is constructed as a discriminative network to distinguish reconstructed shale samples from real ones. Through this adversarial training method, realistic digital core samples of shale can be reconstructed. The paper uses the two-point covariance function, Fréchet Inception Distance, and Kernel Inception Distance to evaluate the quality of the digital core samples of shale reconstructed by GANs. The results show that the covariance function can test the similarity between generated and real shale samples, and that GANs can efficiently reconstruct high-quality digital core samples of shale. Compared with multiple-point statistics, the new method does not require prior inference of the probability distribution of the training data, and directly uses a noise vector to generate digital core samples of shale without imposing "hard data" constraints in advance. It can easily produce an unlimited number of new samples. Furthermore, the training time is also shorter, only 4 hours in this paper. Therefore, the new method offers advantages over current methods.
    Cited as: Zha, W., Li, X., Xing, Y., He, L., Li, D. Reconstruction of shale image based on Wasserstein Generative Adversarial Networks with gradient penalty. Advances in Geo-Energy Research, 2020, 4(1): 107-114, doi: 10.26804/ager.2020.01.1
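The two-point covariance function used for evaluation can be sketched as follows. This is a minimal illustration for binary (two-phase) images along one axis only, not the authors' implementation: S2(r) is the probability that two pixels separated by lag r both lie in the phase of interest.

```python
import numpy as np

def two_point_covariance(img, max_lag):
    """Two-point probability S2(r) along the x-axis of a binary image:
    the fraction of pixel pairs at horizontal lag r that are both phase 1."""
    img = np.asarray(img, dtype=float)
    s2 = []
    for r in range(max_lag + 1):
        if r == 0:
            s2.append(np.mean(img * img))        # S2(0) is the phase fraction
        else:
            s2.append(np.mean(img[:, :-r] * img[:, r:]))
    return np.array(s2)
```

For a statistically homogeneous medium, S2(0) equals the phase (porosity) fraction and S2(r) decays toward the square of that fraction at large lags, which is what makes it a useful similarity test between generated and real samples.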

    Real-time Image Generation for Compressive Light Field Displays

    With the invention of integral imaging and parallax barriers at the beginning of the 20th century, glasses-free 3D displays became feasible. Only today, more than a century later, are glasses-free 3D displays finally emerging in the consumer market. The technologies employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays, which explore the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.
    United States. Defense Advanced Research Projects Agency, Soldier Centric Imaging via Computational Cameras; National Science Foundation (U.S.) (Grant IIS-1116452); United States. Defense Advanced Research Projects Agency, Maximally scalable Optical Sensor Array Imaging with Computation Program; Alfred P. Sloan Foundation (Research Fellowship); United States. Defense Advanced Research Projects Agency (Young Faculty Award)
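The iterative tomographic synthesis for a stacked-layer display can be illustrated in a heavily simplified 1D form: if each ray is indexed by the pixel it crosses on each of two attenuation layers, the target light field becomes a nonnegative matrix to be factorized as T[i, j] ≈ a[i]·b[j], solvable with multiplicative updates. Real compressive displays operate on full 4D light fields with more layers and temporal multiplexing, so this is only a sketch of the underlying optimization:

```python
import numpy as np

def two_layer_factorization(target, iters=200, seed=0):
    """Nonnegative rank-1 factorization target[i, j] ~ a[i] * b[j]
    via multiplicative (Lee-Seung style) updates."""
    rng = np.random.default_rng(seed)
    n, m = target.shape
    a = rng.uniform(0.5, 1.0, n)      # layer-1 transmittances (initial guess)
    b = rng.uniform(0.5, 1.0, m)      # layer-2 transmittances (initial guess)
    eps = 1e-12
    for _ in range(iters):
        a *= (target @ b) / (a * (b @ b) + eps)    # update layer 1, layer 2 fixed
        b *= (target.T @ a) / (b * (a @ a) + eps)  # update layer 2, layer 1 fixed
    return a, b
```

These element-wise multiply-and-normalize updates are exactly the kind of operation that maps well onto the standard GPU graphics pipeline, which is the point the paper makes for the full tomographic solver.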

    Overview of ghost correction for HDR video stream generation

    Most digital cameras use low dynamic range (LDR) image sensors, which can capture only a limited luminance dynamic range of the scene [1], about two orders of magnitude (roughly 256 to 1024 levels). However, the dynamic range of real-world scenes varies over several orders of magnitude (10,000 levels). To overcome this limitation, several methods exist for creating high dynamic range (HDR) images: expensive methods use a dedicated HDR image sensor, while low-cost solutions use a conventional LDR image sensor. A large number of low-cost solutions apply temporal exposure bracketing. The HDR image may be constructed with a standard HDR method (an additional step called tone mapping is required to display the HDR image on a conventional system), or by directly fusing LDR images taken at different exposure times, providing HDR-like [2] images which can be handled directly by LDR image monitors. The temporal exposure bracketing solution works for static scenes, but it cannot be applied directly to dynamic scenes or HDR video, since camera or object motion across the bracketed exposures creates artifacts, called ghosts [3], in the HDR image. Several techniques exist for detecting and removing ghost artifacts (variance-based, entropy-based, bitmap-based, and graph-cuts-based ghost detection, among others) [4]; nevertheless, most of these methods are computationally expensive and cannot be considered for real-time implementations. The originality and final goal of our work is to upgrade our current smart camera to allow HDR video stream generation at full sensor resolution (1280×1024) at 60 fps [5]. The HDR stream is produced using exposure bracketing techniques (with a conventional LDR image sensor) combined with a tone mapping algorithm. In this paper, we propose an overview of the different methods available in the state of the art for correcting ghost artifacts. The selection of algorithms is made with respect to our final goal, the real-time hardware implementation of the ghost detection and removal phases.
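A minimal sketch of the temporal exposure bracketing fusion described above, assuming a linear sensor response and pixel values normalized to [0, 1]; ghost detection and removal, the subject of the overview, are omitted, and the hat-shaped weighting is one common choice rather than the authors' exact method:

```python
import numpy as np

def fuse_exposures(images, times):
    """Merge bracketed LDR exposures into an HDR radiance estimate.
    Assumes a linear sensor: pixel value ~ radiance * exposure_time."""
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * im - 1.0)   # hat weight: distrust under/over-exposed pixels
        num += w * im / t                   # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Any scene or camera motion between the bracketed frames makes the per-exposure radiance estimates disagree at the same pixel, which is precisely the ghost artifact the surveyed detection methods try to locate and correct.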

    Figure, Figurality and Visual Representation of Human and Humanity in the First Decade of 21st Century Photojournalism

    This thesis reflects on the notion of photojournalism and on media and communication processes in the era of the Internet and of redeveloped global cultural exchange opportunities. The authenticity of news coverage and the digital manipulation of image content, used under various conditions depending on intention, have become a pressing social problem that demands a broader spectrum of understanding. Authenticity, disinformation, and the corresponding denial of the real are important aspects in the complex formation and shaping of cultural identities. Photographic stories of trauma and foreign poverty are approached with critical concern in this writing. The power of the news media and the image, and the loss of their intended effects in the era of social media and the Internet, are examined from critical and objective perspectives.

    Pencil-Beam Surveys for Trans-Neptunian Objects: Novel Methods for Optimization and Characterization

    Digital co-addition of astronomical images is a common technique for increasing signal-to-noise and image depth. A modification of this simple technique has been applied to the detection of minor bodies in the Solar System: first, stationary objects are removed through the subtraction of a high-SN template image; then the sky motion of the Solar System bodies of interest is predicted and compensated for by shifting pixels in software prior to the co-addition step. This "shift-and-stack" approach has been applied with great success in directed surveys for minor Solar System bodies. In these surveys, the shifts have been parameterized in a variety of ways. However, these parameterizations have not been optimized, and in most cases they cannot be effectively applied to data sets with long observation arcs because objects' real trajectories diverge from linear tracks on the sky. This paper presents two novel probabilistic approaches for determining a near-optimum set of shift-vectors to apply to any image set given a desired region of orbital space to search. The first method is designed for short observational arcs, and the second for observational arcs long enough to require non-linear shift-vectors. Using these techniques and other optimizations, we derive optimized grids for previous surveys that have used "shift-and-stack" approaches to illustrate the improvements that can be made with our method, and at the same time derive new limits on the range of orbital parameters these surveys searched. We conclude with a simulation of future applications of this approach with LSST, and show that combining multiple nights of data from such next-generation facilities is within the realm of computational feasibility.
    Comment: Accepted for publication in PASP, March 1, 2010
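The "shift-and-stack" step itself can be sketched as follows, assuming integer-pixel linear shift-vectors and frames from which the stationary-object template has already been subtracted; the paper's contribution is in choosing the grid of shift-vectors, not in this co-addition step:

```python
import numpy as np

def shift_and_stack(frames, shifts):
    """Co-add frames after undoing each frame's predicted sky motion.
    shifts[k] = (dy, dx) is the target's displacement in frame k
    relative to the first frame (integer pixels for simplicity)."""
    stack = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        # shift the frame back so the moving source lands on the same pixel
        stack += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
    return stack / len(frames)
```

A source moving at the assumed rate adds coherently at one pixel while the (already template-subtracted) background noise averages down, which is what allows detection below the single-frame limit.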

    DIRSIG digital imaging and remote sensing image generation model: Infrared airborne validation & input parameter analysis

    The civilian and military need for high-resolution infrared imagery has dramatically increased in recent times. Regardless of the user or the need, infrared imagery can provide unique information that is not available in the visible region of the electromagnetic spectrum. Just as the need for real infrared imagery has increased, so has the need for computer-generated infrared imagery, also known as synthetic imagery. Synthetic imagery is created by mathematically modeling the real world and the imaging chain, encompassing everything from the target to the sensor characteristics. The amount of faith that can be placed in a synthetic image depends on its accuracy in recreating the real world. The Digital Imaging and Remote Sensing Image Generation Model (DIRSIG) at the Rochester Institute of Technology (RIT) attempts to model the real world. It creates synthetic images through the integration of scene geometry, ray-tracer, thermal, radiometry, and sensor submodels. The focus of this project lies in evaluating the ability of DIRSIG to recreate the imaging chain and produce high-resolution synthetic imagery. DIRSIG synthetic imagery of the Kodak Hawkeye plant and the surrounding area was compared to aerial infrared imagery of the same region using root mean square error and rank order correlation. This comparison helped to validate the output from DIRSIG and detect inadequacies in the image chain model. In addition to validating DIRSIG, a procedure for optimizing the input parameters, incorporating a sensitivity analysis, was developed. This reduces the time involved in creating a realistic and accurate synthetic image.
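The two comparison metrics named above, root mean square error and rank order (Spearman) correlation, can be sketched as follows; this is a generic illustration with tie handling omitted, not the validation code used in the project:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images of the same shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

def rank_order_correlation(a, b):
    """Spearman rank correlation: Pearson correlation of the pixel ranks.
    (Double argsort assigns ranks; ties are not averaged here.)"""
    ra = np.argsort(np.argsort(np.ravel(a)))
    rb = np.argsort(np.argsort(np.ravel(b)))
    return np.corrcoef(ra, rb)[0, 1]
```

RMSE is sensitive to absolute radiometric calibration, while rank order correlation tests only whether the synthetic image orders scene elements (hot vs. cold, bright vs. dark) the same way as the real imagery, which is why the two metrics are complementary for validation.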