100 research outputs found

    Training a machine-learning based object detector for use in photography

    Cameras and other photography systems include the capability to detect objects of interest. Object detectors trained on sample data incur a classification loss, e.g., due to insufficient training caused by the finite number of negative samples available during training. Increasing the complexity of the object detector increases its running time, which forces a tradeoff between recall, precision, and speed. A common approach is to minimize the average loss across the entire training database. This disclosure proposes a framework that takes final image quality into account while training an object detector, using a modified loss-calculation function in the object-detection framework used in photography. The framework enables better decisions regarding the tradeoffs involved in loss calculation: the classification loss during training is computed by comparing an image captured with successful detection of the object against one captured without it. An object detector trained to account for the impact of detecting or missing an object on the captured image can improve the quality of captured images.
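
    To make the loss modification concrete, here is a minimal sketch in PyTorch of a loss that weights each training sample by how much detection affects the final photo. The image_quality() metric, the weighting scheme, and the tensor shapes are illustrative assumptions; the disclosure specifies none of them.

```python
# A minimal sketch of a quality-aware detection loss, assuming a
# hypothetical image_quality() score and a standard binary classification
# loss; both are placeholders, not part of the disclosure.
import torch
import torch.nn.functional as F

def image_quality(images: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for a perceptual quality score in [0, 1];
    # a real system might use sharpness, exposure, or a learned metric.
    return images.mean(dim=(1, 2, 3)).clamp(0.0, 1.0)

def quality_aware_loss(logits, labels, img_detected, img_missed):
    # Per-sample classification loss.
    cls_loss = F.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")
    # Weight each sample by the quality difference between the photo
    # captured with successful detection and the one captured without it,
    # so detections that matter for the final image dominate training.
    quality_gap = (image_quality(img_detected) -
                   image_quality(img_missed)).abs()
    return (cls_loss * (1.0 + quality_gap)).mean()

# Usage: a batch of 8 samples with 64x64 RGB captures of both outcomes.
logits = torch.randn(8)
labels = torch.randint(0, 2, (8,)).float()
print(quality_aware_loss(logits, labels,
                         torch.rand(8, 3, 64, 64),
                         torch.rand(8, 3, 64, 64)))
```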

    Display of surfaces from volume data

    The application of volume rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is explored. Fitting geometric primitives to the sampled data is not required. Images are formed by directly shading each sample and projecting it onto the picture plane. Surface shading calculations are performed at every voxel, with local gradient vectors serving as surface normals. In a separate step, surface classification operators are applied to obtain a partial opacity for every voxel. Operators that detect isovalue contour surfaces and region boundary surfaces are presented. Independence of the shading and classification calculations ensures an undistorted visualization of 3-D shape, and non-binary classification operators ensure that small or poorly defined features are not lost. The resulting colors and opacities are composited from back to front along viewing rays to form an image. The technique is simple and fast, yet displays surfaces exhibiting smooth silhouettes and few other aliasing artifacts. The use of selective blurring and super-sampling to further improve image quality is also described. Examples from two applications are given: molecular graphics and medical imaging.
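
    A minimal sketch of the compositing step described above, assuming per-voxel colors and opacities have already been produced by the shading and classification operators; only the back-to-front "over" blending along one viewing ray is shown.

```python
# Minimal sketch of back-to-front compositing along a single viewing ray,
# assuming colors[i] and alphas[i] are the shaded color and partial
# opacity already computed for each sample i, ordered front to back.
import numpy as np

def composite_back_to_front(colors: np.ndarray,
                            alphas: np.ndarray) -> np.ndarray:
    """colors: (n, 3) RGB per sample; alphas: (n,) opacity in [0, 1]."""
    out = np.zeros(3)
    # Walk from the farthest sample to the nearest: each sample's color
    # is blended over whatever has accumulated behind it (the "over"
    # operator), so opaque near samples occlude far ones.
    for c, a in zip(colors[::-1], alphas[::-1]):
        out = a * c + (1.0 - a) * out
    return out

# Usage: three samples along a ray, nearest first.
colors = np.array([[1.0, 0.0, 0.0],   # red, nearest
                   [0.0, 1.0, 0.0],   # green
                   [0.0, 0.0, 1.0]])  # blue, farthest
alphas = np.array([0.2, 0.5, 0.8])
print(composite_back_to_front(colors, alphas))
```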

    Decoupling algorithms from schedules for easy optimization of image processing pipelines

    Using existing programming tools, writing high-performance image processing code requires sacrificing readability, portability, and modularity. We argue that this is a consequence of conflating what computations define the algorithm with decisions about storage and the order of computation. We refer to these latter two concerns as the schedule, including choices of tiling, fusion, recomputation vs. storage, vectorization, and parallelism. We propose a representation for feed-forward imaging pipelines that separates the algorithm from its schedule, enabling high performance without sacrificing code clarity. This decoupling simplifies the algorithm specification: images and intermediate buffers become functions over an infinite integer domain, with no explicit storage or boundary conditions. Imaging pipelines are compositions of functions. Programmers separately specify scheduling strategies for the various functions composing the algorithm, which allows them to efficiently explore different optimizations without changing the algorithmic code. We demonstrate the power of this representation by expressing a range of recent image processing applications in an embedded domain-specific language called Halide, and compiling them for ARM, x86, and GPUs. Our compiler targets SIMD units, multiple cores, and complex memory hierarchies. We demonstrate that it can handle algorithms such as a camera raw pipeline, the bilateral grid, fast local Laplacian filtering, and image segmentation. The algorithms expressed in our language are both shorter and faster than state-of-the-art implementations.
    Funding: National Science Foundation (U.S.) (Grants 0964004, 0964218, and 0832997); United States Dept. of Energy (Award DE-SC0005288); Cognex Corporation; Adobe Systems.
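
    The following is a conceptual sketch of the algorithm/schedule decoupling, written in plain NumPy rather than Halide's actual API: the same 3x3 box blur is evaluated under two schedules, one storing the full horizontal pass and one fusing it per output row. The function names and the blur itself are illustrative assumptions.

```python
# A conceptual sketch (not Halide's API) of one algorithm under two
# schedules: both functions compute the same 3x3 box blur, but differ in
# storage and order of computation -- exactly the choices the paper calls
# the "schedule".
import numpy as np

def blur_store_intermediate(im: np.ndarray) -> np.ndarray:
    # Schedule 1: compute and store the entire horizontal pass, then run
    # the vertical pass over the stored buffer (maximal storage, no
    # redundant work, poor locality for large images).
    bh = (im[:, :-2] + im[:, 1:-1] + im[:, 2:]) / 3.0
    return (bh[:-2] + bh[1:-1] + bh[2:]) / 3.0

def blur_fused(im: np.ndarray) -> np.ndarray:
    # Schedule 2: recompute the three horizontally blurred rows needed
    # for each output row, trading redundant work for locality.
    h, w = im.shape
    out = np.empty((h - 2, w - 2))
    for y in range(h - 2):
        rows = (im[y:y+3, :-2] + im[y:y+3, 1:-1] + im[y:y+3, 2:]) / 3.0
        out[y] = rows.sum(axis=0) / 3.0
    return out

# Both schedules produce the same image; only performance differs.
im = np.random.rand(64, 64)
assert np.allclose(blur_store_intermediate(im), blur_fused(im))
```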

    Better Optical Triangulation through Spacetime Analysis

    The standard methods for extracting range data from optical triangulation scanners are accurate only for planar objects of uniform reflectance illuminated by an incoherent source. Using these methods, curved surfaces, discontinuous surfaces, and surfaces of varying reflectance cause systematic distortions of the range data. Coherent light sources such as lasers introduce speckle artifacts that further degrade the data. We present a new ranging method based on analyzing the time evolution of the structured light reflections. Using our spacetime analysis, we can correct for each of these artifacts, thereby attaining significantly higher accuracy using existing technology. We present results that demonstrate the validity of our method using a commercial laser stripe triangulation scanner. Keywords: active and real-time vision, low-level processing, optical triangulation, laser rangefinding, 3-D scanning.
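
    A minimal sketch of the spacetime idea for a single sensor pixel, assuming the laser stripe's reflection traces a roughly Gaussian intensity profile over time as it sweeps past; the parabola-on-log peak fit and the synthetic data below are illustrative assumptions, not the paper's exact estimator.

```python
# Sketch: estimate the instant the stripe's reflection peaks at one pixel
# by fitting a parabola to the log of the three brightest time samples
# (the log of a Gaussian is a parabola). That peak time, rather than a
# per-frame intensity centroid, is what indexes the range estimate.
import numpy as np

def peak_time(intensity: np.ndarray, t: np.ndarray) -> float:
    """Sub-frame peak of a roughly Gaussian time profile at one pixel."""
    i = int(np.argmax(intensity))
    i = min(max(i, 1), len(t) - 2)          # keep a 3-sample neighborhood
    y = np.log(intensity[i-1:i+2] + 1e-12)  # Gaussian -> parabola in log
    # Vertex of the parabola through the three uniformly spaced samples.
    denom = y[0] - 2.0 * y[1] + y[2]
    offset = 0.5 * (y[0] - y[2]) / denom
    return t[i] + offset * (t[i+1] - t[i])

# Usage: a Gaussian pulse sampled over 30 frames, true peak at t = 0.43.
t = np.linspace(0.0, 1.0, 30)
intensity = np.exp(-0.5 * ((t - 0.43) / 0.08) ** 2)
print(peak_time(intensity, t))  # ~0.43, recovered between frames
```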