AirCode: Unobtrusive Physical Tags for Digital Fabrication
We present AirCode, a technique that allows the user to tag physically fabricated objects with given information. An AirCode tag consists of a group of carefully designed air pockets placed beneath the object's surface. These air pockets are easily produced during the object's fabrication process, without any additional material or postprocessing. Because the air pockets affect only the scattering light transport under the surface, they are hard to notice with the naked eye, yet they become detectable with a computational imaging method. We present a tool that automates the design of air pockets so the user can encode information, and the AirCode system allows the user to retrieve the information from captured images via a robust decoding algorithm. We demonstrate our tagging technique with applications in metadata embedding, robotic grasping, and conveying object affordances.

Comment: ACM UIST 2017 Technical Paper
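The abstract mentions a robust decoding algorithm without giving details. As a purely illustrative sketch (not AirCode's actual scheme), a hypothetical tag could repeat each payload bit across several detected cells and decode by majority vote, tolerating individual misdetections:

```python
import numpy as np

# Illustrative only: AirCode's real tag layout and decoder are described in
# the paper. Here we assume a hypothetical repetition code in which each
# payload bit is stored in `reps` cells and recovered by majority vote.

def decode_repetition(cells: np.ndarray, reps: int = 3) -> list:
    """Majority-vote decode of a flat array of detected cell bits (0/1)."""
    groups = cells.reshape(-1, reps)                 # one row per payload bit
    return (groups.sum(axis=1) > reps // 2).astype(int).tolist()

# Simulated detection: one flipped cell per bit still decodes correctly.
raw = np.array([1, 1, 0,   0, 0, 1,   1, 0, 1])     # payload 1, 0, 1
print(decode_repetition(raw))  # → [1, 0, 1]
```

A real decoder would also need to locate the tag grid in the captured image before reading out cells; the repetition code merely illustrates why redundancy makes decoding robust.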
Multidimensional Optical Sensing and Imaging Systems (MOSIS): From Macro to Micro Scales
Multidimensional optical imaging systems for information processing and visualization technologies have numerous applications in fields such as manufacturing, medical sciences, entertainment, robotics, surveillance, and defense. Among different three-dimensional (3-D) imaging methods, integral imaging is a promising multiperspective sensing and display technique. Compared with other 3-D imaging techniques, integral imaging can capture a scene using an incoherent light source and generate real 3-D images for observation without any special viewing devices. This review paper describes passive multidimensional imaging systems combined with different integral imaging configurations. One example is the integral-imaging-based multidimensional optical sensing and imaging system (MOSIS), which can be used for 3-D visualization, seeing through obscurations, material inspection, and object recognition from microscales to long-range imaging. This system utilizes many degrees of freedom, such as time and space multiplexing, depth information, polarimetric, temporal, photon-flux, and multispectral information, based on integral imaging to record and reconstruct the multidimensionally integrated scene. Image fusion may be used to integrate the multidimensional images obtained by polarimetric sensors, multispectral cameras, and various multiplexing techniques. The multidimensional images contain substantially more information compared with two-dimensional (2-D) images or conventional 3-D images. In addition, we present recent progress and applications of 3-D integral imaging, including human gesture recognition in the time domain, depth estimation, mid-wave-infrared photon counting, 3-D polarimetric imaging for object shape and material identification, dynamic integral imaging implemented with liquid-crystal devices, and 3-D endoscopy for healthcare applications.

B. Javidi wishes to acknowledge support by the National Science Foundation (NSF) under Grant NSF/IIS-1422179, and DARPA and US Army under contract number W911NF-13-1-0485. The work of P. Latorre Carmona, A. Martínez-Uso, J. M. Sotoca and F. Pla was supported by the Spanish Ministry of Economy under the project ESP2013-48458-C4-3-P, and by MICINN under the project MTM2013-48371-C2-2-PDGI, by Generalitat Valenciana under the project PROMETEO-II/2014/062, and by Universitat Jaume I through project P11B2014-09. The work of M. Martínez-Corral and G. Saavedra was supported by the Spanish Ministry of Economy and Competitiveness under the grant DPI2015-66458-C2-1R, and by the Generalitat Valenciana, Spain under the project PROMETEOII/2014/072.
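The reconstruction step of integral imaging can be sketched computationally. In the standard shift-and-sum back-projection, each elemental image is shifted by an amount proportional to its position in the lens array and the shifted images are averaged, bringing one depth plane into focus. The grid size and per-depth pixel shift `d` below are hypothetical; real systems derive the shift from lens pitch, focal length, and the chosen depth.

```python
import numpy as np

# Minimal sketch of integral-imaging computational reconstruction, assuming a
# K x L grid of elemental images and a per-depth pixel shift `d` (hypothetical
# parameters, not MOSIS's exact calibration).

def reconstruct_plane(elemental: np.ndarray, d: int) -> np.ndarray:
    """Shift each elemental image by (k*d, l*d) and average (back-projection)."""
    K, L, H, W = elemental.shape
    acc = np.zeros((H, W))
    for k in range(K):
        for l in range(L):
            acc += np.roll(elemental[k, l], shift=(k * d, l * d), axis=(0, 1))
    return acc / (K * L)

# Objects lying at the matching depth add coherently; others blur out.
stack = np.random.rand(3, 3, 32, 32)   # 3x3 array of 32x32 elemental images
plane = reconstruct_plane(stack, d=2)
print(plane.shape)  # (32, 32)
```

The same loop structure extends naturally to the multidimensional case by reconstructing each polarimetric or spectral channel separately before fusion.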
Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a video tape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August, 1988, was co-edited by one of us (John Kender [27]). And, the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June, 1989, is co-program chaired by one of us (John Kender [23]).
Intelligent Multi-channel Meta-imagers for Accelerating Machine Vision
Rapid developments in machine vision have led to advances in a variety of industries, from medical image analysis to autonomous systems. These achievements, however, typically require digital neural networks with heavy computational demands, which suffer from high energy consumption and hinder real-time decision-making when computing resources are not accessible. Here, we demonstrate an intelligent meta-imager that is designed to work in concert with a digital back-end to off-load computationally expensive convolution operations into high-speed and low-power optics. In this architecture, metasurfaces enable both angle and polarization multiplexing to create multiple information channels that perform positively and negatively valued convolution operations in a single shot. The meta-imager is employed for object classification, experimentally achieving 98.6% accuracy in classifying handwritten digits and 88.8% accuracy in classifying fashion images. With its compactness, high speed, and low power consumption, this approach could find a wide range of applications in artificial intelligence and machine vision.

Comment: 15 pages, 5 figures
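The need for separate positively and negatively valued channels follows from the fact that optical intensities are nonnegative. A signed kernel w can be split into w⁺ = max(w, 0) and w⁻ = max(−w, 0), each realized as a nonnegative channel, with the digital back-end taking the difference. A minimal numerical sketch of this decomposition (illustrating the principle, not the paper's optical implementation):

```python
import numpy as np

# Optical channels carry only nonnegative intensities, so a signed kernel w
# is split into w_pos = max(w, 0) and w_neg = max(-w, 0); two channels
# compute x*w_pos and x*w_neg in parallel and the back-end subtracts them.
# The kernel and signal below are arbitrary examples.

w = np.array([1.0, -2.0, 1.0])            # signed kernel (a 1-D Laplacian)
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])  # nonnegative input signal

w_pos, w_neg = np.maximum(w, 0), np.maximum(-w, 0)
y_two_channel = np.convolve(x, w_pos, 'valid') - np.convolve(x, w_neg, 'valid')
y_direct = np.convolve(x, w, 'valid')
print(np.allclose(y_two_channel, y_direct))  # True
```

Because convolution is linear, the two-channel difference reproduces the signed result exactly; in the meta-imager the two channels are separated by angle and polarization rather than computed sequentially.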
Separation and contrast enhancement of overlapping cast shadow components using polarization
Shadows are an inseparable aspect of all natural scenes. When there are multiple light sources or multiple reflections, several different shadows may overlap at the same location and create complicated patterns. Shadows are a potentially good source of information about a scene if the shadow regions can be properly identified and segmented. However, shadow region identification and segmentation is a difficult task, and improperly identified shadows often interfere with machine vision tasks such as object recognition and tracking. We propose here a new shadow separation and contrast enhancement method based on the polarization of light. Polarization information of the scene, captured by our polarization-sensitive camera, is shown to separate shadows from different light sources effectively. Such shadow separation is almost impossible to achieve with conventional, polarization-insensitive imaging.
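A common way to extract the polarization information such a camera provides (a generic linear-polarimetry pipeline, not necessarily the paper's exact algorithm) is to compute the linear Stokes parameters from intensity images taken behind polarizers at 0°, 45°, 90°, and 135°; shadows cast by differently polarized illumination then separate in the S1/S2 or degree-of-polarization channels:

```python
import numpy as np

# Generic linear-Stokes computation from four polarizer-angle images.
# This is a standard polarimetry sketch; the paper's separation algorithm
# may process these quantities differently.

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)        # total intensity
    s1 = i0 - i90                              # horizontal vs. vertical
    s2 = i45 - i135                            # +45 deg vs. -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of lin. pol.
    aolp = 0.5 * np.arctan2(s2, s1)            # angle of lin. polarization
    return s0, s1, s2, dolp, aolp

# Fully horizontally polarized light: i0 = 1, i90 = 0, i45 = i135 = 0.5.
i0, i45, i90, i135 = (np.full((2, 2), v) for v in (1.0, 0.5, 0.0, 0.5))
s0, s1, s2, dolp, aolp = linear_stokes(i0, i45, i90, i135)
print(float(dolp[0, 0]))  # → 1.0
```

Overlapping shadow components lit by sources of different polarization states contribute differently to S1 and S2, which is what makes them separable where intensity alone is ambiguous.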
Analyzing the Effect of Polarization in Imaging
Light, a natural element of our lives, can be characterized by its intensity, wavelength, and polarization. Polarization is a general property of waves (light, gravitational waves, sound waves, etc.) that carries information about their oscillation as well as about the reflecting object. Polarization of light cannot be perceived by the naked human eye, whose photoreceptors are essentially insensitive to it. In computer vision, polarization is used for image segmentation and for object and texture recognition; in the medical field, polarization enables better diagnosis of skin texture and lesions. This project uses digital image processing techniques to analyze the effect of polarization in imaging, focusing on identifying the textures or patterns of an object. For imaging of human skin, the analysis technique is developed to classify and determine skin texture across different racial backgrounds with the aid of polarized light, as well as to distinguish the texture of normal skin from that of skin lesions.
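One widely used polarized-light preprocessing step in skin imaging, consistent with the project's goal though not necessarily its exact method, is polarization-difference imaging: a co-polarized capture retains surface glare while a cross-polarized capture favors subsurface scattering, and their normalized difference emphasizes surface texture:

```python
import numpy as np

# Illustrative polarization-difference imaging sketch. i_par is the image
# with analyzer parallel to the illumination polarization, i_perp with the
# analyzer crossed; both are hypothetical inputs here.

def polarization_difference(i_par: np.ndarray, i_perp: np.ndarray) -> np.ndarray:
    """Normalized difference of parallel- and cross-polarized images."""
    return (i_par - i_perp) / np.maximum(i_par + i_perp, 1e-9)

# A surface-glare-dominated pixel (i_par >> i_perp) maps near +1;
# a subsurface-dominated pixel maps near 0 or below.
out = polarization_difference(np.full((2, 2), 2.0), np.full((2, 2), 1.0))
print(float(out[0, 0]))
```

Texture or lesion classification would then operate on this contrast-enhanced image rather than on raw intensity.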