IMAGE SCENE RECONSTRUCTION FROM DASHCAM VIDEO
This project presents a method for reconstructing street-scene images from recorded driving video. The aim of the project is to combine street-scene images along the road: the generated images are the left-side and right-side street scenes for two types of road, straight roads and bends. The main technique applied in this project is SURF feature-based panoramic image stitching. The reconstructed street-scene images were validated with subjective Image Quality Assessment. This report discusses the project background, a literature review of related research, the methodology, results, discussion, and future development of the project. The entire process was demonstrated using MATLAB. Without the need to install high-end devices, street-scene images giving a satisfying visual summary along the way were reconstructed from the captured driving video.
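The geometric core of feature-based stitching like the above is a planar homography: once matched features yield a 3x3 matrix H, the corners of an incoming frame are projected onto the panorama canvas to determine the stitched extent. The sketch below is illustrative only (the project itself used MATLAB's SURF tools); the matrix and frame size are assumed values.

```python
# Minimal sketch: apply a 3x3 homography (nested lists) to points, then
# compute the bounding box of a warped frame on the panorama canvas.

def apply_homography(H, x, y):
    """Map a point (x, y) through homography H in homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

def canvas_extent(H, width, height):
    """Bounding box (min_x, min_y, max_x, max_y) of a warped frame."""
    corners = [(0, 0), (width, 0), (0, height), (width, height)]
    warped = [apply_homography(H, x, y) for x, y in corners]
    xs = [p[0] for p in warped]
    ys = [p[1] for p in warped]
    return min(xs), min(ys), max(xs), max(ys)

# A pure 100 px rightward translation, a typical frame-to-frame motion:
H = [[1, 0, 100], [0, 1, 0], [0, 0, 1]]
print(canvas_extent(H, 640, 480))  # → (100.0, 0.0, 740.0, 480.0)
```

In a full stitcher, H would be estimated from SURF matches (e.g. with RANSAC) rather than written by hand.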
Matching aerial images to an orthomap
To help combat natural disasters, especially forest fires, we want to use computer vision techniques, and new technologies in general, to geolocate elements detected on the ground. One way to do this is to determine which area or geographic point corresponds to an image taken by an unmanned aerial vehicle (UAV), by matching the image against an orthomap for which the coordinates of every pixel are already known. To that end, we evaluate whether the standard technique used to match two images also works for matching an image with an orthomap. This paper describes the current state of the art in local feature detectors and descriptors, describing and then evaluating the most widely used ones. Finally, the results are analysed and, for both detectors and descriptors, we determine which is best suited to the proposed problem.
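The standard matching technique the paper evaluates typically pairs nearest-neighbour descriptor search with Lowe's ratio test, which rejects ambiguous matches. The toy 3-D vectors below stand in for real SIFT/SURF descriptors; the 0.8 ratio and all data are illustrative assumptions.

```python
# Sketch of nearest-neighbour matching with Lowe's ratio test: accept a
# match only if the best candidate is clearly closer than the second best.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(desc, candidates, ratio=0.8):
    """Index of the best candidate, or None if the match is ambiguous."""
    order = sorted(range(len(candidates)), key=lambda i: euclidean(desc, candidates[i]))
    best, second = order[0], order[1]
    if euclidean(desc, candidates[best]) < ratio * euclidean(desc, candidates[second]):
        return best
    return None

query = [1.0, 0.0, 0.0]
db = [[0.9, 0.1, 0.0],   # close and distinctive: should match
      [0.0, 1.0, 0.0],
      [0.0, 0.0, 1.0]]
print(ratio_test_match(query, db))  # → 0
```

When matching aerial frames against an orthomap, repetitive terrain makes many candidates nearly equidistant, which is exactly the case the ratio test filters out.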
Towards a Strawberry Harvest Prediction System Using Computer Vision and Pattern Recognition
Farmers require advance notice when a harvest is approaching, so they can allocate resources and hire workers as efficiently as possible. Existing methods are subjective and labor intensive, and require the expertise of a professional forecaster. Cal Poly's EE department has been collaborating with the Cal Poly Strawberry Center to investigate the potential of using digital image processing to predict harvests more reliably. This paper shows the progress of that ongoing project, as well as what aspects could still be improved. Three main blocks comprise this system: data acquisition, which obtains and catalogues images of the strawberry plants; computer vision, which extracts information from the images and constructs a time-series model of the field as a whole; and prediction, which uses the field's history to estimate the most likely harvest window. The best method of data acquisition is determined through a decision matrix to be a small autonomous rover. Several challenges specific to images captured via drone, such as fisheye distortion and dirt masking, are examined and mitigated. Using thresholding, the nRGB color space is shown to be the most promising for image segmentation of red strawberries. Data from field 25 at the Cal Poly Strawberry Center is tabulated, analyzed, and compared against industry trends across California. Ultimately, this work serves as a strong benchmark towards a full strawberry yield prediction system.
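The nRGB segmentation step described above normalises each pixel's RGB values by their sum, so chromaticity becomes largely independent of brightness, and ripe-red pixels can then be selected with a simple threshold. The 0.5 threshold and sample pixels below are illustrative assumptions, not the project's actual values.

```python
# Sketch of red-strawberry segmentation in the normalised RGB (nRGB) space.

def nrgb(r, g, b):
    """Normalised RGB (chromaticity) of one pixel: each channel / (r+g+b)."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0, 0.0
    return r / s, g / s, b / s

def is_red(pixel, threshold=0.5):
    """Classify a pixel as ripe-red when its normalised red dominates."""
    nr, ng, nb = nrgb(*pixel)
    return nr > threshold

pixels = [(200, 30, 30),    # ripe strawberry red
          (40, 180, 50),    # leaf green
          (120, 120, 120)]  # grey soil
print([is_red(p) for p in pixels])  # → [True, False, False]
```

Because the normalisation divides out overall intensity, a shaded red berry and a sunlit one land near the same point in nRGB space, which is what makes the space attractive for thresholding outdoors.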
Aerial panoramic image reconstruction for inspection and survey purposes
Due to the rising demand for inspection and survey applications that require UAVs, panoramic image reconstruction has become a field of very active research among computer vision experts. This project therefore aims to develop an algorithm for creating panoramic images that is fast enough to work in real time, loses as little information as possible, is easy to integrate into the UAV system, and can later incorporate other techniques such as object detection or visual odometry. To meet the real-time and minimal-information-loss objectives, a controlled reconstruction method is proposed in which the images that will form part of the panorama are continuously evaluated and selected, so that the best image is found to merge into the panorama. The algorithm also decides when the current panorama cannot be continued and a new one must be started quickly, before information is lost. Finally, to meet the easy-integration objective, the use of the ROS framework is proposed, which is based on exchanging messages between different nodes (subsystems).
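The controlled-reconstruction policy described above can be sketched as a simple per-frame decision: accept a frame only if its estimated overlap with the current mosaic falls in a useful band, and restart the panorama when overlap is lost. The thresholds and the 1-D translation model below are my illustrative assumptions, not the project's actual parameters.

```python
# Hedged sketch of frame selection for controlled panorama reconstruction.

def overlap_ratio(shift_px, frame_width):
    """Fraction of a frame overlapping the mosaic under a pure x-shift."""
    return max(0.0, 1.0 - abs(shift_px) / frame_width)

def decide(shift_px, frame_width, accept_band=(0.3, 0.9)):
    """Return 'skip' (redundant), 'stitch', or 'restart' (overlap lost)."""
    ov = overlap_ratio(shift_px, frame_width)
    lo, hi = accept_band
    if ov > hi:
        return "skip"      # barely moved: frame adds nothing new
    if ov >= lo:
        return "stitch"    # new content, and enough overlap to align
    return "restart"       # too little overlap: begin a new panorama

for shift in (10, 300, 600):
    print(shift, decide(shift, frame_width=640))
```

In a ROS implementation, this decision would sit in one node subscribing to the camera topic and publishing accepted frames to the stitching node.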
Spectral imaging for high-throughput metrology of large-area nanostructure arrays
Modern high-throughput nanopatterning techniques such as nanoimprint lithography make it possible to fabricate arrays of nanostructures (features with dimensions on the 10’s to 100’s of nm scale) over large area substrates (in² to m² scale) such as Si wafers, glass sheets, and flexible roll-to-roll webs. The ability to make such large area nanostructure arrays, or “LNAs” as we will call them, gives birth to an extensive design space enabling a wide array of applications. For instance, LNAs exhibit nanophotonic properties enabling optical devices like wire-grid polarizers (WGPs), transparent conducting metal mesh grids (MMGs), color filters, perfect mirrors, and anti-reflection surfaces. LNAs can also be utilized for increasing surface area as well as generally creating large arrays of discrete features to be utilized as building blocks for electronic components in memory storage devices, sensors, and microprocessors. These unique properties make LNAs immediately attractive to certain industries such as the display and photovoltaic industries. As fabrication methods for LNAs are becoming viable, various industries are becoming interested in pursuing high-volume manufacturing of LNAs for these applications. Unfortunately, metrology methods are currently rudimentary outside of the silicon integrated circuits industry, impeding manufacturing scalability in applications such as displays and photovoltaics. Metrology is essential in the manufacturing context, because it provides invaluable feedback on the success of the fabrication process, both during new process development and large-scale production by tracking of device quality metrics, including performance and reliability metrics, and enables classification of defects that cause devices to not achieve desired quality metrics. Traditional nanometrology methods have fundamental issues which make their applicability to LNA manufacturing difficult. In particular, their low throughput is a major deal-breaker. 
Fortunately, the nanophotonic properties of LNAs offer a convenient basis for metrology with the potential to bridge the gap between the macro and nano scales. This is because the nanophotonic properties of LNAs are inherently geometry dependent, meaning that the optical effects observed from LNAs on the macroscale give direct insight into what is happening on the nanoscale. These optical properties can be characterized using spectral imaging methods such as RGB color imaging, multispectral imaging, and hyperspectral imaging. The throughput of these systems can be extremely high relative to traditional metrology approaches. For instance, a hyperspectral imaging system, when optimized, can achieve a throughput of 2.6 m²/hr with 61 spectral bands (wavelength centers of 400 to 700 nm in steps of 5 nm) and a resolution of 10 x 10 µm. An RGB imaging system can achieve an even higher throughput of 15.3 m²/hr. The 10 x 10 µm lateral resolution is often adequate for display and photovoltaic applications. The high throughput makes this approach incredibly attractive. In this dissertation, we show how spectral imaging techniques can be applied to metrology characterization tasks including defect detection and classification, as well as geometric measurement via a technique called optical critical dimension (OCD) scatterometry. In this work, we utilize exemplar manufacturing methods, namely JFIL nanoimprint lithography, to create a variety of exemplar LNAs on which we demonstrate the various metrology capabilities of spectral imaging. These LNAs include plasma etched vertical Si nanopillar arrays, metal assisted chemical etching (MACE) vertical Si nanowire arrays, WGPs, and MMGs. Each of these devices has unique manufacturing processes, and we show how the various manufacturing process steps can create a variety of different defects. Naturally, many of the defects originate in the nanoimprint process which lithographically defines the features.
We show how defects like particle contamination, non-filling, residual layer thickness (RLT) variations, and adhesion failure uniquely manifest as changes in the optical signatures of the LNAs, and use this principle to provide a basis for defect detection. Then, we show how image processing methods can be used to classify what types of defects have occurred over large areas such as wafer scale. Furthermore, we demonstrate that spectral imaging can be used for geometric metrology via the OCD method, and show how hyperspectral imaging, in particular, can provide geometric measurement over wafer-scale areas. The large field of view (FOV), high spatial resolution, and high speed offered by the spectral imaging approach allow for identification of a variety of interesting defect signatures that would be difficult, or nearly impossible, to observe using other metrology approaches. Finally, we discuss ongoing development of a spectral imaging system for roll-to-roll (R2R) LNA manufacturing. Construction of this system will begin in the months following this dissertation and will primarily be applied to manufacturing of WGPs and MMGs on R2R. In summary, these demonstrations are intended to motivate the use of spectral imaging wherever possible in LNA manufacturing. Naturally, this requires that the LNAs being manufactured exhibit significant enough optical effects for the approach to work, but when this is the case, the advantages of the approach appear outstanding and thus have the potential to be utilized in volume manufacturing of LNAs.
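The throughput figures quoted above imply a pixel rate, a useful sanity check when sizing such a system. This back-of-the-envelope arithmetic is mine, derived only from the numbers in the text (2.6 m²/hr at 10 x 10 µm resolution).

```python
# Implied pixel rate of the optimized hyperspectral system described above.

area_per_hour_m2 = 2.6          # quoted scan throughput
pixel_side_um = 10.0            # quoted lateral resolution (10 x 10 µm)

pixel_area_m2 = (pixel_side_um * 1e-6) ** 2          # 1e-10 m² per pixel
pixels_per_hour = area_per_hour_m2 / pixel_area_m2   # 2.6e10 pixels/hr
pixels_per_second = pixels_per_hour / 3600.0

print(f"{pixels_per_second:.2e} pixels/s")  # → 7.22e+06 pixels/s
```

About 7 million spatial pixels per second, each with 61 spectral bands, which illustrates why the data-handling pipeline, not the optics alone, dominates the design of such a metrology system.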
Study on Advanced Riding Panoramic Bird-Eye's View Systems
In today's society, the booming smart-vehicle industry has driven rapid growth in automotive electronics: more and more driver-assistance systems are being developed, along with smarter in-vehicle equipment such as various sensors. This thesis proposes using bird's-eye-view imagery to present an alternative kind of surround-view image, giving the rider a more complete picture of road and traffic conditions.
The system uses cameras mounted around the riding vehicle to capture images, and the four images from different directions are combined into a bird's-eye surround-view image. Besides the conventional bird's-eye view, the system also offers a different style of bird's-eye presentation that uses different viewing angles, providing a different visual perception so that road users receive additional road- and traffic-condition information and can anticipate possible dangers as early as possible, achieving safer driving. The research addresses three main design challenges and topics: (1) how to form a bird's-eye image from an ordinary image; (2) how to build a surround-view image from bird's-eye images; and (3) real-time image output. Since the cameras use wide-angle lenses, a checkerboard pattern is used to calibrate the images; image warping is then applied to obtain the top view, and image blending is used to complete the bird's-eye surround-view image.
According to the experimental results, the top-view effect can be achieved by rotating and projecting the images, and a further rear-top-view effect can be obtained. The images to be stitched are determined from the relative positions of the vehicle and the images, and a multi-band blending method is used to complete the surround-view image. The resulting image differs from an ordinary top-view surround image; this thesis calls it the rear-top-view image. It presents different image information from the ordinary view, allowing the rider to obtain different information and thus drive more safely.
Table of Contents
Acknowledgements
Abstract (Chinese)
Abstract
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
  1. Driver-assistance systems
  2. Surround-view systems
  3. Thesis overview
Chapter 2: Literature Review
  1. Image calibration
    (1) A Flexible New Technique for Camera Calibration [2]
  2. Image stitching
    (1) Feature-based panoramic image stitching [3]
    (2) Panorama Mosaic Optimization for Mobile Camera Systems [4]
  3. Top-view images
    (1) An Imaging Method for 360-Degree Panoramic Bird-Eye View [6]
    (2) Topview Transform Model for the vehicle parking assistance system [7]
  4. Image blending
    (1) Efficient Video Stitching Based on Fast Structure Deformation [8]
Chapter 3: Advanced Riding Bird's-Eye Surround-View Image Algorithm
  1. Camera calibration
    (1) Camera model
  2. Image warping
    (1) Image rotation
    (2) Homography [16]
    (3) Image warping
  3. Image registration
    (1) Image seams
  4. Image blending
    (1) Feather blending [31]
    (2) Multi-band blending [32][33][34]
  5. Image composing
Chapter 4: Results and Discussion
  1. Image capture and camera placement
  2. Results
    (1) Outdoor scene 1
    (2) Outdoor scene 2
    (3) Outdoor scene 3
    (4) Outdoor scene 4
    (5) Outdoor scene 5
  3. Discussion
    (1) Algorithm execution time
    (2) Quantitative evaluation and comparison
Chapter 5: Conclusion and Future Work
  1. Conclusion
  2. Future work
References
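The feather-blending step listed in the table of contents mixes pixel values across the overlap between two images with linearly ramped weights, so the seam fades out instead of showing a hard edge. The sketch below uses a 1-D row of grey values in place of real image data; all values are illustrative.

```python
# Minimal sketch of feather blending across an overlap region.

def feather_blend(left, right):
    """Blend two equal-length overlapping strips with a linear weight ramp."""
    n = len(left)
    out = []
    for i in range(n):
        w = i / (n - 1)  # weight: 0 at the left edge, 1 at the right edge
        out.append((1 - w) * left[i] + w * right[i])
    return out

# Overlap where the left image is bright (200) and the right is dark (100):
print(feather_blend([200] * 5, [100] * 5))
# → [200.0, 175.0, 150.0, 125.0, 100.0]
```

Multi-band blending, also listed in the table of contents, generalises this by applying such ramps separately per frequency band, which hides seams without blurring fine detail.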