
    Dual-layered and wavelength-multiplexed optical barcode for high data storage

    A novel barcode system designed to achieve high data storage using more than one layer is introduced theoretically and partially tested in the laboratory. Unlike existing barcode systems, this design uses diffraction gratings as the core elements of the barcode symbol. Like any other barcode system, the model requires a light source, a barcode symbol and photodiode detectors. Theoretical results from optics were used to design the entire system, including the positioning of all of its components. After partial laboratory testing, the design was revised to achieve better results. Experiments showed that the initially proposed light-emitting diode (LED) source could not deliver a 5 mm light spot over a range of 50 cm, so a white laser source was adopted instead. The diffraction orders from the barcode symbol are captured by detectors built with Si photodiodes, which are designed to detect this range of wavelengths. The barcode symbol is composed of small 5 mm × 5 mm grating modules, and the largest symbol size defined is 80 modules (5 cm × 5 cm). Experimental work showed that the light intensity, rather than the entire diffracted spectrum, can be used to uniquely identify each grating. A better design is proposed in which the detectors are positioned under the barcode symbol and capture the light intensity of the first diffracted order. Theoretical investigation shows that diffraction gratings with different line densities (lines per mm) diffract different sets of wavelengths. This characteristic allows a set of unique gratings to be used in the barcode symbol, which in turn allows data to be represented and stored. Character (char) sets are defined to help encode and decode data in the barcode symbol. High data storage has been achieved through the use of two layers.
Multiple layers increase the number of unique sets of gratings, which in turn increases the data representation capacity. Two layers with 16 unique sets of gratings proved able to store around 100 bytes of data. The system has the potential to use more than two layers: four layers with 16 unique gratings per layer would achieve 200 bytes. The thesis has shown through theoretical and experimental work that diffraction gratings can be used in a barcode system to represent data, and that multiple layers add the benefit of increased data storage. Further work is also suggested.
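The capacity figures above follow directly from counting distinguishable gratings: each module on each layer holds one of G gratings, i.e. log2(G) bits. A rough sketch (the function name and the 10 × 10 module grid of a 5 cm × 5 cm symbol are our assumptions, chosen because they reproduce the 100- and 200-byte figures):

```python
import math

def barcode_capacity_bytes(modules: int, layers: int, gratings_per_layer: int) -> float:
    """Data capacity of a multi-layer grating barcode.

    Each module on each layer holds one of `gratings_per_layer`
    distinguishable gratings, i.e. log2(G) bits per module per layer.
    """
    bits = modules * layers * math.log2(gratings_per_layer)
    return bits / 8

# A 5 cm x 5 cm symbol of 5 mm modules gives a 10 x 10 grid (100 modules).
# Two layers of 16 gratings -> 8 bits per module -> 100 bytes.
print(barcode_capacity_bytes(100, 2, 16))   # -> 100.0
print(barcode_capacity_bytes(100, 4, 16))   # -> 200.0
```

The formula makes explicit that capacity scales linearly with the number of layers, which is the motivation for the multi-layer design.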

    Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    Background: Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity, because a single laser beam scans the sample, allowing only a few microseconds of signal collection per pixel. This limitation has been overcome by parallel-beam illumination techniques combined with cooled CCD camera based image capture. Methods: Using microlens-enhanced Nipkow spinning-disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system for the acquisition, presentation and analysis of maximum-resolution confocal panorama images several Gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results: We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths, in single-event or time-lapse fashion, on fixed slides, in live-cell imaging chambers or in microtiter plates. Conclusion: The observer-independent image capture of EFLCM allows quantitative measurement of fluorescence intensities and morphological parameters on large numbers of cells. EFLCM therefore bridges the gap between mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high-content analysis (HCA) instrument for automated screening processes.
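Because the tiles are captured at known stage positions and in register, the core mosaic assembly reduces to placing each tile into one large array. A minimal sketch (names and the perfect-register, no-overlap assumption are ours; the actual EFLCM pipeline works at Gigapixel scale, in multiple colours, and handles blending):

```python
import numpy as np

def stitch_mosaic(tiles, grid_shape, tile_shape):
    """Assemble individually captured tiles into one panorama image.

    tiles      : dict mapping (row, col) -> 2-D ndarray of shape tile_shape
    grid_shape : (rows, cols) of the capture grid
    tile_shape : (height, width) of each tile, assumed in perfect register
    """
    rows, cols = grid_shape
    th, tw = tile_shape
    mosaic = np.zeros((rows * th, cols * tw), dtype=np.float32)
    for (r, c), tile in tiles.items():
        # Each tile lands in its own non-overlapping block of the panorama.
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic
```

For example, a 2 × 2 grid of 2 × 3-pixel tiles yields a 4 × 6-pixel panorama.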

    Realistic visualisation of cultural heritage objects

    This research used digital photography in a hemispherical dome, enabling a set of 64 photographic images of an object to be captured in perfect pixel register, with each image illuminated from a different direction. This representation is much richer than a single 2D image, because it contains information at each point about both the 3D shape of the surface (gradient and local curvature) and the directionality of reflectance (gloss and specularity). It thereby enables not only interactive visualisation through viewer software, giving the illusion of 3D, but also the reconstruction of an actual 3D surface and highly realistic rendering of a wide range of materials. The following seven outcomes of the research are claimed as novel contributions to knowledge in the field: a method for determining the geometry of an illumination dome; an adaptive method for finding surface normals by bounded regression; generating 3D surfaces from photometric stereo; the relationship between surface normals and specular angles; modelling surface specularity by a modified Lorentzian function; determining the optimal wavelengths of colour laser scanners; and characterising colour devices by synthetic reflectance spectra.
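The surface-normal recovery step can be illustrated with the classical Lambertian photometric-stereo formulation: with k ≥ 3 known light directions, the pixel intensities I = L·(ρn) are solved for the albedo-scaled normal. This plain least-squares sketch is ours; the thesis's own adaptive bounded-regression method is a refinement of it:

```python
import numpy as np

def surface_normal(light_dirs, intensities):
    """Least-squares Lambertian photometric stereo for one pixel.

    light_dirs  : (k, 3) unit illumination directions, k >= 3
    intensities : (k,) observed pixel intensities under each light
    Solves I = L @ (albedo * n) and returns (unit normal, albedo).
    """
    L = np.asarray(light_dirs, dtype=float)
    I = np.asarray(intensities, dtype=float)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```

With 64 dome images the system is heavily overdetermined, which is what makes outlier-aware (bounded) regression worthwhile in practice.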

    3D Modelling from Real Data

    The genesis of a 3D model follows two fundamentally different paths. The first is the CAD-generated model, where the shape is defined by a user's drawing actions, operating with mathematical “bricks” such as B-Splines, NURBS or subdivision surfaces (mathematical CAD modelling), or by directly drawing small planar polygonal facets in space to approximate complex free-form shapes (polygonal CAD modelling). This approach can be used both for ideal elements (a project, a fantasy shape in the mind of a designer, a 3D cartoon, etc.) and for real objects. In the latter case the object has first to be surveyed in order to generate a drawing consistent with the real object. If the surveying process is more than a rough acquisition of simple distances with a substantial amount of manual drawing, a scene can be modelled in 3D by capturing many points of its geometrical features with a digital instrument and connecting them by polygons, producing a 3D result similar to a polygonal CAD model, with the difference that the shape generated is in this case an accurate 3D acquisition of a real object (reality-based polygonal modelling). Considering only devices operating on the ground, 3D capturing techniques for the generation of reality-based 3D models span passive sensors and image data (Remondino and El-Hakim, 2006), optical active sensors and range data (Blais, 2004; Shan & Toth, 2008; Vosselman and Maas, 2010), classical surveying (e.g. total stations or Global Navigation Satellite Systems - GNSS), 2D maps (Yin et al., 2009), or an integration of the aforementioned methods (Stumpfel et al., 2003; Guidi et al., 2003; Beraldin, 2004; Stamos et al., 2008; Guidi et al., 2009a; Remondino et al., 2009; Callieri et al., 2011).
The choice depends on the required resolution and accuracy, object dimensions, location constraints, the instrument's portability and usability, surface characteristics, the working team's experience, the project's budget, the final goal, etc. Despite the potential of the image-based approach and its recent developments in automated and dense image matching, for non-experts the easy usability and reliability of optical active sensors in acquiring 3D data is generally a good reason to set image-based approaches aside. Moreover, the great advantage of active sensors is that they immediately deliver dense and detailed 3D point clouds whose coordinates are metrically defined. Image data, on the other hand, require some processing and a mathematical formulation to transform the two-dimensional image measurements into metric three-dimensional coordinates. Image-based modelling techniques (mainly photogrammetry and computer vision) are generally preferred for monuments or architectures with regular geometric shapes, low-budget projects, experienced working teams, or time and location constraints on data acquisition and processing. This chapter is intended as an updated review of reality-based 3D modelling in terrestrial applications, covering the different categories of 3D sensing devices and the related data processing pipelines.
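The mathematical formulation that turns 2D image measurements into metric 3D coordinates is, in the simplest two-view case, linear triangulation: each observed pixel constrains the 3D point through its camera projection matrix. A minimal DLT sketch (the function name is ours; real photogrammetric pipelines add camera calibration, bundle adjustment and robust matching):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) pixel coordinates of the same point in each image
    Returns the 3-D point in the common metric frame.
    """
    # Each pixel observation contributes two linear constraints A @ X_h = 0
    # on the homogeneous point X_h.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector = homogeneous solution
    return X[:3] / X[3]
```

This is also why image data "require some processing": the metric scale comes from the projection matrices, not from the images themselves.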

    Computer vision reading on stickers and direct part marking on horticultural products : challenges and possible solutions

    Traceability of products from production to the consumer has driven technological advancement in product identification. Identification has developed from traditional one-dimensional barcodes (EAN-13, Code 128, etc.) to two-dimensional (2D) barcodes such as QR (Quick Response) and Data Matrix codes. Over the last two decades there has been increased use of Radio Frequency Identification (RFID) and laser-based Direct Part Marking (DPM) for product identification in agriculture. However, in agriculture there are still considerable challenges to adopting barcode, RFID and DPM technologies, unlike in industry where these technologies have been very successful. This study was divided into three main objectives. Firstly, the effect of speed, dirt, moisture and bar width on barcode detection was determined, both in the laboratory and at a flower-producing company, Brandkamp GmbH. This part of the study developed algorithms for the automated detection of Code 128 barcodes under rough production conditions. Secondly, the effect of low laser marking energy, barcode size, print growth, colour and contrast on decoding 2D Data Matrix codes printed directly on apples was investigated. Three apple varieties (Golden Delicious, Kanzi and Red Jonaprince) were marked with various energy levels and barcode sizes. Image processing using Halcon 11.0.1 (MVTec) was used to evaluate the markings on the apples. Finally, the third objective was to evaluate both algorithms for 1D and 2D barcodes. According to the results, increasing the speed and the angle of inclination of the barcode decreased barcode recognition, and increasing the dirt on the surface of the barcode decreased successful detection. However, the proposed algorithm achieved 100% detection of the Code 128 barcode at the company's production speed (0.15 m/s).
Overall, the results from the company showed that the image-based system is a promising route to automation in horticultural production systems and overcomes the limitations of laser barcode readers. The results for apples showed that laser energy, barcode size, print growth, type of product, contrast between the markings and the colour of the product, the inertia of the laser system and the days of storage, individually or in combination, all influence the readability of laser-marked Data Matrix codes on apples. Detection of the Data Matrix code on Kanzi and Red Jonaprince was poor due to the low contrast between the markings and their skins. The proposed algorithm currently works successfully on Golden Delicious, with 100% detection for 10 days using an energy of 0.108 J mm⁻² and a barcode size of 10 × 10 mm². This shows the future prospect of marking barcodes not only on apples but also on other agricultural products for real-time production.
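Readability of the laser-marked codes hinges on the contrast between markings and skin, which explains the failures on the dark-skinned varieties. As an illustration only (this crude light/dark reflectance score is our own, not the metric used by Halcon or the ISO barcode grading standards):

```python
import numpy as np

def symbol_contrast(gray_roi, threshold=None):
    """Crude print-contrast score for a direct-marked code region.

    gray_roi : 2-D array of grayscale reflectance values (0..255)
    Splits pixels into light/dark classes at `threshold` (midpoint of
    the data range if not given) and returns (Rlight - Rdark) / Rlight.
    """
    g = np.asarray(gray_roi, dtype=float)
    if threshold is None:
        threshold = (g.max() + g.min()) / 2.0
    light = g[g >= threshold].mean()   # mean reflectance of background
    dark = g[g < threshold].mean()     # mean reflectance of laser marks
    return (light - dark) / light
```

On this score a bright Golden Delicious skin against dark laser marks rates high, while dark red skin against the same marks rates near zero, mirroring the detection results above.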

    Development of a laser scanning system for the inspection of surface defects

    The main objective of this project is the development of a low-cost laser scanning system for the detection of surface defects. The inspection system is based on a laser sensor, a power supply unit and an X-Y table, all interfaced with a personal computer. The system developed is capable of measuring surface defects of 1 mm or greater. The experiments carried out assessed the sensor's measuring ability under several headings, the two main ones being accuracy and repeatability. Other important subjects included examining the measuring limits of the system, the effect of using materials with different reflective properties, and the possible causes of errors in the system.
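Accuracy and repeatability for such a system are typically summarized from repeated measurements of a known reference feature: accuracy as the bias of the mean reading, repeatability as the spread of the readings. A minimal sketch (the function name and example numbers are ours):

```python
import statistics

def accuracy_and_repeatability(readings, reference):
    """Summarize repeated readings of a known reference feature.

    Returns (bias, spread): bias is the mean deviation from the
    reference value (accuracy); spread is the sample standard
    deviation of the readings (repeatability).
    """
    bias = statistics.mean(readings) - reference
    return bias, statistics.stdev(readings)

# Four repeated depth readings (mm) of a nominal 1.00 mm defect.
bias, spread = accuracy_and_repeatability([1.02, 0.98, 1.00, 1.04], 1.00)
```

Separating the two quantities matters because a sensor can be repeatable yet biased (systematic error), or unbiased yet noisy.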

    Integrating remote sensing datasets into ecological modelling: a Bayesian approach

    Process-based models have been used to simulate the 3-dimensional complexity of forest ecosystems and their temporal changes, but their extensive data requirements and complex parameterisation have often limited their use in practical management applications. Increasingly, information retrieved using remote sensing techniques can help in model parameterisation and data collection by providing spatially and temporally resolved forest information. In this paper, we illustrate the potential of Bayesian calibration for integrating such data sources to simulate forest production. As an example, we use the 3-PG model combined with hyperspectral, LiDAR, SAR and field-based data to simulate the growth of UK Corsican pine stands. Hyperspectral, LiDAR and SAR data are used to estimate LAI dynamics, tree height and above-ground biomass, respectively, while the Bayesian calibration provides uncertainty estimates for model parameters and outputs. This contrasts with goodness-of-fit approaches, which provide no such uncertainty estimates. The parameters and the data used in the calibration process are represented as probability distributions, reflecting our degree of certainty about them; after calibration, the distributions are updated. To approximate the posterior distributions of outputs and parameters, a Markov Chain Monte Carlo sampling approach is used (25 000 steps). A sensitivity analysis of model outputs to the parameters is also conducted. Overall, the results illustrate the potential of a Bayesian framework for truly integrative work, both in considering the available field-based and remotely sensed datasets and in estimating parameter and model output uncertainties.
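The MCMC step can be sketched with a random-walk Metropolis sampler, the simplest algorithm for drawing from a posterior known only up to its log-density. This generic sketch is ours and omits the multivariate proposal tuning and convergence diagnostics a real 3-PG calibration would need:

```python
import math
import random

def metropolis(log_post, theta0, n_steps=25_000, step=0.1, seed=42):
    """Random-walk Metropolis sampler for approximating a posterior.

    log_post : function mapping a parameter list to log p(theta | data)
    theta0   : starting parameter vector (the prior mean, say)
    Returns the chain of parameter vectors, one per step.
    """
    rng = random.Random(seed)
    theta = list(theta0)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        # Propose a Gaussian perturbation of every parameter.
        prop = [t + rng.gauss(0.0, step) for t in theta]
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain
```

After discarding a burn-in portion, histograms of the chain approximate the updated (posterior) parameter distributions described above.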

    Improvement of the Geospatial Accuracy of Mobile Terrestrial LiDAR Data

    Many applications, such as topographic surveying for transportation engineering, have high accuracy requirements that mobile terrestrial LiDAR (MTL) may be able to meet under specific circumstances. Since high-rate, immersive (360° field of view) MTL is a relatively new technology for the collection and extraction of survey data, the understanding and correction of errors within such systems is under-researched. Therefore, the goal of the work presented here is to quantify the geospatial accuracy of MTL data and improve the quality of MTL data products. Quantification of the geospatial accuracy of MTL systems was accomplished through residual analysis, error propagation and conditional variance analysis. Real data from two MTL systems were analyzed using these methods, and it was found that the actual errors exceeded the manufacturers' estimates of system accuracy by over 10 mm. Conditional variance analysis on these systems showed that the contribution of the interactions among the measured parameters to the variances of the points in MTL point clouds is insignificant; the variances of the individual measurements used to produce a point are the primary sources of error in the output point cloud. Improvement of the geospatial accuracy of MTL data products was accomplished by developing methods for the simultaneous multi-sensor calibration of the system's boresight angles and lever-arm offsets, zero-error calibration, temperature correction, and both spatial and temporal outlier detection. The effectiveness of these techniques was evaluated in two test cases employing real MTL data. Test case 1 showed that the residuals between a control field and the MTL point cloud were reduced by 4.4 cm for points located on both horizontal and vertical target surfaces. Similarly, test case 2 showed a reduction in the residuals between control points and MTL data of 2-3 cm on horizontal surfaces and 1-2 cm on vertical surfaces.
The most accurate point cloud produced through the use of these calibration and filtering techniques occurred in test case 1 (27 mm, 26 mm). This result is still not accurate enough for certain high-accuracy applications such as topographic surveying for transportation engineering (20 mm, 10 mm).
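The error-propagation analysis referred to above combines the variances of the raw measurements into variances of the output coordinates. A minimal first-order sketch for a single 2-D range/angle measurement (the function and numbers are ours; a full MTL error budget also carries GNSS/IMU, boresight and lever-arm terms, and the analysis above found the cross-measurement interaction terms insignificant):

```python
import math

def point_variance(r, theta, var_r, var_theta):
    """First-order propagation of range/angle variances to a 2-D point.

    x = r*cos(theta), y = r*sin(theta); returns (var_x, var_y)
    assuming uncorrelated range and angle measurements.
    """
    c, s = math.cos(theta), math.sin(theta)
    # Partial derivatives: dx/dr = c, dx/dtheta = -r*s, etc.
    var_x = (c ** 2) * var_r + (r * s) ** 2 * var_theta
    var_y = (s ** 2) * var_r + (r * c) ** 2 * var_theta
    return var_x, var_y
```

For example, at 50 m range a 5 mm range standard deviation and a 0.1 mrad angular standard deviation each contribute about 5 mm of point error, illustrating how angular errors come to dominate at long range.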