
    A process for scheduling urban interchange reconstruction

    Researchers from Iowa State University worked with the Iowa DOT to develop computer-based schedules for the I-235 corridor using Microsoft Project 2000, in order to predict project completion, expose and resolve conflicts between trades or subcontractors, evaluate the effect of changes on project completion and cost, and track project progress. This thesis describes a method for creating schedules for urban freeway interchange reconstruction projects, together with procedures and tables that assist with planning and acceleration. The use of these tools is demonstrated in a case study: the Martin Luther King Jr. (MLK) and Cottage Grove Avenue projects in the I-235 corridor.
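    Schedules of this kind are typically built on the critical path method (CPM), which Microsoft Project uses to compute completion dates. As a rough illustration of the underlying forward-pass calculation, the Python sketch below walks a handful of hypothetical reconstruction activities; the activity names, durations, and dependencies are placeholders, not the thesis's actual schedule data.

        # Forward-pass critical-path sketch; all activity data are hypothetical.
        activities = {  # name: (duration_days, [predecessors])
            "demolition":      (10, []),
            "utility_moves":   (15, ["demolition"]),
            "bridge_deck":     (30, ["utility_moves"]),
            "paving":          (20, ["utility_moves"]),
            "signing_marking": (5,  ["bridge_deck", "paving"]),
        }

        early_finish = {}
        def earliest_finish(name):
            """Earliest finish day of an activity, computed recursively over predecessors."""
            if name not in early_finish:
                duration, preds = activities[name]
                start = max((earliest_finish(p) for p in preds), default=0)
                early_finish[name] = start + duration
            return early_finish[name]

        project_duration = max(earliest_finish(a) for a in activities)
        print(f"Predicted completion: day {project_duration}")  # day 60 for this toy network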

    Comparative Life Cycle Assessment of Single-Serve Coffee Packaging in Ontario

    Single-serve coffee pods occupy a growing share of the coffee market. In Ontario, a province of 14 million people, an estimated 2 billion single-serve coffee pods are consumed annually, generating 30,000 tons of landfill waste, equivalent to 0.3% of the total landfill waste generated in the province in 2014. Different formats of coffee pods have been introduced, and each addresses the waste problem differently. Two examples are recyclable coffee pods made of aluminum and compostable coffee pods made from biodegradable polymers. In this research, these two coffee pod formats are investigated together with a typical petroleum-based plastic coffee pod, which represents the baseline landfilling scenario. A cradle-to-grave life cycle assessment (LCA) is conducted to quantify and compare the environmental effects of these systems, with a special focus on packaging materials and end-of-life management. The results show that, among the three investigated coffee pods, the recyclable aluminum format has the highest potential environmental effects across nine impact categories, whereas the biodegradable pod, which is assumed to be composted in 40% of uses, has lower greenhouse gas emission and landfill waste generation potential than the petroleum-based plastic coffee pod. After applying a standard LCA weighting, the results indicate that human toxicity is the most important life cycle impact assessment indicator for all three coffee pod formats. This research is important from both a biodegradable material and a circular economy perspective. From a biodegradable material perspective, this study is the first to compare polylactic acid, a bio-based biodegradable polymer, with polystyrene, a petroleum-based non-degradable plastic. Biodegradable materials enable consumers to easily compost the coffee waste together with the coffee pod, but they also require an extra plastic packaging wrap for each pod. From a circular economy perspective, the study is important because the results indicate the strength of using compostable biological nutrients over recyclable technical nutrients in the context of small single-use food products. Like all LCA studies, the results depend on the specific assumptions and scenarios analyzed.
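    The weighting step mentioned above combines characterized results from several impact categories into a single score. The sketch below illustrates that arithmetic with a simple normalize-then-weight calculation; the category list, characterized values, normalization references, and weights are illustrative placeholders, not the study's actual data.

        import numpy as np

        # Normalize-then-weight single-score sketch; all numbers are illustrative.
        categories = ["climate change", "human toxicity", "landfill waste"]
        characterized = {                      # per functional unit (one brewed cup)
            "aluminum_recyclable": np.array([0.012, 0.80, 0.9]),
            "PLA_compostable":     np.array([0.009, 0.35, 2.1]),
            "PS_landfilled":       np.array([0.010, 0.40, 3.5]),
        }
        normalization = np.array([0.011, 0.50, 2.0])   # reference values (assumed)
        weights = np.array([0.4, 0.4, 0.2])            # weighting set (assumed)

        for pod, scores in characterized.items():
            single_score = float(np.sum(weights * scores / normalization))
            print(f"{pod}: weighted single score = {single_score:.2f}")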

    Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue

    Histological staining is a vital step used to diagnose various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labor-intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabeled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep learning-based framework that generates virtually stained images of label-free tissue, where different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information at its input: (1) autofluorescence images of the label-free tissue sample, and (2) a digital staining matrix that represents the desired microscopic map of different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabeled kidney tissue sections to generate micro-structured combinations of Hematoxylin and Eosin (H&E), Jones silver stain, and Masson's Trichrome stain. Using a single network, this approach multiplexes virtual staining of label-free tissue with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created on the same tissue cross-section, which is currently not feasible with standard histochemical staining methods.
    Comment: 19 pages, 5 figures, 2 tables
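    The key architectural idea is that the network conditions its output on a per-pixel staining matrix concatenated with the autofluorescence input. The PyTorch sketch below shows that input arrangement for a deliberately small stand-in network; the channel counts, layer sizes, and the 50/50 stain blend are assumptions for illustration, not the paper's trained architecture.

        import torch
        import torch.nn as nn

        N_AUTOFLUOR = 2   # autofluorescence channels (assumed)
        N_STAINS = 3      # H&E, Jones silver, Masson's Trichrome

        class VirtualStainer(nn.Module):
            """Toy stand-in: autofluorescence + staining matrix -> RGB brightfield-like image."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(N_AUTOFLUOR + N_STAINS, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 1),   # RGB output
                )

            def forward(self, autofluor, stain_matrix):
                # the staining matrix enters as extra input channels
                return self.net(torch.cat([autofluor, stain_matrix], dim=1))

        autofluor = torch.rand(1, N_AUTOFLUOR, 256, 256)
        # a spatially-uniform 50% H&E / 50% Jones blend as the staining matrix
        blend = torch.tensor([0.5, 0.5, 0.0]).view(1, N_STAINS, 1, 1)
        stain_matrix = blend.expand(1, N_STAINS, 256, 256)
        print(VirtualStainer()(autofluor, stain_matrix).shape)  # torch.Size([1, 3, 256, 256])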

    Ensemble learning of diffractive optical networks

    A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware, due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive Deep Neural Networks (D2NNs) form such an optical computing framework, which benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D2NNs have demonstrated success in various tasks, including, e.g., object classification, spectral encoding of information, optical pulse shaping and imaging, among others. Here, we significantly improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training a total of 1252 D2NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improve their image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N=14 and N=30 D2NNs achieve blind testing accuracies of 61.14% and 62.13%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D2NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap forward in extending the application space of diffractive optical image classification and machine vision systems.
    Comment: 22 pages, 4 figures, 1 table
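    The ensemble is assembled by pruning a large pool of independently trained D2NNs down to a subset whose averaged outputs classify best. The sketch below illustrates one common way to do this, greedy forward selection on a validation set; the paper's exact pruning algorithm is not reproduced here, and the per-model scores are random placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        n_models, n_val, n_classes = 50, 1000, 10
        # Hypothetical per-model class scores on validation images.
        scores = rng.random((n_models, n_val, n_classes))
        labels = rng.integers(0, n_classes, n_val)

        def ensemble_accuracy(members):
            avg = scores[members].mean(axis=0)             # average the class scores
            return (avg.argmax(axis=1) == labels).mean()

        selected = []
        while len(selected) < 14:                          # target ensemble size, e.g. N=14
            candidates = [m for m in range(n_models) if m not in selected]
            best = max(candidates, key=lambda m: ensemble_accuracy(selected + [m]))
            selected.append(best)
        print("members:", selected, "validation accuracy:", ensemble_accuracy(selected))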

    Universal Linear Intensity Transformations Using Spatially-Incoherent Diffractive Processors

    Under spatially-coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is greater than or equal to ~2 Ni x No, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially-incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially-incoherent monochromatic light, the spatially-varying intensity point spread function, H, of a diffractive network corresponding to a given, arbitrarily selected linear intensity transformation can be written as H(m,n;m',n') = |h(m,n;m',n')|^2, where h is the spatially-coherent point spread function of the same diffractive network, and (m,n) and (m',n') define the coordinates of the output and input FOVs, respectively. Using deep learning, supervised through examples of input-output profiles, we numerically demonstrate that a spatially-incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N is greater than or equal to ~2 Ni x No. These results constitute the first demonstration of universal linear intensity transformations performed on an input FOV under spatially-incoherent illumination and will be useful for designing all-optical visual processors that can work with incoherent, natural light.
    Comment: 29 pages, 10 figures
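    The relation H = |h|^2 is what makes the time-averaged output intensity a linear function of the input intensity under spatially-incoherent light. The short NumPy check below verifies this numerically for a random coherent point spread function by averaging the output intensity over random input phases; the matrix sizes and h itself are arbitrary placeholders, not a trained diffractive design.

        import numpy as np

        rng = np.random.default_rng(1)
        Ni, No = 16, 16
        h = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))  # coherent PSF
        H = np.abs(h) ** 2                                              # intensity PSF
        I_in = rng.random(Ni)                                           # input intensity pattern

        # Spatial incoherence ~ averaging over random, independent input phases.
        trials = 20000
        I_out = np.zeros(No)
        for _ in range(trials):
            phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, Ni))
            E_out = h @ (np.sqrt(I_in) * phases)
            I_out += np.abs(E_out) ** 2
        I_out /= trials

        print(np.allclose(I_out, H @ I_in, rtol=0.1))  # True, up to Monte-Carlo noise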

    Pyramid diffractive optical networks for unidirectional magnification and demagnification

    Diffractive deep neural networks (D2NNs) are composed of successive transmissive layers optimized using supervised deep learning to all-optically implement various computational tasks between an input and output field-of-view (FOV). Here, we present a pyramid-structured diffractive optical network design (which we term P-D2NN), optimized specifically for unidirectional image magnification and demagnification. In this P-D2NN design, the diffractive layers are pyramidally scaled in alignment with the direction of the image magnification or demagnification. Our analyses revealed the efficacy of this P-D2NN design in unidirectional image magnification and demagnification tasks, producing high-fidelity magnified or demagnified images in only one direction while inhibiting the image formation in the opposite direction, confirming the desired unidirectional imaging operation. Compared to conventional D2NN designs with uniform-sized successive diffractive layers, the P-D2NN design achieves similar performance in unidirectional magnification tasks using only half of the diffractive degrees of freedom within the optical processor volume. Furthermore, it maintains its unidirectional image magnification/demagnification functionality across a large band of illumination wavelengths despite being trained with a single illumination wavelength. With this pyramidal architecture, we also designed a wavelength-multiplexed diffractive network, where a unidirectional magnifier and a unidirectional demagnifier operate simultaneously in opposite directions, at two distinct illumination wavelengths. The efficacy of the P-D2NN architecture was also validated experimentally using monochromatic terahertz illumination, successfully matching our numerical simulations. P-D2NN offers a physics-inspired strategy for designing task-specific visual processors.
    Comment: 26 pages, 7 figures
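    The defining feature of the design is that successive layer apertures are scaled along the magnification direction. The sketch below is only a geometric illustration of that pyramidal scaling, interpolating layer widths between the input and output fields-of-view; the layer count, FOV size, and magnification factor are made-up values, not the paper's design parameters.

        import numpy as np

        def pyramid_layer_widths(n_layers, input_fov, magnification):
            """Geometrically interpolate layer widths from the input FOV to the output FOV."""
            output_fov = input_fov * magnification
            t = np.linspace(0.0, 1.0, n_layers + 2)[1:-1]   # exclude the two FOV planes
            return input_fov * (output_fov / input_fov) ** t

        # A 2x unidirectional magnifier with 5 layers: widths grow from ~72 to ~114 units.
        print(np.round(pyramid_layer_widths(n_layers=5, input_fov=64, magnification=2.0), 1))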

    Learning Inter- and Intra-frame Representations for Non-Lambertian Photometric Stereo

    In this paper, we build a two-stage Convolutional Neural Network (CNN) architecture that constructs inter- and intra-frame representations from an arbitrary number of images captured under different light directions, performing accurate normal estimation of non-Lambertian objects. We experimentally investigate numerous network design alternatives to identify the optimal scheme for deploying inter-frame and intra-frame feature extraction modules for the photometric stereo problem. Moreover, we propose to utilize the easily obtained object mask to eliminate adverse interference from invalid background regions in intra-frame spatial convolutions, thus effectively improving the accuracy of normal estimation for surfaces made of dark materials or with cast shadows. Experimental results demonstrate that the proposed masked two-stage photometric stereo CNN model (MT-PS-CNN) performs favorably against state-of-the-art photometric stereo techniques in terms of both accuracy and efficiency. In addition, the proposed method is capable of predicting accurate and rich surface normal details for non-Lambertian objects of complex geometry and performs stably given inputs captured under both sparse and dense lighting distributions.
    Comment: 9 pages, 8 figures
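    A minimal version of the masking idea is to zero out background pixels before (and after) the intra-frame spatial convolutions so that invalid regions cannot leak into neighboring object pixels. The PyTorch sketch below shows that pattern; the layer, channel counts, and mask are placeholders and not the MT-PS-CNN architecture itself.

        import torch
        import torch.nn as nn

        class MaskedConv(nn.Module):
            """Spatial convolution restricted to valid (object) pixels."""
            def __init__(self, c_in, c_out):
                super().__init__()
                self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)

            def forward(self, feat, mask):
                # suppress background contributions both before and after the convolution
                return self.conv(feat * mask) * mask

        feat = torch.rand(1, 16, 128, 128)                   # fused per-pixel features
        mask = (torch.rand(1, 1, 128, 128) > 0.3).float()    # hypothetical object mask
        print(MaskedConv(16, 32)(feat, mask).shape)          # torch.Size([1, 32, 128, 128])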

    Rapid Sensing of Hidden Objects and Defects using a Single-Pixel Diffractive Terahertz Processor

    Terahertz waves offer numerous advantages for the nondestructive detection of hidden objects/defects in materials, as they can penetrate through most optically-opaque materials. However, existing terahertz inspection systems are restricted in their throughput and accuracy (especially for detecting small features) due to their limited speed and resolution. Furthermore, machine vision-based continuous sensing systems that use large-pixel-count imaging are generally bottlenecked by their digital storage, data transmission and image processing requirements. Here, we report a diffractive processor that rapidly detects hidden defects/objects within a target sample using a single-pixel spectroscopic terahertz detector, without scanning the sample or forming/processing its image. This terahertz processor consists of passive diffractive layers that are optimized using deep learning to modify the spectrum of the terahertz radiation according to the absence/presence of hidden structures or defects. After its fabrication, the resulting diffractive processor all-optically probes the structural information of the sample volume and outputs a spectrum that directly indicates the presence or absence of hidden structures, not visible from outside. As a proof-of-concept, we trained a diffractive terahertz processor to sense hidden defects (including subwavelength features) inside test samples, and evaluated its performance by analyzing the detection sensitivity as a function of the size and position of the unknown defects. We validated its feasibility using a single-pixel terahertz time-domain spectroscopy setup and 3D-printed diffractive layers, successfully detecting hidden defects using pulsed terahertz illumination. This technique will be valuable for various applications, e.g., security screening, biomedical sensing, quality control, anti-counterfeiting measures and cultural heritage protection.
    Comment: 23 pages, 5 figures
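    At readout, the single-pixel detector records a spectrum whose shape has been molded by the diffractive layers to encode whether a hidden defect is present. The sketch below shows one simple way such a spectral readout could be thresholded, by comparing power in two designated bands; the band assignments and the example spectrum are hypothetical and do not reproduce the paper's trained spectral encoding.

        import numpy as np

        freqs = np.linspace(0.1, 1.0, 256)                     # THz
        spectrum = np.exp(-((freqs - 0.45) ** 2) / 0.01)       # placeholder detector output

        band_defect    = (freqs > 0.40) & (freqs < 0.50)       # band assigned to "defect"
        band_no_defect = (freqs > 0.60) & (freqs < 0.70)       # band assigned to "no defect"

        defect_present = spectrum[band_defect].sum() > spectrum[band_no_defect].sum()
        print("hidden defect detected" if defect_present else "no hidden defect detected")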