34 research outputs found

    Modeling and analysis of extrusion-spin coating : an efficient and deterministic photoresist coating method in microlithography

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001. Includes bibliographical references (p. 173-178). In the fabrication of microelectronic chips, microlithography is used to transfer a pattern of circuit geometry from mask to semiconductor wafer. An important step in this process is the deposition of a thin, uniform layer of photoresist (often called resist) on which the lithographic image is exposed. Typical photoresist layers are less than 1 [mu]m thick with a variation of 5 [angstroms] for advanced chips. Spin coating is the prevalent method for producing the required thickness and uniformity, but it typically wastes over 90% of the photoresist applied. A more efficient method needs to be developed for two reasons. First, 80% of the photoresist is an environmentally hazardous solvent. Second, the cost of photoresist is increasing: as the semiconductor industry moves toward fabricating smaller devices with larger capacity, photoresists are shifting from i-line to deep-UV resists, which allow narrower linewidths on a chip, and the new resists cost four to ten times more than i-line resists. Reducing photoresist waste is therefore desirable for both environmental and economic reasons. The current spin coating method has another problem in addition to low coating efficiency: its results are unpredictable. The relationships between the inputs (process variables) and outputs (coating thickness and uniformity) can only be obtained by trial and error, so a number of experiments must be conducted to attain a given coating thickness and uniformity. A more effective method would yield predictable coating thicknesses and uniformities for given inputs, reducing both the cost and time required for process development. Extrusion-spin coating achieves high coating efficiency with predictable coating results.
This new method uses an efficient extrusion coating technique to apply a thin film of resist to a wafer before spinning. This initial layer of photoresist eliminates the spreading phase, the most inefficient step of spin coating. The initial layer also provides the existing spin coating models with well-determined initial conditions and thereby renders their results predictable. A prototype extrusion-spin coater has been designed and fabricated. Initial experiments have been conducted to determine, test, and optimize process variables. One variable, the solvent concentration in the environment, is most critical. Because the initial coating layer deposited by extrusion coating is only 20-40 [mu]m thick, the solvent contained in the photoresist evaporates rapidly in the absence of solvent in the surrounding environment. Evaporation causes the viscosity of the photoresist to become nonuniform over the wafer, making the outcome of the spin coating process less uniform. Experimental results are compared with Emslie et al.'s predictive models of spin coating. A solvent concentration of 80% or higher in the environment was found to be necessary to attain a predictable coating thickness with 5 [angstrom] uniformity. With optimized process variables, mean coating thickness matches theoretical predictions with a variation of 0.01 [mu]m. Defect-free coatings with coating efficiencies as high as 40% were achieved. by Sangjun Han. Ph.D.
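As a reference for the spin-off phase modeled above, the classical Emslie-Bonner-Peck solution for a spinning Newtonian film can be sketched as follows. The fluid properties used here (density, viscosity) are illustrative placeholder values, not the thesis's measured resist properties.

```python
import math

def film_thickness(h0, t, omega, rho=1000.0, eta=0.02):
    """Emslie-Bonner-Peck thinning of a uniform Newtonian film under spin:
    dh/dt = -2*rho*omega**2*h**3 / (3*eta), whose closed-form solution is
    h(t) = h0 / sqrt(1 + 4*rho*omega**2*h0**2*t / (3*eta)).
    h0 [m], t [s], omega [rad/s]; rho [kg/m^3] and eta [Pa*s] are
    illustrative values for a resist-like fluid."""
    return h0 / math.sqrt(1.0 + 4.0 * rho * omega**2 * h0**2 * t / (3.0 * eta))

# A 30 [mu]m extrusion-coated initial layer spun at 3000 rpm for 30 s
# thins to the micrometer scale:
omega = 3000.0 * 2.0 * math.pi / 60.0
h_final = film_thickness(30e-6, 30.0, omega)
```

Note how the model's output is fully determined by the initial thickness h0: this is why a well-defined extrusion-coated initial layer makes the spin coating result predictable.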

    Optimization of process variables in extrusion-spin coating

    Get PDF
    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1997. Includes bibliographical references (p. 83-84). by Sangjun Han. M.S.

    Feasibility of software-based assessment for automated evaluation of tooth preparation for dental crown by using a computational geometric algorithm.

    Get PDF
    The purpose of this study was to propose the concept of software-based automated evaluation (SAE) of tooth preparation quality using computational geometric algorithms, and to evaluate the feasibility of SAE in the assessment of abutment tooth preparation for single-unit anatomic contour crowns by comparing it with a human-based, digitally assisted evaluation (DAE) performed by trained human evaluators. Thirty-five mandibular first molars were prepared for anatomic contour crown restoration by graduate students. Each prepared tooth was digitized and evaluated in terms of occlusal reduction and total occlusal convergence using SAE and DAE. Intra-rater agreement for the scores graded by the SAE and DAE, and inter-rater agreement between the SAE and DAE, were analyzed at a significance level (α) of 0.05. The evaluation using the SAE protocol demonstrated perfect intra-rater agreement, whereas the evaluation using the DAE protocol showed moderate-to-good intra-rater agreement. The evaluation values of the SAE and DAE protocols showed almost perfect inter-rater agreement. The SAE developed for tooth preparation evaluation can be used for dental education and clinical skill feedback. SAE may minimize possible errors in conventional rating and provide more reliable and precise assessments than the human-based DAE.
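Total occlusal convergence (TOC) is, at heart, the angle between opposing axial walls of the preparation. A minimal geometric sketch of that measurement, using hypothetical wall direction vectors rather than the study's mesh-based algorithm, might look like:

```python
import math

def convergence_angle_deg(w1, w2):
    """Angle in degrees between two axial-wall direction vectors.
    A simplified stand-in for a mesh-based TOC measurement; the
    vector inputs here are hypothetical, not the study's data."""
    dot = sum(a * b for a, b in zip(w1, w2))
    n1 = math.sqrt(sum(a * a for a in w1))
    n2 = math.sqrt(sum(a * a for a in w2))
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# Two opposing walls each tilted 3 degrees from vertical give a 6 degree TOC.
t = math.radians(3.0)
toc = convergence_angle_deg((math.sin(t), math.cos(t)), (-math.sin(t), math.cos(t)))
```

Because such a computation is deterministic, repeated runs on the same scan always return the same score, which is consistent with the perfect intra-rater agreement reported for SAE.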

    Properties of Central Caustics in Planetary Microlensing

    Full text link
    To maximize the number of planet detections, current microlensing follow-up observations are focusing on high-magnification events, which have a higher chance of being perturbed by central caustics. In this paper, we investigate the properties of central caustics and the perturbations induced by them. We derive analytic expressions for the location, size, and shape of the central caustic as a function of the star-planet separation, s, and the planet/star mass ratio, q, under the planetary perturbative approximation, and compare the results with those based on numerical computations. While it has been known that the size of the planetary caustic is \propto \sqrt{q}, we find from this work that the dependence of the size of the central caustic on q is linear, i.e., \propto q, implying that the central caustic shrinks much more rapidly with decreasing q than the planetary caustic does. The central-caustic size depends also on the star-planet separation. If the size of the caustic is defined as the separation between the two cusps on the star-planet axis (horizontal width), we find that the dependence of the central-caustic size on the separation is \propto (s+1/s). While the size of the central caustic depends on both s and q, its shape, defined as the vertical/horizontal width ratio, R_c, is solely dependent on the planetary separation, and we derive an analytic relation between R_c and s. Due to the smaller size of the central caustic, combined with the much more rapid decrease of its size with decreasing q, the effect of finite source size on the perturbation induced by the central caustic is much more severe than the effect on the perturbation induced by the planetary caustic. (Abridged.) Comment: 5 pages, 4 figures, ApJ accepted
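The scaling relations quoted in the abstract can be illustrated numerically. The sketch below uses only the stated proportionalities (planetary caustic size \propto \sqrt{q}, central caustic size \propto q, horizontal width \propto (s+1/s)); all normalization constants are omitted, and the numeric mass ratios are arbitrary examples.

```python
def shrink_factors(q_old, q_new):
    """Relative caustic-size change when the mass ratio drops from q_old
    to q_new. Planetary caustic scales as sqrt(q); central caustic as q.
    Proportionality constants cancel in the ratio."""
    return (q_new / q_old) ** 0.5, q_new / q_old

def horizontal_width_factor(s):
    """Separation dependence of the central-caustic horizontal width, ~ (s + 1/s)."""
    return s + 1.0 / s

# A 100x drop in q shrinks the planetary caustic 10x but the central caustic 100x:
planetary, central = shrink_factors(1e-3, 1e-5)
```

Note that horizontal_width_factor(s) is unchanged under s -> 1/s, so close and wide binary configurations yield the same horizontal caustic width under this relation.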

    Can We Utilize Pre-trained Language Models within Causal Discovery Algorithms?

    Full text link
    Scaling laws have brought Pre-trained Language Models (PLMs) into the field of causal reasoning. Causal reasoning with a PLM relies solely on text-based descriptions, in contrast to causal discovery, which aims to determine the causal relationships between variables from data. Recent research has proposed a method that mimics causal discovery by aggregating the outcomes of repeated causal reasoning obtained through specifically designed prompts. It highlights the usefulness of PLMs in discovering cause and effect, which is often limited by a lack of data, especially when dealing with multiple variables. However, PLMs do not analyze data and are highly dependent on prompt design, which is a crucial limitation for using them directly in causal discovery: PLM-based causal reasoning carries the risk of overconfidence and false predictions in determining causal relationships. In this paper, we empirically demonstrate these limitations of PLM-based causal reasoning through experiments on physics-inspired synthetic data. We then propose a new framework that integrates prior knowledge obtained from a PLM with a causal discovery algorithm, by initializing the adjacency matrix for causal discovery and incorporating regularization based on the prior knowledge. Our proposed framework not only demonstrates improved performance through the integration of PLMs and causal discovery but also suggests how to leverage PLM-extracted prior knowledge with existing causal discovery algorithms.
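The two integration points named in the abstract (initializing the adjacency matrix from the PLM prior, and regularizing toward it) can be sketched generically. The function name, the L2 form of the penalty, and the toy two-variable prior below are all illustrative assumptions, not the paper's actual formulation.

```python
def penalized_score(A, data_score, prior, lam=0.1):
    """Hypothetical objective combining a data-driven fit term with an L2
    penalty pulling the adjacency matrix A toward a PLM-derived prior.
    A and prior are nested lists (rows of edge weights); data_score is
    any callable scoring A against observed data."""
    penalty = sum((a - p) ** 2
                  for row_a, row_p in zip(A, prior)
                  for a, p in zip(row_a, row_p))
    return data_score(A) + lam * penalty

# Toy PLM prior suggesting the edge X -> Y; use it both to initialize
# the search and to regularize it:
prior = [[0.0, 1.0],
         [0.0, 0.0]]
A0 = [row[:] for row in prior]  # initialization from prior knowledge
score_at_prior = penalized_score(prior, lambda A: 0.0, prior)
```

The penalty is zero at the prior and grows as the search moves away from it, so PLM knowledge biases but does not dictate the final graph; the data term can still override it.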

    Gravitational Microlensing: A Tool for Detecting and Characterizing Free-Floating Planets

    Full text link
    Various methods have been proposed to search for extrasolar planets. Compared to the other methods, microlensing has unique applicability to the detection of Earth-mass and free-floating planets. However, the microlensing method is seriously limited by the fact that the masses of the detected planets cannot be uniquely determined. Recently, Gould, Gaudi, & Han introduced an observational setup that enables one to resolve the mass degeneracy of Earth-mass planets. The setup requires a modest adjustment to the orbit of an already-proposed microlensing planet-finder satellite, combined with ground-based observations. In this paper, we show that a similar observational setup can also be used for the mass determination of free-floating planets with masses ranging from ~0.1 M_J to several Jupiter masses. If the proposed observational setup is realized, future lensing surveys will play important roles in the study of Earth-mass and free-floating planets, populations of planets that have not been previously probed. Comment: total 8 pages, including 3 figures, ApJ, in press (Mar 1, 2004)

    Performance Analysis of Log Extraction by a Small Shovel Operation in Steep Forests of South Korea

    No full text
    In South Korea, logs for low-value products, such as pulpwood and fuelwood, are primarily extracted from harvest sites and transported to roadside or landing areas using small shovels. Previous studies on log extraction, however, have focused on cable yarding operations with the goal of improving productivity on steep slopes and inaccessible sites, leaving small-shovel operations relatively unexamined. The main objectives were therefore to determine small-shovel extraction productivity and costs and to evaluate the impact of related variables on productivity. In addition, we developed a model to estimate productivity under various site conditions. The study took place in 30 case study areas, each with trees of 18 to 32 cm diameter at breast height on steep slopes (greater than 15%). Stand densities ranged from 241 to 1129 trees per hectare, across conifer, deciduous, and mixed stands. Small-shovel drives ranged from 36 to 72 m per extraction cycle from stump to landing. The results indicated that the mean extraction productivity of small-shovel operations ranged from 2.44 to 9.85 m3 per scheduled machine hour (including all delays). At the forest level, the estimated average stump-to-forest-road log production costs were US$4.37 to 17.66/m3. Small-shovel productivity was significantly correlated with stem size (diameter at breast height and tree volume) and total travelled distance (TTD). However, a Pearson's correlation analysis indicated that stand density and slope did not have a significant effect on productivity. Our findings provide insight into how stem size and TTD influence small-shovel performance and support the prediction of productivity. Further, this information may be a valuable asset to forest planners and managers.
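The variable screening described above rests on Pearson's correlation coefficient. A minimal sketch of that statistic follows; the stem-size and productivity numbers below are invented for illustration and are not the study's data.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples,
    the statistic used to test which site variables relate to productivity."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Invented example: productivity rising with stem diameter gives r near +1.
dbh_cm = [18, 22, 26, 30, 32]
prod_m3_smh = [2.5, 4.1, 6.0, 8.2, 9.8]
r = pearson_r(dbh_cm, prod_m3_smh)
```

A correlation near zero for stand density or slope, as reported, would indicate those variables add little predictive value to the productivity model.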

    Productivity and cost of a small-scale cable yarder in an uphill and downhill area: a case study in South Korea

    No full text
    Tree diameter, topography, and stand accessibility are major factors to consider when selecting the optimal equipment to extract logs from stump to landing area. South Korea has 6.4 million ha of forest, comprising 64% of its total land area. Small- and medium-sized trees (15–30 cm in diameter at breast height [DBH]) on steep slopes (> 30°) account for approximately 80% of the total forest area. Therefore, there has been increasing interest in the application of small-scale cable yarding systems. We performed uphill and downhill yarding experiments using an 80 hp farm-tractor-mounted tower yarder (HAM300) to evaluate the productivities and costs associated with primary transportation of tree-length logs in mixed conifer stands. In addition, sensitivity analyses were performed to examine the effects of different yarding directions and distances on yarding productivities and costs. Results showed that uphill and downhill yarding productivities were 9.04 m3/PMH (productive machine hours) and 7.87 m3/PMH, at costs of US$9.06 and US$10.04/m3, respectively. The yarding direction greatly affected productivity and cost: decreasing productivity may be significantly affected by working conditions. Our results support the effectiveness of the HAM300 yarder in extracting logs in small-scale cable yarding operations.
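The cost figures above follow the standard machine-costing identity: unit cost equals the hourly machine rate divided by productivity. The hourly rates in the sketch below are back-calculated from the reported productivities and unit costs, not taken from the study's costing tables.

```python
def unit_cost(machine_rate_usd_per_pmh, productivity_m3_per_pmh):
    """Unit yarding cost (US$/m3) as hourly machine rate divided by
    productivity in productive machine hours (PMH)."""
    return machine_rate_usd_per_pmh / productivity_m3_per_pmh

# Rates back-calculated from the reported figures (assumed, for illustration):
uphill_cost = unit_cost(81.9, 9.04)    # ~US$9.06/m3 at 9.04 m3/PMH
downhill_cost = unit_cost(79.0, 7.87)  # ~US$10.04/m3 at 7.87 m3/PMH
```

The identity makes the direction effect explicit: at a similar hourly rate, the lower downhill productivity directly raises the unit cost.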

    Super-Resolution for Improving EEG Spatial Resolution using Deep Convolutional Neural Network—Feasibility Study

    No full text
    Electroencephalography (EEG) has relatively poor spatial resolution and may yield incorrect brain dynamics and distorted topography; thus, high-density EEG systems are necessary for better analysis. Conventional methods have been proposed to solve these problems; however, they depend on parameters or brain models that are not simple to determine. Therefore, new approaches are necessary to enhance EEG spatial resolution while maintaining its data properties. In this work, we investigated a super-resolution (SR) technique using deep convolutional neural networks (CNNs) with simulated EEG data containing white Gaussian and real brain noise, and with experimental EEG data obtained during an auditory evoked potential task. SR EEG simulated with white Gaussian noise or brain noise demonstrated a lower mean squared error and higher correlations with sensor information, and detected sources even more clearly than did low-resolution (LR) EEG. In addition, experimental SR data demonstrated far smaller errors for the N1 and P2 components and yielded reasonably localized sources, while LR data did not. We verified the feasibility and efficacy of our proposed approach, and conclude that it may be possible to explore various brain dynamics even with a small number of sensors.
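To make the SR task and its evaluation concrete, the sketch below shows a naive linear stand-in for channel upsampling (estimating each missing channel as the mean of its neighbors on a 1-D sensor line) together with the mean-squared-error metric used in the evaluation. The CNN in the study learns this channel mapping from data; the interpolation here is purely illustrative.

```python
def interpolate_missing(channels):
    """Naive spatial upsampling baseline: insert between each pair of
    measured channels an estimate equal to the mean of its neighbors.
    A linear stand-in for the learned CNN mapping, for illustration only."""
    out = []
    for i in range(len(channels) - 1):
        out.append(channels[i])
        out.append((channels[i] + channels[i + 1]) / 2.0)  # inserted channel
    out.append(channels[-1])
    return out

def mse(pred, ref):
    """Mean squared error between a reconstructed channel set and its reference."""
    return sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)

# Upsampling a 3-channel line to 5 channels:
hi_res = interpolate_missing([1.0, 3.0, 5.0])
```

A learned SR model would be judged exactly as the abstract describes: lower MSE and higher correlation against held-out high-density sensor data than this kind of LR baseline.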