
    Reduced basis method for source mask optimization

    Image modeling and simulation are critical to extending the limits of leading-edge lithography technologies used for IC manufacturing. Simultaneous source mask optimization (SMO) has become an important objective in the field of computational lithography, and is considered essential to extending immersion lithography beyond the 45 nm node. However, SMO is computationally extremely challenging and time-consuming; the key challenges stem from the run-time vs. accuracy trade-offs of the imaging models used for computational lithography. We present a new technique to be incorporated into the SMO flow. The approach is based on the reduced basis method (RBM) applied to the simulation of light transmission through lithography masks, and provides a rigorous approximation to the exact lithographic problem based on the fully vectorial Maxwell's equations. Using the reduced basis method, the optimization process is divided into an offline and an online step. In the offline step, an RBM model with variable geometrical parameters is built self-adaptively using a finite element method (FEM) based solver. In the online step, the RBM model can be solved very fast for arbitrary illumination and geometrical parameters, such as dimensions of OPC features, line widths, etc. This approach dramatically reduces the computational cost of the optimization procedure while providing accuracy superior to approaches involving simplified mask models. RBM furthermore provides rigorous error estimators, which assure the quality and reliability of the reduced basis solutions. We apply the reduced basis method to a 3D SMO example and quantify the performance, computational cost, and accuracy of our method. Comment: BACUS Photomask Technology 201
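    To make the offline/online split concrete, here is a minimal sketch, assuming a toy parametric linear system in place of the FEM-discretized vectorial Maxwell problem. The affine split A(mu) = A0 + mu*A1, the matrices themselves, and the plain POD/SVD compression (rather than the paper's self-adaptive basis construction) are all illustrative assumptions; the residual check is only a crude stand-in for the rigorous RBM error estimators.

```python
import numpy as np

# Offline/online reduced basis sketch for a parametric linear system
# A(mu) u = b, standing in for the FEM-discretized Maxwell problem.
# The affine split and the matrices below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 500                                   # full FEM dimension (toy size)
A0 = np.diag(np.arange(1.0, n + 1.0))     # stand-in parameter-independent part
A1 = np.diag(np.sin(np.arange(n)) + 2.0)  # stand-in parameter-dependent part
b = rng.standard_normal(n)

def solve_full(mu):
    """High-fidelity solve; in practice a 3D vectorial Maxwell FEM solve."""
    return np.linalg.solve(A0 + mu * A1, b)

# --- Offline step: sample the parameter (e.g. a mask line width),
# --- collect snapshots, and compress them into a reduced basis V.
snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.1, 2.0, 20)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, s / s[0] > 1e-10]                # keep modes above a tolerance

# Project the affine terms once; this is the expensive, one-time work.
A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

def solve_reduced(mu):
    """Online step: a small dense solve, cheap for arbitrary mu."""
    return V @ np.linalg.solve(A0r + mu * A1r, br)

# Crude residual-based error indicator, a stand-in for the rigorous
# RBM error estimators described in the abstract.
mu_test = 1.37
u_rb = solve_reduced(mu_test)
residual = np.linalg.norm((A0 + mu_test * A1) @ u_rb - b)
print(f"residual at mu={mu_test}: {residual:.2e}")
```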

    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential to substantially improve online instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that had been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues such as reliability, validity, and efficiency that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important part of making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
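    As a rough illustration of the kind of pipeline involved, the sketch below combines standard n-gram features with one hypothetical "linguistic pattern detector" and scores the classifier against hand-coded labels with Cohen's kappa, the reliability statistic discussed above. The corpus, the coding categories, and the pattern are all invented for illustration; TagHelper tools' actual feature set and coding scheme are far richer.

```python
import re
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

# Toy corpus of chat turns with hand-coded discourse labels; both the
# texts and the coding categories are invented for illustration.
texts = ["I think we should try the other formula",
         "no that is wrong, check the units",
         "what do you mean by that?",
         "I think you are right, let's do it",
         "why does the pressure go down here?",
         "that answer cannot be correct"] * 20
labels = ["claim", "challenge", "question",
          "agreement", "question", "challenge"] * 20

def pattern_features(docs):
    """One hypothetical 'linguistic pattern detector': hedged
    self-positioning phrases such as 'I think', which often mark
    claims or agreement in collaborative dialogue."""
    return csr_matrix([[1.0 if re.search(r"\bI think\b", d) else 0.0]
                       for d in docs])

vec = TfidfVectorizer(ngram_range=(1, 2))
X = hstack([vec.fit_transform(texts), pattern_features(texts)]).tocsr()

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Agreement between the classifier and the human 'gold' codes,
# reported as Cohen's kappa.
print("kappa:", cohen_kappa_score(y_te, clf.predict(X_te)))
```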

    Sabanci-Okan system at ImageClef 2011: plant identification task

    We describe our participation in the plant identification task of ImageClef 2011. Our approach employs a variety of texture, shape, and color descriptors. Due to the morphometric properties of plants, mathematical morphology has been advocated as the main methodology for texture characterization, supported by a multitude of contour-based shape and color features. We submitted a single run, where the focus has been almost exclusively on scan and scan-like images, due primarily to lack of time. Moreover, special care has been taken to obtain a fully automatic system, operating only on image data. While our photo results are low, we consider our submission successful, since besides being our first attempt, our accuracy is the highest when considering the average of the scan and scan-like results, upon which we had concentrated our efforts.
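    A minimal sketch of the three descriptor families, assuming a single-channel scan image with a binary foreground mask; the structuring-element sizes, histogram bins, and shape measures are illustrative choices, not the settings of the submitted run.

```python
import numpy as np
from scipy import ndimage

def granulometry(img, sizes=(3, 5, 9, 15, 21)):
    """Morphological texture signature: fraction of image 'volume'
    removed by grey-scale openings of increasing structuring-element
    size (a standard granulometry, with illustrative sizes)."""
    base = img.sum()
    return np.array([base - ndimage.grey_opening(img, size=s).sum()
                     for s in sizes]) / base

def contour_shape(mask):
    """Simple contour-based shape cues: compactness from area and
    boundary length, and eccentricity from second-order moments."""
    boundary = np.logical_xor(mask, ndimage.binary_erosion(mask)).sum()
    area = mask.sum()
    ys, xs = np.nonzero(mask)
    eig = np.sort(np.linalg.eigvalsh(np.cov(np.vstack([xs, ys]).astype(float))))
    return np.array([4 * np.pi * area / boundary ** 2,   # compactness
                     np.sqrt(1 - eig[0] / eig[1])])      # eccentricity

def color_hist(img, mask, bins=16):
    """Normalized intensity histogram over the foreground region."""
    h, _ = np.histogram(img[mask], bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

# Feature vector for one synthetic 'scan'; this would feed a classifier.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 40:90] = True
features = np.concatenate([granulometry(img),
                           contour_shape(mask),
                           color_hist(img, mask)])
print(features.shape)
```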

    Ranking Functions for Size-Change Termination II

    Size-Change Termination is an increasingly popular technique for verifying program termination. These termination proofs are deduced from an abstract representation of the program in the form of "size-change graphs". We present algorithms that, for certain classes of size-change graphs, deduce a global ranking function: an expression that ranks program states and decreases on every transition. A ranking function serves as a witness for a termination proof, and is therefore interesting for program certification. The particular form of the ranking expressions that represent SCT termination proofs sheds light on the scope of the proof method. The complexity of the expressions is also interesting, both practically and theoretically. While deducing ranking functions from size-change graphs has already been shown possible, the constructions in this paper are simpler and more transparent than previously known ones. They improve the upper bound on the size of the ranking expression from triply exponential down to singly exponential (for certain classes of instances). We claim that this result is, in some sense, optimal. To this end, we introduce a framework for lower bounds on the complexity of ranking expressions and prove exponential lower bounds. Comment: 29 pages
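    For readers unfamiliar with the setting, the following sketch shows the classic closure-based SCT test that such ranking functions certify. The example graphs are invented, and the simplified test assumes any call can follow any other; the full algorithm composes graphs only along paths of the program's call graph.

```python
from itertools import product

# A size-change graph is a frozenset of arcs (src_var, dst_var, strict):
# strict=True means the value strictly decreases across the transition.

def compose(g, h):
    """Compose two size-change graphs: an arc x->z survives if some y
    links x->y in g and y->z in h, and is strict if either leg is."""
    arcs = {(x, z, s1 or s2)
            for (x, y1, s1), (y2, z, s2) in product(g, h) if y1 == y2}
    # Keep only the stronger label when both x->z variants appear.
    return frozenset(a for a in arcs if a[2] or (a[0], a[1], True) not in arcs)

def sct_terminates(graphs):
    """Closure-based SCT criterion: every idempotent graph in the
    composition closure must carry a strictly decreasing self-arc."""
    closure = set(graphs)
    while True:
        new = {compose(g, h) for g in closure for h in closure} - closure
        if not new:
            break
        closure |= new
    return all(any(x == z and s for (x, z, s) in g)
               for g in closure if compose(g, g) == g)

# Ackermann-style recursion: either m strictly decreases, or m is
# unchanged and n strictly decreases.
g1 = frozenset({("m", "m", True)})                     # call f(m-1, ...)
g2 = frozenset({("m", "m", False), ("n", "n", True)})  # call f(m, n-1)
print(sct_terminates([g1, g2]))   # True
```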

    Local feature weighting in nearest prototype classification

    The distance metric is the cornerstone of nearest neighbor (NN)-based methods, and therefore of nearest prototype (NP) algorithms, because they classify according to the similarity of the data. When the data are characterized by a set of features that may contribute to the classification task to different degrees, feature weighting or selection is required, sometimes in a local sense. However, local weighting is typically restricted to NN approaches. In this paper, we introduce local feature weighting (LFW) in NP classification. LFW provides each prototype with its own weight vector, in contrast to the typical global weighting methods found in the NP literature, where all the prototypes share the same one. Giving each prototype its own weight vector has a novel effect on the borders of the generated Voronoi regions: they become nonlinear. We have integrated LFW with a previously developed evolutionary nearest prototype classifier (ENPC). Experiments performed on both artificial and real data sets demonstrate that the resulting algorithm, which we call LFW in nearest prototype classification (LFW-NPC), avoids overfitting the training data in domains where the features may contribute differently to the classification task in different areas of the feature space. This generalization capability is also reflected in automatically obtaining an accurate and reduced set of prototypes.
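    A minimal sketch of the core idea, with hand-picked prototypes and weights (in LFW-NPC these are evolved by ENPC): because each prototype applies its own weighted distance, the boundary between two prototypes is quadratic rather than the usual Voronoi hyperplane.

```python
import numpy as np

# Two prototypes in 2D, each with its own (illustrative) weight vector.
prototypes = np.array([[0.0, 0.0],
                       [4.0, 0.0]])
labels     = np.array([0, 1])
weights    = np.array([[1.0, 1.0],    # prototype 0 weighs both features
                       [4.0, 0.1]])   # prototype 1 mostly ignores feature 2

def classify(X):
    """Assign each row of X the label of the prototype minimizing its
    per-prototype weighted squared distance."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2
         * weights[None, :, :]).sum(axis=-1)
    return labels[d.argmin(axis=1)]

# With a shared (global) weight vector the boundary would be the
# vertical line x = 2; with local weights it bends with feature 2:
pts = np.array([[2.0, 0.0], [2.0, 3.0], [2.0, 6.0]])
print(classify(pts))   # [0 0 1] -- the label flips along x = 2
```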