
    Parallel processing for digital picture comparison

    In picture processing, an important problem is to identify two digital pictures of the same scene taken under different lighting conditions. This kind of problem arises in remote sensing, satellite signal processing, and related areas. The identification can be done by transforming the gray levels so that the gray-level histograms of the two pictures are closely matched. The transformation problem can be solved using the packing method. The researchers propose a VLSI architecture consisting of m x n processing elements with extensive parallel and pipelining computation capabilities to speed up the transformation, achieving time complexity O(max(m,n)), where m and n are the numbers of gray levels of the input picture and the reference picture, respectively. With a uniprocessor and a dynamic programming algorithm, the time complexity would be O(m^3 x n). The algorithm partition problem, an important issue in VLSI design, is discussed, and a verification of the proposed architecture is also given.
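    The gray-level transformation itself can be illustrated with the classical uniprocessor baseline: cumulative-histogram (CDF) matching, which remaps each source gray level to the reference level with the closest cumulative frequency. This is a minimal sketch of histogram matching in general, not the paper's packing method or VLSI architecture, and the function name is ours.

```python
import numpy as np

def match_histograms(source, reference):
    """Remap the gray levels of `source` so its histogram approximates
    that of `reference`, by aligning the two cumulative distributions."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source level, find the reference level whose CDF value matches.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lut = dict(zip(src_vals, mapped))
    return np.vectorize(lut.get)(source)
```

    Matching a picture against itself returns it unchanged, which is a quick sanity check on the lookup table.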

    Studying Micro-Processes in Software Development Stream

    In this paper we propose a new streaming technique to study software development. As we have observed, software development consists of a series of activities such as editing, compilation, testing, debugging, and deployment. All of these activities contribute to the development stream, a time-ordered collection of software development activities. The development stream lets us replay and examine the software development process at a later time without much hassle. We developed a system called Zorro to generate and analyze development streams at the Collaborative Software Development Laboratory (CSDL) at the University of Hawaii. It is built on top of Hackystat, an in-process automatic metric collection system developed in the CSDL. Hackystat sensors continuously collect development activities and send them to a centralized data store for processing. Zorro reads in all the data of a project and constructs a stream from it. Tokenizers are chained together to divide the development stream into episodes (micro-iterations), which are then classified with a rule engine. In this paper we demonstrate an analysis of Test-Driven Development (TDD) with this framework.
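    The episode-splitting step can be sketched as a single tokenizer pass over a time-ordered event stream. Zorro's actual tokenizer rules are not given in the abstract, so the `Event` type and the "a passing test closes an episode" rule below are illustrative assumptions standing in for the chained tokenizers.

```python
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since project start (assumed representation)
    activity: str      # e.g. "edit", "compile", "test-pass", "test-fail"

def split_into_episodes(stream, boundary="test-pass"):
    """Divide a time-ordered development stream into episodes,
    closing the current episode at each boundary activity."""
    episodes, current = [], []
    for event in sorted(stream, key=lambda e: e.timestamp):
        current.append(event)
        if event.activity == boundary:
            episodes.append(current)
            current = []
    if current:  # keep any trailing, still-open episode
        episodes.append(current)
    return episodes
```

    In a TDD analysis, each resulting episode would then be handed to the rule engine for classification (e.g. test-first vs. test-last), which is the part this sketch omits.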

    Probing New Physics using top quark polarization in the e+e- -> t \bar{t} process at future Linear Colliders

    We investigate the sensitivity to new physics of the process e+e- -> t \bar{t} when the top polarization is analyzed using the leptonic final states e+e- -> t \bar{t} -> l+l- b \bar{b} nu_l \bar{nu}_l. We first show that full kinematical reconstruction is experimentally tractable for this process. We then apply the matrix element method to study the sensitivity to the Vt\bar{t} coupling (V being a vector gauge boson), at tree level and in the narrow-width approximation. Assuming the ILC baseline configuration, sqrt{S} = 500 GeV, and a luminosity of 500 fb^{-1}, we conclude that this optimal analysis allows the ten form factors that parameterize the Vt\bar{t} coupling to be determined simultaneously, below the percent level. We also discuss the effects of the next-to-leading-order (NLO) electroweak corrections using the GRACE program with polarized beams. It is found that the NLO corrections for different beam polarizations lead to significantly different patterns of contributions. Comment: 14 pages, 4 figures, Proceedings for the TYL-FJPPL workshops on "Top Physics at ILC"
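    Why leptonic final states are good polarization analyzers can be illustrated with the standard charged-lepton angular distribution (1 + P cos θ)/2 (spin analyzing power ≈ 1 for the lepton), which yields the simple moment estimator P ≈ 3⟨cos θ⟩. This toy Monte Carlo sketch illustrates that textbook relation only; it is not the paper's matrix element method, and all names and numbers below are ours.

```python
import math
import random

def sample_costheta(P, n, seed=1):
    """Draw cos(theta) from the density (1 + P*cos)/2 on [-1, 1]
    by inverting its CDF: F(c) = (c + 1)/2 + P*(c**2 - 1)/4."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        u = rng.random()
        if abs(P) < 1e-12:
            draws.append(2.0 * u - 1.0)  # uniform when unpolarized
        else:
            # Positive root of the quadratic P*c**2/4 + c/2 + (1/2 - P/4 - u) = 0.
            draws.append((-1.0 + math.sqrt(1.0 - P * (2.0 - P - 4.0 * u))) / P)
    return draws

def estimate_polarization(costheta):
    """Moment estimator: for this density, <cos(theta)> = P/3."""
    return 3.0 * sum(costheta) / len(costheta)
```

    The matrix element method used in the paper extracts far more information per event than this single moment, which is why it can constrain ten form factors simultaneously.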