137 research outputs found

    Zero-Shot Hashing via Transferring Supervised Knowledge

    Hashing has shown its efficiency and effectiveness in facilitating large-scale multimedia applications. Supervised knowledge (e.g. semantic labels or pair-wise relationships) associated with data can significantly improve the quality of hash codes and hash functions. However, confronted with the rapid growth of newly emerging concepts and multimedia data on the Web, existing supervised hashing approaches may easily suffer from the scarcity and limited validity of supervised information, due to the expensive cost of manual labelling. In this paper, we propose a novel hashing scheme, termed \emph{zero-shot hashing} (ZSH), which compresses images of "unseen" categories to binary codes with hash functions learned from limited training data of "seen" categories. Specifically, we project independent data labels (i.e. 0/1-form label vectors) into a semantic embedding space, where semantic relationships among all the labels can be precisely characterized and supervised knowledge from seen classes can thus be transferred to unseen classes. Moreover, to cope with the semantic shift problem, we rotate the embedded space to better align the embedded semantics with the low-level visual feature space, thereby alleviating the influence of the semantic gap. In the meantime, to exert positive effects on learning high-quality hash functions, we further propose to preserve the local structural property and discrete nature of binary codes. In addition, we develop an efficient alternating algorithm to solve the ZSH model. Extensive experiments conducted on various real-life datasets show the superior zero-shot image retrieval performance of ZSH compared to several state-of-the-art hashing methods. (Comment: 11 pages)
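    The pipeline this abstract describes can be sketched in a few lines: project 0/1 label vectors into a semantic embedding space, fit a map from visual features toward those embeddings, and binarize. This is a minimal illustrative sketch, not the paper's actual objective; the least-squares fit, random "rotation" matrix, and all dimensions are hypothetical stand-ins.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy setup: 5 "seen" classes, 50-dim visual features,
    # 10-dim semantic label embeddings (e.g. word vectors), 8-bit codes.
    n, d_vis, d_sem, n_bits = 100, 50, 10, 8
    X = rng.normal(size=(n, d_vis))        # visual features of seen-class images
    Y = np.eye(5)[rng.integers(0, 5, n)]   # 0/1-form label vectors
    E = rng.normal(size=(5, d_sem))        # semantic embeddings of the 5 labels

    S = Y @ E  # project independent labels into the semantic embedding space

    # Learn a linear map from visual features toward the embedded semantics
    # (plain least squares stands in for the paper's full ZSH objective),
    # then a random projection to n_bits and sign() binarization.
    W, *_ = np.linalg.lstsq(X, S, rcond=None)
    R = rng.normal(size=(d_sem, n_bits))   # stand-in for the learned rotation

    def hash_codes(X_new):
        """Binary codes for images, including images of unseen categories."""
        return np.sign(X_new @ W @ R).astype(int)

    codes = hash_codes(X)
    ```

    Because the hash function is learned against the semantic embeddings rather than the raw seen-class labels, it can in principle be applied to images of categories absent from training, which is the core idea of zero-shot hashing.
    
    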

    A prosody-based vector-space model of dialog activity for information retrieval

    Search in audio archives is a challenging problem. Using prosodic information to help find relevant content has been proposed as a complement to word-based retrieval, but its utility has been an open question. We propose a new way to use prosodic information in search, based on a vector-space model, where each point in time maps to a point in a vector space whose dimensions are derived from numerous prosodic features of the local context. Point pairs that are close in this vector space are frequently similar, not only in terms of dialog activities but also in topic. Using proximity in this space as an indicator of similarity, we built support for a query-by-example function. Searchers were happy to use this function, and it provided value on a large test set. Prosody-based retrieval did not perform as well as word-based retrieval, but the two sources of information were often non-redundant, and in combination they sometimes performed better than either separately.
    We thank Martha Larson, Alejandro Vega, Steve Renals, Khiet Truong, Olac Fuentes, David Novick, Shreyas Karkhedkar, Luis F. Ramirez, Elizabeth E. Shriberg, Catharine Oertel, Louis-Philippe Morency, Tatsuya Kawahara, Mary Harper, and the anonymous reviewers. This work was supported in part by the National Science Foundation under Grants IIS-0914868 and IIS-1241434 and by the Spanish MEC under contract TIN2011-28169-C05-01.
    Ward, N. G.; Werner, S. D.; García-Granada, F.; Sanchís Arnal, E. (2015). A prosody-based vector-space model of dialog activity for information retrieval. Speech Communication, 68:85-96. doi:10.1016/j.specom.2015.01.004
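    The query-by-example function described above can be sketched as nearest-neighbor search in the prosodic vector space. This is an illustrative sketch only: the feature dimensions are random stand-ins for real prosodic features, and cosine similarity is one assumed choice of proximity measure, not necessarily the paper's.

    ```python
    import numpy as np

    # Hypothetical archive: each time point is represented by a vector of
    # prosodic features (pitch, energy, rate, etc.) computed over its local
    # context. Here 1000 time points x 24 made-up prosodic dimensions.
    rng = np.random.default_rng(1)
    archive = rng.normal(size=(1000, 24))

    def query_by_example(query_vec, archive, k=5):
        """Return indices of the k time points most similar to the query,
        using cosine similarity as the proximity measure."""
        a = archive / np.linalg.norm(archive, axis=1, keepdims=True)
        q = query_vec / np.linalg.norm(query_vec)
        sims = a @ q                  # cosine similarity to every time point
        return np.argsort(-sims)[:k]  # k closest points in the vector space

    # Query with the vector at time point 42; it should rank itself first.
    hits = query_by_example(archive[42], archive, k=5)
    ```

    Combining the similarity scores from this prosodic space with scores from a word-based retriever (e.g. by weighted sum) is one simple way to realize the combined retrieval the abstract reports.
    
    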

    Monte Carlo calculations of output factors for clinically shaped electron fields

    We used Monte Carlo simulations to investigate the output factors for clinically relevant, irregularly shaped inserts as they intercept a linear accelerator's electron beams. The output factor for a particular combination of energy, cone, insert, and source-to-surface distance (SSD) is defined in accordance with AAPM TG-25 as the product of the cone correction factor and the insert correction factor, evaluated at the depth of maximum dose. Since cone correction factors are easily obtained, we focus our investigation on the insert correction factors (ICFs). An analysis of the inserts used in routine clinical practice resulted in the identification of a set of seven "idealized" shapes characterized by specific parameters. The ICFs for these shapes were calculated using a Monte Carlo method (EGS4/BEAM) and measured for a subset of them using an ion chamber and well-established measurement methods. Analytical models were developed to predict the Monte Carlo–calculated ICF values for various electron energies, cone sizes, shapes, and SSDs. The goodness-of-fit between predicted and Monte Carlo–calculated ICF values was tested using the Kolmogorov–Smirnov statistical test. Results show that Monte Carlo–calculated ICFs match the measured values within 2.0% for most of the shapes considered, except for a few highly elongated fields, where deviations of up to 4.0% were recorded. Predicted values based on analytical modeling agree with measured ICF values within 2% to 3% for all configurations. We conclude that the predicted ICF values based on modeling of Monte Carlo–calculated values could be introduced into clinical use.
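    The TG-25 decomposition described above is a simple product, which can be made concrete in a few lines. This is a minimal sketch of the arithmetic only; the numeric correction-factor values below are illustrative, not measured data from the study.

    ```python
    def output_factor(cone_correction, insert_correction):
        """Output factor for a given (energy, cone, insert, SSD) combination:
        the product of the cone correction factor and the insert correction
        factor (ICF), both evaluated at the depth of maximum dose (AAPM TG-25)."""
        return cone_correction * insert_correction

    # Hypothetical example values: a cone correction of 0.98 and an
    # ICF of 1.02 for some irregularly shaped insert.
    of = output_factor(0.98, 1.02)
    ```

    In this scheme, the clinically interesting part is the ICF, since the cone correction factor is easily measured once per cone; predicting the ICF from an analytical model fit to Monte Carlo calculations is what lets the output factor be obtained without a per-insert measurement.
    
    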