
    Image retrieval and processing system version 2.0 development work

    The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines database management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in August 1989 and was installed at several RPIF's. Other RPIF's use remote logins via the NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that is compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
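    To make the catalog lookup concrete, here is a minimal sketch of a search by orbit and location over Magellan product records. The schema, table, and column names are illustrative assumptions; the abstract does not specify IRPS's actual catalog design.

        import sqlite3

        # Hypothetical Magellan product catalog; this schema is an
        # illustrative assumption, not IRPS's published design.
        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE magellan_products (
                product_type TEXT,            -- 'BIDR', 'MIDR', or 'ARCDR'
                orbit        INTEGER,
                lat_min REAL, lat_max REAL,   -- footprint bounds, degrees
                lon_min REAL, lon_max REAL,
                cdrom_volume TEXT,            -- Magellan CD-ROM volume ID
                directory    TEXT,
                file_name    TEXT
            )
        """)

        def find_products(conn, product_type, lat, lon):
            """Return entries of one product type whose footprint covers (lat, lon)."""
            return conn.execute(
                """SELECT orbit, cdrom_volume, directory, file_name
                   FROM magellan_products
                   WHERE product_type = ?
                     AND ? BETWEEN lat_min AND lat_max
                     AND ? BETWEEN lon_min AND lon_max
                   ORDER BY orbit""",
                (product_type, lat, lon),
            ).fetchall()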

    A Review on Personalized Tag based Image based Search Engines

    With the development of social media based on Web 2.0, large numbers of images and videos have sprung up everywhere on the Internet. This phenomenon has brought great challenges to multimedia storage, indexing, and retrieval. Generally speaking, tag-based image search is more commonly used in social media than content-based image retrieval and content understanding. Owing to the low relevance and poor diversity of initial retrieval results, the ranking problem in tag-based image retrieval has gained researchers' wide attention. In this paper, we review some of the techniques proposed by different authors for tag-based image retrieval.

    Digital Photograph Album Software Review

    Digital photography's relatively low cost and ease of use encourage educators to build Extension image collections, but image retrieval can become difficult as collections grow. Picasa 2, Adobe Photoshop Album 2.0, Corel Photo Album 6, and ACDSee 8 Photo Manager are four popular photo cataloging software products that import, view, sort, assign keywords to, and search for image files. This review synopsizes their functionality and efficiency. ACDSee seems to offer more tools than the other products, although Picasa 2 may be sufficient for smaller image collections.

    Prosumer Behaviors in Brand Image Creation

    A brand remains a considerable source of competitive advantage. One of the elements contributing to its power is its image. The information revolution and globalization make it necessary to search for new means of differentiating brands. One of them is engaging consumers in the brand creation process. In light of the development of Web 2.0, prosumers (active consumers functioning both as consumers and partly as producers) can have a meaningful influence on the image of brands. Their activities can entail both positive and negative effects.

    Understanding User Intentions in Vertical Image Search

    With the development of the Internet and Web 2.0, large volumes of multimedia content have been made available online. It is highly desirable to provide easy access to such content, i.e., efficient and precise retrieval of images that satisfies users' needs. Toward this goal, content-based image retrieval (CBIR) has been intensively studied in the research community, while text-based search is more widely adopted in industry. Both approaches have inherent disadvantages and limitations. Therefore, unlike the great success of text search, Web image search engines are still immature. In this thesis, we present iLike, a vertical image search engine which integrates both textual and visual features to improve retrieval performance. We bridge the semantic gap by capturing the meaning of each text term in the visual feature space, and re-weight visual features according to their significance to the query terms. We also bridge the user intention gap, since we are able to infer the "visual meanings" behind textual queries. Last but not least, we provide a visual thesaurus, which is generated from the statistical similarity between the visual-space representations of textual terms. Experimental results show that our approach improves both precision and recall compared with content-based or text-based image retrieval techniques. More importantly, search results from iLike are more consistent with users' perception of the query terms.
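    The abstract does not give iLike's exact formulation, so the sketch below only illustrates the two ideas it describes: weighting visual feature dimensions by their significance to a text term, and deriving a visual thesaurus from similarity between the visual-space representations of terms. The weighting heuristic and all names here are assumptions.

        import numpy as np

        def term_profile(feats_with_term, feats_without_term):
            """Represent a text term in visual space and weight each feature
            dimension by how strongly it separates images tagged with the term
            from the rest (an assumed signal-to-noise style heuristic)."""
            mu_t = feats_with_term.mean(axis=0)
            mu_bg = feats_without_term.mean(axis=0)
            spread = feats_with_term.std(axis=0) + feats_without_term.std(axis=0) + 1e-9
            weights = np.abs(mu_t - mu_bg) / spread  # high = visually significant
            return mu_t, weights

        def visual_thesaurus(term_means, k=5):
            """Nearest terms in visual space, by cosine similarity of centroids."""
            terms = list(term_means)
            M = np.stack([term_means[t] for t in terms])
            M = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-9)
            sims = M @ M.T
            return {t: [terms[j] for j in np.argsort(-sims[i]) if terms[j] != t][:k]
                    for i, t in enumerate(terms)}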

    Lens or Binary? Chandra Observations of the Wide Separation Broad Absorption Line Quasar Pair UM425

    We have obtained a 110 ksec Chandra ACIS-S exposure of UM425, a pair of QSOs at z=1.47 separated by 6.5 arcsec, which show remarkably similar emission and broad absorption line (BAL) profiles in the optical/UV. Our 5000-count X-ray spectrum of UM425A (the brighter component) is well fit with a power law (photon spectral index Gamma=2.0) partially covered by a hydrogen column of 3.8x10^22 cm^-2. The underlying power-law slope for this object and for other recent samples of BALQSOs is typical of radio-quiet quasars, lending credence to the hypothesis that BALs exist in every quasar. Assuming the same Gamma for the much fainter image of UM425B, we detect an obscuring column 5 times larger. We search for evidence of an appropriately large lensing mass in our Chandra image and find weak diffuse emission near the quasar pair, with an X-ray flux typical of a group of galaxies at redshift z ~ 0.6. From our analysis of archival HST WFPC2 and NICMOS images, we find no evidence for a luminous lensing galaxy, but note a 3-sigma excess of galaxies in the UM425 field with plausible magnitudes for a z=0.6 galaxy group. However, the associated X-ray emission does not imply sufficient mass to produce the observed image splitting. The lens scenario thus requires a dark (high M/L ratio) lens, or a fortuitous configuration of masses along the line of sight. UM425 may instead be a close binary pair of BALQSOs, which would boost arguments that interactions and mergers increase nuclear activity and outflows. Comment: 13 pages, 9 figures, accepted for publication in the Astrophysical Journal.
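    For reference, a partially covered power law of the kind fit here has the generic form (the paper's exact parameterization is not given in the abstract):

        N(E) = K \, E^{-\Gamma} \left[ f \, e^{-\sigma(E) N_{\mathrm{H}}} + (1 - f) \right]

    where K is the normalization, \Gamma = 2.0 the photon index, N_H = 3.8 x 10^22 cm^-2 the absorbing column for UM425A, \sigma(E) the photoelectric absorption cross-section, and f the covering fraction of the absorber.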

    Synote: weaving media fragments and linked data

    While end users can easily share and tag multimedia resources online, searching for and reusing content inside the multimedia, such as a certain area within an image or a ten-minute segment within a one-hour video, is still difficult. Linked data is a promising way to interlink media fragments with other resources. Many Web 2.0 applications have generated large amounts of external annotations linked to media fragments. In this paper, we use Synote as the target application to discuss how media fragments can be published together with external annotations following linked data principles. Our design addresses the dereferencing, description, and interlinking problems in interlinking multimedia. We also implement a model to let Google index media fragments, which improves their online presence. The evaluation shows that our design can successfully publish media fragments and annotations for both semantic Web agents and traditional search engines. Publishing media fragments using the design we describe in this paper will lead to better indexing of multimedia resources and their consequent findability.
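    As a concrete illustration of the interlinking described above, the sketch below attaches an external annotation to a W3C Media Fragments URI (the #t=start,end temporal syntax), using the Web Annotation (oa:) vocabulary as one plausible choice. This is a minimal assumed example, not Synote's actual code, and the URIs are placeholders.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        OA = Namespace("http://www.w3.org/ns/oa#")

        g = Graph()
        g.bind("oa", OA)

        annotation = URIRef("http://example.org/annotations/42")
        # Temporal media fragment: the segment from 600 s to 660 s of a video.
        fragment = URIRef("http://example.org/videos/lecture1#t=600,660")

        g.add((annotation, RDF.type, OA.Annotation))
        g.add((annotation, OA.hasTarget, fragment))
        g.add((annotation, OA.hasBody, Literal("Key argument of this segment")))

        # Serializing as Turtle makes the fragment dereferenceable as linked data.
        print(g.serialize(format="turtle"))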

    Multi-Objective Evolutionary for Object Detection Mobile Architectures Search

    Recently, neural architecture search (NAS) has achieved great success on classification tasks for mobile devices. The backbone network for object detection is usually obtained on the image classification task. However, an architecture searched through the classification task is sub-optimal because of the gap between image classification and object detection. Meanwhile, work on backbone architecture search for mobile-device object detection is limited, mainly because the backbone always requires expensive ImageNet pre-training. Accordingly, it is necessary to study network architecture search for mobile-device object detection without expensive pre-training. In this work, we propose a mobile object detection backbone network architecture search algorithm: an evolutionary optimization method based on non-dominated sorting for NAS scenarios. It can quickly search for a backbone network architecture within given constraints, and it avoids the suboptimal results that come from scalarizing accuracy and computational cost into a single linear combination. The proposed approach can search backbone networks with different depths, widths, or expansion sizes via a technique of weight mapping, making it possible to use NAS for mobile-device detection tasks far more efficiently. In our experiments, we verify the effectiveness of the proposed approach on YoloX-Lite, a lightweight version of the object detection framework. Under similar computational complexity, the backbone network architecture we find achieves 2.0% higher mAP than MobileDet. Our improved backbone network can reduce computational effort while improving the accuracy of the object detection network. To prove its effectiveness, a series of ablation studies has been carried out and the working mechanism analyzed in detail.
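    As a minimal sketch of the non-dominated sorting at the core of such a multi-objective search, the code below partitions candidates into Pareto fronts over two objectives: maximizing accuracy and minimizing compute. Candidates are reduced to bare (accuracy, MFLOPs) pairs; the paper's architecture encoding, weight mapping, and evaluation procedure are omitted.

        def dominates(a, b):
            """a dominates b if it is no worse in both objectives and better in one."""
            acc_a, flops_a = a
            acc_b, flops_b = b
            return (acc_a >= acc_b and flops_a <= flops_b
                    and (acc_a > acc_b or flops_a < flops_b))

        def non_dominated_sort(population):
            """Peel off successive Pareto fronts; front 0 is the non-dominated set."""
            fronts, remaining = [], list(population)
            while remaining:
                front = [p for p in remaining
                         if not any(dominates(q, p) for q in remaining if q is not p)]
                fronts.append(front)
                remaining = [p for p in remaining if p not in front]
            return fronts

        # Hypothetical backbone candidates as (top-1 accuracy, MFLOPs) pairs.
        candidates = [(0.72, 300), (0.74, 450), (0.71, 250), (0.74, 400), (0.70, 500)]
        for i, front in enumerate(non_dominated_sort(candidates)):
            print(f"front {i}: {front}")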

    User experiments with the Eurovision cross-language image retrieval system

    In this paper we present Eurovision, a text-based system for cross-language (CL) image retrieval. The system is evaluated by multilingual users for two search tasks with the system configured in English and five other languages. To our knowledge, this is the first published set of user experiments for CL image retrieval. We show that: (1) it is possible to create a usable multilingual search engine using little knowledge of any language other than English, (2) categorizing images assists the user's search, and (3) there are differences in the way users search between the proposed search tasks. Based on the two search tasks and user feedback, we describe important aspects of any CL image retrieval system.
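    The abstract does not detail Eurovision's pipeline, but a common design for text-based CL image retrieval is to translate query terms into the caption language and rank by term overlap. The toy sketch below illustrates that pattern; the dictionary and caption index are invented placeholders, not Eurovision's data.

        # Tiny bilingual dictionary (German -> English); a stand-in for real
        # dictionary- or MT-based query translation.
        DICTIONARY = {"hund": "dog", "strand": "beach", "kirche": "church"}

        # Caption index: image id -> set of English caption terms (toy data).
        CAPTIONS = {
            "img1": {"a", "dog", "on", "the", "beach"},
            "img2": {"old", "church", "tower"},
            "img3": {"dog", "playing", "in", "park"},
        }

        def cross_language_search(query, dictionary, captions):
            """Translate the query term by term, then rank images by overlap."""
            translated = {dictionary.get(t.lower(), t.lower()) for t in query.split()}
            scores = {img: len(translated & terms) for img, terms in captions.items()}
            return sorted((img for img, s in scores.items() if s > 0),
                          key=lambda img: -scores[img])

        print(cross_language_search("Hund Strand", DICTIONARY, CAPTIONS))  # ['img1', 'img3']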