140,647 research outputs found

    Digital libraries on an iPod: Beyond the client-server model

    This paper describes an experimental system that enhanced an iPod with digital library capabilities. Using the open source digital library software Greenstone as a base, the paper maps out the technical steps necessary to achieve this, along with an account of our subsequent experimentation. This included command-line usage of Greenstone's basic runtime system on the device; augmenting the iPod's main interactive menu-driven application to include searching and hierarchical browsing of digital library collections stored locally; and a selection of "launcher" applications for target documents such as text files, images and audio. Media-rich applications for digital stories and collaging were also developed. We also configured the iPod to run as a web server providing digital library content to others over a network, effectively turning the traditional mobile client-server relationship upside down.
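
    The client-to-server inversion described above can be illustrated with a minimal sketch: serving a local collection directory over HTTP so that other devices on the network can browse it. This is an illustration only, not Greenstone's actual runtime; the collection path and port are hypothetical.

        # Minimal sketch: serve a local digital library collection over HTTP,
        # turning the device itself into the server. Not Greenstone's runtime;
        # the collection directory and port are hypothetical placeholders.
        from functools import partial
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        COLLECTION_DIR = "collections/demo"  # hypothetical local collection path

        handler = partial(SimpleHTTPRequestHandler, directory=COLLECTION_DIR)
        server = HTTPServer(("0.0.0.0", 8080), handler)  # listen on all interfaces
        print("Serving digital library content on port 8080 ...")
        server.serve_forever()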

    Perceptions of Usability and Usefulness in Digital Libraries

    This paper provides an overview of case study research that investigated the use of Digital Library (DL) resources in two undergraduate classes and explored faculty and students' perceptions of educational digital libraries. The study found that students and faculty use academic DLs primarily for textual resources but turn to the open Web for visual and multimedia resources. The participants did not perceive academic libraries as a useful source of digital images and used search engines when searching for visual resources. The limited use of digital library resources for teaching and learning is associated with perceptions of usefulness and ease of use, especially when considered in a broader information landscape, in conjunction with other library information systems, and in the context of Web resources. The limited use of digital libraries is related to the following perceptions: 1) library systems are not viewed as user-friendly, which in turn discourages potential users from trying DLs provided by academic libraries; 2) academic libraries are perceived as places of primarily textual resources; perceptions of usefulness, especially in regard to relevance of content, coverage, and currency, seem to have a negative effect on user intention to use DLs, especially when searching for visual materials.

    KIIT Digital Library: An Open Hypermedia Application

    The massive use of Web technologies has spurred a new revolution in information storage and retrieval. It has always been an issue whether to embed hyperlinks in a document or to store them separately in a link base. Research effort has concentrated on the development of link services that enable hypermedia functionality to be integrated into the general computing environment and allow linking from all tools on the browser or desktop. The KIIT digital library is such an application: it focuses mainly on the architecture and protocols of Open Hypermedia Systems (OHS), providing online document authoring, browsing, cataloguing, searching and updating features. The WWW needs fundamentally new frameworks and concepts to support new search and indexing functionality, driven by the frequent use of digital archives and the need to maintain huge amounts of data and documents. These digital materials range from electronic versions of books and journals offered by traditional publishers, to manuscripts, photographs, maps, sound recordings and similar materials digitized from libraries' own special collections, to new electronic scholarly and scientific databases developed through the collaboration of researchers, computer and information scientists, and librarians. Metadata in catalogue systems are an indispensable tool for finding information and services in networks. Technological advances provide new opportunities to facilitate the process of collecting and maintaining metadata and to facilitate the use of catalogue systems; the overall objective is to make the best use of such systems. Information systems such as the World Wide Web, digital libraries, inventories of satellite images and other repositories contain more data than ever before, are globally distributed and easy to use, and therefore become accessible to huge, heterogeneous user groups. For the KIIT Digital Library, we have used the Resource Description Framework (RDF) and Dublin Core (DC) standards to incorporate metadata. Overall, the KIIT digital library provides electronic access to information in many different forms. This project designs and implements a cataloguing system for the digital library suitable for storing, indexing, and retrieving information and providing that information across the Internet. The goal is to allow users to quickly search indices to locate segments of interest and to view and manipulate these segments on their remote computers.
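
    As a sketch of the metadata approach described above, the following uses the rdflib library to attach Dublin Core elements to a catalogue record in RDF; the item URI and field values are hypothetical, not records from the KIIT system.

        # Sketch: describe a library item with Dublin Core terms in RDF.
        # The item URI and literal values are hypothetical examples.
        from rdflib import Graph, Literal, URIRef
        from rdflib.namespace import DC

        g = Graph()
        item = URIRef("http://example.org/kiit/item/42")  # hypothetical item URI
        g.add((item, DC.title, Literal("Introduction to Open Hypermedia")))
        g.add((item, DC.creator, Literal("A. Author")))
        g.add((item, DC.subject, Literal("Open Hypermedia Systems")))
        g.add((item, DC.format, Literal("application/pdf")))

        # Serialize as RDF/XML, the form typically stored in a catalogue
        print(g.serialize(format="xml"))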

    COMPARATIVE ANALYSIS OF IMAGE RETRIEVAL TECHNIQUES IN CYBERSPACE

    Purpose: With the popularity and remarkable usage of digital images in various domains, existing image retrieval techniques need to be enhanced. Content-based image retrieval (CBIR) plays a vital role in retrieving requested data from databases available in cyberspace, and CBIR from cyberspace is a popular and interesting research area nowadays. Searching for and accurately downloading requested images from cyberspace, based on metadata, using CBIR techniques is a challenging task. The purpose of this study is to explore the various image retrieval techniques for retrieving the data available in cyberspace. Methodology: Whenever a user wishes to retrieve an image from the web using present search engines, a bunch of images is retrieved based on the user query, but most of the resultant images are unrelated to it. Here, the user enters a text-based query into a web-based search engine, and we record the related images and the retrieval time. Main Findings: This study compares the accuracy and retrieval time for the requested images. After detailed analysis, the main finding is that none of the web search engines used (viz. Flickr, Pixabay, Shutterstock, Bing, Everypixel) retrieved accurately related images based on the entered query. Implications: This study discusses and performs a comparative analysis of various content-based image retrieval techniques for cyberspace. Novelty of Study: The research community has been making efforts towards efficient retrieval of useful images from the web, but this problem has not been solved and still prevails as an open research challenge. This study makes efforts to resolve this research challenge and performs a comparative analysis of the outcomes of various web search engines.
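
    A minimal sketch of the timing measurement in the methodology: issue a text query to an image search endpoint and record the retrieval time. The endpoint URL and query parameter are hypothetical placeholders, not any real engine's API.

        # Sketch: time a text-based image search request. The endpoint and
        # parameter name below are hypothetical, not a real engine's API.
        import time
        import requests

        def timed_image_search(endpoint: str, query: str):
            start = time.perf_counter()
            response = requests.get(endpoint, params={"q": query}, timeout=30)
            elapsed = time.perf_counter() - start  # retrieval time in seconds
            return response, elapsed

        resp, seconds = timed_image_search("https://example.org/imagesearch", "red rose")
        print(f"HTTP {resp.status_code}, retrieved in {seconds:.2f} s")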

    Online Visual Image Resources and Reference Services: Understanding Preferred Resources

    As students and teachers in higher education begin to use images in their courses, assignments, and research more frequently, new skills and literacies are needed to find and use images on the Web. Images can be found online in several different types of resources, including subscription image databases, freely available digital libraries and collections, user-generated collections such as Flickr or Picasa, and the general Web. Academic libraries and librarians can serve the image needs of their users by providing access to online image resources and visual literacy instruction. This paper presents a research study that explored the types of image reference questions librarians receive, the resources they use most often, and the difficulties of searching for images online.

    Duplicate Image Detection using Machine Learning

    In today's digital age, the amount of data being generated and shared on a daily basis is growing at an unprecedented rate; approximately fifteen billion images are shared on social media per day. With this growth comes the challenge of managing this vast amount of data effectively. The same image may exist in multiple locations in different formats and sizes, and with slight variations, making it difficult for end-users to filter and detect duplicate images. This duplication can lead to unnecessary storage costs, reduced data quality, and decreased productivity as users waste time searching for the right image. Detecting duplicate images is a crucial task in various fields, and there is a growing need to automate this process. The primary objective of this project is to create a system that can identify duplicate images by comparing two images, even if they have slight differences in color, size, or format. To achieve this goal, we developed a system that detects and flags duplicates using techniques such as visual similarity, image hashing, computer vision, and machine learning. The system is integrated into a web application that enables users to upload images, detects duplicates, and highlights the differences between the images. Overall, a duplicate image detection web application can offer significant benefits to organizations with extensive image collections: by automating the identification of duplicate images, it can save time, reduce costs, and enhance overall data quality.
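
    The image-hashing component mentioned above can be sketched with the common Pillow and ImageHash libraries: two images whose perceptual hashes differ by at most a small Hamming distance are flagged as near-duplicates. The file names and threshold are illustrative, not the project's actual configuration.

        # Sketch: flag near-duplicate images via perceptual hashing.
        # File names and the distance threshold are illustrative only.
        from PIL import Image
        import imagehash

        def is_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
            hash_a = imagehash.phash(Image.open(path_a))  # 64-bit perceptual hash
            hash_b = imagehash.phash(Image.open(path_b))
            distance = hash_a - hash_b  # Hamming distance between the hashes
            return distance <= threshold  # small distance => likely duplicates

        # Resized or recompressed copies typically keep a small hash distance
        print(is_duplicate("photo.jpg", "photo_resized.jpg"))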

    Using content-based image retrieval for accessing images on the web for children

    Children are among the most frequent and important users of the Internet. They can search for any type of data, in any digital form, in digital libraries, web directories, or many other media repositories. However, one limitation of searching these digital artifacts is that young children have great difficulty with writing, spelling, and explaining their ideas during Internet search (lack of query formulation), which limits their ability to express their intent. The dominant language of the Internet is English, which poses a problem for non-native children. In addition, mis-annotation of images on the Internet can expose children to pornographic images and sites. This paper therefore reviews existing search engines for children and proposes a new concept, designed for children of various languages, that unifies and eases searching by using image search instead of keyword search. In our proposal, we use Content-Based Image Retrieval (CBIR) to retrieve all images relevant to an image queried by a child; this is a promising approach that removes the difficulties described above and protects children from unsuitable results. This idea is part of ongoing research to build a CBIR-based image search engine.
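
    As a minimal sketch of the CBIR idea behind this proposal, the following ranks candidate images by colour-histogram similarity to a child's query image using OpenCV. The file names are illustrative, and a real engine would use richer features than colour histograms.

        # Sketch: rank images by colour-histogram similarity to a query image.
        # Paths are illustrative; production CBIR uses richer features.
        import cv2

        def histogram(path: str):
            image = cv2.imread(path)
            hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
            # 2-D hue/saturation histogram, normalized for comparison
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
            return cv2.normalize(hist, hist).flatten()

        query = histogram("query.jpg")
        candidates = ["cat.jpg", "dog.jpg", "ball.jpg"]
        # Higher correlation => more similar to the query image
        ranked = sorted(candidates,
                        key=lambda p: cv2.compareHist(query, histogram(p),
                                                      cv2.HISTCMP_CORREL),
                        reverse=True)
        print(ranked)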

    MPEG-7 Based Image Retrieval on the World Wide Web

    Due to the rapid growth in the number of digital media elements like images, video, audio, and graphics on the Internet, there is an increasing demand for effective search and retrieval techniques. Recently, many search engines such as Google, AlltheWeb, AltaVista, and Freenet have made image search an option, while others such as Ditto and Picsearch search only images on the Internet. There are also domain-specific search engines for graphics and clip art, audio, video, educational images, artwork, stock photos, and science and nature [www.faganfinder.com/img]. All of these search engines are directory based: they crawl the Internet and index images into certain categories, and they do not display images in any particular order with respect to time or context. With the availability of MPEG-7, a standard for describing multimedia content, it is now possible to store images with their metadata in a structured format, which helps in searching for and retrieving them. The MPEG-7 standard uses XML to describe the content of multimedia information objects. These objects have metadata information associated with them, in the form of MPEG-7 or a similar format, which can be used in different ways to search the objects. In this paper we propose a system that performs content-based image retrieval on the World Wide Web and displays the results in a user-defined order.
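
    A sketch of attaching an MPEG-7-style description to an image, built with Python's standard library, follows. The element names track MPEG-7's general shape (a Description holding MultimediaContent, an Image, a MediaLocator and a TextAnnotation), but the fragment is simplified and not schema-validated; the URI and annotation text are hypothetical.

        # Sketch: build a simplified MPEG-7-style image description in XML.
        # Element layout follows the standard's general shape but is not
        # schema-validated; URI and annotation text are hypothetical.
        import xml.etree.ElementTree as ET

        mpeg7 = ET.Element("Mpeg7", xmlns="urn:mpeg:mpeg7:schema:2001")
        desc = ET.SubElement(mpeg7, "Description")
        content = ET.SubElement(desc, "MultimediaContent")
        image = ET.SubElement(content, "Image")
        locator = ET.SubElement(image, "MediaLocator")
        ET.SubElement(locator, "MediaUri").text = "http://example.org/img/rose.jpg"
        annotation = ET.SubElement(image, "TextAnnotation")
        ET.SubElement(annotation, "FreeTextAnnotation").text = "Red rose, close-up"

        print(ET.tostring(mpeg7, encoding="unicode"))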

    ALADIN: All Layer Adaptive Instance Normalization for Fine-grained Style Similarity

    We present ALADIN (All Layer AdaIN), a novel architecture for searching images based on the similarity of their artistic style. Representation learning is critical to visual search, where distance in the learned search embedding reflects image similarity. Learning an embedding that discriminates fine-grained variations in style is hard, due to the difficulty of defining and labelling style. ALADIN takes a weakly supervised approach to learning a representation for fine-grained style similarity of digital artworks, leveraging BAM-FG, a novel large-scale dataset of user-generated content groupings gathered from the web. ALADIN sets a new state-of-the-art accuracy for style-based visual search over both coarsely labelled style data (BAM) and BAM-FG, a new 2.62 million image dataset of 310,000 fine-grained style groupings also contributed by this work.
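
    The adaptive instance normalization (AdaIN) primitive that ALADIN applies across all layers renormalizes content features to carry the channel-wise statistics of style features: AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y). Below is a minimal NumPy illustration over (channels, height, width) feature maps; ALADIN's actual architecture is considerably more involved than this single operation.

        # Sketch: adaptive instance normalization (AdaIN) over feature maps
        # of shape (channels, height, width). Illustrative only; ALADIN
        # combines this primitive across all layers of an encoder.
        import numpy as np

        def adain(x: np.ndarray, y: np.ndarray, eps: float = 1e-5) -> np.ndarray:
            mu_x = x.mean(axis=(1, 2), keepdims=True)   # per-channel content mean
            std_x = x.std(axis=(1, 2), keepdims=True)
            mu_y = y.mean(axis=(1, 2), keepdims=True)   # per-channel style stats
            std_y = y.std(axis=(1, 2), keepdims=True)
            # Strip content statistics, then impose the style statistics
            return std_y * (x - mu_x) / (std_x + eps) + mu_y

        content = np.random.rand(64, 32, 32)  # toy feature maps
        style = np.random.rand(64, 32, 32)
        print(adain(content, style).shape)    # (64, 32, 32)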

    Benchmarking Web-Based Image Retrieval

    An explosion of digital photography technologies that permit quick and easy uploading of any image to the web, coupled with the proliferation of personal, recreational users of the Internet over the past several years, has resulted in millions of images being uploaded to the World Wide Web every day. Most of the uploaded images are not readily accessible, as they are not organized so as to allow efficient searching, retrieval, and ultimately browsing. Currently, major commercial search engines use a process known as Annotation-Based Image Retrieval (ABIR) to execute search requests focused on retrieving an image: despite the fact that the information sought is an image, the ABIR technique relies primarily on textual information associated with an image to complete the search and retrieval process. Using the game of cricket as the domain, this article compares the performance of three commonly used search engines for image retrieval: Google, Yahoo and MSN Live. Factors used for the evaluation of these search engines include query type, number of images retrieved, and type of search engine. Results of the empirical evaluation show that while the Google search engine performed better than Yahoo and MSN Live in situations where there was no refiner, the performance of all three search engines dropped drastically when a refiner was added. Further research is needed to overcome the problems of manual annotation embodied in the annotation-based image retrieval approach.
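
    The kind of evaluation used in such benchmarks can be sketched as precision over the top-k images returned for a query, given manual relevance judgements. The result lists and judgements below are illustrative, not the article's data.

        # Sketch: precision@k for one query, given a ranked result list and
        # a set of manually judged relevant images. Data is illustrative.
        def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
            top_k = retrieved[:k]
            hits = sum(1 for image in top_k if image in relevant)
            return hits / k

        retrieved = ["img1", "img7", "img3", "img9", "img2"]  # engine's ranked output
        relevant = {"img1", "img3", "img4"}                   # judged relevant images
        print(precision_at_k(retrieved, relevant, k=5))       # 0.4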