
    Simplifying resource discovery and access in academic libraries: implementing and evaluating Summon at Huddersfield and Northumbria Universities

    Facilitating information discovery and maximising value for money from library materials are key drivers for academic libraries, which spend substantial sums of money on journal, database and book purchasing. Users are confused by the complexity of our collections and the multiple platforms to access them, and are reluctant to spend time learning about individual resources and how to use them, comparing this unfavourably to popular and intuitive search engines like Google. As a consequence, the library may be seen as too complicated and time-consuming, and many of our most valuable resources remain undiscovered and underused. Federated search tools were the first commercial products to address this problem. They work by using a single search box to interrogate multiple databases (including library catalogues) and journal platforms. While these tools went some way towards addressing the problem, many users complained that they were still relatively slow, clunky and complicated to use compared to Google or Google Scholar. The emergence of web-scale discovery services in 2009 promised to deal with some of these problems. By harvesting and indexing metadata directly from publishers and local library collections into a single index, they facilitate resource discovery and access to multiple library collections (whether in print or electronic form) via a single search box. Users no longer have to negotiate a number of separate platforms to find different types of information, and because the data is held in a single unified index, searching is fast and easy. In 2009 both Huddersfield and Northumbria Universities purchased Serials Solutions' Summon. This case study report describes the selection, implementation and testing of Summon at both universities, drawing out common themes as well as differences; there are suggestions for those who intend to implement Summon in the future, as well as suggestions for future development.
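
    To make the architectural contrast in the abstract concrete, here is a purely illustrative sketch with toy data (not Summon's actual design): federated search fans a query out to each source at search time, while a web-scale discovery service searches one pre-built index harvested from all collections.

    ```python
    # Toy illustration of the two architectures; the sources and titles are invented.
    CATALOGUE = ["Introduction to Library Science", "Data Curation Basics"]
    JOURNAL_PLATFORM = ["Journal of Information Discovery, vol. 12"]

    def federated_search(query):
        # interrogate every source at search time, then merge the results
        results = []
        for source in (CATALOGUE, JOURNAL_PLATFORM):
            results += [title for title in source if query.lower() in title.lower()]
        return results

    # metadata harvested in advance into a single unified index
    UNIFIED_INDEX = {title.lower(): title for title in CATALOGUE + JOURNAL_PLATFORM}

    def discovery_search(query):
        # one lookup over a single index, however many collections feed it
        return [title for key, title in UNIFIED_INDEX.items() if query.lower() in key]

    print(federated_search("discovery"))
    print(discovery_search("discovery"))
    ```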

    Transformer-empowered Multi-modal Item Embedding for Enhanced Image Search in E-Commerce

    Over the past decade, significant advances have been made in the field of image search for e-commerce applications. Traditional image-to-image retrieval models, which focus solely on image details such as texture, tend to overlook useful semantic information contained within the images. As a result, the retrieved products might possess similar image details, but fail to fulfil the user's search goals. Moreover, the use of image-to-image retrieval models for products containing multiple images results in significant online product feature storage overhead and complex mapping implementations. In this paper, we report the design and deployment of the proposed Multi-modal Item Embedding Model (MIEM) to address these limitations. It is capable of utilizing both textual information and multiple images about a product to construct meaningful product features. By leveraging semantic information from images, MIEM effectively supplements the image search process, improving the overall accuracy of retrieval results. MIEM has become an integral part of the Shopee image search platform. Since its deployment in March 2023, it has achieved a remarkable 9.90% increase in terms of clicks per user and a 4.23% boost in terms of orders per user for the image search feature on the Shopee e-commerce platform. Comment: Accepted by IAAI 202
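
    The abstract gives no implementation details, so the following is only a rough sketch of the general idea of a transformer-based multi-modal item embedding: project text and per-image features into a shared space, fuse them with self-attention, and pool into a single product vector. All module names, dimensions, and the pooling choice are assumptions, not MIEM's actual architecture.

    ```python
    # Illustrative sketch only: fuses one text feature and several image features
    # into a single product embedding. Dimensions and fusion strategy are assumed.
    import torch
    import torch.nn as nn

    class MultiModalItemEmbedder(nn.Module):
        def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256, n_heads=4):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, hidden_dim)    # project text encoder output
            self.image_proj = nn.Linear(image_dim, hidden_dim)  # project each image feature
            layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=n_heads,
                                               batch_first=True)
            self.fusion = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, text_feat, image_feats):
            # text_feat: (batch, text_dim); image_feats: (batch, n_images, image_dim)
            tokens = torch.cat(
                [self.text_proj(text_feat).unsqueeze(1), self.image_proj(image_feats)],
                dim=1,
            )
            fused = self.fusion(tokens)   # cross-modal self-attention over all tokens
            return fused.mean(dim=1)      # pool into one item embedding

    emb = MultiModalItemEmbedder()
    item_vec = emb(torch.randn(2, 768), torch.randn(2, 5, 2048))
    print(item_vec.shape)  # torch.Size([2, 256])
    ```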

    Watershed rainfall forecasting using neuro-fuzzy networks with the assimilation of multi-sensor information

    The complex temporal heterogeneity of rainfall, coupled with a mountainous physiographic context, poses a great challenge to the development of accurate short-term rainfall forecasts. This study aims to explore the effectiveness of multiple rainfall sources (gauge measurements, and radar and satellite products) for assimilation-based multi-sensor precipitation estimates, and to make multi-step-ahead rainfall forecasts based on the assimilated precipitation. Bias correction procedures for both radar and satellite precipitation products were first built, and the radar and satellite precipitation products were generated through the Quantitative Precipitation Estimation and Segregation Using Multiple Sensors (QPESUMS) and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), respectively. Next, the synthesized assimilated precipitation was obtained by merging the three precipitation sources (gauges, radars and satellites) according to their individual weighting factors, optimized by nonlinear search methods. Finally, multi-step-ahead rainfall forecasting was carried out using the adaptive network-based fuzzy inference system (ANFIS). The Shihmen Reservoir watershed in northern Taiwan was the study area, where 641 hourly data sets from thirteen historical typhoon events were collected. Results revealed that the bias adjustments in the QPESUMS and PERSIANN-CCS products did improve the accuracy of these precipitation products (in particular, 30-60% improvement rates for QPESUMS, in terms of RMSE), and the adjusted PERSIANN-CCS and QPESUMS products contributed about 10% and 24%, respectively, to the assimilated precipitation. As far as rainfall forecasting is concerned, the results demonstrated that the ANFIS fed with the assimilated precipitation provided reliable and stable forecasts, with correlation coefficients higher than 0.85 and 0.72 for one- and two-hour-ahead rainfall forecasting, respectively. The forecasting results provide very valuable information for flood warning in the study watershed during typhoon periods. © 2013 Elsevier B.V.
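
    As a rough illustration of the weighted-merge step described above (not the paper's exact formulation), the sketch below combines three synthetic precipitation series with weights found by a constrained nonlinear search that minimizes RMSE against a reference series; the non-negative, sum-to-one weight constraint and the synthetic data are assumptions.

    ```python
    # Hedged sketch of assimilation as a weighted merge of gauge, radar (QPESUMS)
    # and satellite (PERSIANN-CCS) estimates, with weights found by nonlinear search.
    import numpy as np
    from scipy.optimize import minimize

    def assimilate(gauge, radar, satellite, reference):
        sources = np.vstack([gauge, radar, satellite])   # shape (3, n_hours)

        def rmse(w):
            merged = w @ sources
            return np.sqrt(np.mean((merged - reference) ** 2))

        cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
        res = minimize(rmse, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
        return res.x, res.x @ sources

    rng = np.random.default_rng(0)
    truth = rng.gamma(2.0, 3.0, size=200)                 # synthetic hourly rainfall
    weights, merged = assimilate(truth + rng.normal(0, 0.5, 200),   # "gauge"
                                 truth + rng.normal(0, 2.0, 200),   # "radar"
                                 truth + rng.normal(0, 3.0, 200),   # "satellite"
                                 truth)
    print(weights)   # optimized weighting factors for the three sources
    ```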

    Determining Product Complementarity By Analyzing Search Query Streams

    Determining complementarity between products when viewing products online, e.g., via a shopping website or a search engine, is not straightforward. The information required to determine product complementarity is unstructured and/or spread across multiple sources. While co-purchase data can be used to infer complementarity, this requires large volumes of purchase data that are often unavailable. This disclosure describes techniques to identify relationships between products based on analysis of web search queries and subsequent clicks or other user actions, obtained with user permission. The techniques rely on the observation that query streams often include searches for related products, e.g., in the form of “A for B” queries (or queries in other forms) when searching for a product A compatible with a product B. Entity extraction is performed on such query streams and a machine learning model is utilized to identify likely complementary products.
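
    A minimal sketch of the pattern-mining step described above; the queries are invented, and the entity-extraction and machine-learning components mentioned in the disclosure are not shown.

    ```python
    # Mine complementary-product candidates from a query log by matching
    # "A for B" queries, then rank the (A, B) pairs by frequency.
    import re
    from collections import Counter

    queries = [
        "hdmi cable for ps5",
        "case for iphone 15",
        "hdmi cable for ps5 console",
        "screen protector for iphone 15",
    ]

    pattern = re.compile(r"^(?P<a>.+?)\s+for\s+(?P<b>.+)$")
    pairs = Counter()
    for q in queries:
        m = pattern.match(q.lower().strip())
        if m:
            pairs[(m.group("a"), m.group("b"))] += 1

    # Candidate complements, most frequent first
    for (a, b), n in pairs.most_common():
        print(f"{a!r} may complement {b!r} (seen {n}x)")
    ```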

    Searching for the Semantic Internet

    Search engines, directories and web browsers all deal with the Internet at the level of individual web-pages. We argue that this is too low a level of resolution for many users, including the non-casual surfer, who has detailed knowledge of his/her topic of interest. We present the shopping-mall metaphor, which is based on identifying tightly integrated communities of web pages, where pages procure information from each other via hyperlinks. A search operation identifies these web-page communities, rather than individual web-pages, and the communities are visualised as a Virtual Reality shopping mall for presentation on a VRML-enabled web browser. Each information outlet (shop) can contain multiple information “products” (pages) gathered around a common theme. The metaphor serves to integrate both search and visualisation phases, presenting a coherent information collection to the user, regardless of the search domain.
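
    As a present-day sketch of the community-identification step (not the method used in the paper, and without the VRML visualisation), one could cluster a hyperlink graph with an off-the-shelf modularity-based algorithm; the graph below is synthetic.

    ```python
    # Group tightly linked pages into "communities"; each community would become
    # one "shop" in the virtual mall, its member pages the "products" inside it.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    links = [
        ("shop.example/a", "shop.example/b"), ("shop.example/b", "shop.example/c"),
        ("shop.example/a", "shop.example/c"), ("blog.example/x", "blog.example/y"),
        ("blog.example/y", "blog.example/z"), ("shop.example/a", "blog.example/x"),
    ]
    g = nx.Graph(links)

    for i, community in enumerate(greedy_modularity_communities(g)):
        print(f"shop {i}: {sorted(community)}")
    ```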

    UC-268 Buggy - Price Scraper Application

    Buggy is a desktop price scraper application that finds the best deals for users across several major retailers, including Amazon, Costco, Target, Walmart, and eBay. It simplifies bargain hunting by condensing product information from multiple websites into one convenient place. Users can save products for future access, track search history, and compare search results with tabs for navigation. When they are ready to buy, they can simply click the buy button and are directed to the vendor's page to check out.
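
    The abstract does not describe Buggy's internals, so this is only a hedged sketch of the general aggregation pattern it implies: one scraper per retailer, with results merged and sorted by price. The scraper functions here are stand-ins that return hard-coded offers; real per-retailer scraping logic is omitted.

    ```python
    # Illustrative aggregation skeleton; retailers' page structures are not modelled.
    from dataclasses import dataclass

    @dataclass
    class Offer:
        retailer: str
        product: str
        price: float
        url: str

    def fake_scrape_amazon(query):   # stand-in for a real scraper
        return [Offer("Amazon", query, 19.99, "https://example.com/a")]

    def fake_scrape_walmart(query):  # stand-in for a real scraper
        return [Offer("Walmart", query, 17.49, "https://example.com/w")]

    SCRAPERS = [fake_scrape_amazon, fake_scrape_walmart]

    def best_deals(query):
        offers = [o for scrape in SCRAPERS for o in scrape(query)]
        return sorted(offers, key=lambda o: o.price)   # cheapest first

    for offer in best_deals("usb-c cable"):
        print(f"{offer.retailer}: ${offer.price:.2f} -> {offer.url}")
    ```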

    Is Online Product Information Availability Driven by Quality or Differentiation?

    We present a game-theoretic model for the availability of product information in Internet markets, where buyers can search for multiple products in parallel. We use a multiple-circle variation of Salop’s “unit circle” model of product differentiation in which vendors are able to differentiate their products both horizontally (taste) and vertically (quality). We explore the conditions under which vendors make horizontal and vertical product information available to potential customers in equilibrium. We demonstrate that vendors will choose not to provide their full horizontal product information, and will instead leave buyers with some probabilistic knowledge about their exact horizontal product locations. However, the vendors will release enough horizontal product information for their products to appear distinct from those of competitors. The sellers’ incentives to disseminate vertical product information are shown to be fundamentally different: only vendors of the worst possible quality will withhold information on vertical product parameters. Our results suggest an answer to the question posed in the title of this paper: is it the case that online vendors release product information primarily to advertise their product’s superiority, or to make clear that their products do not have close competing substitutes? We find that for high-quality products the former is more important, while the latter gains significance for lower-quality products. We present empirical observations of nearly 2,000 products in the PC game industry that provide evidence in favor of the model’s predictions.
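
    For readers unfamiliar with the Salop setup, a textbook form of the buyer's utility with both horizontal and vertical differentiation is sketched below; the paper's multiple-circle model and its information-disclosure stage are richer than this, so the notation here is only an assumed baseline, not the authors' specification.

    ```latex
    % Assumed textbook baseline: a buyer at location x on the unit circle
    % buying from vendor i located at x_i.
    \[
      u(x, i) = v + q_i - t\, d(x, x_i) - p_i ,
    \]
    % where q_i is vendor i's vertical quality, d(x, x_i) the arc distance to the
    % vendor (horizontal mismatch), t the unit mismatch cost, and p_i the price.
    % Withholding horizontal information leaves the buyer uncertain about d(x, x_i);
    % withholding vertical information leaves the buyer uncertain about q_i.
    ```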

    Buyers’ Dynamic Click Behavior on Digital Sales Platforms with Complementarities

    With numerous options available on digital platforms, buying products is becoming an increasingly complex decision-making task. Many well-known digital sales platforms like Amazon, Uber, Etsy, or Airbnb try to offer products or services that best match the buyers’ search criteria, but Amazon, for example, often also lists products that can possibly complement the best match (Sloane, 2018). Buyers have access to product and price information, and they have to consider multiple factors in making their decisions on checking available options (Karimi et al., 2015). The buyer needs to know how the platform chooses which products or services to display. The buyers’ decision might also be impacted by the way sellers are charged to display their products or services on the platform and by the order in which the products are displayed on the screen. Buyers have to keep track of the prices and deals offered for the various products they have checked on the platform and also consider their opportunity cost of search. As the complexity and cost of the search process increase, many searches end without success. A better understanding of how buyers make click decisions dynamically can help platforms increase the success of product searches, buyer satisfaction and, ultimately, profitability. In this study we focus on platforms that offer both primary products (products that best match the buyers’ search criteria) and secondary products (products that complement the primary products) and that rank these products on a buyer’s screen either by relevance or by click-through rate (Hao et al., 2020). We aim to answer the following question: to what extent do product values and product prices determine the order in which the buyer clicks through the primary and secondary products? To answer this question, we create a dynamic model that predicts each step in a buyer’s click strategy. The model incorporates rational decision-making as well as known behavioral biases. Under naturally occurring circumstances, information such as the value of a product to a buyer is strictly private and unavailable. Therefore, we use lab experiments with human subjects to test our model. The model is able to predict a higher percentage of buyer click behavior than existing static search models. Unlike static search models, our model predicts a non-zero percentage of clicks on more than two products and provides some guidance on the factors that can lead buyers to make that decision. This study contributes to the theory of shopping on digital platforms: it is a model of sequential search that incorporates rational decision-making as well as known human behavioral biases to explain how buyers shop in sequence given the information they discover. As far as we know, this is also the first dynamic model that incorporates product complementarities as part of the decision-making environment.
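
    The paper's own dynamic model is not specified in the abstract, so the sketch below instead shows the classic Weitzman (1979) reservation-value rule for sequential search, the kind of benchmark such click models are typically compared against; the normal value priors, click cost, and product list are all invented for illustration.

    ```python
    # Benchmark sketch only (not the paper's model): each product's value is
    # Normal(mu, sigma) before clicking; products are clicked in descending
    # reservation-value order. The stopping rule is omitted for brevity.
    from scipy.optimize import brentq
    from scipy.stats import norm

    def reservation_value(mu, sigma, click_cost):
        # r solves click_cost = E[max(X - r, 0)] for X ~ Normal(mu, sigma)
        def expected_gain(r):
            z = (r - mu) / sigma
            return sigma * (norm.pdf(z) - z * norm.sf(z))
        return brentq(lambda r: expected_gain(r) - click_cost,
                      mu - 10 * sigma, mu + 10 * sigma)

    products = [  # (label, prior mean value, prior std)
        ("primary match", 10.0, 3.0),
        ("complement A", 8.0, 4.0),
        ("complement B", 7.0, 2.0),
    ]
    click_cost = 0.5
    order = sorted(products,
                   key=lambda p: -reservation_value(p[1], p[2], click_cost))
    print([label for label, _, _ in order])  # predicted click order under this benchmark
    ```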

    SPM: Structured Pretraining and Matching Architectures for Relevance Modeling in Meituan Search

    In e-commerce search, relevance between query and documents is an essential requirement for a satisfying user experience. Different from traditional e-commerce platforms that offer products, users on life service platforms such as Meituan mainly search for product providers, which usually have abundant structured information, e.g., name, address, category, and thousands of products. Modeling search relevance with such rich structured content is challenging due to the following issues: (1) there is a language distribution discrepancy among the different fields of a structured document, making it difficult to directly adopt off-the-shelf methods based on pretrained language models like BERT; (2) different fields usually have different importance and their lengths vary greatly, making it difficult to extract the document information helpful for relevance matching. To tackle these issues, in this paper we propose a novel two-stage pretraining and matching architecture for relevance matching with rich structured documents. At the pretraining stage, we propose an effective pretraining method that employs both the query and multiple fields of the document as inputs, including an effective information compression method for lengthy fields. At the relevance matching stage, a novel matching method is proposed that leverages domain knowledge in the search query to generate more effective document representations for relevance scoring. Extensive offline experiments and online A/B tests on millions of users verify that the proposed architectures effectively improve the performance of relevance modeling. The model has already been deployed online, serving the search traffic of Meituan for over a year. Comment: Accepted by CIKM '2
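
    The paper's pretraining and compression methods are not detailed in the abstract, so the following is only an assumed illustration of the "query plus multiple fields with per-field budgets" idea: each structured field gets its own length budget so a lengthy field such as the product list cannot crowd out short but important ones. Field names, budgets, special markers, and the example document are invented.

    ```python
    # Build a single encoder input from a query and a provider document's fields,
    # with crude per-field truncation standing in for the paper's compression step.
    FIELD_BUDGETS = {"name": 16, "address": 24, "category": 8, "products": 64}

    def build_input(query, doc):
        parts = [f"[QUERY] {query}"]
        for field, budget in FIELD_BUDGETS.items():
            tokens = str(doc.get(field, "")).split()[:budget]   # per-field budget
            parts.append(f"[{field.upper()}] " + " ".join(tokens))
        return " ".join(parts)   # would then be tokenized by a BERT-style encoder

    doc = {
        "name": "Golden Noodle House",
        "address": "12 Example Rd, Haidian District, Beijing",
        "category": "restaurant / noodles",
        "products": "beef noodles, dumplings, hot and sour soup, " * 30,
    }
    print(build_input("beef noodle soup near me", doc)[:200])
    ```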