    Grocery Shopping Assistant Using OpenCV

    In this paper we present an Android mobile application that lets users keep track of the food products and grocery items bought on each shopping trip, along with their nutrient information. Users obtain the nutrient information of a product simply by taking a photo of it. Product matching is performed using SURF feature detection followed by FLANN feature matching. The nutrition facts table is extracted from its image using erosion, dilation, and contour detection. Grocery items are classified via object categorization using a Bag of Words (BoW) representation with an SVM classifier. The application comprises three main subsystems: a client (Android), a server (Node.js), and an image-processing component (OpenCV).
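    The paper itself includes no code; the following is only a minimal sketch of the SURF-plus-FLANN matching step named in the abstract, assuming an OpenCV build with the contrib (non-free) modules available. The image paths, Hessian threshold, and 0.7 ratio-test cutoff are illustrative placeholders, not values from the paper.

```python
import cv2

# SURF lives in the contrib "xfeatures2d" module and requires a non-free build.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

# Placeholder image paths: a shopper's photo and a stored reference image.
query = cv2.imread("shelf_photo.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("product_reference.jpg", cv2.IMREAD_GRAYSCALE)

kp_q, des_q = surf.detectAndCompute(query, None)
kp_r, des_r = surf.detectAndCompute(reference, None)

# FLANN with a KD-tree index, the usual choice for float descriptors such as SURF.
FLANN_INDEX_KDTREE = 1
flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                              dict(checks=50))
matches = flann.knnMatch(des_q, des_r, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} good matches between the photo and the reference product")
```
    A simple decision rule, such as a threshold on the number of good matches or a homography check over them, can then decide whether the photographed item corresponds to a known product.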

    Deep Cascade Multi-task Learning for Slot Filling in Online Shopping Assistant

    Slot filling is a critical task in natural language understanding (NLU) for dialog systems. State-of-the-art approaches treat it as a sequence labeling problem and adopt models such as BiLSTM-CRF. While these models work relatively well on standard benchmark datasets, they face challenges in the E-commerce context, where slot labels are more informative and carry richer expressions. In this work, inspired by the unique structure of an E-commerce knowledge base, we propose a novel multi-task model with cascade and residual connections that jointly learns segment tagging, named entity tagging, and slot filling. Experiments show the effectiveness of the proposed cascade and residual structures. Our model has a 14.6% advantage in F1 score over strong baseline methods on a new Chinese E-commerce shopping assistant dataset, while achieving competitive accuracy on a standard dataset. Furthermore, an online test deployed on this dominant E-commerce platform shows a 130% improvement in the accuracy of understanding user utterances. Our model has already gone into production on the E-commerce platform. Comment: AAAI 201
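    For readers unfamiliar with cascaded multi-task tagging, the sketch below shows one plausible wiring of the idea described in this abstract: a shared BiLSTM encoder with three token-level heads, where each higher-level head also consumes the logits of the task below it while the slot head still sees the encoder states directly. All class and parameter names are hypothetical, the CRF layers are omitted, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class CascadeTagger(nn.Module):
    """Sketch: shared BiLSTM encoder with cascaded segment/NER/slot heads (no CRF)."""
    def __init__(self, vocab_size, embed_dim, hidden_dim,
                 n_seg_tags, n_ner_tags, n_slot_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Lowest-level task: segment tagging, directly from the encoder states.
        self.seg_head = nn.Linear(hidden_dim, n_seg_tags)
        # Named entity head sees the encoder states plus segment logits (cascade).
        self.ner_head = nn.Linear(hidden_dim + n_seg_tags, n_ner_tags)
        # Slot head sees the encoder states again (residual) plus NER logits (cascade).
        self.slot_head = nn.Linear(hidden_dim + n_ner_tags, n_slot_tags)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))          # (batch, seq, hidden_dim)
        seg_logits = self.seg_head(h)
        ner_logits = self.ner_head(torch.cat([h, seg_logits], dim=-1))
        slot_logits = self.slot_head(torch.cat([h, ner_logits], dim=-1))
        return seg_logits, ner_logits, slot_logits

# Joint training would sum token-level cross-entropy losses over the three heads.
```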

    SHOPPING ASSISTANT USING ANDROID-BASED AUGMENTED REALITY

    Augmented reality is an artificial intelligence technology that can be controlled directly by the user through 3D visuals and can be exploited for business. In this study, Toko Segoro Antique was chosen as the research site because the growing variety of goods for sale often leaves the shop owner overwhelmed when serving buyers. A shopping assistant is needed that can help serve all buyers at once; with augmented reality, consumers can quickly obtain detailed information about items. The application developed recognizes items and is named Shopping Assistant. The method used to design the application is the Rational Unified Process, which is commonly used to develop object-based systems. Shopping Assistant can directly recognize the items in the store simply by scanning them with a smartphone. The application uses Vuforia as a software library that serves both as the database and as the tracking system used during item recognition. As a result, with augmented reality visitors can identify items, but during recognition the lighting conditions must match those under which the item was originally scanned.

    The Role of Similarity in e-Commerce Interactions: The Case of Online Shopping Assistants

    This research proposes that technological artifacts are perceived as social actors and that users make personality and behavioral attributions towards them. These perceptions interact with the user’s own characteristics in the form of an evaluation of similarity. Using an automated shopping assistant, the study investigates the effects of two types of perceived similarity on a number of dependent variables. The results show that both perceived personality similarity and perceived behavioral similarity between the user and the decision aid positively affect users’ evaluations of the technological artifact. Furthermore, the study investigates the role of design characteristics in forming social perceptions about the shopping assistant. The results indicate that design characteristics, namely content, can be used to manifest desired personalities and behaviors, allowing us to compute measures of “actual” similarity, which were found to predict perceived similarity.

    WishToys: a Web-based electronic shopping assistant featuring voice access

    This report describes an electronic shopping assistant specialized in toys. The shopping assistant features two distinct user interfaces. One is a web-based graphical user interface (GUI) that takes full advantage of the relatively high bandwidth of the Internet connection, allowing for high-resolution, high-colour graphics; the GUI offers the full functionality of the shopping assistant. The other is a low-bandwidth voice user interface (VUI) that offers a reduced feature set and allows easy and quick access to a pre-selected list of toys for people on the move. The VUI is geared specifically towards mobile use, i.e. for users calling from cell phones.

    SANIP: Shopping Assistant and Navigation for the visually impaired

    The proposed shopping assistant model, SANIP, helps blind persons detect hand-held objects and receive voice feedback on the information retrieved from the detected and recognized objects. The proposed model consists of three Python models: custom object detection, text detection, and barcode detection. For detection of hand-held objects, we created our own custom dataset comprising everyday goods such as Parle-G, Tide, and Lays. We also collected images of carts and exit signs, since it is essential for any shopper to be able to use a cart and to notice the exit sign in case of emergency. For the other two models, the retrieved text and barcode information is converted from text to speech and relayed to the blind person. The model was evaluated on the objects it was trained on and successfully detected and recognized them with good accuracy and precision. Comment: 6 pages, 8 figures. arXiv admin note: text overlap with arXiv:2011.04244 by another author
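    The abstract does not name the libraries behind the text, barcode, and speech components, so the snippet below is only an illustrative sketch of that detection-to-speech path using commonly paired open-source tools (OpenCV, pyzbar, Tesseract via pytesseract, and pyttsx3). The function name and image path are placeholders; the actual SANIP implementation may differ.

```python
import cv2
import pytesseract
import pyttsx3
from pyzbar.pyzbar import decode


def read_label_aloud(image_path):
    """Decode barcodes and printed text in an image and speak the result."""
    image = cv2.imread(image_path)
    spoken = []

    # Barcode detection: pyzbar returns decoded symbols with their raw bytes.
    for symbol in decode(image):
        spoken.append("Barcode " + symbol.data.decode("utf-8"))

    # Text detection and recognition with Tesseract OCR on a grayscale copy.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()
    if text:
        spoken.append(text)

    # Relay everything found to the user as speech.
    engine = pyttsx3.init()
    for line in spoken:
        engine.say(line)
    engine.runAndWait()


read_label_aloud("product_photo.jpg")   # placeholder image of a hand-held product
```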

    INVESTIGATING CONSUMERS’ ADOPTION OF INTERACTIVE IN-STORE MOBILE SHOPPING ASSISTANT

    With smartphones being widely deployed, interactive in-store Mobile Shopping Assistant (MSA) systems can be an effective way to support in-store shopping and can potentially become pervasive personalized services that both consumers and merchants trust. However, few studies have investigated the adoption of in-store MSA. Therefore, this study examined consumers’ attitudes toward and acceptance of in-store MSA services within the framework of the technology acceptance model (TAM). The findings imply that attitude, perceived ease of use, perceived usefulness, environmental variables, perceived quality of the MSA system, social influence, and user satisfaction are determinant factors. In addition, significant differences exist between female and male consumers.

    Supervised Transfer Learning for Product Information Question Answering

    Popular e-commerce websites such as Amazon offer community question answering systems in which users pose product-related questions and experienced customers may provide answers voluntarily. In this paper, we show that the large volume of existing community question answering data can be beneficial when building a system for answering questions about product facts and specifications. Our experimental results demonstrate that the performance of a model for answering questions about products listed on the Home Depot website can be improved by a large margin via a simple transfer learning technique from an existing large-scale Amazon community question answering dataset. Transfer learning results in an increase of about 10% in accuracy in the experimental setting where we restrict the size of the target-task data used for training. As an application of this work, we integrate the best-performing model into a mobile shopping assistant and show its usefulness. Comment: 2018 17th IEEE International Conference on Machine Learning and Application
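    The abstract only says that a "simple transfer learning technique" is applied: train on the large source-domain (Amazon) QA data first, then continue training on the small target-domain (Home Depot) data. The sketch below illustrates that two-stage pattern with a deliberately simple linear classifier over question-answer pairs; the toy data, feature choice, and model are placeholders, not the paper's actual setup.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Toy placeholder data: (question, candidate answer, is_relevant) triples.
source_pairs = [("does this blender crush ice", "yes, it handles ice well", 1),
                ("is the kettle cordless", "the box weighs two kilograms", 0)]
target_pairs = [("what wattage is this heater", "it runs at 1500 watts on high", 1)]

def featurize(pairs, vectorizer):
    return vectorizer.transform([q + " " + a for q, a, _ in pairs])

def labels(pairs):
    return np.array([y for _, _, y in pairs])

# A stateless hashing vectorizer gives source and target text one shared feature space.
vectorizer = HashingVectorizer(n_features=2**18)
clf = SGDClassifier(loss="log_loss")

# Stage 1: pre-train on the large source-domain dataset.
clf.partial_fit(featurize(source_pairs, vectorizer), labels(source_pairs),
                classes=np.array([0, 1]))

# Stage 2: fine-tune by continuing training on the small target-domain dataset.
for _ in range(5):
    clf.partial_fit(featurize(target_pairs, vectorizer), labels(target_pairs))
```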

    Shopping Assistant App For People With Visual Impairment: An Acceptance Evaluation

    Visual impairment refers to the partial or complete loss of the ability to see. People with visual impairment face many limitations, including on their freedom to do grocery shopping independently. They have difficulty reading ingredient or dietary information, which is usually printed in small letters on products, yet this information is important for making informed purchasing decisions. Therefore, this research investigates the need for a grocery shopping assistant app for people with visual impairment and their level of acceptance of it. An empirical investigation method was adopted, and data were collected based on the Technology Acceptance Model (TAM). The evaluation results indicate that people with visual impairment are positively inclined towards using a shopping assistant app because the technology is easy to use and they can therefore benefit from the app, suggesting that Perceived Ease of Use is a better indicator of attitude towards using the shopping assistant app.