
    Optimizing E-Commerce Product Classification Using Transfer Learning

    The global e-commerce market is growing at roughly 23% per year. In 2017 there were 1.66 billion retail e-commerce users and worldwide sales amounted to 2.3 trillion US dollars, with e-retail revenues projected to reach 4.88 trillion USD by 2021. With the immense popularity that e-commerce has gained over the past few years comes the responsibility to deliver relevant results that provide a rich user experience. To do this, it is essential that the products on an e-commerce website be organized correctly into their respective categories. Misclassification of products leads to irrelevant results for users, which not only reflects badly on the website but can also lead to lost customers. With e-commerce sites now also offering their portals as platforms for third-party merchants to sell their products, maintaining consistency in product categorization becomes difficult. Automating this process could therefore be of great value. Automation based on text alone can lead to discrepancies, since the website itself, its various merchants, and its users may all use different terminology for a product and its category. Using images thus becomes a plausible solution to this problem. Images are best handled with deep learning in the form of convolutional neural networks. This is a computationally expensive task, and in order to keep the accuracy of a traditional convolutional neural network while reducing the hours it takes to train, this project uses a technique called transfer learning. Transfer learning refers to reusing the knowledge gained on one task for another, so that the new model does not need to be trained from scratch, which reduces training time. The project uses product images belonging to five categories from an e-commerce platform and develops an algorithm that can accurately classify products into their respective categories while taking as little time as possible. The goal is first to test the performance of transfer learning against traditional convolutional networks, and then to apply transfer learning to the downloaded dataset and assess its accuracy and the time taken to classify test data that the model has never seen before.
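
    As a rough illustration of the transfer-learning setup the abstract describes, the sketch below fine-tunes an ImageNet-pretrained backbone on five product categories. It assumes PyTorch/torchvision; the dataset path, backbone choice (ResNet-18) and hyperparameters are illustrative assumptions, not details taken from the project.

```python
# A minimal sketch of transfer learning for product image classification,
# assuming PyTorch/torchvision. Paths, model and hyperparameters are
# illustrative assumptions, not the project's own code.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # the five product categories mentioned in the abstract

# Standard ImageNet preprocessing, since the backbone was pretrained on ImageNet.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: one sub-folder of product images per category.
train_data = datasets.ImageFolder("products/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Load a pretrained network and freeze its feature extractor, so only the
# new classification head is trained; this is what saves training time
# compared with training a CNN from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

    Freezing the backbone means only the small classification head is optimized, which is where the reduction in training hours relative to a full CNN comes from.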

    Metadata for describing learning scenarios under European Higher Education Area paradigm

    In this paper we identify the requirements for creating formal descriptions of learning scenarios designed under the European Higher Education Area paradigm, using competences and learning activities, rather than contents and learning resources, as the basic pieces of the learning process, with personalization as the aim. Classical arrangements of content-based courses are no longer enough to describe all the richness of this new learning process, where user profiles, competences and complex hierarchical itineraries need to be properly combined. We study the intersection with the current IMS Learning Design specification and the additional metadata required for describing such learning scenarios. This new approach involves the use of case-based learning and collaborative learning in order to acquire and develop competences, following adaptive learning paths structured in two levels.
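
    Purely as an illustration of the kind of structure such descriptions must capture (competences and activities as first-class pieces, combined with a user profile and a two-level itinerary), here is a hypothetical sketch; the class and field names are invented and do not follow the IMS Learning Design schema.

```python
# Hypothetical sketch of a competence-centred scenario description.
# All names are illustrative, not an IMS Learning Design binding.
from dataclasses import dataclass, field

@dataclass
class Competence:
    identifier: str
    description: str

@dataclass
class LearningActivity:
    title: str
    develops: list[Competence]          # competences the activity targets
    requires: list[Competence] = field(default_factory=list)  # prerequisites

@dataclass
class Itinerary:
    """Upper structural level: an adaptive path over activities."""
    name: str
    activities: list[LearningActivity]  # lower structural level

@dataclass
class LearningScenario:
    title: str
    target_profile: str                 # user profile the path is adapted to
    itineraries: list[Itinerary]

scenario = LearningScenario(
    title="Case-based collaborative module",
    target_profile="part-time learner",
    itineraries=[Itinerary(
        name="Core path",
        activities=[LearningActivity(
            title="Group case analysis",
            develops=[Competence("C1", "Team problem solving")],
        )],
    )],
)
```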

    Creating an environment for free education and technology-enhanced learning

    The purpose of this paper is to present a project aimed at making knowledge publicly available through open educational resources (OER). The focus is on open online courses which will be created by educational institutions, together with best-practice examples offered by leading companies, with the purpose of supporting lifelong education and enhancing academic education with practical knowledge. The goal is to create diverse, high-quality educational materials in electronic format which will be publicly available. The educational material will follow basic pedagogical-didactic principles in order to best meet the needs of potential learners. Accordingly, a review is given of didactic principles that can contribute to producing OER content of excellence. The choice of a suitable platform, as well as the application of appropriate information technologies, enables content to be represented in a suitable, innovative and meaningful way.

    DIDET: Digital libraries for distributed, innovative design education and teamwork. Final project report

    The central goal of the DIDET Project was to enhance student learning opportunities by enabling students to partake in global, team-based design engineering projects in which they directly experience different cultural contexts and access a variety of digital information sources via a range of appropriate technology. To achieve this overall goal, the project delivered on the following objectives: 1. Teach engineering information retrieval, manipulation, and archiving skills to students studying on engineering degree programs. 2. Measure the use of those skills in design projects in all years of an undergraduate degree program. 3. Measure the learning performance in engineering design courses affected by the provision of access to information that would otherwise have been difficult to access. 4. Measure student learning performance in different cultural contexts that influence the use of alternative sources of information and varying forms of Information and Communications Technology. 5. Develop and provide workshops for staff development. 6. Use the measurement results to annually redesign course content and the digital libraries technology. The overall approach was to develop, implement, use and evaluate a testbed to improve the teaching and learning of students partaking in global, team-based design projects. Digital libraries and virtual design studios were used to fundamentally change the way design engineering is taught at the collaborating institutions. This was done by implementing a digital library at the partner institutions to improve learning in the field of Design Engineering and by developing a Global Team Design Project run as part of assessed classes at Strathclyde, Stanford and Olin. Evaluation was carried out on an ongoing basis and fed back into project development, both on the class teaching model and on the LauLima system developed at Strathclyde to support teaching and learning. Major findings include the requirement to overcome technological, pedagogical and cultural issues for successful e-learning implementations. A need for strong leadership has been identified, particularly to exploit the benefits of cross-discipline team working. One major project output still being developed is a DIDET Project Framework for Distributed Innovative Design, Education and Teamwork to encapsulate all project findings and outputs. The project achieved its goal of embedding major change in the teaching of Design Engineering, and Strathclyde's new Global Design class has been both successful and popular with students.

    Automated user modeling for personalized digital libraries

    Digital libraries (DL) have become one of the most common ways of accessing any kind of digitized information. Because of this key role, users welcome any improvement in the services they receive from digital libraries. One way to improve these services is through personalization. Up to now, the most common approach to personalization in digital libraries has been user-driven. Nevertheless, the design of efficient personalized services has to be carried out, at least in part, automatically. In this context, machine learning techniques can automate the process of constructing user models. This paper proposes a new approach to constructing digital libraries that satisfy users’ information needs: Adaptive Digital Libraries, which automatically learn user preferences and goals and personalize their interaction using this information.
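
    As a toy illustration of learning a user model automatically from interaction data (not the paper's actual technique), the following sketch derives a topical preference profile from a hypothetical access log and uses it to rank candidate items.

```python
# Hypothetical sketch: learn a user's topical preferences from past
# interactions, then rank new items with the learned profile. The data
# and scoring scheme are invented for illustration.
from collections import Counter

# Hypothetical interaction log: topics of items the user has accessed.
accessed_topics = ["machine learning", "metadata", "machine learning",
                   "digital libraries", "machine learning"]

# The "user model": relative frequency of each topic in past behaviour.
counts = Counter(accessed_topics)
total = sum(counts.values())
profile = {topic: n / total for topic, n in counts.items()}

def score(item_topics: list[str]) -> float:
    """Score an item by the user's learned affinity for its topics."""
    return sum(profile.get(t, 0.0) for t in item_topics)

# Rank candidate items to build a personalised result list.
candidates = {
    "Survey of transfer learning": ["machine learning"],
    "Dublin Core in practice": ["metadata", "digital libraries"],
}
ranked = sorted(candidates, key=lambda t: score(candidates[t]), reverse=True)
print(ranked)
```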

    A tool for metadata analysis

    We describe a Web-based metadata quality tool that provides statistical descriptions and visualisations of Dublin Core metadata harvested via the OAI protocol. The lightweight nature of its development allows it to be used to gather contextualized requirements, and some initial user feedback is discussed.
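
    A minimal sketch of this kind of harvesting and profiling, assuming a placeholder OAI-PMH endpoint and only Python's standard library; a real harvester would also follow resumption tokens and handle errors.

```python
# Sketch: harvest Dublin Core records over OAI-PMH and count how often
# each DC element is present. The endpoint URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET
from collections import Counter

ENDPOINT = "https://example.org/oai"  # placeholder OAI-PMH base URL
url = ENDPOINT + "?verb=ListRecords&metadataPrefix=oai_dc"

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Count occurrences of each Dublin Core element across harvested records.
element_counts = Counter()
records = tree.findall(".//oai:record", NS)
for record in records:
    for elem in record.findall(".//oai_dc:dc/*", NS):
        # Strip the namespace to get the bare element name, e.g. 'title'.
        element_counts[elem.tag.split("}")[1]] += 1

print(f"{len(records)} records harvested")
for name, n in element_counts.most_common():
    print(f"dc:{name}: present {n} times")
```

    Such per-element counts are the raw material for the statistical descriptions the tool visualises, e.g. how complete dc:title or dc:subject coverage is across a repository.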

    Managing evolution and change in web-based teaching and learning environments

    The state of the art in information technology and educational technologies is evolving constantly. Courses taught are subject to constant change for organisational and subject-specific reasons. Evolution and change affect educators and developers of computer-based teaching and learning environments alike, and both are often unprepared to respond effectively. A large number of educational systems are designed and developed without change and evolution in mind. We present our approach to the design and maintenance of these systems in rapidly evolving environments and illustrate the consequences of evolution and change for these systems and for the educators and developers responsible for their implementation and deployment. We discuss various factors of change, illustrated by a Web-based virtual course, with the objective of raising awareness of evolution and change in computer-supported teaching and learning environments. This discussion leads towards the establishment of a development and management framework for teaching and learning systems.

    Towards robust and reliable multimedia analysis through semantic integration of services

    Thanks to ubiquitous Web connectivity and portable multimedia devices, it has never been so easy to produce and distribute new multimedia resources such as videos, photos, and audio. This ever-increasing production leads to information overload for consumers, which calls for efficient multimedia retrieval techniques. Multimedia resources can be efficiently retrieved using their metadata, but the multimedia analysis methods that can automatically generate this metadata are currently not reliable enough for highly diverse multimedia content; a reliable and automatic method for analyzing general multimedia content is therefore needed. We introduce a domain-agnostic framework that annotates multimedia resources using currently available multimedia analysis methods. Through a three-step reasoning cycle, the framework assesses and improves the quality of multimedia analysis results by consecutively (1) combining analysis results effectively, (2) predicting which results might need improvement, and (3) invoking compatible analysis methods to retrieve new results. By using semantic descriptions of the Web services that wrap the multimedia analysis methods, compatible services can be selected automatically, and additional semantic reasoning on these descriptions allows the services to be repurposed across different use cases. We evaluated this domain-agnostic framework in the context of video face detection and showed that it is capable of providing the best analysis results regardless of the input video. The proposed methodology can serve as a basis for a generic multimedia annotation platform that returns reliable results for diverse multimedia analysis problems, allowing better metadata generation and more efficient retrieval of multimedia resources.
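
    The following toy sketch illustrates the three-step reasoning cycle (combine, predict, invoke) for the face-detection case; the service registry, confidence values and quality threshold are invented for this sketch and are not the paper's concrete algorithm.

```python
# Toy sketch of the combine / predict / invoke reasoning cycle.
# Services, confidences and the threshold are illustrative assumptions.
from statistics import mean

# Hypothetical analysis services wrapped as functions; in the paper these
# are Web services carrying semantic descriptions of the task they solve.
def detector_a(video):
    return {"faces": 3, "confidence": 0.55}

def detector_b(video):
    return {"faces": 3, "confidence": 0.85}

SERVICES = {
    "detector_a": ("face-detection", detector_a),
    "detector_b": ("face-detection", detector_b),
}

QUALITY_THRESHOLD = 0.7  # assumed cut-off for "reliable enough"

def analyse(video, task="face-detection"):
    results = {}
    while True:
        # (1) Combine: aggregate confidence over all results gathered so far.
        quality = (mean(r["confidence"] for r in results.values())
                   if results else 0.0)
        # (2) Predict: if the combined result looks reliable, stop improving.
        if quality >= QUALITY_THRESHOLD:
            break
        # (3) Invoke: select a compatible service (same task) not yet used.
        candidates = [name for name, (t, _) in SERVICES.items()
                      if t == task and name not in results]
        if not candidates:
            break  # no further compatible services to try
        name = candidates[0]
        results[name] = SERVICES[name][1](video)
    return results

print(analyse("input.mp4"))
```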

    Enabling quantitative data analysis through e-infrastructures

    This paper discusses how quantitative data analysis in the social sciences can engage with and exploit an e-Infrastructure. We highlight how a number of activities central to quantitative data analysis, referred to as ‘data management’, can benefit from e-Infrastructure support. We conclude by discussing how these issues are relevant to the DAMES (Data Management through e-Social Science) research Node, an ongoing project that aims to develop e-Infrastructural resources for quantitative data analysis in the social sciences.

    Metadata enrichment for digital heritage: users as co-creators

    This paper espouses the concept of metadata enrichment through an expert- and user-focused approach to metadata creation and management. To this end, it is argued that the Web 2.0 paradigm enables users to be proactive metadata creators. As Shirky (2008, p. 47) argues, Web 2.0’s social tools enable “action by loosely structured groups, operating without managerial direction and outside the profit motive”. Lagoze (2010, p. 37) advises that “the participatory nature of Web 2.0 should not be dismissed as just a popular phenomenon [or fad]”. Carletti (2016) proposes a participatory digital cultural heritage approach in which Web 2.0 methods such as “heritage crowdsourcing, community-centred projects or other forms of public participation” can be used to enrich digital cultural objects. On the other hand, the new collaborative approaches of Web 2.0 neither negate nor replace contemporary standards-based metadata approaches. Hence, this paper proposes a mixed metadata approach in which user-created metadata augments expert-created metadata and vice versa. The metadata creation process no longer remains the sole prerogative of the metadata expert; the Web 2.0 collaborative environment allows users to participate in both adding and re-using metadata. The expert-created (standards-based, top-down) and user-generated (socially-constructed, bottom-up) approaches to metadata are complementary rather than mutually exclusive, although the two are often, albeit incorrectly, considered a dichotomy (Gruber, 2007; Wright, 2007). This paper espouses the importance of enriching digital information objects with descriptions pertaining to the about-ness of information objects. Such richness and diversity of description, it is argued, can chiefly be achieved by involving users in the metadata creation process. The paper presents the importance of the paradigms of metadata enriching and metadata filtering for the cultural heritage domain. Metadata enriching states that a priori metadata, instantiated and granularly structured by metadata experts, is continually enriched through socially-constructed (post-hoc) metadata, whereby users are proactively engaged in co-creating metadata. The principle also states that enriched metadata is contextually and semantically linked and openly accessible. Metadata filtering, in turn, states that metadata resulting from the enriching principle should be displayed for users in line with their needs and convenience. In both enriching and filtering, users should be considered prosumers, resulting in what is called collective metadata intelligence.
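
    As a toy illustration of enriching and filtering (the field names and the merge and filter rules are invented for this sketch), expert metadata is merged with user-contributed metadata and then reduced to the fields a given user wants displayed.

```python
# Toy sketch of the enriching-and-filtering paradigm. Fields and rules
# are illustrative assumptions, not a prescribed scheme from the paper.
expert_metadata = {          # standards-based, top-down (a priori) description
    "title": "Medieval tapestry fragment",
    "creator": "Unknown",
    "subject": ["textiles"],
}

user_metadata = {            # socially-constructed, bottom-up (post-hoc) input
    "tags": ["unicorn", "15th century", "Flanders"],
    "comments": ["The border pattern resembles the Cluny series."],
}

def enrich(expert: dict, user: dict) -> dict:
    """Merge user contributions into the expert record without overwriting it."""
    record = dict(expert)
    record["subject"] = expert["subject"] + user.get("tags", [])
    record["annotations"] = user.get("comments", [])
    return record

def filter_for(record: dict, wanted_fields: list[str]) -> dict:
    """Display only the fields a given user asked for."""
    return {k: v for k, v in record.items() if k in wanted_fields}

enriched = enrich(expert_metadata, user_metadata)
print(filter_for(enriched, ["title", "subject"]))
```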