2,111 research outputs found

    PROCESS OPTIMIZATION AND AUTOMATION IN E-COMMERCE BUSINESS OPERATION

    Get PDF
    Mister Sandman is an e-commerce start-up located in the heart of Berlin, Germany. It is an online mattress and bedding company that sells products both through its own shop and on 17 other marketplaces across Europe. I successfully completed a six-month internship with the company, and the experience of working and learning there was informative, interesting, and important on every scale. I was entrusted with various projects and tasks and actively worked on data collection, cleaning, manipulation, preprocessing, visualization, analysis, and automation across the company's e-commerce platform and marketplaces. At the beginning, I was trained to understand the end-to-end mechanics of day-to-day operations. My goals and areas of contribution were set out precisely, which gave me focus and a clear vision. I then applied my university knowledge and prior work experience at Amazon to support the company efficiently. I analyzed pricing, rebates, shipping, ratings and reviews, inventories, visibility, and orders and sales, and worked to optimize and automate these processes using Python. I also learned and used other technical skills and tools alongside Python, such as SQL, macros, Tableau, and Power BI, depending on the requirements. In addition, I produced weekly and monthly orders-and-sales reports using various analysis and visualization tools, and contributed to identifying development and improvement areas so the business can grow and continue serving its customers in the best way.
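    The report mentions automating weekly and monthly orders-and-sales reports with Python but includes no code. Below is a minimal, hypothetical pandas sketch of that kind of report; the CSV source and the column names (order_id, order_date, marketplace, revenue) are assumptions for illustration, not details taken from the report.

    # Minimal sketch of a weekly/monthly orders & sales summary (hypothetical schema).
    import pandas as pd

    def build_sales_report(csv_path: str, freq: str = "W") -> pd.DataFrame:
        """Aggregate revenue and order counts per marketplace at weekly ('W') or monthly ('MS') frequency."""
        orders = pd.read_csv(csv_path, parse_dates=["order_date"])
        return (
            orders
            .groupby([pd.Grouper(key="order_date", freq=freq), "marketplace"])
            .agg(total_revenue=("revenue", "sum"), order_count=("order_id", "count"))
            .reset_index()
        )

    if __name__ == "__main__":
        build_sales_report("orders.csv", freq="W").to_csv("weekly_sales_report.csv", index=False)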

    The Means Structure of Information Resources Processing in Electronic Content Commerce Systems

    Get PDF
    The article analyzes some of the principal problems of electronic content commerce and the functional services of content processing. The proposed method makes it possible to build resource-processing tools for electronic commerce systems and thereby implement subsystems for content formation, management, and support.

    Continuous Ranking of Estonian Public Sector Web Sites With Respect to WCAG 2.0 Guidelines

    Get PDF
    Accessibility of public sector Web sites has recently been recognized as an objective by the governments of EU member states and countries elsewhere. To measure the extent to which accessibility has been achieved, the WCAG 2.0 guidelines have been adopted as a benchmark. Although checking conformance to the guidelines has been partially automated, the evaluation process still involves considerable human effort and subjectivity. Furthermore, because of this human involvement, evaluation is usually narrowed down to a limited set of Web pages of the domain under evaluation. This study aims to take another step toward evaluation automation by 1) reverse-engineering the strategies of human evaluators and 2) analyzing whether evaluating a higher number of Web pages has a positive effect on the final ranking. The experimental results show that human ranking is closer to semi-permissive and restrictive evaluation strategies. Furthermore, we show that evaluating a higher number of pages does not improve evaluation precision with respect to human rankings.
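    The abstract refers to semi-permissive and restrictive evaluation strategies without defining them here. The sketch below only illustrates the general idea of such aggregation strategies over partially automated checks; the specific rules (unknowns counted as passes, violations, or half credit) are assumptions for illustration, not the strategies reverse-engineered in the thesis.

    # Illustrative aggregation of automated WCAG 2.0 check results.
    # Each criterion result is 'pass', 'fail', or 'unknown' (needs human judgement).
    from typing import Iterable

    def conformance_score(results: Iterable[str], strategy: str = "restrictive") -> float:
        results = list(results)
        if not results:
            return 0.0
        if strategy == "permissive":          # count unknowns as satisfied
            satisfied = sum(r != "fail" for r in results)
        elif strategy == "restrictive":       # count unknowns as violations
            satisfied = sum(r == "pass" for r in results)
        else:                                 # semi-permissive: unknowns count half
            satisfied = sum(1.0 if r == "pass" else 0.5 if r == "unknown" else 0.0
                            for r in results)
        return satisfied / len(results)

    print(conformance_score(["pass", "unknown", "fail", "pass"], "permissive"))   # 0.75
    print(conformance_score(["pass", "unknown", "fail", "pass"], "restrictive"))  # 0.5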

    Knowledge society arguments revisited in the semantic technologies era

    No full text
    In the light of high-profile governmental and international efforts to realise the knowledge society, I review the arguments made for and against it from a technology standpoint. I focus on advanced knowledge technologies with applications on a large scale and in open-ended environments like the World Wide Web and its ambitious extension, the Semantic Web. I argue for a greater role of social networks in a knowledge society, explore recent developments in mechanised trust and knowledge certification, and speculate on their blending with traditional societal institutions. These form the basis of a sketched roadmap of enabling technologies for a knowledge society.

    Web Data Extraction, Applications and Techniques: A Survey

    Full text link
    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool for performing data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather the large amount of structured data continuously generated and disseminated by Web 2.0, Social Media, and Online Social Network users, which offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain in other domains.
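    As a generic illustration of the kind of wrapper-based extraction the survey covers (not an example taken from it), the sketch below pulls structured records out of an HTML listing page with CSS selectors; the URL, page structure, and selectors are hypothetical.

    # Wrapper-style extraction of name/price records from a product listing page.
    import requests
    from bs4 import BeautifulSoup

    def extract_products(url: str) -> list[dict]:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        records = []
        for item in soup.select("div.product"):          # hypothetical selector
            name = item.select_one("h2.title")
            price = item.select_one("span.price")
            if name and price:
                records.append({"name": name.get_text(strip=True),
                                "price": price.get_text(strip=True)})
        return records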

    Augmenting the performance of image similarity search through crowdsourcing

    Get PDF
    Crowdsourcing is defined as “outsourcing a task that is traditionally performed by an employee to a large group of people in the form of an open call” (Howe 2006). Many platforms have been designed to support different types of crowdsourcing, and studies have shown that the results produced by crowds on these platforms are generally accurate and reliable. Crowdsourcing can provide a fast and efficient way to use the power of human computation to solve problems that are difficult for machines. Among the several microtasking crowdsourcing platforms available, we decided to perform our study using Amazon Mechanical Turk. In the context of our research, we studied the effect of user interface design and its corresponding cognitive load on the performance of crowd-produced results. Our results highlighted the importance of a well-designed user interface for crowdsourcing performance. Using crowdsourcing platforms such as Amazon Mechanical Turk, we can employ humans to solve problems that are difficult for computers, such as image similarity search. However, for tasks like image similarity search, it is more efficient to design a hybrid human–machine system. In the context of our research, we studied the effect of involving the crowd on the performance of an image similarity search system and proposed a hybrid human–machine image similarity search system. Our proposed system uses machine power to perform heavy computations and to search for similar images within the image dataset, and uses crowdsourcing to refine the results. We designed our content-based image retrieval (CBIR) system using the SIFT, SURF, SURF128, and ORB feature detectors/descriptors and compared the performance of the system with each of them. Our experiment confirmed that crowdsourcing can dramatically improve CBIR system performance.
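    The machine side of such a hybrid pipeline can be sketched with one of the descriptors named above; the snippet below ranks candidate images by ORB matches using OpenCV, with the top-ranked results then handed to crowd workers for refinement. The file paths and the ratio-test threshold are placeholders, and this is an illustrative sketch rather than the system described in the thesis.

    # Rank candidate images by ORB feature matches (machine stage of a hybrid CBIR system).
    import cv2

    def orb_similarity(query_path: str, candidate_path: str, ratio: float = 0.75) -> int:
        """Count ratio-test-filtered ORB matches between two images (higher = more similar)."""
        orb = cv2.ORB_create()
        query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
        cand = cv2.imread(candidate_path, cv2.IMREAD_GRAYSCALE)
        if query is None or cand is None:
            return 0
        _, q_desc = orb.detectAndCompute(query, None)
        _, c_desc = orb.detectAndCompute(cand, None)
        if q_desc is None or c_desc is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = matcher.knnMatch(q_desc, c_desc, k=2)
        return sum(1 for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance)

    # Candidates most similar to the query would be sent to the crowd for refinement.
    candidates = ["img1.jpg", "img2.jpg", "img3.jpg"]
    ranked = sorted(candidates, key=lambda p: orb_similarity("query.jpg", p), reverse=True)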

    Search Engine Optimization

    Get PDF
    This Special Issue book focuses on the theory and practice of search engine optimization (SEO). It is intended for anyone who publishes content online and it includes five peer-reviewed papers from various researchers. More specifically, the book includes theoretical and case study contributions which review and synthesize important aspects, including, but not limited to, the following themes: theory of SEO, different types of SEO, SEO criteria evaluation, search engine algorithms, social media and SEO, and SEO applications in various industries, as well as SEO on media websites. The book aims to give a better understanding of the importance of SEO in the current state of the Internet and online information search. Even though SEO is widely used by marketing practitioners, there is a relatively small amount of academic research that systematically attempts to capture this phenomenon and its impact across different industries. Thus, this collection of studies offers useful insights, as well as a valuable resource that intends to open the door for future SEO-related research

    D2.3.3 Evaluation results of the LinkedUp VICI competition

    Get PDF
    This document, D2.3.3, is the final report of Task 2.4 – Evaluation of challenge submissions. Task 2.4 concerns the actual assessment of the participating projects in the LinkedUp Veni, Vidi, and Vici competitions on the basis of the LinkedUp Evaluation Framework (D2.2.1). The main objective of Task 2.4 is to summarise and report the outcomes of the various competitions and to analyse the experts' practical experiences with the LinkedUp Evaluation Framework in order to improve it further. In this document we report on the Linked Data tools and ideas submitted to the third and final data competition, Vici. In total, we received 13 submissions; 10 of them were shortlisted and invited to a poster presentation at the 13th International Semantic Web Conference (ISWC 2014), and four of them received awards through the LinkedUp evaluation procedure. In addition, an audience prize was awarded at the ISWC. This deliverable briefly lists the Vici submissions, explains the evaluation procedure that resulted in a short list of the best submissions, justifies the decision for the winners, and reports the experiences collected during Vici.
    • 
