1,087 research outputs found

    Deployment of a software infrastructure for Ecommerce and business analytics in a small business

    This project simulates a real business project of the kind found in the enterprise world. We will create an Ecommerce framework for a fictional medium-sized company and connect it to a Business Intelligence & Data Visualization tool so that the enterprise can perform business analytics, monitor its current performance, and give final users (e.g. CEOs) enough data to make better decisions. To carry out the project, we first review the current options for this type of solution: we analyse the software available on the market, for both the Ecommerce and the Data Visualization parts, and select the option that best covers the basic requirements of a medium-sized business. We then study different ways to publish the shop on the web; for this we use a server service that provides instances powerful enough to support an Ecommerce platform, and the service selected is Amazon Web Services. As the Ecommerce platform we use Magento, an open source product that has led the Ecommerce platform market for several years [1] [2]. Magento can be modified to add nearly any required feature (e.g. a contact section), and it has a large community where developers post questions and solutions, resolve problems, and implement new functionality. For the Data Visualization part we use one of the market leaders, Tableau, a powerful tool focused on visualization. Although it is not open source, and is therefore limited to its current feature set, it provides everything this project needs; we use the full Student version to simulate what the purchased version would provide. We then integrate these software elements so that business analysis can be performed in Tableau using the information from the Magento Ecommerce store. We have investigated how to connect the two frameworks and how to extract the appropriate information to show in the Data Visualization application. Finally, with the information obtained from Magento, we use Tableau to create example charts as an introduction to the reports that such a tool makes possible. We also define the tasks and estimate the duration of the project. Given the tasks and the nature of the project, we adopt the Waterfall methodology with a single delivery, since the requirements will not change during the course of the project and agile methods are therefore not applicable. We close with the conclusions drawn from the project and the future work that could be added to obtain a more complete and powerful solution.
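
    As a minimal sketch of the Magento-to-Tableau data path described above (not part of the original project deliverables), the snippet below pulls order data from Magento's MySQL database and writes it to a CSV file that Tableau can load as a data source. The host, credentials, and the exact table and column names are assumptions for illustration and should be checked against the deployed Magento schema.

```python
# Hypothetical sketch: export Magento order data to CSV for use in Tableau.
# Connection details and the sales_order table/column names are assumptions
# and must be adapted to the actual Magento database schema.
import csv

import pymysql  # pip install pymysql


def export_orders(csv_path="magento_orders.csv"):
    conn = pymysql.connect(
        host="localhost",      # assumed AWS-hosted MySQL endpoint
        user="magento_ro",     # assumed read-only reporting user
        password="secret",     # placeholder credential
        database="magento",    # assumed Magento database name
    )
    try:
        with conn.cursor() as cur:
            # sales_order is the main order table in a default Magento install;
            # verify the column names before relying on them.
            cur.execute(
                "SELECT increment_id, created_at, status, grand_total "
                "FROM sales_order ORDER BY created_at"
            )
            rows = cur.fetchall()
    finally:
        conn.close()

    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "created_at", "status", "grand_total"])
        writer.writerows(rows)


if __name__ == "__main__":
    export_orders()
```

    Tableau can then connect to the resulting CSV (or directly to the MySQL instance) to build the example charts mentioned in the abstract.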

    A conceptual framework for semantic web-based ecommerce


    Cloud-computing strategies for sustainable ICT utilization : a decision-making framework for non-expert Smart Building managers

    Virtualization of processing power, storage, and networking applications via cloud-computing allows Smart Buildings to run heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted the complexity of the decision-making processes involved in adopting cloud-computing. This complexity stems from the rapid evolution of these technologies without a standardized approach among the organizations offering cloud-computing as a commercial service. This study defines the term Smart Building as an ICT environment in which a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the outcomes of this project, given the diverse nature of Smart Buildings' operational objectives. The research evaluates different ICT management methods to effectively support decisions by non-expert clients on deploying different models of cloud-computing services in their Smart Building ICT environments. The objective is to reduce the need for costly third-party ICT consultancy, so that non-experts can focus on their Smart Buildings' core competencies rather than on the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research leaves non-expert managers vulnerable when making decisions on cloud-computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Building ICT environments. The project analyses cloud-computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it follows a structured programme of data collection based on semi-structured interviews, cost simulations, and risk-analysis surveys. The main output is a theoretical management framework for non-expert decision-makers across variously operated Smart Buildings. In addition, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options; the tool relates this potential to contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are examined against changes in cloud demand observed over specified periods. The dependencies identified vary greatly with organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on thorough identification of the services required off- and on-premises. It points out that most of today's heavily burdened Smart Buildings outsource these services to costly independent suppliers, which causes unnecessary management complexity, additional cost, and system incompatibility. The main conclusions argue that cloud-computing cost differs depending on Smart Building attributes and ICT requirements, and that although cloud services are usually more convenient and cost-effective in the early stages of deployment and migration, they can become costly later if not planned carefully using cost-estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
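
    As an illustration of the kind of cost-estimation reasoning the decision-support tool is meant to assist, here is a minimal sketch comparing cumulative cloud versus on-premises costs over a planning horizon. The growth model and all figures are hypothetical and are not taken from the study.

```python
# Hypothetical sketch: compare cumulative cloud vs. on-premises ICT cost for a
# Smart Building over a planning horizon. All figures are illustrative only.

def cumulative_costs(months, cloud_monthly=2_000.0, cloud_growth=0.02,
                     onprem_capex=120_000.0, onprem_monthly=800.0):
    """Return (cloud_total, onprem_total) after `months` months.

    cloud_growth models rising demand (more instances/storage each month);
    on-premises is modelled as an up-front capital cost plus a flat running cost.
    """
    cloud_total = 0.0
    monthly = cloud_monthly
    for _ in range(months):
        cloud_total += monthly
        monthly *= 1.0 + cloud_growth  # demand-driven growth in the cloud bill
    onprem_total = onprem_capex + onprem_monthly * months
    return cloud_total, onprem_total


if __name__ == "__main__":
    for horizon in (12, 36, 60):
        cloud, onprem = cumulative_costs(horizon)
        cheaper = "cloud" if cloud < onprem else "on-premises"
        print(f"{horizon:>2} months: cloud={cloud:,.0f}  on-prem={onprem:,.0f}  -> {cheaper}")
```

    With these illustrative numbers the cloud option is cheaper over short horizons but more expensive over five years, mirroring the study's conclusion that early convenience can turn into long-term cost if deployment is not planned with careful cost estimation.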

    From Business Understanding to Deployment: An application of Machine Learning Algorithms to Forecast Customer Visits per Hour to a Fast-Casual Restaurant in Dublin

    This research project identifies the significant factors that affect the number of customer visits to a fast-casual restaurant each hour and develops several machine learning models to forecast those visits. The core value proposition of fast-casual restaurants is quality food delivered at speed, which means meals must be prepared in advance of customers' visits. The problem with this approach lies in forecasting future demand: underestimating demand can lead to inadequate meal preparation and unsatisfied customers, while overestimating it can lead to wastage, especially since restaurants must comply with food safety regulations whereby heated food not consumed within 90 minutes has to be discarded. Hourly forecasting of demand, as opposed to monthly or even daily forecasting, helps the manager of the fast-casual restaurant optimize resources and reduce wastage. Approaches to forecasting demand can be broadly categorized into qualitative and quantitative methods, and quantitative methods can be further divided into time-series and regression-based methods. The regression-based approach used in this study enabled the researcher to gather data on several factors hypothesized to affect the number of hourly customer visits, to test the significance of these factors experimentally, and to develop several predictive machine learning models. The results of the experiments show that hour, day, public holidays, temperature, humidity, rain, and windspeed are significant factors in predicting the number of hourly customer visits. Multiple linear regression, regression tree, random forest, and gradient boosting algorithms were trained to predict the number of customer visits, with the gradient boosting algorithm achieving the lowest Mean Absolute Percentage Error (MAPE) of 18.82%.
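
    A minimal sketch of the regression-based setup described above, using scikit-learn's gradient boosting regressor and MAPE as the evaluation metric. The file name and column names are assumptions for illustration, not the study's dataset, and categorical factors such as the day would need a numeric encoding in practice.

```python
# Hypothetical sketch: forecast hourly customer visits with gradient boosting.
# The CSV path and column names (hour, day, public_holiday, temperature,
# humidity, rain, windspeed, visits) are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("hourly_visits.csv")  # assumed dataset layout
features = ["hour", "day", "public_holiday", "temperature",
            "humidity", "rain", "windspeed"]
X, y = df[features], df["visits"]

# Keep chronological order in the split so the test set is truly "future" data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=42)
model.fit(X_train, y_train)

mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"MAPE: {mape:.2%}")
```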

    An Automated Negotiation System for eCommerce Store Owners to Enable Flexible Product Pricing

    If a store owner wishes to sell a product online, they traditionally have two options for deciding on a price. They can sell the product at a fixed price, as on sites like Amazon, or they can put the product in an auction and let customer demand drive the final sales price, as on sites like eBay. Both options have their pros and cons. An alternative is to enable negotiation on the product. This gives the price a dynamic nature: each customer can negotiate with the store owner, so the final sales price can change both over time and from customer to customer. The issue with enabling negotiation in the context of eCommerce is the time investment required from the store owner: a store owner cannot negotiate every time an offer comes in from a potential customer, as the time investment would not be acceptable. Using software agents to automate the negotiation process for the seller is a potential way to enable negotiation in eCommerce for store owners. In this research, such a system is developed in a way that mirrors real-life negotiations more closely and, after evaluation, is found to be a viable way of enabling negotiation in eCommerce.
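
    A minimal sketch of a seller-side negotiation agent of the general kind the abstract describes. The concession strategy, thresholds, and round limit are assumptions for illustration, not the system actually built in the research.

```python
# Hypothetical sketch: a seller-side agent that negotiates a product price using
# a simple time-dependent concession strategy. Parameters are illustrative only.
from dataclasses import dataclass


@dataclass
class SellerAgent:
    list_price: float      # advertised price
    reserve_price: float   # lowest price the store owner will accept
    max_rounds: int = 5    # negotiation length before giving up

    def respond(self, buyer_offer: float, round_no: int):
        """Return ('accept', price), ('counter', price), or ('reject', None)."""
        if round_no > self.max_rounds:
            return "reject", None
        # Concede linearly from the list price towards the reserve price.
        concession = (self.list_price - self.reserve_price) * round_no / self.max_rounds
        target = self.list_price - concession
        if buyer_offer >= target:
            return "accept", buyer_offer
        return "counter", round(target, 2)


if __name__ == "__main__":
    agent = SellerAgent(list_price=100.0, reserve_price=80.0)
    for rnd, offer in enumerate([70.0, 82.0, 90.0], start=1):
        action, price = agent.respond(offer, rnd)
        print(f"round {rnd}: buyer offers {offer} -> seller {action} {price}")
        if action == "accept":
            break
```

    A production system would add persistence, per-customer state, and a richer utility model, but the accept/counter loop above captures the automation idea that removes the time burden from the store owner.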

    Topic driven testing

    Modern interactive applications offer so many interaction opportunities that automated exploration and testing become practically impossible without some domain-specific guidance towards relevant functionality. In this dissertation, we present a novel fundamental graphical user interface testing method called topic-driven testing. We mine the semantic meaning of interactive elements, guide testing, and identify core functionality of applications. The semantic interpretation is close to human understanding and allows us to learn specifications and transfer knowledge across multiple applications independent of the underlying device, platform, programming language, or technology stack; to the best of our knowledge this is a unique feature of our technique. Our tool ATTABOY is able to take an existing Web application test suite, say from Amazon, execute it on eBay, and thus guide testing to relevant core functionality. Tested on different application domains such as eCommerce, news pages, and mail clients, it can transfer on average sixty percent of the tested application behavior to new apps, without any human intervention. On top of that, topic-driven testing can work with even vaguer instructions such as how-to descriptions or use-case descriptions. Given an instruction, say "add item to shopping cart", it tests the specified behavior in an application, both in a browser and in mobile apps. It thus improves state-of-the-art UI testing frameworks, creates change-resilient UI tests, and lays the foundation for learning, transferring, and enforcing common application behavior. The prototype is up to five times faster than existing random testing frameworks and tests functions that are hard to cover by non-trained approaches.
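
    To make the idea of matching an instruction like "add item to shopping cart" to interactive elements more concrete, here is a minimal sketch that ranks element labels by word-overlap similarity. It is a toy stand-in for the semantic mining ATTABOY performs, not its actual implementation; the element labels and the scoring are illustrative assumptions.

```python
# Hypothetical sketch: rank a page's interactive elements against a natural
# language instruction by simple word-overlap (Jaccard) similarity. This is a
# toy stand-in for the semantic matching described in the abstract.
import re


def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))


def rank_elements(instruction: str, element_labels: list[str]) -> list[tuple[float, str]]:
    """Return (score, label) pairs sorted by similarity to the instruction."""
    instr = tokens(instruction)
    scored = []
    for label in element_labels:
        lab = tokens(label)
        score = len(instr & lab) / len(instr | lab) if instr | lab else 0.0
        scored.append((score, label))
    return sorted(scored, reverse=True)


if __name__ == "__main__":
    labels = ["Add to cart", "Proceed to checkout", "Sign in", "Search items"]
    for score, label in rank_elements("add item to shopping cart", labels):
        print(f"{score:.2f}  {label}")
```

    A real implementation would presumably rely on richer semantic representations and would drive an actual browser or mobile UI, but the ranking step above illustrates the matching idea that lets an instruction select the relevant element.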