
    THE USE OF RECOMMENDER SYSTEMS IN WEB APPLICATIONS – THE TROI CASE

    In the digital age, neglecting digital marketing, surveys, reviews, and online user behavior is a key reason even strong businesses fail, and addressing these areas calls for artificial intelligence techniques. In this direction, the use of data mining to recommend relevant items is a state-of-the-art technique that increases user satisfaction as well as business revenue, drawing on related information-gathering approaches so that our systems think and act like humans. This thesis elaborates on such a Recommender System: how people interact, and how to accurately calculate and identify what people like or dislike based on their previous online behavior. The thesis also covers the methodologies recommender systems use and how mathematical equations help them model user behavior and similarity. Filters are central to a recommender system: if similar users like the same product or item, what is the probability that a neighboring user will like it too? This is where collaborative filtering, neighborhood filtering, and hybrid recommender systems come in; using various algorithms, a recommender system can predict whether a particular user would prefer an item, based on the user’s profile and activities. Recommender systems benefit both service providers and users. The thesis also covers the strengths and weaknesses of recommender systems and how involving ontology can improve them; ontology-based methods can reduce problems that content-based recommender systems are known to suffer from. Given Kosovo’s GDP and young people’s job prospects, improvements are desirable: demand is greater than supply. I therefore set out to build an intelligent system that makes it easier for Kosovars to find a job that suits their profile, skills, knowledge, character, and location. That system is the TROI search engine, which indexes and merges all locally operating job-seeking websites into one platform with intelligent features. The thesis presents the design, implementation, testing, and evaluation of the TROI search engine; testing was done through user experiments in a running environment. Results show that the functionality of the recommender system is satisfactory and helpful.
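
    The abstract names neighborhood-based collaborative filtering and similarity equations without showing them. The following is a minimal, hypothetical sketch of that idea, assuming a small user-item rating matrix with cosine similarity between users; nothing here is taken from TROI itself.

```python
# Hypothetical sketch of user-based neighborhood collaborative filtering:
# cosine similarity between users' rating vectors, then a similarity-weighted
# average of neighbors' ratings to predict a score. Toy data, not TROI's.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two rating vectors (0 means unrated)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

def predict(ratings: np.ndarray, user: int, item: int, k: int = 2) -> float:
    """Predict ratings[user, item] from the k most similar users who rated it."""
    sims = np.array([cosine_sim(ratings[user], ratings[u])
                     for u in range(len(ratings))])
    sims[user] = 0.0                             # exclude the user themself
    raters = np.where(ratings[:, item] > 0)[0]   # users who rated this item
    neighbors = raters[np.argsort(sims[raters])[::-1][:k]]
    weights = sims[neighbors]
    if weights.sum() == 0:
        return 0.0
    return float(weights @ ratings[neighbors, item] / weights.sum())

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]])
print(predict(R, user=1, item=1))  # estimate user 1's rating for item 1
```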

    State of the art analysis; work packages in project phase II

    In this report, we introduce our goals and present our requirement analysis for the second phase of the Corporate Semantic Web project. Corporate ontology engineering facilitates agile ontology engineering to lessen the costs of ontology development and, especially, maintenance. Corporate semantic collaboration focuses on the human-centered aspects of knowledge management in corporate contexts. Corporate semantic search sits at the highest application level of the three research areas, representing applications that work on and with appropriately represented and delivered background knowledge.

    DataOps as a Prerequisite for the Next Level of Self-Service Analytics – Balancing User Agency and Central Control

    The area of Business Intelligence and Analytics (BIA) has repeatedly oscillated between more central, efficiency-oriented, professionalized approaches and decentral, agility-oriented, user-driven ones. We investigate whether and how to alleviate that tradeoff by combining an agility-oriented self-service BIA approach with the professionalization-driven DataOps concept: DataOps aims at transferring ideas from DevOps to the realm of analytics, namely a mutual integration of Development and Operations and a high degree of professionalization and automation. From a case study comprising a series of interviews and a workshop, we generate insights into the viability of such a combination. Our results inspire a theoretical concept for capturing the economics behind the approaches, one that considers the (opportunity) costs of the components “user agency” and “central control”. The concept has been evaluated with representatives from the case study. Based on our results, we argue that the discussed combination can push BIA solutions towards fine-tuned federated environments.
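
    The paper’s theoretical concept is only named here. As a purely hypothetical illustration of how opportunity costs of “user agency” and “central control” might trade off, consider the toy model below; the quadratic cost forms and weights are assumptions for illustration, not the authors’ actual concept.

```python
# Illustrative toy model of a BIA cost tradeoff: total cost as the sum of
# costs attributed to "user agency" and "central control". The functional
# forms and weights are assumptions, not the paper's model.
def total_cost(agency: float, w_governance: float = 1.0, w_wait: float = 1.0) -> float:
    """agency in [0, 1]; control is its complement in this toy model."""
    control = 1.0 - agency
    governance_risk = w_governance * agency ** 2  # ungoverned self-service grows risky
    waiting_cost = w_wait * control ** 2          # central bottlenecks slow users down
    return governance_risk + waiting_cost

# Scan the spectrum between fully central (0.0) and fully user-driven (1.0).
best = min((total_cost(a / 10), a / 10) for a in range(11))
print(f"lowest total cost {best[0]:.2f} at agency level {best[1]:.1f}")
```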

    Perceptions of ICT practitioners regarding software privacy

    During software development activities, it is important for Information and Communication Technology (ICT) practitioners to know and understand practices and guidelines regarding information privacy, as software requirements must comply with data privacy laws and members of development teams should know the current legislation related to the protection of personal data. To gain a better understanding of how industry ICT practitioners perceive the practical relevance of software privacy and privacy requirements, and how these professionals implement data privacy concepts during software design, we performed a systematic literature review to identify work related to software privacy and privacy requirements and the methodologies and techniques used to specify them, and we conducted a survey of ICT practitioners from different software development organizations. Findings revealed that ICT practitioners lack comprehensive knowledge of software privacy, privacy requirements, and the Brazilian General Data Protection Law (Lei Geral de Proteção de Dados Pessoais, LGPD, in Portuguese), nor are they able to work with the laws and guidelines governing data privacy. Organizations need to define an approach that familiarizes ICT practitioners with the importance of software privacy and privacy requirements and addresses them during software development, since the LGPD will change the way teams work: a number of features and controls regarding consent, documentation, and privacy accountability will be required.

    Digital technologies catalyzing business model innovation in supply chain management - the case of parcel lockers as a solution for improving sustainable city mobility

    The rise of information technologies pushes companies into digital restructuring. Organizations integrating emerging technologies into their supply chains can boost efficiency by streamlining processes and making more informed decisions using predictive analytics. This research discusses major enablers for digital transformation and presents their application along different parts of a digital supply chain, focusing on technical characteristics, implementations, and impact on organizational capabilities and strategies. Parcel lockers are a technology that sustains and improves last-mile delivery; combining them with night-time delivery improves a city’s sustainable mobility and thereby reduces local emissions and congestion.

    Data-Driven Decisions and Actions in Today’s Software Development

    Today’s software development is all about data: data about the software product itself, about the process and its different stages, about the customers and markets, about the development, the testing, the integration, the deployment, or the runtime aspects in the cloud. We use static and dynamic data of various kinds and quantities to analyze market feedback, feature impact, code quality, architectural design alternatives, or effects of performance optimizations. Development environments are no longer limited to IDEs in a desktop application or the like but span the Internet using live programming environments such as Cloud9 or large-volume repositories such as BitBucket, GitHub, GitLab, or StackOverflow. Software development has become “live” in the cloud, be it the coding, the testing, or the experimentation with different product options on the Internet. The inherent complexity puts a further burden on developers, since they need to stay alert when constantly switching between tasks in different phases. Research has been analyzing the development process, its data and stakeholders, for decades and is working on various tools that can help developers in their daily tasks to improve the quality of their work and their productivity. In this chapter, we critically reflect on the challenges faced by developers in a typical release cycle, identify inherent problems of the individual phases, and present the current state of the research that can help overcome these issues.
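
    The kind of repository mining this chapter alludes to can be made concrete with a small sketch: pulling commit metadata from GitHub and computing a simple development metric. The REST endpoint used here is GitHub’s public commits API; the target repository and the per-author commit count are placeholders for whatever data and metric a team actually studies.

```python
# Minimal sketch of mining development data from a code repository:
# fetch commit metadata from the GitHub REST API (GET /repos/{owner}/{repo}/commits)
# and count commits per author. Repository name is a placeholder.
from collections import Counter
import requests

def commits_per_author(owner: str, repo: str, pages: int = 2) -> Counter:
    counts: Counter = Counter()
    for page in range(1, pages + 1):
        resp = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/commits",
            params={"per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=10,
        )
        resp.raise_for_status()
        for commit in resp.json():
            counts[commit["commit"]["author"]["name"]] += 1
    return counts

print(commits_per_author("octocat", "Hello-World").most_common(5))
```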

    Data management and Data Pipelines: An empirical investigation in the embedded systems domain

    Context: Companies are increasingly collecting data from all possible sources to extract insights that support data-driven decision-making. Increased data volume, variety, and velocity, and the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields and is thus evolving into a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software; we refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques. Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, to propose an improved data management approach, and to empirically validate the proposed approach. Method: To achieve this goal, we conducted the research in close collaboration with Software Center companies, using a combination of empirical research methods: case studies, literature reviews, and action research. Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to deep learning models developed at embedded system companies. Second, it examines practices such as DataOps and data pipelines that help address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model was realized in a small data pipeline implementation, and the percentage of saved data dumps was calculated from it. Future work: We plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines, to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
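
    The thesis pairs each data pipeline step with potential faults and mitigation strategies. Below is a hedged sketch of that idea, assuming a pipeline where every step couples a fault check with a repair so bad records are mitigated rather than silently dumped; the step, the fault type, and the imputation rule are illustrative, not the thesis’s actual pipeline.

```python
# Hedged sketch of a data pipeline whose steps each pair a fault detector
# with a mitigation, so faulty records are repaired instead of dropped.
# Steps and checks are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    detect: Callable[[list[dict]], list[dict]]    # returns the faulty records
    mitigate: Callable[[list[dict]], list[dict]]  # returns repaired records

def run_pipeline(records: list[dict], steps: list[Step]) -> list[dict]:
    for step in steps:
        faulty = step.detect(records)
        if faulty:
            print(f"{step.name}: {len(faulty)} faulty record(s), mitigating")
            repaired = step.mitigate(faulty)
            records = [r for r in records if r not in faulty] + repaired
    return records

# Example step: detect missing sensor values at ingestion, impute a default.
missing = Step(
    name="ingestion/missing-value",
    detect=lambda rs: [r for r in rs if r.get("value") is None],
    mitigate=lambda rs: [{**r, "value": 0.0} for r in rs],
)
data = [{"sensor": "a", "value": 1.2}, {"sensor": "b", "value": None}]
print(run_pipeline(data, [missing]))
```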

    AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0

    Industry 5.0 is a new phase of industrialization that places the worker at the center of the production process and uses new technologies to increase prosperity beyond jobs and growth; human-centered artificial intelligence (HCAI) systems are advancing within it. HCAI makes objectives reachable that neither humans nor machines could achieve alone, but this also comes with a new set of challenges. Our proposed method addresses them through the knowlEdge architecture, which enables human operators to implement AI solutions using a zero-touch framework. It relies on containerized AI model training and execution, supported by a robust data pipeline and rounded off with human feedback and evaluation interfaces. The result is a platform built from a number of components, spanning all major areas of the AI lifecycle. We outline both the architectural concepts and implementation guidelines and explain how they advance HCAI systems and Industry 5.0. In this article, we also address the problems we encountered while implementing these ideas within the edge-to-cloud continuum. Further improvements to our approach may enhance the use of AI in Industry 5.0 and strengthen trust in AI systems.
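
    To make the lifecycle loop concrete, here is a heavily hedged sketch of a zero-touch cycle of the general shape the article describes: containerized training and evaluation, an automatic retraining decision, and a human-feedback gate before deployment. The container images, scripts, report format, and threshold are all assumptions, not the knowlEdge architecture’s actual components.

```python
# Illustrative zero-touch AI lifecycle loop: containerized training and
# evaluation, automatic retraining below a quality threshold, and a
# human-in-the-loop gate before deployment. All names are hypothetical.
import json
import subprocess

def run_container(image: str, command: list[str]) -> str:
    """Run one lifecycle stage in a container and return its stdout."""
    out = subprocess.run(["docker", "run", "--rm", image, *command],
                         capture_output=True, text=True, check=True)
    return out.stdout

def lifecycle(threshold: float = 0.9) -> None:
    run_container("trainer:latest", ["python", "train.py"])       # hypothetical image
    report = run_container("evaluator:latest", ["python", "eval.py"])
    accuracy = json.loads(report)["accuracy"]                     # assumed report format
    if accuracy < threshold:
        print(f"accuracy {accuracy:.2f} below {threshold}; retraining without human touch")
        return
    # Human feedback gate: the operator approves or rejects deployment;
    # a console prompt stands in for the evaluation interface here.
    if input(f"accuracy {accuracy:.2f} - deploy to edge? [y/N] ").lower() == "y":
        run_container("deployer:latest", ["python", "deploy.py"])

if __name__ == "__main__":
    lifecycle()
```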