
    Quantitative Analyses in Digital Marketing and Business Intelligence

    This work is divided into two parts. The first part consists of four essays on questions in digital marketing; this term refers to all marketing activities on the Internet, regardless of whether they primarily address users of stationary devices (e.g., a desktop PC) or users of mobile devices (e.g., a smartphone). In Essay I, we model the time it takes until an item offered in the popular buy-it-now format is sold. Our model allows us to infer, from the observation of this time, how many consumers are interested in the item and how highly they value it. This approach bypasses several problems that often arise when these factors are estimated from data on items offered in auctions. We demonstrate the application of our model with an example. Essay II investigates the effects that ads displayed on search engine results pages have on users' click and purchase behavior. For this purpose, a model and a corresponding decision rule are developed and applied to a dataset obtained in a field experiment. The results show that search engine advertising can be beneficial even for search queries for which the website of the advertising firm already ranks high among the regular, so-called organic search results, and even for users who already search with one of the firm's brand names. In Essay III, we argue theoretically and show empirically that online product ratings by customers do not represent the rated product's quality, as has been assumed in previous studies, but rather the customers' satisfaction with the product. Customer satisfaction depends not only on product quality as observed after the purchase but also on the expectations the customers had of the product before the purchase. Essay IV investigates the relationship between the offline and the mobile content delivery channel. For this purpose, we study whether a publisher can retain existing subscribers to a print medium longer if it offers a mobile app through which a digital version of the print medium can be accessed. The application of our model to the case of a respected German daily newspaper confirms the existence of such an effect, which indicates a complementary relationship between the two content delivery channels. We analyze how this relationship affects the value of a customer to the publisher. The second part of this work consists of three essays that explore various approaches to simplifying the use of business intelligence (BI) systems. The need for such simplification is underlined by the fact that BI systems are nowadays employed for the analysis of ever more heterogeneous data, especially transactional data. This has also extended their audience, which now includes inexperienced knowledge workers. Essay V analyzes, through an experiment conducted among knowledge workers from different firms, how the presentation of data in a BI system affects how quickly and how accurately its users complete typical tasks. In this regard, we compare the three currently most common data models: the multidimensional, the relational, and the flat model. The results show that which of these data models best supports users depends on the type of task considered. In Essay VI, a framework for integrating an archiving component into a BI system is developed. Such a component can identify and automatically archive reports that have become irrelevant, in order to reduce the effort users spend searching for relevant reports. We show by a simulation study that the proposed approach of estimating reports' future relevance from the log files of the BI system's search component (and other data) is suitable for this purpose. In Essay VII, we develop a reference algorithm for searching documents in a firm context (such as reports in a BI system). Our algorithm combines aspects of several search paradigms and can easily be adapted by firms to their specific needs. We evaluate an instance of our algorithm in an experiment; the results show that it outperforms traditional algorithms on several measures. The work begins with a synopsis that gives further details on the essays.

    The Hierarchic treatment of marine ecological information from spatial networks of benthic platforms

    Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for improving our understanding of the functioning of marine ecosystems and for the conservation of their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at a spatial scale capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is sustaining an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at an appropriate spatiotemporal scale. At this stage, one of the main problems for an effective application of these technologies is the processing, storage, and treatment of the acquired complex ecological information. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline of ecological data acquisition and processing, organized in distinct steps amenable to automation. We also give an example of the computation of population biomass, community richness, and biodiversity data (as indicators of ecosystem functionality) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for such automated data processing at the level of cyber-infrastructures, with sensor calibration and control, data banking, and ingestion into large data portals.
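As a minimal sketch of the indicator computation mentioned above (community richness and a diversity index), assuming per-individual species labels have already been extracted from annotated imagery; the function and the example labels are hypothetical:

```python
import math
from collections import Counter

def richness_and_shannon(observations):
    """Compute community richness (number of distinct species) and
    the Shannon diversity index H' = -sum(p_i * ln p_i) from a list
    of per-individual species labels, e.g. from annotated images."""
    counts = Counter(observations)
    total = sum(counts.values())
    richness = len(counts)
    shannon = -sum((n / total) * math.log(n / total)
                   for n in counts.values())
    return richness, shannon

# Hypothetical labels extracted from crawler imagery.
labels = ["shrimp", "shrimp", "crab", "anemone", "crab", "shrimp"]
r, h = richness_and_shannon(labels)
```

In an automated pipeline this step would sit after image classification and before ingestion into the data portal, so the indicator code only ever sees label streams, not raw imagery.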

    Regional Data Archiving and Management for Northeast Illinois

    This project studies the feasibility and implementation options for establishing a regional data archiving system to help monitor and manage traffic operations and planning for the northeastern Illinois region. It aims to provide clear guidance to the regional transportation agencies, from both technical and business perspectives, about building such a comprehensive transportation information system. Several implementation alternatives are identified and analyzed. This research is carried out in three phases. In the first phase, existing documents related to ITS deployments in the broader Chicago area are summarized, and a thorough review is conducted of similar systems across the country. Various stakeholders are interviewed to collect information on all data elements that they store, including the format, system, and granularity. Their perception of a data archive system, such as potential benefits and costs, is also surveyed. In the second phase, a conceptual design of the database is developed. This conceptual design includes system architecture, functional modules, user interfaces, and examples of usage. In the last phase, the possible business models for the archive system to sustain itself are reviewed. We estimate initial capital and recurring operational/maintenance costs for the system based on realistic information on the hardware, software, labor, and resource requirements. We also identify possible revenue opportunities. A few implementation options for the archive system are summarized in this report, namely:
    1. System hosted by a partnering agency
    2. System contracted to a university
    3. System contracted to a national laboratory
    4. System outsourced to a service provider
    The costs, advantages, and disadvantages of each of these recommended options are also provided. Report ICT-R27-22.

    Web Data Extraction, Applications and Techniques: A Survey

    Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction. This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provide a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques make it possible to gather the large amounts of structured data continuously generated and disseminated by Web 2.0, Social Media, and Online Social Network users, and this offers unprecedented opportunities to analyze human behavior at a very large scale. We also discuss the potential for cross-fertilization, i.e., the possibility of reusing Web Data Extraction techniques originally designed to work in a given domain in other domains. Comment: Knowledge-Based Systems.
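As a minimal sketch of one classic wrapper-style technique such surveys cover (selecting elements by an attribute value), using only Python's standard library; the class name, target attribute, and sample markup are illustrative, and real wrappers must additionally handle void tags and malformed HTML:

```python
from html.parser import HTMLParser

class AttributeExtractor(HTMLParser):
    """A minimal wrapper: collect the text content of every element
    whose class attribute equals a target value."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self._depth = 0            # nesting depth inside a match
        self.results = []

    def handle_starttag(self, tag, attrs):
        if self._depth:
            self._depth += 1       # nested tag inside a match
        elif dict(attrs).get("class") == self.target_class:
            self._depth = 1
            self.results.append("")

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.results[-1] += data

html = '<ul><li class="price">9.99</li><li class="price">12.50</li></ul>'
p = AttributeExtractor("price")
p.feed(html)
# p.results == ["9.99", "12.50"]
```

This is the hand-crafted end of the spectrum; the survey's point is that such wrappers can also be induced automatically or replaced by Information Extraction techniques when page structure is unknown.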

    Towards software performance monitoring: an approach for the aerospace industry

    Software applications are becoming one of the most valuable assets for companies, providing critical capabilities and functionalities to perform a wide range of operations in the industry. This paper aims to provide a view on software application portfolio monitoring and its integration into business intelligence systems for aerospace manufacturing companies. The key research question addressed is how critical software has become for the aerospace industry and how software applications could be monitored. This question has been addressed by conducting an in-depth review of the current literature and by interviewing professionals from different aerospace companies. The results are a set of key findings regarding the impact of software in the aerospace industry, and a monitoring proposal based on a traditional business intelligence architecture. By incorporating condition monitoring methodologies into the software application portfolio of the enterprise, benefits in maintenance budget allocation and risk avoidance are expected, thanks to a more precise and agile way of processing business data. Additional savings should be possible through further application portfolio optimisation.

    An Ontological Approach to Representing the Product Life Cycle

    The ability to access and share data is key to optimizing and streamlining any industrial production process. Unfortunately, the manufacturing industry is stymied by a lack of interoperability among the systems by which data are produced and managed, and this is true both within and across organizations. In this paper, we describe our work to address this problem through the creation of a suite of modular ontologies representing the product life cycle and its successive phases, from design to end of life. We call this suite the Product Life Cycle (PLC) Ontologies. The suite extends proximately from the Common Core Ontologies (CCO) used widely in defense and intelligence circles, and ultimately from the Basic Formal Ontology (BFO), which serves as the top-level ontology for the CCO and for some 300 further ontologies. The PLC Ontologies were developed together, but they have been factored to cover particular domains such as design, manufacturing processes, and tools. We argue that these ontologies, when used together with standard public-domain alignment and browsing tools created within the context of the Semantic Web, may offer a low-cost approach to solving increasingly costly problems of data management in the manufacturing industry.

    BlogForever D2.6: Data Extraction Methodology

    This report outlines an inquiry into the area of web data extraction, conducted within the context of blog preservation. The report reviews theoretical advances and practical developments for implementing data extraction. The inquiry is extended through an experiment that demonstrates the effectiveness and feasibility of implementing some of the suggested approaches. More specifically, the report discusses an approach based on unsupervised machine learning that employs the RSS feeds and HTML representations of blogs. It outlines the possibilities of extracting semantics available in blogs and demonstrates the benefits of exploiting available standards such as microformats and microdata. The report proceeds to propose a methodology for extracting and processing blog data to further inform the design and development of the BlogForever platform.
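As a small illustration of the RSS-based entry point discussed above, the standard-library snippet below pulls structured fields from an RSS 2.0 feed; the feed content and function name are hypothetical:

```python
import xml.etree.ElementTree as ET

RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example blog</title>
  <item><title>First post</title><link>http://example.org/1</link>
        <description>Hello world</description></item>
</channel></rss>"""

def extract_posts(rss_text):
    """Pull (title, link, description) triples from an RSS 2.0 feed:
    the structured signal that can be paired with the blog's HTML
    pages to train an unsupervised extractor."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"),
             item.findtext("link"),
             item.findtext("description"))
            for item in root.iter("item")]

posts = extract_posts(RSS)
# posts[0] == ("First post", "http://example.org/1", "Hello world")
```

The idea in such approaches is that feed fields give clean ground-truth values whose positions can then be located in the rendered HTML, yielding extraction rules for fields the feed does not carry.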

    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process, one that involves a combination of technological and organizational interactions. An ERP implementation project is often the single largest IT project an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, an ERP implementation supporting business processes across many different departments is not a generic, rigid, and uniform undertaking; it depends on a variety of factors. As a result, the issues surrounding the ERP implementation process have been a major concern in industry. ERP implementation therefore receives attention from both practitioners and scholars, and both the business and the academic literature are abundant, though not always conclusive or coherent. However, research on ERP systems has so far focused mainly on diffusion, use, and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems: although these methods are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. An annotated brief literature review is conducted in order to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers interested in ERP implementation methodologies and frameworks, and the results will serve as input for a classification of the existing methodologies and frameworks. The paper also addresses the professional ERP community involved in the process of ERP implementation by promoting a better understanding of ERP implementation methodologies and frameworks, their variety, and their history.

    Impacts of electronic invoicing in the Portuguese healthcare sector: potential savings on accounts payable

    The manual processes for exchanging commercial and financial documents between an entity and its suppliers create inefficiencies in the supply chain. However, electronic solutions exist that can eliminate errors and decrease costs in accounting departments. This work studies the potential savings of migrating from a manual to an electronic invoicing process in accounts payable operations in the Portuguese healthcare system. Besides invoices, other commercial documents are taken into account during this process: purchase orders and transport guides. Three supplier types are considered: medicines and medical devices, utilities, and services. The study includes seven healthcare entities belonging to the Portuguese Health Service (Centro Hospitalar de Lisboa Central, Centro Hospitalar de Lisboa Norte, Centro Hospitalar de Lisboa Ocidental, Centro Hospitalar do Porto, Hospital Professor Doutor Fernando Fonseca, Centro Hospitalar de Trás-os-Montes e Alto Douro) and two service providers (Indra and Saphety). Data was collected through two different questionnaires, sent by electronic mail to each entity. In order to migrate to an electronic process for receiving invoices, Electronic Data Interchange (EDI) technology is needed. The results show that the Ministry of Health could save around €2,435,249 across the seven entities in the first year of implementation; over a three-year horizon, the potential savings are €8,435,249. In addition, the use of electronic data interchange allows the Ministry of Health to apply business intelligence technology and create policies with a higher degree of accuracy.
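The two savings figures above imply a steady-state saving for the later years; the short sketch below makes that arithmetic explicit. The per-year figure for years two and three is a derived assumption (e.g., first-year savings reduced by one-off migration effort), not a number stated in the source:

```python
# Reported figures from the study.
first_year = 2_435_249        # EUR saved in year 1
three_year_total = 8_435_249  # EUR saved over years 1-3

# Implied (assumed equal) saving in each of years 2 and 3.
per_later_year = (three_year_total - first_year) / 2  # 3,000,000 EUR
```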

    The Semantic Grid: A future e-Science infrastructure

    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share, and discuss their insights, experiments, and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed, and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science, in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (the relationship of the Semantic Grid to the Grid is meant to mirror that between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.