203 research outputs found

    The value and structuring role of web APIs in digital innovation ecosystems: the case of the online travel ecosystem

    Interfaces play a key role in facilitating the integration of external sources of innovation and in structuring ecosystems. They have been conceptualized as design rules that ensure the interoperability of independently produced modules, with important strategic value for lead firms seeking to attract and control access to complementary assets in platform ecosystems. While meaningful, these theorizations do not fully capture the value and structuring role of web APIs in digital innovation ecosystems. We show this with an empirical study of the online travel ecosystem over the 26 years (1995–2021) after the first Online Travel Agencies (OTAs) were launched. Our findings reveal that web APIs foster a dynamic digital innovation ecosystem with a distributed, networked structure in which multiple actors design and use them. We provide evidence of an ecosystem where decentralized interfaces enable decentralized governance and where interfaces establish not only cooperative relationships but also competitive ones. Instead of locking in complementors, web APIs enable the integration of capabilities from multiple organizations for the co-production of services and products, by interfacing their information systems. Web APIs are important sources of value creation and capture, and are increasingly used to offer or sell services, constituting an important source of revenue.

    A BRIEF REVIEW ON THE ADVANTAGES, HINDRANCES AND ECONOMIC FEASIBILITY OF STIRLING ENGINES AS A DISTRIBUTED GENERATION SOURCE AND COGENERATION TECHNOLOGY

    The present paper aims to provide a brief review of the potential and economic feasibility of the Stirling engine as a distributed generation source and cogeneration technology. Another objective was the determination of hindrances which may be preventing the feasibility of Stirling technology. With these intentions, a search based on a combination of preselected keywords was performed in the Metasearch of CAPES (Brazil's Coordination for the Improvement of Higher Education Personnel). No filters regarding the research period or particular geographical regions were applied; thus publications up to mid-2017 were included and the search was conducted at a global level. Next, papers containing some of the keywords were selected, initially by reading the publications' abstracts. The remaining ones were then further explored and had their relevant information incorporated, according to the scope of this work. It is worth mentioning that other accredited sources which dealt with important aspects of the topic were also included. Furthermore, a table containing some examples of products concerning the application of the Stirling engine as a distributed generation and cogeneration technology is presented. Ultimately, it is concluded that Stirling technology, despite its advantages and suitability for the proposed applications, is not yet commercially feasible, currently having only a minor presence in the market. This scenario can be attributed to the need for further research and technical development as well as cost reduction.

    EBSLG Annual General Conference, 18. - 21.05.2010, Cologne. Selected papers

    On 18–21 May 2010, the Annual General Conference of the European Business Schools Librarians Group (EBSLG) took place at the Universitäts- und Stadtbibliothek (USB) Köln. The EBSLG is a relatively small but exclusive group of library directors and librarians in leadership positions from the libraries of leading business schools. The conference centred on two main themes: the first dealt with library portals and library search engines; the second addressed questions of library organization, such as a library's organizational structure, outsourcing, and relationship management. This proceedings volume contains selected conference contributions.

    Study of result presentation and interaction for aggregated search

    The World Wide Web has always attracted researchers and commercial search engine companies due to the enormous amount of information available on it. "Searching" the web has become an integral part of today's world, and many people rely on it when looking for information. The amount and diversity of information available on the Web have also increased dramatically, prompting researchers and search engine companies to make constant efforts to make this information effectively accessible. Not only is there an increase in the amount and diversity of information available online; users are now often seeking information on broader topics. Users seeking information on broad topics gather it from various information sources (e.g., images, video, news, blogs). For such information requests, not only web results but also results from different document genres and multimedia content become relevant. For instance, users looking for information on "Glasgow" might be interested in web results about Glasgow, a map of Glasgow, images of Glasgow, news about Glasgow, and so on. Aggregated search aims to provide access to this diverse information in a unified manner by aggregating results from different information sources on a single result page, thereby making the information-gathering process for broad topics easier. This thesis explores aggregated search from the users' perspective. It first and foremost focuses on understanding and describing the phenomena related to users' search processes in the context of aggregated search. The goal is to participate in building theories and in understanding constraints, as well as to provide insights into the interface design space. In building this understanding, the thesis focuses on click behavior, information need, source relevance, and the dynamics of search intents.
The understanding comes partly from conducting user studies and partly from analyzing search engine log data. While the thematic (or topical) relevance of documents is important, this thesis argues that the "source type" (source orientation) may also be an important dimension of the relevance space to investigate in aggregated search. Relevance is therefore multi-dimensional (topical and source-oriented) within the context of aggregated search. Results from the study suggest that source orientation was a significant factor in an aggregated search scenario, adding another dimension to the relevance space. The thesis further presents an effective method that combines rule-based and machine learning techniques to identify the source orientation behind a user query. Furthermore, after analyzing log data from a search engine company and conducting user study experiments, several design issues that may arise with respect to the aggregated search interface are identified. To address these issues, suitable design guidelines that can be beneficial from the interface perspective are also suggested. To conclude, the aim of this thesis is to explore emerging aggregated search from the users' perspective, since it is very important for front-end technologies. An additional goal is to provide empirical evidence for the influence of aggregated search on users' search behavior and to identify some of the key challenges of aggregated search. This work uncovers several aspects of aggregated search, provides a foundation for future research in aggregated search, and highlights potential research directions.
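    The hybrid rule-plus-learning idea for identifying the source orientation behind a query can be sketched as follows. This is a minimal illustration only: the verticals, trigger words, and fallback hook are assumptions for the example, not the method actually developed in the thesis.

```python
# Hypothetical sketch of a hybrid (rule-based + learned) classifier that
# guesses the source orientation (vertical) behind a user query. All
# verticals, trigger words, and the fallback hook are illustrative
# assumptions, not the thesis's actual rules or model.

RULES = {
    "image": ("photo", "picture", "pictures", "images"),
    "video": ("video", "trailer", "clip"),
    "news": ("news", "latest", "headlines"),
    "map": ("map", "directions", "route"),
}

def rule_orientation(query):
    """Return a vertical if an explicit trigger word appears, else None."""
    tokens = query.lower().split()
    for vertical, triggers in RULES.items():
        if any(t in tokens for t in triggers):
            return vertical
    return None

def classify(query, fallback_model=None):
    """Apply rules first; fall back to a learned model, else general web."""
    vertical = rule_orientation(query)
    if vertical is not None:
        return vertical
    if fallback_model is not None:
        return fallback_model(query)  # e.g. a trained query classifier
    return "web"

print(classify("images of Glasgow"))   # -> image
print(classify("Glasgow"))             # -> web (no explicit trigger word)
```

    In practice the learned fallback would handle the many queries with no explicit vertical keyword, which is where rules alone fail.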

    Department of Computer Science Activity 1998-2004

    This report summarizes much of the research and teaching activity of the Department of Computer Science at Dartmouth College between late 1998 and late 2004. The material for this report was collected as part of the final report for NSF Institutional Infrastructure award EIA-9802068, which funded equipment and technical staff during that six-year period. This equipment and staff supported essentially all of the department's research activity during that period.

    On-line Metasearch, Pooling, and System Evaluation

    This thesis presents a unified method for the simultaneous solution of three problems in Information Retrieval: metasearch (the fusion of ranked lists returned by retrieval systems to elicit improved performance), efficient system evaluation (the accurate evaluation of retrieval systems with small numbers of relevance judgements), and pooling, or "active sample selection" (the selection of documents for manual judgement in order to develop sample pools of high precision or pools suitable for assessing system quality). The thesis establishes a unified theoretical framework for addressing these three problems and naturally generalizes their solution to the on-line context by incorporating feedback in the form of relevance judgements. The algorithm, Rankhedge for on-line retrieval, metasearch and system evaluation, is the first to address these three problems simultaneously and to generalize their solution to the on-line context. Optimality of the Rankhedge algorithm is developed via Bayesian and maximum entropy interpretations. Results of the algorithm prove to be significantly superior to previous methods when tested over a range of TREC (Text REtrieval Conference) data. In the absence of feedback, the technique equals or exceeds the performance of benchmark metasearch algorithms such as CombMNZ and Condorcet, and it then dramatically improves on this performance during the on-line metasearch process. In addition, the technique generates pools of documents which include more relevant documents and produce more accurate system evaluations than previous techniques. The thesis includes an information-theoretic examination of the original Hedge algorithm as well as its adaptation to the context of ranked lists. The work also addresses the concept of information-theoretic similarity within the Rankhedge context and presents a method for decorrelating the predictor set to improve worst-case performance.
Finally, an information-theoretically optimal method for probabilistic "active sampling" is presented, with possible application to a broad range of practical and theoretical contexts.
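    The multiplicative-weights idea behind Hedge-style online metasearch can be sketched as follows: fuse the systems' ranked lists under a weight per system, then demote systems whose rankings disagree with incoming relevance judgements. The loss function, the penalty factor beta, and the toy ranked lists are illustrative assumptions, not Rankhedge's exact formulation.

```python
# Minimal sketch of Hedge-style on-line metasearch (multiplicative weights
# over retrieval systems). The reciprocal-rank scoring, top-k loss, and
# beta value are assumptions for illustration, not the thesis's algorithm.

from collections import defaultdict

def fuse(ranked_lists, weights):
    """Score each document by the weighted sum of its reciprocal ranks."""
    scores = defaultdict(float)
    for system, ranking in ranked_lists.items():
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += weights[system] / rank
    return sorted(scores, key=scores.get, reverse=True)

def hedge_update(ranked_lists, weights, judged_doc, relevant, beta=0.5):
    """Multiplicatively penalize systems whose top results misjudged the doc."""
    new_weights = {}
    for system, ranking in ranked_lists.items():
        in_top = judged_doc in ranking[:5]
        loss = 0.0 if in_top == relevant else 1.0
        new_weights[system] = weights[system] * (beta ** loss)
    total = sum(new_weights.values())
    return {s: w / total for s, w in new_weights.items()}

lists = {"A": ["d1", "d2", "d3"], "B": ["d4", "d1", "d5"]}
w = {"A": 0.5, "B": 0.5}
print(fuse(lists, w))  # d1 comes first: it appears near the top of both lists
w = hedge_update(lists, w, "d4", relevant=False)
print(w["A"] > w["B"])  # B promoted a non-relevant doc, so it is demoted
```

    Each judgement sharpens the weights, which simultaneously improves the fused list (metasearch), concentrates judging effort on promising documents (pooling), and yields weight-based estimates of system quality (evaluation) — the three problems the thesis unifies.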

    Training by Projects in an Industrial Robotic Application

    This chapter presents a case study of learning environments that produced a technical description of the reconditioning and commissioning of an industrial robotic arm, covering its electronic control, its mechanical design, and its application in kinematics and programming, as a pedagogical tool that strengthens training. Topics are developed didactically within the research incubator, such as the technology implemented to recondition the arm, the modifications made to its electrical and electronic capabilities, the analysis of the initial state of the existing electrical elements, the new devices to be implemented, and the calculations necessary for the reconstruction and adaptation of the arm from an electro-mechanical point of view. This is an effective way to promote research in students' classroom training, encouraging them to take the initiative to access knowledge, with the teacher's guidance, in order to understand the information related to the problem to be solved. The project method strengthens learning and, especially, the construction of knowledge in the dual relationships of the SENA academy and business, and of theory and practice, as a training model.

    Online Data Cleaning

    Data-centric applications have never been more ubiquitous in our lives, e.g., search engines, route navigation and social media. This has brought along a new age where digital data is at the core of many decisions we make, as individuals, e.g., looking for the most scenic route to plan a road trip, or as professionals, e.g., analysing customers' transactions to predict the best time to restock different products. However, the surge in data generation has also led to the creation of massive amounts of dirty data, i.e., inaccurate or redundant data. Using dirty data to inform business decisions comes with dire consequences; for instance, an IBM report estimates that dirty data costs the U.S. $3.1 trillion a year. Dirty data is the product of many factors, which include data entry errors and the integration of several data sources. Data integration of multiple sources is especially prone to producing dirty data. For instance, while individual sources may not contain redundant data, they often carry data that is redundant across each other. Furthermore, different data sources may obey different business rules (sometimes not even known), which makes it challenging to reconcile the integrated data. Even if the data is clean at the time of integration, data updates would compromise its quality over time. There is a wide spectrum of errors that can be found in the data, e.g., duplicate records, missing values, obsolete data, etc. To address these problems, several data cleaning efforts have been proposed, e.g., record linkage to identify duplicate records, data fusion to fuse duplicate data items into a single representation, and enforcing integrity constraints on the data. However, most existing efforts make two key assumptions: (1) data cleaning is done in one shot; and (2) the data is available in its entirety. Those two assumptions do not hold in our age, where data is highly volatile and integrated from several sources.
This calls for a paradigm shift in approaching data cleaning: it has to be made iterative, where data comes in chunks and not all at once. Consequently, cleaning the data should not be repeated from scratch whenever the data changes; instead, it should be done only for the data items affected by the updates. Moreover, the repair should be computed efficiently to support applications where cleaning is performed online (e.g., query-time data cleaning). In this dissertation, we present several proposals to realize this paradigm for two major types of data errors: duplicates and integrity constraint violations. We first present a framework that supports online record linkage and fusion over Web databases. Our system processes queries posted to Web databases. Query results are deduplicated, fused and then stored in a cache for future reference, and the cache is updated iteratively with new query results. This effort makes it possible to perform record linkage and fusion not only efficiently but also effectively, i.e., the cache contains data items seen in previous queries, which are jointly cleaned with incoming query results. To address integrity constraint violations, we propose a novel way to approach Functional Dependency repairs, develop a new class of repairs, and then demonstrate that it is superior to existing efforts in runtime and accuracy. We then show how our framework can be easily tuned to work iteratively to support online applications. We implement a proof-of-concept query answering system to demonstrate the iterative capability of our system.
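    The iterative, chunk-at-a-time idea can be illustrated for integrity constraints: a functional dependency X → Y is monitored as records arrive, so only the new records are checked against cached state rather than re-scanning the whole dataset. This is a minimal sketch of incremental violation detection only; the FD (zip → city) and the records are assumptions for the example, and the dissertation's actual repair algorithm is not reproduced here.

```python
# Illustrative sketch of incremental (online) functional-dependency
# checking: only incoming chunks are inspected, using cached state from
# earlier chunks. The FD and records are hypothetical examples.

class OnlineFDChecker:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
        self.seen = {}  # lhs value -> first observed rhs value

    def ingest(self, chunk):
        """Check only the incoming chunk; return records violating the FD."""
        violations = []
        for record in chunk:
            key, value = record[self.lhs], record[self.rhs]
            if key in self.seen and self.seen[key] != value:
                violations.append(record)  # conflicts with cached state
            else:
                self.seen.setdefault(key, value)
        return violations

checker = OnlineFDChecker("zip", "city")
clean = checker.ingest([{"zip": "10001", "city": "New York"}])
dirty = checker.ingest([{"zip": "10001", "city": "Boston"}])
print(clean)   # [] -- no conflict in the first chunk
print(dirty)   # the Boston record conflicts with the cached New York record
```

    Because the cache persists across chunks, each update costs time proportional to the chunk, not to the accumulated data, which is the property query-time cleaning needs.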

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.