
    Cognitively-inspired Agent-based Service Composition for Mobile & Pervasive Computing

    Automatic service composition in mobile and pervasive computing faces many challenges due to the complex and highly dynamic nature of the environment. Common approaches treat service composition as a decision problem and address it from an optimization perspective, which is not feasible in practice due to the intractability of the problem, the limited computational resources of smart devices, the mobility of service hosts, and the time constraints on tailoring composition plans. Our main contribution is therefore a cognitively-inspired, agent-based service composition model focused on bounded rationality rather than optimality, which allows the system to compensate for limited resources by selectively filtering continuous streams of data. Our approach exhibits features such as distributedness, modularity, emergent global functionality, and robustness, which endow it with the capability to perform decentralized service composition by orchestrating manifold service providers and conflicting goals from multiple users. The evaluation of our approach shows promising results when compared against state-of-the-art service composition models.
    Comment: This paper will appear at AIMS'19 (International Conference on Artificial Intelligence and Mobile Services) in June 2019.
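    The strategy described in the abstract trades optimality for tractability. As a rough illustration only (not the authors' model; the service names, shortlist heuristic, and step budget below are invented), a bounded-rational composer might greedily chain services under a fixed budget rather than search the full plan space:

```python
# Hypothetical sketch of bounded-rational service composition: each step
# greedily picks a provider from a small filtered shortlist that advances
# the goal, under a fixed step budget, instead of optimizing globally.
from dataclasses import dataclass

@dataclass
class Service:
    provider: str
    inputs: frozenset   # data the service consumes
    outputs: frozenset  # data the service produces

def compose(goal, available, services, max_steps=10, shortlist=3):
    """Greedily chain services until `goal` is covered or the budget runs out."""
    plan, have = [], set(available)
    for _ in range(max_steps):
        if goal <= have:
            return plan                       # goal satisfied
        # Bounded rationality: consider only a short list of candidates whose
        # inputs are already available, not the full service space.
        candidates = [s for s in services if s.inputs <= have][:shortlist]
        useful = [s for s in candidates if s.outputs - have]
        if not useful:
            return None                       # give up rather than search exhaustively
        best = max(useful, key=lambda s: len(s.outputs & goal))
        plan.append(best)
        have |= best.outputs
    return None

services = [
    Service("phone", frozenset({"gps"}), frozenset({"location"})),
    Service("cloud", frozenset({"location"}), frozenset({"route"})),
]
print(compose(goal={"route"}, available={"gps"}, services=services))
```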

    The Size Conundrum: Why Online Knowledge Markets Can Fail at Scale

    In this paper, we interpret the community question answering websites on the StackExchange platform as knowledge markets, and analyze how and why these markets can fail at scale. A knowledge market framing allows site operators to reason about market failures and to design policies to prevent them. Our goal is to provide insight into large-scale knowledge market failures through an interpretable model. We explore a set of interpretable economic production models on a large empirical dataset to analyze the dynamics of content generation in knowledge markets. Among these, the Cobb-Douglas model best explains the empirical data and provides an intuitive explanation for content generation through the concepts of elasticity and diminishing returns. Content generation depends on user participation and also on how specific types of content (e.g. answers) depend on other types (e.g. questions). We show that these factors of content generation have constant elasticity: a percentage increase in any of the inputs leads to a constant percentage increase in the output. Furthermore, markets exhibit diminishing returns: the marginal output decreases as an input is incrementally increased. Knowledge markets also vary in their returns to scale: the increase in output resulting from a proportionate increase in all inputs. Importantly, many knowledge markets exhibit diseconomies of scale: measures of market health (e.g., the percentage of questions with an accepted answer) decrease as a function of the number of participants. The implications of our work are two-fold: site operators ought to design incentives as a function of system size (number of participants), and the market lens should shed insight into complex dependencies among different content types and participant actions in general social networks.
    Comment: The 27th International Conference on World Wide Web (WWW), 2018.
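    The Cobb-Douglas model referenced above has the form y = A * x1^a * x2^b, where the exponents a and b are the constant elasticities and a + b gives the returns to scale (a + b < 1 indicates diseconomies of scale in the inputs). As a minimal sketch, using synthetic data rather than the paper's StackExchange dataset, the elasticities can be estimated with a log-log least-squares fit:

```python
# Illustrative Cobb-Douglas fit: answers = A * askers^a * answerers^b.
# Constant elasticity means a 1% rise in an input yields a fixed-% rise in
# output. Data below are synthetic, not the paper's empirical dataset.
import numpy as np

rng = np.random.default_rng(0)
askers = rng.uniform(100, 10_000, size=500)
answerers = rng.uniform(100, 10_000, size=500)
true_a, true_b, A = 0.6, 0.3, 2.0
answers = A * askers**true_a * answerers**true_b * rng.lognormal(0, 0.1, 500)

# Taking logs linearizes the model: log y = log A + a*log x1 + b*log x2.
X = np.column_stack([np.ones(500), np.log(askers), np.log(answerers)])
coef, *_ = np.linalg.lstsq(X, np.log(answers), rcond=None)
log_A, a, b = coef
print(f"elasticities a={a:.2f}, b={b:.2f}, returns to scale a+b={a + b:.2f}")
```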

    A peer-to-peer infrastructure for resilient web services

    This work is funded by GR/M78403 “Supporting Internet Computation in Arbitrary Geographical Locations”, GR/R51872 “Reflective Application Framework for Distributed Architectures”, and Nuffield Grant URB/01597/G “Peer-to-Peer Infrastructure for Autonomic Storage Architectures”.
    This paper describes an infrastructure for the deployment and use of Web Services that are resilient to the failure of the nodes that host those services. The infrastructure presents a single interface that provides mechanisms for users to publish services and to find hosted services. It supports the autonomic deployment of services and the brokerage of hosts on which services may be deployed. Once deployed, services are autonomically managed in a number of aspects, including load balancing, availability, failure detection and recovery, and lifetime management. Services are published and deployed with associated metadata describing the service type; this same metadata may be used subsequently by interested parties to discover services. The infrastructure uses peer-to-peer (P2P) overlay technologies to abstract over the underlying network to deploy and locate instances of those services. It takes advantage of the P2P network to replicate the directory services used to locate service instances (for using a service), Service Hosts (for deployment of services), and Autonomic Managers (which manage the deployed services). The P2P overlay network is itself constructed using novel Web Services-based middleware and a variation of the Chord P2P protocol, which is self-managing.
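    The overlay is built on a Chord variant. As a hedged illustration of the general Chord idea (not the paper's middleware; the ring size and host names below are made up), consistent hashing places both hosts and service keys on a ring, and a key is owned by its clockwise successor:

```python
# Minimal Chord-style key lookup on a consistent-hash ring, sketching how a
# P2P overlay can map a service name to its responsible host. This is the
# textbook Chord idea, not the paper's Web Services-based middleware.
import hashlib

RING_BITS = 16  # tiny ring for the example; Chord typically uses 160 bits

def ring_hash(key: str) -> int:
    return int.from_bytes(hashlib.sha1(key.encode()).digest(), "big") % (2**RING_BITS)

def successor(node_ids, point):
    """First node clockwise from `point`, wrapping around the ring."""
    node_ids = sorted(node_ids)
    for n in node_ids:
        if n >= point:
            return n
    return node_ids[0]

hosts = ["hostA.example", "hostB.example", "hostC.example"]
ring = {ring_hash(h): h for h in hosts}
service_key = ring_hash("resilient-web-service/directory")
owner = ring[successor(ring.keys(), service_key)]
print(f"service key {service_key} -> {owner}")
```

    Replicating the directory entries onto the next few successors on the ring is what makes lookups survive the failure of individual hosts.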

    Do System Test Cases Grow Old?

    Companies increasingly use either manual or automated system testing to ensure the quality of their software products. As a system evolves and is extended with new features, the test suite typically grows as new test cases are added. To ensure software quality throughout this process, the test suite is continuously executed, often on a daily basis. It seems likely that newly added tests would be more likely to fail than older tests, but this has not been investigated in any detail on large-scale, industrial software systems, and it is not clear which methods should be used to conduct such an analysis. This paper proposes three main concepts that can be used to investigate aging effects in the use and failure behavior of system test cases: test case activation curves, test case hazard curves, and test case half-life. To evaluate these concepts and the type of analysis they enable, we apply them to an industrial software system containing more than one million lines of code. The data sets come from a total of 1,620 system test cases executed more than half a million times over a period of two and a half years. For the investigated system we find that system test cases stay active as they age but do grow old; they go through an infant-mortality phase with higher failure rates that then decline over time. The test case half-life is between 5 and 12 months for the two studied data sets.
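    To make the hazard-curve and half-life concepts concrete, the sketch below buckets executions by test-case age, estimates a failure rate per bucket, and reads off the age at which that rate halves from its peak. The data and decay constants are synthetic, not taken from the paper's industrial datasets:

```python
# Sketch of a test-case hazard curve and half-life on synthetic data:
# bucket executions by the age of the test at execution time, estimate the
# failure rate per age bucket, and find where the rate drops to half its peak.
import numpy as np

rng = np.random.default_rng(1)
ages_months = rng.integers(0, 30, size=50_000)      # test age at each execution
# Assumed infant mortality: failure probability decays with test age.
p_fail = 0.10 * np.exp(-ages_months / 8) + 0.01
failed = rng.random(50_000) < p_fail

hazard = np.array([failed[ages_months == m].mean() for m in range(30)])
peak = hazard.max()
half_life = int(np.argmax(hazard <= peak / 2))      # first bucket below half the peak
print(f"peak failure rate {peak:.3f}, half-life ~{half_life} months")
```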

    Upgrade of the CEDIT database of earthquake-induced ground effects in Italy

    The database of the Italian Catalogue of Earthquake-Induced Ground Failures (CEDIT) was recently upgraded and updated to 2017 as part of a work in progress focused on the following issues: i) reorganization of the geo-database architecture; ii) revision of the earthquake parameters from the CFTI5 and CPTI15 catalogues by INGV; iii) addition of new data on effects induced by earthquakes that occurred from 2009 to 2017; iv) attribution of a macroseismic intensity value to each effect site, according to the CFTI5 and CPTI15 catalogues by INGV. The revised CEDIT database aims at achieving: i) the optimization of the CEDIT catalogue in order to increase its usefulness for both public institutions and individual users; ii) a new architecture of the geo-database in view of a future implementation of the online catalogue, making it usable via a web app, also to support post-event detection and surveying activities. Here we illustrate the new geo-database design and discuss the statistics that can be derived from the updated database. Statistical analysis of the data recorded in the last update of CEDIT (to 2017), compared with the analysis of the previous update, shows that:
    - landslides are the most represented ground effects (55%), followed by ground cracks (23%);
    - the MCS intensity (IMCS) distribution of the effect sites peaks at IMCS class 8, although a second frequency peak appears at IMCS class 7 for surface faulting effects only;
    - for all types of induced ground effects, the frequency of effects decreases with increasing epicentral distance.
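    As a small illustration of the kind of statistics the catalogue supports (the field names and records below are hypothetical, not the actual CEDIT schema), effect-type shares and counts per IMCS class can be aggregated directly from the records:

```python
# Hypothetical aggregation over a CEDIT-like record set: share of each
# induced-effect type and counts per MCS intensity class. Schema is assumed.
from collections import Counter

records = [
    {"effect": "landslide", "imcs": 8, "epicentral_km": 12},
    {"effect": "ground crack", "imcs": 7, "epicentral_km": 5},
    {"effect": "landslide", "imcs": 8, "epicentral_km": 30},
    {"effect": "surface faulting", "imcs": 7, "epicentral_km": 2},
]

by_type = Counter(r["effect"] for r in records)
total = sum(by_type.values())
for effect, n in by_type.most_common():
    print(f"{effect}: {100 * n / total:.0f}%")

by_imcs = Counter(r["imcs"] for r in records)
print("effects per IMCS class:", dict(sorted(by_imcs.items())))
```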