18,984 research outputs found

    A Multi-channel Application Framework for Customer Care Service Using Best-First Search Technique

    It has become imperative to find a solution to the dissatisfaction in response by mobile service providers when interacting with their customer care centres. Problems faced with Human to Human Interaction (H2H) between customer care centres and their customers include delayed response time, inconsistent solutions to questions or enquiries, and in some cases a lack of dedicated access channels for interaction with customer care centres. This paper presents a framework and development techniques for a multi-channel application providing Human to System (H2S) interaction for the customer care centre of a mobile telecommunication provider. The proposed solution is called Interactive Customer Service Agent (ICSA). Based on single-authoring, it provides three media of interaction with the customer care centre of a mobile telecommunication operator: voice, phone and web browsing. A mathematical search technique called Best-First Search is used to generate accurate results in the search environment.
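The best-first search technique named above can be sketched as follows: nodes are expanded in order of a heuristic estimate of closeness to the goal. The FAQ-topic graph and ranking function below are invented for illustration, not taken from the paper.

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node with
    the lowest heuristic estimate h(node) of distance to the goal."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            # Reconstruct the path back to the start node.
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Toy knowledge base: FAQ topics linked by relatedness; h ranks how far
# each topic is from the answer the customer is looking for.
graph = {"billing": ["data", "roaming"], "data": ["apn"],
         "roaming": ["apn"], "apn": []}
rank = {"billing": 3, "data": 2, "roaming": 2, "apn": 0}
path = best_first_search("billing", "apn",
                         lambda n: graph[n], lambda n: rank[n])
```

Unlike A*, greedy best-first ignores the cost already incurred, which makes it fast but not guaranteed optimal.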

    XSRL: An XML web-services request language

    One of the most serious challenges that web-service enabled e-marketplaces face is the lack of formal support for expressing service requests against UDDI-resident web-services in order to solve a complex business problem. In this paper we present a web-service request language (XSRL) developed on the basis of AI planning and the XML database query language XQuery. This framework is designed to handle and execute XSRL requests and is capable of performing planning actions under uncertainty on the basis of refinement and revision as new service-related information is accumulated (via interaction with the user or UDDI) and as execution circumstances necessitate change.
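The refine-and-revise idea can be sketched as a loop in which a partial plan over abstract tasks is progressively bound to concrete services as information arrives. The task and service names below are hypothetical placeholders, not part of XSRL itself.

```python
def refine(plan, discovered):
    """Bind each abstract task to a concrete service if one has been
    discovered; unresolved tasks remain abstract for a later round."""
    return [discovered.get(task, task) for task in plan]

plan = ["find_supplier", "place_order", "arrange_shipping"]

# Round 1: a UDDI-style lookup resolves only two of the tasks.
plan = refine(plan, {"find_supplier": "SupplierLookupService",
                     "place_order": "OrderService"})

# Round 2: new information (e.g. a user choice) binds the rest.
plan = refine(plan, {"arrange_shipping": "ExpressShippingService"})
```

Each pass leaves still-unknown steps open rather than failing, which mirrors planning under uncertainty with incremental revision.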

    Social media analytics: a survey of techniques, tools and platforms

    This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and News services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis, and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discusses the requirements of an experimental computational environment for social media research and presents, as an illustration, the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques presented in this paper are valid at the time of writing (June 2014), but they are subject to change since social media data scraping APIs are rapidly changing.
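A minimal lexicon-based sentiment scorer of the kind often used as a first pass over scraped tweets can be sketched as below. The word lists and example tweets are illustrative assumptions, not material from the survey.

```python
# Tiny positive/negative lexicons (assumed for illustration only).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    """Score a text by counting lexicon hits after light cleaning."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = ["Love the new update, great work!",
          "This outage is terrible, awful support",
          "Release notes published today"]
labels = [sentiment(t) for t in tweets]
```

Real pipelines add tokenization, negation handling and trained classifiers, but the scrape-clean-score structure stays the same.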

    Remote control of devices using an 8-bit embedded XML & dynamic web-server in a SmartHouse environment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University

    This paper focuses on an Embedded System known as "TCP/IC" and its role in the "house of the future" - the SmartHouse. Overall, the aim of the TCP/IC was to design a device which could interact with a user (or AI control system) and allow for the control of various attached peripherals remotely. Although such a device could well be used as a standalone device to aid in home-automation, this paper focuses on its use in a SmartHouse environment - one where a number of these devices are networked and controlled by a central AI. The different technologies and protocols involved in the implementation of the TCP/IC, along with its two primary interfaces, namely HTML (used for user interaction) and XML (used for machine interaction), are also discussed. The reader will also be introduced to Embedded Systems and the various design principles involved in the creation of quality Embedded Systems. Core concepts of home-automation and its logical extension, the SmartHouse, are also covered in detail. Various additional interfaces (e.g. Web, XML, custom-formatted text) are also discussed and compared, as are the results of my work and some ideas for future implementations.
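The dual HTML/XML interface idea can be sketched as follows: the same device state is rendered as HTML for people and XML for machines. This is a desktop-Python sketch using the standard library's `http.server` as a stand-in for the 8-bit embedded web server; the peripheral names are invented.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"lamp": "on", "heater": "off"}

def render_xml(state):
    """Machine interface: device state as XML."""
    items = "".join(f"<peripheral name='{k}'>{v}</peripheral>"
                    for k, v in state.items())
    return f"<device>{items}</device>"

def render_html(state):
    """Human interface: the same state as a simple HTML page."""
    rows = "".join(f"<p>{k}: {v}</p>" for k, v in state.items())
    return f"<html><body>{rows}</body></html>"

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # One URL per interface; both views are generated from STATE.
        if self.path == "/state.xml":
            body, ctype = render_xml(STATE), "application/xml"
        else:
            body, ctype = render_html(STATE), "text/html"
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To serve: HTTPServer(("", 8080), DeviceHandler).serve_forever()
```

Keeping both renderings over one state dictionary is the single-source idea: the machine interface can be consumed by a central controller while the HTML view serves occupants.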

    Semantic web service automation with lightweight annotations

    Web services, both RESTful and WSDL-based, are an increasingly important part of the Web. With the application of semantic technologies, we can achieve automation of the use of those services. In this paper, we present WSMO-Lite and MicroWSMO, two related lightweight approaches to semantic Web service description, evolved from the WSMO framework. WSMO-Lite uses SAWSDL to annotate WSDL-based services, whereas MicroWSMO uses the hRESTS microformat to annotate RESTful APIs and services. Both frameworks share an ontology for service semantics, together with most of the automation algorithms.
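The discovery step that such annotations enable can be sketched generically: services carry sets of ontology term URIs, and a request matches any service whose annotations cover the required terms. The URIs and service names below are illustrative placeholders, not real WSMO-Lite identifiers.

```python
# Catalog of services annotated with ontology term URIs (all invented).
services = {
    "HotelBooking": {"http://example.org/onto#Booking",
                     "http://example.org/onto#Hotel"},
    "FlightSearch": {"http://example.org/onto#Search",
                     "http://example.org/onto#Flight"},
}

def discover(required, catalog):
    """Return services whose annotation set covers all required terms."""
    return [name for name, terms in catalog.items() if required <= terms]

matches = discover({"http://example.org/onto#Hotel"}, services)
```

Because both WSMO-Lite and MicroWSMO annotate against the same service ontology, a matcher like this can run unchanged over WSDL-based and RESTful services.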

    Pathways: Augmenting interoperability across scholarly repositories

    In the emerging eScience environment, repositories of papers, datasets, software, etc., should be the foundation of a global and natively-digital scholarly communications system. The current infrastructure falls far short of this goal. Cross-repository interoperability must be augmented to support the many workflows and value-chains involved in scholarly communication. This will not be achieved through the promotion of a single repository architecture or content representation, but instead requires an interoperability framework to connect the many heterogeneous systems that will exist. We present a simple data model and service architecture that augments repository interoperability to enable scholarly value-chains to be implemented. We describe an experiment that demonstrates how the proposed infrastructure can be deployed to implement the workflow involved in the creation of an overlay journal over several different repository systems (Fedora, aDORe, DSpace and arXiv).
    Comment: 18 pages. Accepted for International Journal on Digital Libraries special issue on Digital Libraries and eScience.
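The overlay-journal idea can be sketched with a minimal data model in which an article points at items held in different repositories rather than copying them. The classes, repository names and identifiers below are invented for illustration, not the paper's actual data model.

```python
from dataclasses import dataclass

@dataclass
class RepositoryItem:
    repository: str   # hosting system, e.g. "arXiv" or "DSpace"
    identifier: str   # repository-local identifier (invented here)

@dataclass
class OverlayArticle:
    title: str
    sources: list     # RepositoryItems aggregated across repositories

article = OverlayArticle(
    title="Example overlay article",
    sources=[RepositoryItem("arXiv", "example-id-1"),
             RepositoryItem("DSpace", "example-id-2")],
)

# The overlay layer only references; each repository keeps its content.
repos = {s.repository for s in article.sources}
```

The point of such a model is that value-chain services (review, aggregation, citation) operate over references, so heterogeneous repositories need only expose stable identifiers.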