
    Designing And Implementing An Online WebGIS-Based Decision Support System

    This paper provides a market-analysis solution by designing and implementing an online decision support system (DSS) for business decision makers in the tobacco industry in China. The procedure uses data, information, and software from Web-based Geographical Information Systems (GIS) to generate online analysis, mapping, and visualisation systems. These procedures are integrated and synchronised with market-analysis techniques and customer relationship management (CRM) systems. By integrating the two techniques, a WebGIS-based tobacco market information system is presented to demonstrate the significance of WebGIS in the field of market analysis. Specifically, to meet the needs of market practitioners (retailers, distributors, and industry authorities) in understanding the current market and sales performance, the system consists of four main functional components: communication and administration, current market analysis, CRM and sales/customer analysis, and operational issues. From the perspectives of system design and system usage, the illustration of the system architecture and of the process of marketing-information transmission reveals the benefits this e-commerce tool offers to both system users and the service provider in marketing analysis. On this basis, the fusion of technology enhancement and marketing strategy in business processes is called for and discussed.

    FORETELL: Aggregating Distributed, Heterogeneous Information from Diverse Sources Using Market-based Techniques

    Predicting the outcome of uncertain future events is a task humans frequently undertake when making critical decisions. The process underlying this prediction and decision making is called information aggregation: collating the opinions of different people, over time, about a future event's possible outcome. The information aggregation problem is non-trivial because the information related to future events is distributed spatially and temporally, the information changes dynamically as related events happen, and people's opinions about events' outcomes depend on the information they have access to and the mechanism they use to form opinions from that information. This thesis addresses the problem of distributed information aggregation by building computational models and algorithms for different aspects of information aggregation, so that the most likely outcome of future events can be predicted as accurately as possible. We employ a commonly used market-based framework called a prediction market to formally analyze the process of information aggregation. The behavior of humans performing information aggregation within a prediction market is implemented using software agents, which employ sophisticated algorithms to perform complex calculations on the humans' behalf and aggregate information efficiently.
    We consider five different yet crucial problems related to information aggregation: (i) the effect of variations in the parameters of the information being aggregated, such as its reliability, availability, and accessibility, on the predicted outcome of the event; (ii) improving prediction accuracy by having each human (software agent) build a more accurate model of other humans' behavior in the prediction market; (iii) identifying how various market parameters affect the market's dynamics and accuracy; (iv) applying information aggregation to the domain of distributed sensor information fusion; and (v) aggregating information on an event while considering dissimilar but closely related events in different prediction markets. We verify all of our proposed techniques through analytical results and experiments, using commercially available data from real prediction markets within a simulated, multi-agent prediction market. Our results show that our proposed techniques perform more efficiently than, or comparably with, existing techniques for information aggregation using prediction markets.
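    The abstract does not name the specific market mechanism the thesis uses. A minimal sketch of one widely used prediction-market maker, Hanson's logarithmic market scoring rule (LMSR), shows how agents' trades move an aggregate probability estimate; the liquidity parameter `b` and the trade sizes below are illustrative assumptions, not values from the thesis.

```python
import math

def lmsr_cost(shares, b=100.0):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

def lmsr_prices(shares, b=100.0):
    """Instantaneous price of each outcome; prices sum to 1 and
    can be read as the market's aggregate probability estimate."""
    exps = [math.exp(q / b) for q in shares]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(shares, outcome, amount, b=100.0):
    """Amount an agent pays the market maker to buy `amount`
    shares of `outcome`: C(q_after) - C(q_before)."""
    after = list(shares)
    after[outcome] += amount
    return lmsr_cost(after, b) - lmsr_cost(shares, b)

# Two-outcome market: an agent who believes outcome 0 is likely
# buys shares of it, pushing the aggregate probability up.
shares = [0.0, 0.0]
print(lmsr_prices(shares))   # uniform prior: [0.5, 0.5]
shares[0] += 50.0
print(lmsr_prices(shares))   # outcome 0 now priced above 0.5
```

    Because the price is a softmax over outstanding shares, each trade shifts the market's probability estimate by an amount bounded by the liquidity parameter `b`, which is what makes the mechanism usable for sequential aggregation of many agents' opinions.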

    Post processing of multimedia information - concepts, problems, and techniques

    Currently, most research on multimedia information processing focuses on multimedia information storage and retrieval, especially indexing and content-based access. We argue that multimedia information processing should include one more level: post-processing. Here "post-processing" means the further processing of retrieved multimedia information, which includes fusing multimedia information and reasoning with it to reach new conclusions. In this paper, the three levels of multimedia information processing (storage, retrieval, and post-processing) are discussed. The concepts and problems of multimedia information post-processing are identified, and potential techniques that can be used in post-processing are suggested. By highlighting the problems in multimedia information post-processing, we hope this paper will stimulate further research on this important but neglected topic.

    Learning Sentence-internal Temporal Relations

    In this paper we propose a data-intensive approach for inferring sentence-internal temporal relations. Temporal inference is relevant for practical NLP applications that either extract or synthesise temporal information (e.g., summarisation, question answering). Our method bypasses the need for manual coding by exploiting the presence of markers like "after", which overtly signal a temporal relation. We first show that models trained on main and subordinate clauses connected with a temporal marker achieve good performance on a pseudo-disambiguation task simulating temporal inference (during testing the temporal marker is treated as unseen and the models must select the right marker from a set of possible candidates). Secondly, we assess whether the proposed approach holds promise for the semi-automatic creation of temporal annotations. Specifically, we use a model trained on noisy, approximate data (i.e., main and subordinate clauses) to predict the intra-sentential relations present in TimeBank, a corpus annotated with rich temporal information. Our experiments compare and contrast several probabilistic models differing in their feature space, linguistic assumptions, and data requirements. We evaluate performance against gold-standard corpora and also against human subjects.
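    The pseudo-disambiguation task described in this abstract (hide the temporal marker at test time, pick it back from a candidate set) can be sketched with a toy probabilistic model. The paper's actual models use richer linguistic features and corpus-scale data; the Naive Bayes formulation, the bag-of-words features, and the training sentences below are illustrative assumptions only.

```python
import math
from collections import Counter, defaultdict

# Toy training pairs: (main-clause words, subordinate-clause words, marker).
# Invented examples; real training data is harvested automatically from
# clauses joined by overt temporal markers.
TRAIN = [
    (["she", "left"], ["the", "meeting", "ended"], "after"),
    (["he", "arrived"], ["the", "talk", "started"], "before"),
    (["they", "waited"], ["the", "rain", "stopped"], "until"),
    (["she", "smiled"], ["he", "spoke"], "while"),
    (["he", "called"], ["she", "left"], "after"),
]

def train(pairs):
    """Count marker frequencies and per-marker word frequencies."""
    marker_counts = Counter()
    word_given_marker = defaultdict(Counter)
    for main, sub, marker in pairs:
        marker_counts[marker] += 1
        for w in main + sub:
            word_given_marker[marker][w] += 1
    return marker_counts, word_given_marker

def predict(main, sub, marker_counts, word_given_marker, alpha=1.0):
    """Select the marker maximising P(marker) * prod_w P(w | marker),
    with add-alpha smoothing over the training vocabulary."""
    vocab = {w for c in word_given_marker.values() for w in c}
    total = sum(marker_counts.values())
    best, best_lp = None, float("-inf")
    for m, n in marker_counts.items():
        lp = math.log(n / total)
        denom = sum(word_given_marker[m].values()) + alpha * len(vocab)
        for w in main + sub:
            lp += math.log((word_given_marker[m][w] + alpha) / denom)
        if lp > best_lp:
            best, best_lp = m, lp
    return best

mc, wgm = train(TRAIN)
# Marker hidden at test time; the model must recover it.
print(predict(["he", "called"], ["she", "left"], mc, wgm))  # → after
```

    This mirrors the evaluation setup only in shape: the marker is withheld and scored against a closed candidate set, so accuracy can be measured without any manual temporal annotation.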