1,258 research outputs found
Intelligent Agents - a Tool for Modeling Intermediation and Negotiation Processes
Many contemporary problems in society and the economy require advanced capabilities for evaluating situations and alternatives and for decision making, often calling for the intervention of human agents who are experts in negotiation and intermediation. Moreover, many problems require the application of standard procedures and activities to carry out typical socio-economic processes (for example, employing standard auctions for the procurement or supply of goods, or convenient intermediation to access resources and information). This paper focuses on enhancing knowledge about intermediation and negotiation processes in order to improve the quality of services and optimize the performance of business agents, using new computational methods that combine formal methods with the intelligent-agents paradigm. Given their modularity and extensibility, agent systems allow easy, standardized, and seamless integration of negotiation protocols and strategies by employing declarative and formal representations specific to computer science. Keywords: business processes, intelligent agents, intermediation and negotiation, formal models.
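As an illustration of the auction-style intermediation the abstract mentions, the following is a minimal sketch of a sealed-bid reverse auction among supplier agents. All names, the bidding strategy, and the awarding rule are assumptions for illustration, not taken from the paper:

```python
# Sketch (assumption, not the paper's method): a sealed-bid reverse
# auction for procurement, mediated by an intermediary that collects
# bids and awards the contract to the lowest bidder.
from dataclasses import dataclass

@dataclass
class SupplierAgent:
    name: str
    cost: float    # private cost of fulfilling the contract
    margin: float  # desired profit margin

    def bid(self) -> float:
        # Toy strategy: bid cost plus a fixed margin.
        return self.cost * (1 + self.margin)

def run_procurement_auction(suppliers):
    """Intermediary collects sealed bids and awards to the lowest bidder."""
    bids = {s.name: s.bid() for s in suppliers}
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

suppliers = [
    SupplierAgent("A", cost=100.0, margin=0.10),
    SupplierAgent("B", cost=95.0, margin=0.20),
    SupplierAgent("C", cost=110.0, margin=0.05),
]
winner, price = run_procurement_auction(suppliers)
# A bids 110.0, B bids 114.0, C bids 115.5 -> A wins at 110.0
```

The point of the sketch is the modularity the abstract emphasizes: the bidding strategy and the awarding rule are separate pieces that can be swapped independently.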
Improving Access to Information through Conceptual Classification
The overwhelming amount of information on the Web has left users unable to find what they need and dissatisfied with available information searching and filtering systems. Moreover, the information is distributed over many websites, and a large part of it (for example, news) is updated frequently; keeping track of changes in such a huge amount of information is a real problem for users.

Because information has a great impact on people's lives and on business decision-making, much research has been done on efficient ways of accessing and analyzing it. This thesis proposes a conceptual classification and ranking method that gives users better access to a wider range of information and also helps in analyzing global trends in various fields. To demonstrate the effectiveness of the method, a feed aggregator system was developed and evaluated as part of this thesis.

To improve the flexibility and adaptability of the system, we adopted an agent-oriented software engineering architecture, which also facilitated the development process. In addition, since the system stores and processes large amounts of information and therefore requires substantial resources, a cloud platform service was used to deploy the application. The result was a cloud-based software service that benefited from on-demand resources.

To take advantage of the features of public cloud computing platforms that support the agent-oriented design, the multi-agent system was implemented by mapping the agents to cloud computing services. In addition, the cloud queue service provided by some cloud providers, such as Microsoft and Amazon, was used to implement indirect communication among the agents in the multi-agent system. M.S.
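The queue-based indirect communication described above can be sketched as follows, with Python's in-process `queue.Queue` standing in for a cloud queue service such as Azure Queue Storage or Amazon SQS. The agent names and the toy classification rule are assumptions for illustration:

```python
# Sketch (assumption): agents communicate indirectly through a shared
# message queue, so the producer never addresses the consumer directly.
import json
import queue

feed_queue = queue.Queue()  # stand-in for a cloud queue service

def fetcher_agent(items):
    """Fetches feed items and enqueues them without knowing the consumer."""
    for item in items:
        feed_queue.put(json.dumps(item))

def classifier_agent():
    """Dequeues items and assigns a toy category by keyword match."""
    results = []
    while not feed_queue.empty():
        item = json.loads(feed_queue.get())
        category = "economy" if "market" in item["title"].lower() else "other"
        results.append((item["title"], category))
    return results

fetcher_agent([{"title": "Stock market rallies"},
               {"title": "Local festival opens"}])
classified = classifier_agent()
# -> [("Stock market rallies", "economy"), ("Local festival opens", "other")]
```

Because the queue decouples the agents in time and space, either side can be scaled or redeployed independently, which is the property that makes the mapping to cloud services attractive.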
Personalized Text Categorization Using a MultiAgent Architecture
In this paper, we present a system able to retrieve contents deemed relevant to users through a text categorization process. The system is built on a generic multiagent architecture that supports the implementation of applications aimed at (i) retrieving heterogeneous data spread among different sources (e.g., generic HTML pages, news, blogs, forums, and databases); (ii) filtering and organizing them according to personal interests explicitly stated by each user; and (iii) providing adaptation techniques to improve and refine each user's profile over time. In particular, the implemented multiagent system creates personalized press reviews from online newspapers. Preliminary results are encouraging and highlight the effectiveness of the approach.
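The filtering step (ii) above can be illustrated by scoring retrieved items against a user's explicitly stated interests. This is a minimal term-overlap sketch under assumed data, not the paper's actual categorization algorithm:

```python
# Sketch (assumption): rank items by the fraction of the user's stated
# interest terms that appear in each item's text.
import re

def tokenize(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def relevance(item_text, profile_terms):
    """Fraction of profile terms present in the item."""
    tokens = tokenize(item_text)
    return sum(t in tokens for t in profile_terms) / len(profile_terms)

profile = {"football", "league", "match"}  # user-stated interests
items = [
    "League match report: a dramatic football final",
    "New recipe ideas for the weekend",
]
ranked = sorted(items, key=lambda it: relevance(it, profile), reverse=True)
# The sports item scores 3/3, the recipe item 0/3, so it ranks first.
```

A real system would refine the profile over time, as point (iii) describes, rather than keep it as a fixed term set.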
D4.6.1.1 Report on ontology mediation for case studies v.1
WP4 Ontology Mediation Report. The aim of this deliverable is to identify the requirements for mediation for the SEKT case studies. The data sources from each case study are investigated, together with the relationships between them and the scenarios in which two or more of these data sources are used in conjunction, i.e., where data integration is needed. The requirements for mediation are identified based on these scenarios. We should note that, as a result of our analysis, we identified the opportunity for some architectural changes in two of the case studies. The new data-source landscapes proposed, together with guidelines about different mediation approaches, should serve as a pillar for the further development of the case studies. The identified requirements also show that the main mediation functionality on which the tools developed by WP4 should focus is ontology alignment.
Receiver-Oriented 'Pull' Model for RosettaNet Trade Document Interchange.
A major challenge in integrating trading partners' processes is effective document interchange. Traditional business-to-business (B2B) process integration is based on a 'push' model, in which documents are pushed from senders to receivers, as in e-mailing and electronic data interchange.
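The contrast with a receiver-oriented 'pull' model can be sketched as follows: the sender only publishes documents, and the receiver fetches them on its own schedule. Class and document names are assumptions for illustration, not part of the RosettaNet specification:

```python
# Sketch (assumption): in a pull model the receiver drives the exchange,
# polling the sender's repository instead of having documents delivered.
class SenderRepository:
    """Sender publishes trade documents; it never delivers them."""
    def __init__(self):
        self._docs = []

    def publish(self, doc):
        self._docs.append(doc)

    def fetch_since(self, cursor):
        """Receiver-driven: return documents added after `cursor`."""
        return self._docs[cursor:], len(self._docs)

class Receiver:
    def __init__(self, repo):
        self.repo = repo
        self.cursor = 0
        self.inbox = []

    def poll(self):
        docs, self.cursor = self.repo.fetch_since(self.cursor)
        self.inbox.extend(docs)

repo = SenderRepository()
repo.publish("PO-1001")
receiver = Receiver(repo)
receiver.poll()          # receiver pulls PO-1001
repo.publish("PO-1002")
receiver.poll()          # only the new document is pulled
# receiver.inbox == ["PO-1001", "PO-1002"]
```

The cursor makes the pull idempotent from the receiver's side: polling twice without new publications fetches nothing, which is what lets the receiver control timing and load.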
Geospatial database generation from digital newspapers: use case for risk and disaster domains.
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. The generation of geospatial databases is expensive in terms of time and money, and many geospatial users still lack spatial data. Geographic Information Extraction and Retrieval systems can alleviate this problem. This work proposes a method to populate spatial databases automatically from the Web, applying the approach to the risk and disaster domain and taking digital newspapers as a data source. News stories in digital newspapers contain rich thematic information that can be attached to places. The use case of automating spatial database generation is applied to Mexico using placenames. In Mexico, small and medium disasters occur most years; the facts about them are frequently mentioned in newspapers but rarely stored as records in national databases, so it is difficult to estimate the human and material losses of those events.

This work presents two ways to extract information from digital news, using natural language techniques to distill the text and national gazetteer codes to achieve placename-attribute disambiguation. Two outputs are presented: a general one that exposes highly relevant news, and another that attaches attributes of interest to placenames. The latter achieved a 75% thematic-relevance rate under qualitative analysis.
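The second output above, attaching attributes of interest to placenames, can be sketched by matching news text against a gazetteer and a list of disaster terms. The gazetteer entries, codes, and term list here are toy assumptions, not the Mexican national gazetteer:

```python
# Sketch (assumption): attach a disaster-type attribute to each
# placename recognized via a (toy) gazetteer lookup.
import re

GAZETTEER = {"oaxaca": "MX-OAX", "puebla": "MX-PUE"}  # hypothetical codes
DISASTER_TERMS = {"flood", "earthquake", "landslide"}

def extract_records(news_text):
    """Return (gazetteer_code, disaster_term) pairs found in the text."""
    tokens = re.findall(r"[a-z]+", news_text.lower())
    places = [GAZETTEER[t] for t in tokens if t in GAZETTEER]
    events = [t for t in tokens if t in DISASTER_TERMS]
    return [(p, e) for p in places for e in events]

records = extract_records("Heavy rains caused a flood in Oaxaca on Monday.")
# -> [("MX-OAX", "flood")]
```

The gazetteer code is what makes the record a usable database row: it disambiguates the placename and links the extracted attribute to a stable spatial identifier.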