
    The BLue Amazon Brain (BLAB): A Modular Architecture of Services about the Brazilian Maritime Territory

    We describe the first steps in the development of an artificial agent focused on the Brazilian maritime territory, a large region within the South Atlantic also known as the Blue Amazon. The "BLue Amazon Brain" (BLAB) integrates a number of services aimed at disseminating information about this region and its importance, functioning as a tool for environmental awareness. The main service provided by BLAB is a conversational facility that deals with complex questions about the Blue Amazon, called BLAB-Chat; its central component is a controller that manages several task-oriented natural language processing modules (e.g., question answering and summarizer systems). These modules have access to an internal data lake as well as to third-party databases. A news reporter (BLAB-Reporter) and a purposely-developed wiki (BLAB-Wiki) are also part of the BLAB service architecture. In this paper, we describe our current version of BLAB's architecture (interface, backend, web services, NLP modules, and resources) and comment on the challenges we have faced so far, such as the lack of training data and the scattered state of domain information. Solving these issues presents a considerable challenge in the development of artificial intelligence for technical domains.
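The controller described above routes each request to one of several task-oriented NLP modules. A minimal sketch of that dispatch pattern is shown below; the class and the task names ("qa", "summarize") are illustrative assumptions, not BLAB's actual API.

```python
# Minimal sketch of a BLAB-Chat-style controller (hypothetical names:
# the registry and its task keys are illustrative, not BLAB's real API).

class Controller:
    """Routes a user request to a registered task-oriented NLP module."""

    def __init__(self):
        self.modules = {}  # task name -> callable handler

    def register(self, task, handler):
        self.modules[task] = handler

    def handle(self, task, text):
        if task not in self.modules:
            return "Sorry, I cannot handle that request."
        return self.modules[task](text)

controller = Controller()
controller.register("qa", lambda q: f"Answer for: {q}")
controller.register("summarize", lambda t: t[:40] + "...")

print(controller.handle("qa", "What is the Blue Amazon?"))
# → Answer for: What is the Blue Amazon?
```

In a fuller system each handler would wrap a real model and consult the internal data lake or third-party databases; the controller itself only needs the routing logic.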

    POLICY PROCESSES SUPPORT THROUGH INTEROPERABILITY WITH SOCIAL MEDIA

    Governments of many countries attempt to increase public participation by exploiting the capabilities and high penetration of the Internet. To this end, they make considerable investments in constructing and operating e-participation websites; however, their use has in general been limited and below expectations. For this reason, governments seeking to widen e-participation should also investigate the exploitation of the numerous user-driven Web 2.0 social media, which have been quite successful in attracting huge numbers of users. This paper describes a methodology for the exploitation of Web 2.0 social media by government organizations in the processes of public policy formulation, through a central platform-toolset providing interoperability with many different social media and enabling content to be posted to and retrieved from them in a systematic, centrally managed, and machine-supported automated manner (through their application programming interfaces (APIs)). The proposed methodology includes the use of 'Policy Gadgets' (Padgets), which are defined as micro web applications presenting policy messages in various popular Web 2.0 social media (e.g. social networks, blogs, forums, news sites, etc.) and collecting users' interactions with them (e.g. views, comments, ratings, votes, etc.). Interaction data can be used as input to policy simulation models estimating the impact of various policy options. The conclusions from the analysis of the APIs of 10 highly popular social media are encouraging: they provide extensive capabilities for publishing content (e.g. data, images, video, links, etc.) and for retrieving relevant user activity and content (e.g. views, comments, ratings, votes, etc.), though their continuous evolution might pose significant difficulties and challenges.
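The Padget idea above hinges on one uniform interface over heterogeneous social-media APIs: publish the same policy message everywhere, then aggregate interactions. The adapter classes and their methods below are assumptions for illustration; real platform APIs differ considerably.

```python
# Hedged sketch of a "Padget": publish a policy message through a common
# adapter interface and aggregate user interactions (views, comments,
# ratings). Adapter names and methods are illustrative assumptions.

from collections import Counter

class PlatformAdapter:
    """Wraps one social medium's API behind a common interface."""

    def __init__(self, name):
        self.name = name
        self._interactions = Counter()

    def post(self, message):
        # A real adapter would call the platform's publishing API here.
        return f"{self.name}:post-ok"

    def record(self, kind, n=1):
        # Stand-in for polling the platform's activity/analytics API.
        self._interactions[kind] += n

    def interactions(self):
        return dict(self._interactions)

class Padget:
    def __init__(self, message, adapters):
        self.message = message
        self.adapters = adapters

    def publish(self):
        return [a.post(self.message) for a in self.adapters]

    def collect(self):
        total = Counter()
        for a in self.adapters:
            total.update(a.interactions())
        return dict(total)

fb, tw = PlatformAdapter("facebook"), PlatformAdapter("twitter")
padget = Padget("Proposed policy: extend bike lanes", [fb, tw])
padget.publish()
fb.record("views", 120); fb.record("comments", 4)
tw.record("views", 80)
print(padget.collect())  # aggregated interaction counts across platforms
```

The aggregated counts are exactly the kind of interaction data the paper proposes feeding into policy simulation models.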

    LEVERAGING TEXT MINING FOR THE DESIGN OF A LEGAL KNOWLEDGE MANAGEMENT SYSTEM

    In today’s globalized world, companies are faced with numerous and continuously changing legal requirements. To ensure that these companies are compliant with legal regulations, law and consulting firms use open legal data published by governments worldwide. With this data pool growing rapidly, the complexity of legal research is strongly increasing. Despite this fact, only a few research papers consider the application of information systems in the legal domain. Against this backdrop, we propose a knowledge management (KM) system that aims at supporting legal research processes. To this end, we leverage the potential of text mining techniques to extract valuable information from legal documents. This information is stored in a graph database, which enables us to capture the relationships between these documents and the users of the system. These relationships and the information from the documents are then fed into a recommendation system which aims at facilitating knowledge transfer within companies. The prototypical implementation of the proposed KM system is based on 20,000 legal documents and is currently being evaluated in cooperation with a Big 4 accounting company.
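The abstract describes recommendations driven by relationships between documents and users. One simple instance of that idea, sketched here with plain dictionaries rather than a real graph database, recommends documents read by users whose reading history overlaps; the document identifiers and the overlap heuristic are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch (not the paper's implementation): user->document
# "read" edges, with a recommendation based on overlapping histories.

from collections import defaultdict

reads = defaultdict(set)  # user -> set of document ids

def recommend(user):
    """Recommend documents read by users with overlapping history."""
    mine = reads[user]
    scores = defaultdict(int)
    for other, docs in list(reads.items()):
        if other == user or not (mine & docs):
            continue  # no shared reading history, skip
        for d in docs - mine:
            scores[d] += 1
    return sorted(scores, key=scores.get, reverse=True)

reads["alice"] = {"doc-gdpr", "doc-tax"}
reads["bob"] = {"doc-gdpr", "doc-audit"}
print(recommend("alice"))  # → ['doc-audit']
```

A graph database generalizes this: document-document edges (e.g. citations) extracted by text mining can be traversed alongside user-document edges in the same query.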

    Classifying Real Money Trading In Virtual World

    Virtual world activities related to the buying and selling of virtual currency, virtual items, and services with real-world money are referred to as Real Money Trading (RMT). Although there is a great deal of evidence for the growth of RMT in virtual worlds, there is also evidence to suggest that many companies are struggling to become involved with RMT. A framework for classifying RMT in virtual worlds is essential for devising successful virtual business strategies. A key component in the process of formulating the optimal competitive strategy is to understand the unique characteristics of RMT and the implications behind those characteristics. This study aims to propose a classification of RMT based upon the characteristics of products and services, the transaction and marketplace, as well as the currency and exchange systems.

    SCARY DARK SIDE OF ARTIFICIAL INTELLIGENCE: A PERILOUS CONTRIVANCE TO MANKIND

    Purpose of Study: The purpose of the study is to investigate the dark side of artificial intelligence, guided by the question of whether AI is programmed to do something destructive or something beneficial. Methodology: A study of different biased super-AIs is carried out to find the dark side of AI. In this paper, a systematic review of literature (SRL) methodology is used, and the data is collected from different MIT Media Lab projects, named “Norman AI” and “Shelley”, and from the COMPAS algorithm. Main Finding: The study finds that if AI is trained in a biased way, it will wreak havoc on mankind. Implications/Applications: The article can help in developing super-AIs which benefit society in a controlled way without any negative aspects. Novelty/originality of the study: Our findings show that biased AI has a negative impact on society.

    A Synthetical Approach for Blog Recommendation Mechanism: Trust, Social Relation, and Semantic Analysis

    A weblog is a good paradigm of an online social network: a web-based, regularly updated journal with dated entries in reverse chronological order, usually with blogrolls on the sidebar that allow bloggers to link to favorite sites they frequently visit. In this study, we propose an elaborate blog recommendation mechanism that combines a trust model, social relations, and semantic analysis, and we illustrate how it can be applied to a prestigious online blogging system in Taiwan, Wretch. Preliminary results of an experimental study reveal some implications, empirically support some theories in the domain of social networking, and show that the proposed recommendation mechanism is quite feasible and promising.
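The mechanism combines three signals: trust, social relation, and semantic similarity. One minimal way to fuse such signals is a weighted linear combination, sketched below; the weights and the linear form are assumptions for illustration, and the paper's actual combination scheme may well differ.

```python
# Hedged sketch: fuse trust, social-relation, and semantic-similarity
# scores into one ranking score. Weights are illustrative assumptions.

def blog_score(trust, social, semantic, w=(0.4, 0.3, 0.3)):
    """Each input in [0, 1]; returns a weighted relevance score."""
    return w[0] * trust + w[1] * social + w[2] * semantic

candidates = {
    "blog_a": blog_score(trust=0.9, social=0.2, semantic=0.8),
    "blog_b": blog_score(trust=0.3, social=0.9, semantic=0.4),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
print(ranked)  # highest combined score first
```

Tuning the weights (or learning them from click data) is where such a mechanism would be evaluated empirically, as the study's experiments suggest.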

    Developing a large semantically annotated corpus

    What would be a good method to provide a large collection of semantically annotated texts with formal, deep semantics rather than shallow? We argue that a bootstrapping approach comprising state-of-the-art NLP tools for parsing and semantic interpretation, in combination with a wiki-like interface for collaborative annotation by experts and a game with a purpose for crowdsourcing, provides the starting ingredients for fulfilling this enterprise. The result is a semantic resource that anyone can edit and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles, rhetorical relations, and presuppositions, into a single semantic formalism: Discourse Representation Theory. Taking texts rather than sentences as the units of annotation results in deep semantic representations that incorporate discourse structure and dependencies. To manage the various (possibly conflicting) annotations provided by experts and non-experts, we introduce a method that stores "Bits of Wisdom" in a database as stand-off annotations.
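Stand-off annotation, as used for the "Bits of Wisdom" above, records judgements about text spans without modifying the text, so conflicting annotations from experts and crowd players can coexist until adjudication. The record fields below are illustrative assumptions, not the project's actual schema.

```python
# Sketch of stand-off annotation storage: each "Bit of Wisdom" points at
# a span of the untouched source text by character offsets. Field names
# are illustrative, not the project's real schema.

from dataclasses import dataclass

@dataclass
class BitOfWisdom:
    doc_id: str
    start: int      # character offset of span start
    end: int        # character offset of span end (exclusive)
    layer: str      # e.g. "thematic-role", "scope"
    value: str
    annotator: str

text = "Mary gave John a book."
bows = [
    BitOfWisdom("d1", 0, 4, "thematic-role", "Agent", "expert-1"),
    BitOfWisdom("d1", 0, 4, "thematic-role", "Recipient", "player-7"),
]

# Conflicting stand-off annotations over the same span are simply
# distinct records; adjudication happens later, outside the text.
spans = {text[b.start:b.end] for b in bows}
print(spans)  # → {'Mary'}
```

Because the text is never edited in place, the same document can carry layers for scope, tense, and rhetorical relations side by side, each as its own set of records.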