1,618 research outputs found

    Bio-Inspired Multi-Agent Technology for Industrial Applications

    Get PDF

    An Approach to Model Checking of Multi-agent Data Analysis

    Full text link
    The paper presents an approach to the verification of a multi-agent data analysis algorithm. We base verification on a correct simulation of the multi-agent system by a finite integer model. For verification we use the model-checking tool SPIN: the agents' protocols are written in the Promela language, and the properties of the multi-agent data analysis system are expressed in linear temporal logic (LTL). We ran several experiments with SPIN and the model. Comment: In Proceedings MOD* 2014, arXiv:1411.345

    The Semantic Grid: A future e-Science infrastructure

    No full text
    e-Science offers a promising vision of how computer and communication technology can support and enhance the scientific process. It does this by enabling scientists to generate, analyse, share and discuss their insights, experiments and results in an effective manner. The underlying computer infrastructure that provides these facilities is commonly referred to as the Grid. At this time, there are a number of grid applications being developed and there is a whole raft of computer technologies that provide fragments of the necessary functionality. However, there is currently a major gap between these endeavours and the vision of e-Science in which there is a high degree of easy-to-use and seamless automation and in which there are flexible collaborations and computations on a global scale. To bridge this practice–aspiration divide, this paper presents a research agenda whose aim is to move from the current state of the art in e-Science infrastructure to the future infrastructure that is needed to support the full richness of the e-Science vision. Here the future e-Science research infrastructure is termed the Semantic Grid (Semantic Grid to Grid is meant to connote a similar relationship to the one that exists between the Semantic Web and the Web). In particular, we present a conceptual architecture for the Semantic Grid. This architecture adopts a service-oriented perspective in which distinct stakeholders in the scientific process, represented as software agents, provide services to one another, under various service level agreements, in various forms of marketplace. We then focus predominantly on the issues concerned with the way that knowledge is acquired and used in such environments, since we believe this is the key differentiator between current grid endeavours and those envisioned for the Semantic Grid.

    Knowledge Engineering Architecture for the Analysis of Organizational Log Data: a software tool for log data analysis

    Get PDF
    Organisational log data are generated by software and can help maintenance teams address issues reported by clients. Once an application is in production, defects and other issues need to be handled; customisation and maintenance of the software can also cause problems that compromise its integrity and functionality. Because these issues occur in a production environment to which the maintenance team has no access, they are difficult to resolve: each issue must be reproduced in a development environment so that the problem can be understood and fixed. Log data from production can help by tracing the actions that led to the issue. The log data contain no private information; they record only the action events that result from software usage. The main objective of this thesis work is to build a framework for an automatic log analyser that assists the maintenance team in addressing software issues. The framework also provides a knowledge management system that allows tacit experience to be registered as explicit knowledge. A prototype was developed to produce metrics and serve as a proof of concept for the framework. It was evaluated in a real environment, in the context of a software migration project, i.e. transferring company business data between databases.

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Full text link
    Socio-economic data mining has a great potential in terms of gaining a better understanding of problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. They can be imagined as laboratories devoted to the gathering and processing of enormous volumes of data on both natural systems such as the Earth and its ecosystem, as well as on human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise by individually customized services, which however should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a world-wide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies or institutions have limits, where the public interest is affected, and public interest is not a sufficient justification to violate human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society. Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c

    A framework for knowledge discovery within business intelligence for decision support

    Get PDF
    Business Intelligence (BI) techniques provide the potential not only to manage collected information efficiently but also to analyse and apply it effectively. Benefiting from research both within industry and academia, BI provides functionality for accessing, cleansing, transforming, analysing and reporting organisational datasets. This creates further opportunities for the data to be explored, assisting organisations in discovering the correlations, trends and patterns hidden within it. This hidden information can provide insight into opportunities to make an organisation more competitive by allowing managers to make more informed decisions and, as a result, to utilise corporate resources optimally. Such insight gives organisations an unrivalled opportunity to remain abreast of market trends. Consequently, BI techniques offer significant opportunity for integration with Decision Support Systems (DSS). The gap identified within the current body of knowledge, which motivated this research, is that no suitable framework for BI exists that can be applied at a meta-level and is therefore tool, technology and domain independent. To address this gap, this study proposes a meta-level framework, ‘KDDS-BI’, which can be applied at an abstract level to structure a BI investigation irrespective of the end user. KDDS-BI not only facilitates the selection of suitable techniques for BI investigations, reducing the reliance upon ad-hoc ‘trial and error’ approaches, but also integrates Knowledge Management (KM) principles to ensure the retention and transfer of knowledge through a structured approach to providing DSS based upon the principles of BI. To evaluate and validate the framework, KDDS-BI has been investigated through three distinct case studies.
    First, KDDS-BI facilitates the integration of BI within direct marketing to provide innovative analysis solutions based upon the most suitable BI technique. Secondly, KDDS-BI is investigated within sales promotion, to facilitate the selection of tools and techniques for more focused in-store marketing campaigns and to increase revenue through the discovery of hidden data. Finally, operations management is analysed within the highly dynamic and unstructured environment of the London Underground Ltd. network, through a unique BI solution for organising and managing resources that increases the efficiency of business processes. The three case studies provide insight into how KDDS-BI structures the integration of BI within business processes; moreover, analysing the performance of KDDS-BI in three independent environments for distinct purposes validates and corroborates the proposed framework and its value to business processes.

    Verification of Multi-Agent Data Analysis Algorithms Using the SPIN Model Checker

    Get PDF
    The paper presents an approach to the formal verification of multi-agent data analysis algorithms for ontology population. The agents of the system correspond to information items of the input data and to the rules of ontology population and data processing. They determine the values of information objects obtained at the preliminary phase of the analysis. Working in parallel, the agents check the syntactic and semantic consistency of tuples of information items. Since the agents operate in parallel, it is necessary to verify some important related properties of the system, such as the property that the controller agent correctly detects system termination. In our approach, the model-checking tool SPIN is used: the agents' protocols are written in the Promela language (the input language of the tool), and the properties of the multi-agent data analysis system are expressed in linear temporal logic (LTL). We carried out several experiments to check this model in various modes of the tool and with various numbers of agents.
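    The termination property described in the abstract can be sketched in Promela roughly as follows. This is a minimal illustrative sketch, not the paper's actual model: the agent behaviour, the `done` counter, and the bound `N` are assumptions made for the example.

    ```promela
    /* Hypothetical sketch of the termination check, not the paper's model. */
    #define N 3                 /* assumed number of worker agents */

    byte done = 0;              /* count of agents that have finished */
    bool terminated = false;    /* set by the controller */

    active [N] proctype Agent() {
        /* ... check consistency of one tuple of information items ... */
        done++
    }

    active proctype Controller() {
        (done == N);            /* block until every agent has reported */
        terminated = true
    }

    /* LTL property: the controller eventually declares termination. */
    ltl finish { <> terminated }
    ```

    A property like this is checked by generating SPIN's verifier from the model (`spin -a`), compiling the generated C source, and running the resulting verifier with the `finish` claim selected.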

    JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights

    Get PDF
    In this paper we present the web platform JURI SAYS, which automatically predicts decisions of the European Court of Human Rights based on communicated cases, which are published by the court early in the proceedings and are often available many years before the final decision is made. Our system therefore predicts future judgements of the court. The platform is available at jurisays.com and shows the predictions compared to the actual decisions of the court. It is automatically updated every month with predictions for new cases. Additionally, the system highlights the sentences and paragraphs that are most important for the prediction (i.e. violation vs. no violation of human rights).

    Incomplete Innovation and the Premature Disruption of Legal Services

    Get PDF
    Article published in the Michigan State Law Review