
    BUSINESS INTELLIGENT AGENTS FOR ENTERPRISE APPLICATION

    Fierce competition in an increasingly crowded market and frequent changes in consumer requirements are the main forces pushing companies to change their current organization and management. One solution is to move towards open, virtual architectures, which requires addressing business methods and technologies using distributed multi-agent systems. Intelligent agents are one of the most important areas of artificial intelligence, dealing with the development of hardware and software systems able to reason, learn, understand natural language, speak, make decisions, recognize objects in their working environment, and so on. In this paper we present some aspects of smart business, intelligent agents, intelligent systems and intelligent system models, and we especially emphasize their role in managing business processes, which have become highly complex systems in permanent change to meet the requirements of timely decision making. The purpose of this paper is to argue that modern business cannot do without the integration of Business Process Management, Web Services and intelligent agents.
    Keywords: business intelligence, intelligent agents, intelligent systems, management, enterprise, web services
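
    As a minimal illustration of the integration argued for above, the sketch below (in Python, with entirely hypothetical class and service names) shows an intelligent agent that perceives a business event, applies a simple decision rule and acts through a web-service-like interface; it is a toy model under assumptions, not the architecture proposed in the paper.

        # Minimal sketch (names are hypothetical) of an agent mediating between
        # a business process event and a web service.
        from dataclasses import dataclass

        @dataclass
        class OrderEvent:
            order_id: str
            amount: float

        class PricingService:
            """Stand-in for an external web service exposed to the process."""
            def discount_for(self, amount: float) -> float:
                return 0.1 if amount > 1000 else 0.0

        class PurchasingAgent:
            """Perceive an event, reason over a simple rule, act via the service."""
            def __init__(self, service: PricingService):
                self.service = service

            def handle(self, event: OrderEvent) -> str:
                discount = self.service.discount_for(event.amount)
                if discount > 0:
                    return f"approve {event.order_id} with {discount:.0%} discount"
                return f"approve {event.order_id} at list price"

        agent = PurchasingAgent(PricingService())
        print(agent.handle(OrderEvent("PO-17", 1500.0)))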

    Querying Large Physics Data Sets Over an Information Grid

    Optimising use of the Web (WWW) for LHC data analysis is a complex problem and illustrates the challenges arising from the integration of, and computation across, massive amounts of information distributed worldwide. Finding the right piece of information can, at times, be extremely time-consuming, if not impossible. So-called Grids have been proposed to facilitate LHC computing, and many groups have embarked on studies of data replication, data migration and networking philosophies. Other aspects, such as the role of 'middleware' for Grids, are emerging as requiring research. This paper positions the need for appropriate middleware that enables users to resolve physics queries across massive data sets. It identifies the role of meta-data for query resolution and the importance of Information Grids, rather than just Computational or Data Grids, for high-energy physics analysis. The paper identifies software being implemented at CERN to enable the querying of very large collaborating HEP data sets, initially employed for the construction of CMS detectors.
    Comment: 4 pages, 3 figures
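
    To make the role of meta-data in query resolution concrete, here is a hedged sketch (not the CERN software; the catalogue entries and field names are invented) of how a query over physics attributes could be resolved against a meta-data catalogue to locate data sets replicated across grid sites.

        # Illustrative sketch: resolve a query against a meta-data catalogue to
        # find where matching data sets live on the grid. All entries are made up.
        metadata_catalogue = [
            {"dataset": "cms_ecal_run_001", "site": "cern.ch",  "detector": "ECAL", "energy_gev": 450},
            {"dataset": "cms_ecal_run_002", "site": "fnal.gov", "detector": "ECAL", "energy_gev": 900},
            {"dataset": "cms_hcal_run_001", "site": "in2p3.fr", "detector": "HCAL", "energy_gev": 900},
        ]

        def resolve(query: dict) -> list[str]:
            """Return dataset locations whose meta-data satisfy all query predicates."""
            hits = [entry for entry in metadata_catalogue
                    if all(entry.get(key) == value for key, value in query.items())]
            return [f"{e['dataset']} @ {e['site']}" for e in hits]

        # Example: find ECAL data taken at 900 GeV, wherever it is replicated.
        print(resolve({"detector": "ECAL", "energy_gev": 900}))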

    A Survey on Service Composition Middleware in Pervasive Environments

    The development of pervasive computing has shed light on a challenging problem: how can services be composed dynamically in heterogeneous and highly changing environments? We propose a survey that defines service composition as a sequence of four steps: translation, generation, evaluation and, finally, execution. With this simple yet powerful model we describe the major service composition middleware. We then classify these middleware according to pervasive requirements - interoperability, discoverability, adaptability, context awareness, QoS management, security, spontaneous management, and autonomous management. The classification highlights what has been done and what remains to be done to develop service composition in pervasive environments.
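
    The four-step model above (translation, generation, evaluation, execution) can be pictured with a short sketch; the functions and the toy service registry below are illustrative assumptions, not any particular middleware.

        # Toy pipeline for the four composition steps; names and registry are invented.
        def translate(user_request: str) -> list[str]:
            """Translate a user-level request into abstract service capabilities."""
            return user_request.lower().split(" then ")

        def generate(capabilities: list[str], registry: dict) -> list[list[str]]:
            """Generate candidate compositions from concrete services in the registry."""
            return [[registry[c] for c in capabilities if c in registry]]

        def evaluate(candidates: list[list[str]]) -> list[str]:
            """Pick the 'best' candidate; here, simply the most complete plan."""
            return max(candidates, key=len)

        def execute(plan: list[str]) -> None:
            for service in plan:
                print(f"invoking {service}")

        registry = {"locate printer": "PrinterDiscoveryService",
                    "print document": "PrintService"}
        execute(evaluate(generate(translate("Locate printer THEN print document"), registry)))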

    From process logic to business logic - A cognitive approach to business process management

    The unpredictability of business activities means that business process management should provide a way to adapt to change. The traditional workflow approach, based on predefined process logic, offers little support for today's complex and dynamic business environment. Therefore, a cognitive approach is proposed to help manage complex business activities, based on continuous awareness of situations and real-time decisions about activities. In this approach, the business environment is captured as the events that have occurred and the state of tasks and resources; business logic, covering process routing, operational constraints, exception handling and business strategy, is used to determine which actions are appropriate for the current situation. By extending process management from process logic to business logic, the methodology offers flexibility, agility and adaptability in complex business process management. © 2005 Elsevier B.V. All rights reserved.
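
    As a rough sketch of the cognitive approach described above, the following toy rules match a situation (an event plus task and resource state) to an action; the rule contents and field names are invented for illustration and are not taken from the paper.

        # Situation-aware action selection: the first matching business rule wins.
        situation = {
            "event": "payment_overdue",
            "task_state": {"invoice_review": "done", "shipping": "pending"},
            "resources": {"credit_limit_exceeded": True},
        }

        business_rules = [
            (lambda s: s["event"] == "payment_overdue" and s["resources"]["credit_limit_exceeded"],
             "suspend_shipping_and_escalate"),
            (lambda s: s["event"] == "payment_overdue",
             "send_reminder"),
            (lambda s: True,
             "continue_process_as_planned"),
        ]

        def decide(situation: dict) -> str:
            """Return the action of the first rule whose condition matches the situation."""
            for condition, action in business_rules:
                if condition(situation):
                    return action

        print(decide(situation))  # -> suspend_shipping_and_escalate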

    Designing Reusable Systems that Can Handle Change - Description-Driven Systems : Revisiting Object-Oriented Principles

    In the age of the Cloud and so-called Big Data, systems must be increasingly flexible, reconfigurable and adaptable to change, in addition to being developed rapidly. As a consequence, designing systems to cater for evolution is becoming critical to their success. To be able to cope with change, systems must have the capability of reuse and the ability to adapt as and when necessary to changes in requirements. Allowing systems to be self-describing is one way to facilitate this. To address the issues of reuse in designing evolvable systems, this paper proposes a so-called description-driven approach to systems design. This approach enables new versions of data structures and processes to be created alongside the old, thereby providing a history of changes to the underlying data models and enabling the capture of provenance data. The efficacy of the description-driven approach is exemplified by the CRISTAL project. CRISTAL is based on description-driven design principles; it uses versions of stored descriptions to define various versions of data which can be stored in diverse forms. This paper discusses the need for capturing a holistic system description when modelling large-scale distributed systems.
    Comment: 8 pages, 1 figure and 1 table. Accepted by the 9th Int Conf on the Evaluation of Novel Approaches to Software Engineering (ENASE'14), Lisbon, Portugal, April 2014.
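
    A hedged toy model of the description-driven idea follows; it is an assumption-laden sketch, not the CRISTAL implementation. Descriptions are stored and versioned, new versions are added alongside old ones, and each data item records which description version it conforms to.

        # Versioned descriptions define versions of data; both coexist over time.
        descriptions = {
            ("Order", 1): {"fields": ["id", "amount"]},
            ("Order", 2): {"fields": ["id", "amount", "currency"]},  # new version alongside the old
        }

        def new_item(kind: str, version: int, values: dict) -> dict:
            """Create a data item valid against a specific description version."""
            expected = descriptions[(kind, version)]["fields"]
            missing = [f for f in expected if f not in values]
            if missing:
                raise ValueError(f"{kind} v{version} requires fields: {missing}")
            return {"kind": kind, "described_by": version, **values}

        old = new_item("Order", 1, {"id": "O-1", "amount": 120})
        new = new_item("Order", 2, {"id": "O-2", "amount": 99, "currency": "EUR"})
        print(old["described_by"], new["described_by"])  # items of both versions coexist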

    Designing Traceability into Big Data Systems

    Providing an appropriate level of accessibility and traceability to data or process elements (so-called Items) in large volumes of data, often Cloud-resident, is an essential requirement in the Big Data era. Enterprise-wide data systems need to be designed from the outset to support the usage of such Items across the spectrum of business use, rather than from any specific application view. The design philosophy advocated in this paper is to drive the design process using a so-called description-driven approach, which enriches models with meta-data and descriptions and focuses the design process on Item re-use, thereby promoting traceability. Details are given of the description-driven design of big data systems at CERN, in health informatics and in business process management. Evidence is presented that the approach leads to design simplicity and consequent ease of management, thanks to loose typing and the adoption of a unified approach to Item management and usage.
    Comment: 10 pages; 6 figures. In Proceedings of the 5th Annual International Conference on ICT: Big Data, Cloud and Security (ICT-BDCS 2015), Singapore, July 2015. arXiv admin note: text overlap with arXiv:1402.5764, arXiv:1402.575
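
    As a minimal sketch of loosely typed, traceable Items (my own illustration, not the systems described in the paper), the code below keeps an append-only provenance log of every change to an Item's properties, so past states remain traceable.

        # Loosely-typed Item whose every update is recorded in a provenance history.
        import datetime

        class Item:
            def __init__(self, item_id: str, **properties):
                self.item_id = item_id
                self.properties = dict(properties)  # loose typing: arbitrary key/value pairs
                self.history = []                   # provenance: who changed what, and when

            def update(self, actor: str, **changes) -> None:
                self.history.append({
                    "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                    "who": actor,
                    "changes": dict(changes),
                })
                self.properties.update(changes)

        sample = Item("ITEM-42", status="created")
        sample.update("ingest_service", status="validated", checksum="abc123")
        for record in sample.history:
            print(record["who"], record["changes"])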