Requirements for Information Extraction for Knowledge Management
Knowledge Management (KM) systems inherently suffer from the knowledge acquisition bottleneck - the difficulty of modeling and formalizing knowledge relevant for specific domains. A potential solution to this problem is Information Extraction (IE) technology. However, IE was originally developed for database population, and there is a mismatch between what is required to successfully perform KM and what current IE technology provides. In this paper we begin to address this issue by outlining requirements for IE-based KM.
Web Data Extraction, Applications and Techniques: A Survey
Web Data Extraction is an important problem that has been studied by means of
different scientific tools and in a broad range of applications. Many
approaches to extracting data from the Web have been designed to solve specific
problems and operate in ad-hoc domains. Other approaches instead heavily
reuse techniques and algorithms developed in the field of Information
Extraction.
This survey aims at providing a structured and comprehensive overview of the
literature in the field of Web Data Extraction. We provide a simple
classification framework in which existing Web Data Extraction applications are
grouped into two main classes, namely applications at the Enterprise level and
at the Social Web level. At the Enterprise level, Web Data Extraction
techniques emerge as a key tool for performing data analysis in Business and
Competitive Intelligence systems as well as for business process
re-engineering. At the Social Web level, Web Data Extraction techniques make it
possible to gather large amounts of structured data continuously generated and
disseminated by Web 2.0, Social Media and Online Social Network users, which
offers unprecedented opportunities to analyze human behavior at a very large
scale. We also discuss the potential of cross-fertilization, i.e., the
possibility of re-using Web Data Extraction techniques originally designed to
work in a given domain in other domains.
Comment: Knowledge-Based Systems
Exploiting rules and processes for increasing flexibility in service composition
Recent trends in the use of service-oriented architecture for designing, developing, managing, and using distributed applications have resulted in an increasing number of independently developed and physically distributed services. These services can be discovered, selected and composed to develop new applications and to meet emerging user requirements. Service composition is generally defined on the basis of business processes in which the underlying composition logic is guided by specifying control and data flows through Web service interfaces. User demands as well as the services themselves may change over time, which leads to replacing or adjusting the composition logic of previously defined processes. Coping with change is still one of the fundamental problems in current process-based composition approaches. In this paper, we exploit declarative and imperative design styles to achieve better flexibility in service composition.
Impliance: A Next Generation Information Management Appliance
ably successful in building a large market and adapting to the changes of the
last three decades, its impact on the broader market of information management
is surprisingly limited. If we were to design an information management system
from scratch, based upon today's requirements and hardware capabilities, would
it look anything like today's database systems?" In this paper, we introduce
Impliance, a next-generation information management system consisting of
hardware and software components integrated to form an easy-to-administer
appliance that can store, retrieve, and analyze all types of structured,
semi-structured, and unstructured information. We first summarize the trends
that will shape information management for the foreseeable future. Those trends
imply three major requirements for Impliance: (1) to be able to store, manage,
and uniformly query all data, not just structured records; (2) to be able to
scale out as the volume of this data grows; and (3) to be simple and robust in
operation. We then describe four key ideas that are uniquely combined in
Impliance to address these requirements, namely the ideas of: (a) integrating
software and off-the-shelf hardware into a generic information appliance; (b)
automatically discovering, organizing, and managing all data - unstructured as
well as structured - in a uniform way; (c) achieving scale-out by exploiting
simple, massive parallel processing, and (d) virtualizing compute and storage
resources to unify, simplify, and streamline the management of Impliance.
Impliance is an ambitious, long-term effort to define simpler, more robust, and
more scalable information systems for tomorrow's enterprises.
Comment: This article is published under a Creative Commons License Agreement
(http://creativecommons.org/licenses/by/2.5/). You may copy, distribute,
display, and perform the work, make derivative works and make commercial use
of the work, but you must attribute the work to the author and CIDR 2007.
3rd Biennial Conference on Innovative Data Systems Research (CIDR), January
7-10, 2007, Asilomar, California, USA
The Country-specific Organizational and Information Architecture of ERP Systems at Globalised Enterprises
Competition in the market forces companies to adapt to the changing environment. Most recently, the economic and financial crisis has been accelerating the alteration of both the business and IT models of enterprises. The forces of globalization and internationalization motivate the restructuring of business processes and, consequently, of IT processes. To depict these changes in a unified framework, we need the concept of Enterprise Architecture as a theoretical approach that deals with various tiers, aspects and views of business processes and with different layers of application, software and hardware systems. The paper outlines a wide-ranging theoretical background for analyzing the re-engineering and re-organization of ERP systems at international or transnational companies in the middle-sized EU member states. The research carried out up to now has unravelled the typical structural changes and the models for internal business networks and their modification that reflect the centralization, decentralization and hybrid approaches. Based on the results obtained recently, a future research program has been drawn up to deepen our understanding of the trends within the world of ERP systems.
Keywords: Information System; ERP; Enterprise Resource Planning; Enterprise Architecture; Globalization; Centralization; Decentralization; Hybrid
Expressing business rules : a fact based approach : a thesis presented in partial fulfilment of the requirements for the degree of Master of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand
Numerous industry surveys have suggested that many IT projects still end in failure. Incomplete, ambiguous and inaccurate specifications are cited as a major causal factor. Traditional techniques for specifying data requirements often lack the expressiveness with which to model subtle but common features within organisations. As a consequence, categories of business rules that determine the structure and behaviour of organisations may not be captured until the latter stages of the systems development lifecycle. A fact-based technique called Object Role Modelling (ORM) has been investigated as an alternative approach for specifying data requirements. The technique's ability to capture and represent a wide range of data requirements rigorously, but still in a form comprehensible to business people, could provide a powerful tool for analysts. In this report, ORM constructs have been synthesised with the concepts and definitions provided by the Business Rules Group (BRG), who have produced a detailed taxonomy of business rule categories. In doing so, business rules discovered in an organisation can be expressed in a form that is meaningful to both analysts and business people. Exploiting the expressive simplicity of a conceptual modelling technique to articulate an organisation's business rules could help to fill a significant requirements gap.
Applying model-driven paradigm: CALIPSOneo experience
The Model-Driven Engineering paradigm has been used by the research community in recent years, obtaining suitable results. However, there are few practical experiences in the enterprise field. This paper presents the use of this paradigm in an aeronautical PLM project named CALIPSOneo, currently under development at Airbus. In this context, the NDT methodology was adapted in order to be used by the development team. The paper presents this process and the results that we are obtaining from the project. In addition, some relevant lessons learned from the trenches are drawn.
Funding: Ministerio de Ciencia e Innovación TIN2010-20057-C03-02; Junta de Andalucía TIC-578
The necessities for building a model to evaluate Business Intelligence projects - Literature Review
In recent years Business Intelligence (BI) systems have consistently been
rated as one of the highest priorities of Information Systems (IS) and business
leaders. BI allows firms to apply information to support their processes
and decisions by combining its capabilities in both organizational and
technical issues. Many companies are spending a significant portion of
their IT budgets on business intelligence and related technology. Evaluation of
BI readiness is vital because it serves two important goals. First, it shows
the gap areas where a company is not ready to proceed with its BI efforts. By
identifying BI readiness gaps, we can avoid wasting time and resources. Second,
the evaluation guides us in what we need to do to close the gaps and implement
BI with a high probability of success. This paper presents an overview of BI
and the necessities for the evaluation of readiness.
Key words: Business intelligence, Evaluation, Success, Readiness
Comment: International Journal of Computer Science & Engineering Survey
(IJCSES) Vol.3, No.2, April 201