A user driven method for database reverse engineering
In this thesis we describe the UQoRE method, which supports database reverse engineering using a data mining technique. Reverse engineering methods generally work with information extracted from data dictionaries, database extensions, application programs and expert users. The main differences between these methods lie in the assumptions they make about the a-priori knowledge available on the database (schema and constraints on attributes) and about user competence. Most of them rely on attribute name consistency. This paper presents a method based on user queries. Queries are stored in a "Query Base", and our system mines this new source of knowledge to discover hidden links and similarities between database elements.
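The abstract does not spell out how the Query Base is mined, so the following is only a minimal sketch of the general idea, not the UQoRE algorithm itself: treat each query as evidence that the columns it joins are related, and score column pairs by how often they co-occur. All names and the scoring rule below are invented for illustration.

    # Hypothetical sketch: mine a query log for column similarity.
    # Each Query Base entry is reduced to the set of columns one query joins.
    from collections import Counter
    from itertools import combinations

    query_base = [
        {"orders.cust_id", "customers.id"},
        {"orders.cust_id", "customers.id", "customers.region"},
        {"invoices.client", "customers.id"},
    ]

    pair_counts = Counter()
    col_counts = Counter()
    for joined_cols in query_base:
        col_counts.update(joined_cols)
        pair_counts.update(frozenset(p) for p in combinations(sorted(joined_cols), 2))

    # Jaccard-style score: how often two columns co-occur relative to how
    # often either appears. High scores hint at hidden links between elements.
    def similarity(a, b):
        both = pair_counts[frozenset((a, b))]
        either = col_counts[a] + col_counts[b] - both
        return both / either if either else 0.0

    print(similarity("orders.cust_id", "customers.id"))  # 2/3 in this toy log

The appeal of such a score is that it needs no attribute name consistency: two differently named columns that users habitually join still end up with a high similarity.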
Implementing Enterprise Resource Planning Systems: A Study of Benefits and Concerns
In the 1990s, information technology and business process re-engineering combined to give organizations a competitive advantage. Enterprise Resource Planning (ERP) systems were considered prime examples of this development. This paper reports the results of a survey on ERP implementation that explores its benefits and concerns. Our results show that companies can expect more intrafirm benefits from current ERP technology, such as reduced inventory, improved quality, and shortened cycle times, than interfirm benefits. Existing ERP technology is not yet capable of handling the complexity of the whole supply chain; more supplier relationship management functionality needs to be integrated. Our results also suggest that the so-called best practices of current ERP technology fit financial processes better than manufacturing and operational processes in today's business environment. Hence business process re-engineering efforts are necessary but not sufficient for the success of an ERP system implementation.
Combining business process and failure modelling to increase yield in electronics manufacturing
Predicting and capturing defects in low-volume electronics assembly is a technical challenge and a prerequisite for design for manufacturing (DfM) and business process improvement (BPI) aimed at increasing first-time yields and reducing production costs. Failures at the component level (component defects) and at the system level (such as defects in design and manufacturing) have not previously been incorporated into combined prediction models, and BPI efforts should have predictive capability while supporting flexible production and changes in business models. This research aimed to integrate enterprise modelling (EM) and failure models (FM) to support business decision making by predicting system-level defects. This article presents an enhanced business modelling approach that provides a set of accessible failure models at a given business process level. The model-driven approach allows product and process performance to be evaluated and fed back to design and manufacturing activities, thereby improving first-time yield and product quality. A case study in the low-volume, high-complexity electronics assembly industry shows how the approach leverages standard modelling techniques and helps explain the causes of poor manufacturing performance using a set of surface mount technology (SMT) process failure models. A prototype application tool was developed and tested at a collaborator's site to evaluate the integration of business process models with execution entities such as software tools, business databases, and simulation engines. The proposed concept was tested on defect data collection and prediction in the described case study.
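The article's EM/FM integration is far richer than can be shown here; as a hedged illustration of the core idea only, the sketch below attaches per-step failure probabilities, in the spirit of an SMT process failure model, to a nested business process model and predicts first-time yield as the product of step yields. Step names and defect rates are invented, not taken from the study.

    # Hypothetical sketch: couple a business process model with failure models.
    from dataclasses import dataclass, field

    @dataclass
    class ProcessStep:
        name: str
        defect_rate: float  # probability a unit is made defective at this step
        substeps: list = field(default_factory=list)

        def first_time_yield(self) -> float:
            """Yield of this step times the yields of all nested substeps."""
            y = 1.0 - self.defect_rate
            for s in self.substeps:
                y *= s.first_time_yield()
            return y

    smt_line = ProcessStep("SMT assembly", 0.0, substeps=[
        ProcessStep("solder paste printing", 0.004),
        ProcessStep("component placement", 0.002),
        ProcessStep("reflow soldering", 0.003),
    ])
    print(f"predicted first-time yield: {smt_line.first_time_yield():.3%}")

Structuring the failure models this way lets a change to one process step (say, a new placement machine with a lower defect rate) be re-evaluated immediately at the business process level, which is the kind of feedback loop the article argues DfM and BPI need.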
How to Juggle Columns: An Entropy-Based Approach for Table Compression
Many relational databases exhibit complex dependencies between data attributes, caused either by the nature of the underlying data or by explicitly denormalized schemas. In data warehouse scenarios, calculated key figures may be materialized or hierarchy levels may be held within a single dimension table. Such column correlations and the resulting data redundancy may result in additional storage requirements. They may also result in poor query performance if inappropriate independence assumptions are made during query compilation. In this paper, we tackle the specific problem of detecting functional dependencies between columns to improve the compression rate for column-based database systems, which both reduces main memory consumption and improves query performance. Although a wide variety of algorithms has been proposed for detecting column dependencies in databases, we maintain that increased data volumes and recent developments in hardware architectures demand novel algorithms with much lower runtime overhead and a smaller memory footprint. Our novel approach is based on entropy estimations and exploits a combination of sampling and multiple heuristics to render it applicable to a wide range of use cases. We demonstrate the quality of our approach by means of an implementation within the SAP NetWeaver Business Warehouse Accelerator. Our experiments indicate that our approach scales well with the number of columns and produces reliable dependence structure information. This both reduces memory consumption and improves performance for nontrivial queries.
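The paper's estimators and heuristics are not reproduced in this abstract, so the following is only a rough sketch of the entropy criterion it builds on: a column A functionally determines B exactly when H(A, B) = H(A), so near-equality of plug-in entropies on a sample flags a candidate dependency. Column names, the sample size, and the tolerance below are invented for illustration.

    # Hypothetical sketch: flag candidate functional dependencies A -> B by
    # comparing empirical (plug-in) entropies on a row sample.
    import math
    import random
    from collections import Counter

    def entropy(values):
        counts = Counter(values)
        n = len(values)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def candidate_fd(rows, a, b, sample_size=1000, tol=1e-6):
        sample = random.sample(rows, min(sample_size, len(rows)))
        h_a = entropy([r[a] for r in sample])
        h_ab = entropy([(r[a], r[b]) for r in sample])
        return h_ab - h_a <= tol  # H(A,B) close to H(A) suggests A -> B

    # Toy table: "city" determines "country", so the pair adds no entropy.
    rows = [{"city": "Lyon", "country": "FR"},
            {"city": "Kyoto", "country": "JP"},
            {"city": "Lyon", "country": "FR"}] * 100
    print(candidate_fd(rows, "city", "country"))  # True

In a column store, a confirmed dependency A -> B means B need not be compressed independently; it can be derived from a small lookup keyed on A's dictionary codes, which is the kind of compression opportunity the paper targets.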
E-Business Data Warehouse Design and Implementation
E-businesses have a variety of on-line transaction processing (OLTP) systems and operational databases. A data warehouse is different from an operational database: it is a subject-oriented, integrated, time-variant, and non-volatile collection of data in support of management's decision-making process. These features cause its design process and strategies to differ from those for OLTP systems. This paper presents a brief description of approaches to data warehouse design and implementation for e-business.
Big data analytics:Computational intelligence techniques and application areas
Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and the economy, and discuss the challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications to real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study on intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation, and commercialization related to Big Data, its applications, and its deployment.