The Necessities for Building a Model to Evaluate Business Intelligence Projects: A Literature Review
In recent years, Business Intelligence (BI) systems have consistently been rated as one of the highest priorities of Information Systems (IS) and business leaders. BI allows firms to apply information to support their processes and decisions by combining organizational and technical capabilities. Many companies spend a significant portion of their IT budgets on business intelligence and related technology. Evaluating BI readiness is vital because it serves two important goals. First, it reveals the gap areas where a company is not ready to proceed with its BI efforts; by identifying BI readiness gaps, we can avoid wasting time and resources. Second, the evaluation indicates what is needed to close the gaps and implement BI with a high probability of success. This paper presents an overview of BI and the necessities for evaluating readiness. Key words: Business intelligence, Evaluation, Success, Readiness
Comment: International Journal of Computer Science & Engineering Survey (IJCSES) Vol.3, No.2, April 201
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenge have not been
recognized, and its own methodology has not been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.
Comment: 59 pages
Knowledge Warehouse: An Architectural Integration of Knowledge Management, Decision Support, Artificial Intelligence and Data Warehousing
Decision support systems (DSS) are becoming increasingly critical to the daily operation of organizations. Data warehousing, an integral part of this, provides an infrastructure that enables businesses to extract, cleanse, and store vast amounts of data. The basic purpose of a data warehouse is to empower knowledge workers with information that allows them to make decisions based on a solid foundation of fact. However, only a fraction of the needed information exists on computers; the vast majority of a firm’s intellectual assets exist as knowledge in the minds of its employees. What is needed is a new generation of knowledge-enabled systems that provides the infrastructure needed to capture, cleanse, store, organize, leverage, and disseminate not only data and information but also the knowledge of the firm. The purpose of this paper is to propose, as an extension to the data warehouse model, a knowledge warehouse (KW) architecture that will not only facilitate the capturing and coding of knowledge but also enhance the retrieval and sharing of knowledge across the organization. The knowledge warehouse proposed here suggests a different direction for DSS in the next decade, based on an expanded purpose of DSS: knowledge improvement. This expanded purpose also suggests that the effectiveness of a DSS will, in the future, be measured by how well it promotes and enhances knowledge, how well it improves the mental model(s) and understanding of the decision maker(s), and thereby how well it improves his or her decision making.
Business Intelligence in the Vineyard
The evolution now taking place in the information and communication fields, namely in mobile computing and remote monitoring, constitutes a very interesting challenge for the agricultural sector. This reality places agronomic knowledge at centre stage, as these technologies are dramatically improving data collection and storage capacities, challenging farmers and agricultural field experts to develop processes that efficiently transform data into information and knowledge and that can support everyday decision making at farm level. In this work we present a demonstration project under way in a vineyard in Portugal, where we are exploring the potential of the most recent technological innovations available in the market to build the i-Farm, the information and knowledge society's intelligent farm.
i-Farm (intelligent farm) applies at farm level the potential offered by the integrated use of mobile solutions, sensor networks, wireless communication, and digital imagery, materialized in an information system that supports the farmer's real-time decision making in the field and in the office.
The i-Farm project creates a unique knowledge repository containing information from multiple sources (crop, environment, soil, operations, market, etc.) enabling accurate and timely decisions.
For the project development, a Business Intelligence approach is used. In the context of this paper, this broad term refers to the process of aggregating, processing, and building rich and relevant information which is made available dynamically, in real time, to managers in an interactive way to support decisions and planning activities.
SOA-enabled compliance management: Instrumenting, assessing, and analyzing service-based business processes
Facilitating compliance management, that is, assisting a company's management in conforming to laws, regulations, standards, contracts, and policies, is a hot but non-trivial task. The service-oriented architecture (SOA) has evolved traditional, manual business practices into modern, service-based IT practices that ease part of the problem: the systematic definition and execution of business processes. This, in turn, facilitates the online monitoring of system behaviors and the enforcement of allowed behaviors, all ingredients that can be used to assist compliance management on the fly during process execution. In this paper, instead of focusing on monitoring and runtime enforcement of rules or constraints, we strive for an alternative approach to compliance management in SOAs that aims at assessing and improving compliance. We propose two ingredients: (i) a model and tool to design compliant service-based processes and to instrument them in order to generate evidence of how they are executed, and (ii) a reporting and analysis suite to create awareness of a company's compliance state and to enable understanding of why and where compliance violations have occurred. Together, these ingredients result in an approach that is close to how the real stakeholders, compliance experts and auditors, actually assess the state of compliance in practice, and that is less intrusive than enforcing compliance. © 2013 Springer-Verlag London.
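The first ingredient, instrumenting a process to generate evidence of its execution, can be illustrated with a minimal sketch. The process, activity names, and precedence rule below are hypothetical examples invented for illustration; they are not the paper's actual model or tooling.

```python
from dataclasses import dataclass, field

@dataclass
class InstrumentedProcess:
    """Hypothetical sketch: a service-based process that records an
    execution trace, which a checker later audits against a rule."""
    trace: list = field(default_factory=list)

    def execute(self, activity):
        self.trace.append(activity)   # evidence of how the process ran

def check_precedence(trace, before, after):
    """Example compliance rule: 'before' must occur prior to any
    'after'. Returns (compliant, index of first violation or None)."""
    seen = False
    for i, act in enumerate(trace):
        if act == before:
            seen = True
        if act == after and not seen:
            return False, i
    return True, None

p = InstrumentedProcess()
for step in ["receive_invoice", "pay_invoice", "approve_invoice"]:
    p.execute(step)
ok, where = check_precedence(p.trace, "approve_invoice", "pay_invoice")
print(ok, where)   # payment precedes approval -> violation at index 1
```

Because the checker reports *where* in the trace the rule was broken, a reporting suite can point an auditor at the offending activity rather than merely flagging the process as non-compliant.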
An efficient genetic algorithm for large-scale planning of robust industrial wireless networks
An industrial indoor environment is harsher for wireless communications than an office environment, because the prevalent metal easily causes shadowing effects and affects the availability of an industrial wireless local area network (IWLAN). On the one hand, trial-and-error manual deployment of wireless nodes is costly, time-consuming, and ineffective. On the other hand, existing wireless planning tools focus only on office environments, so planning IWLANs is hard due to the larger problem size, and the deployed IWLANs remain vulnerable to the shadowing effects prevalent in harsh industrial indoor environments. To fill this gap, this paper proposes an over-dimensioning model and a genetic algorithm based over-dimensioning (GAOD) algorithm for deploying large-scale robust IWLANs. As a progress beyond state-of-the-art wireless planning, two full coverage layers are created; the second coverage layer serves as redundancy in case of shadowing. Meanwhile, the deployment cost is reduced by minimizing the number of access points (APs), and a hard constraint of minimal inter-AP spatial separation prevents multiple APs covering the same area from being simultaneously shadowed by the same obstacle. Computation time and memory consumption are explicitly considered in the design of GAOD for large-scale optimization. A greedy heuristic based over-dimensioning (GHOD) algorithm and a random over-dimensioning algorithm are taken as benchmarks. In two vehicle manufacturing plants, one with a small and one with a large indoor environment, GAOD required up to 20% fewer APs than GHOD, while GHOD required up to 25% fewer APs than the random algorithm. Furthermore, the effectiveness of this model and GAOD was experimentally validated with a real deployment system.
Artificial Intelligence in a Main Warehouse in Panasonic: Los Indios, Texas
The Panasonic Company warehouse is located in Los Indios, Texas. The warehouse faces the limitation of great distances between the headquarters and the main warehouse that supplies the branches and main customers, which requires a considerable amount of time to maintain effective communication in the inventory area. In addition, an online review confirmed that the website is disabled, contradicting the company's corporate policy.
The thesis proposal is arranged in four chapters: the Introduction, Statement of the Problem, and Purposes; Previous Studies and Definition of the Literature; the Research Methodology and the resources for data collection; and the results, the proposal, and the conclusions. The paper ends with a list of references from the substantial sources that supported the research.
An Innovative Approach for Predicting Software Defects by Handling the Class Imbalance Problem
Over the last decade, imbalanced data has gained attention as a major challenge for enhancing software quality and reliability. Owing to the evolution of advanced software development tools and processes, today's software products are much larger and more complicated. The software business faces major issues in maintaining software performance and efficiency, as well as in the cost of handling software defects after deployment. Imbalanced data hampers the effectiveness of defect prediction models in terms of data analysis, biased results, model accuracy, and decision making. Predicting defects before they affect the software product is one way to cut the costs required to maintain software quality. In this study, we propose a model using a two-level approach to the class imbalance problem that enhances the accuracy of the prediction model. At the first level, the model balances the predicted classes at the data level by applying a sampling method. At the second level, we use the Random Forest machine learning approach to create a strong classifier for software defects. Hence, software defect prediction accuracy is enhanced by handling class imbalance at both the data and algorithm levels.
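The data-level step of such a two-level approach can be sketched with simple random oversampling, one common sampling method for balancing classes before training. The function and toy defect data below are illustrative assumptions, not the study's actual pipeline; the second level would then train an ensemble classifier such as Random Forest on the balanced set.

```python
import random
from collections import Counter

def oversample_minority(X, y, seed=0):
    """Data-level balancing sketch: randomly duplicate minority-class
    samples until both classes are equally represented."""
    rng = random.Random(seed)
    counts = Counter(y)
    (maj, n_maj), (mino, n_min) = counts.most_common()
    minority = [x for x, label in zip(X, y) if label == mino]
    X_bal, y_bal = list(X), list(y)
    for _ in range(n_maj - n_min):
        X_bal.append(rng.choice(minority))
        y_bal.append(mino)
    return X_bal, y_bal

# toy defect data: 8 clean modules (label 0), 2 defective (label 1)
X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2
Xb, yb = oversample_minority(X, y)
print(Counter(yb))   # both classes now have 8 samples
```

Plain duplication is the simplest choice; synthetic-sample methods such as SMOTE generate interpolated minority samples instead and are often preferred when duplicates would cause the classifier to overfit.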