The Information Systems (IS) Role of Accountants: A Case Study of an On-line Analytical Processing (OLAP) Implementation
Today's organisations place heavy reliance on computerised information systems (CIS) to provide timely, high-quality information to management. The quality of an accounting information system (AIS) is critical to a firm's success. Executives now require real-time information with multidimensional views to manage firms operating in a dynamic and competitive environment. The use of OLAP technology in financial reporting greatly improves the flexibility of the information available from various databases. This study reports a case of implementing an OLAP tool to build complex financial reports for senior management. The case illustrates the importance of the IS role of accountants, the emergence of the systems accounting role, and the benefits of OLAP to accountants.
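The "multidimensional views" this abstract refers to amount to aggregating a measure across chosen dimensions. The following is a minimal sketch of that idea in plain Python; the account records, dimension names, and figures are invented for illustration and are not from the case study:

```python
from collections import defaultdict

# Hypothetical transaction-level records; names and figures are illustrative only.
transactions = [
    {"region": "North", "quarter": "Q1", "account": "Sales", "amount": 120.0},
    {"region": "North", "quarter": "Q2", "account": "Sales", "amount": 150.0},
    {"region": "South", "quarter": "Q1", "account": "Sales", "amount": 90.0},
    {"region": "South", "quarter": "Q2", "account": "Sales", "amount": 110.0},
]

def pivot(rows, row_dim, col_dim, measure):
    """Aggregate `measure` into a row_dim x col_dim cross-tab (a 2-D OLAP slice)."""
    cube = defaultdict(float)
    for r in rows:
        cube[(r[row_dim], r[col_dim])] += r[measure]
    return dict(cube)

report = pivot(transactions, "region", "quarter", "amount")
print(report[("North", "Q1")])  # 120.0
```

An OLAP tool generalizes this to many dimensions at once and lets the analyst re-pivot interactively; the sketch shows only the underlying group-and-aggregate step.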
Dynamic Physiological Partitioning on a Shared-nothing Database Cluster
Traditional DBMS servers are usually over-provisioned for most of their daily workloads and, because they do not exhibit good energy proportionality, waste a lot of energy while underutilized. A cluster of small (wimpy) servers, whose size can be dynamically adjusted to the current workload, offers better energy characteristics for these workloads. Yet data migration, necessary to balance utilization among the nodes, is a non-trivial, time-consuming task that may consume the energy saved. For this reason, a sophisticated and easily adjustable partitioning scheme that fosters dynamic reorganization is needed. In this paper, we adapt physiological partitioning, a technique originally created for SMP systems, to distribute data among nodes; it allows data to be repartitioned easily without interrupting transactions. We dynamically partition DB tables based on the nodes' utilization and given energy constraints, and compare our approach with physical and logical partitioning methods. To quantify the possible energy savings and their conceivable drawback on query runtimes, we evaluate our implementation on an experimental cluster and compare the results w.r.t. performance and energy consumption. Depending on the workload, we can save substantial energy without sacrificing too much performance.
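The core loop of utilization-driven repartitioning can be sketched as: detect an overloaded node, pick one of its partitions, and reassign it to the lightest node. The sketch below is a deliberately simplified stand-in, not the paper's physiological-partitioning implementation; node names, partition loads, and the threshold are assumptions:

```python
def rebalance(assignment, threshold):
    """assignment: node -> list of (partition_id, load).
    Move the hottest partition off any node whose total load exceeds
    `threshold` onto the currently lightest node (greedy, one move per node)."""
    totals = {n: sum(load for _, load in ps) for n, ps in assignment.items()}
    for node in list(assignment):
        if totals[node] > threshold and assignment[node]:
            part = max(assignment[node], key=lambda p: p[1])  # hottest partition
            target = min(totals, key=totals.get)              # lightest node
            if target != node:
                assignment[node].remove(part)
                assignment[target].append(part)
                totals[node] -= part[1]
                totals[target] += part[1]
    return assignment

nodes = {"n1": [("p1", 60), ("p2", 50)], "n2": [("p3", 10)]}
print(rebalance(nodes, threshold=80))
```

A real system would additionally account for the energy cost of the migration itself, which is exactly the trade-off the abstract highlights.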
Examining the Present and Looking to the Future of DSS and Intelligent Systems
In many respects, the decision-making capability/promise of information technology has gone unfulfilled. In fact, many organizations have not advanced much past spreadsheets when it comes to computerized decision-making assistance. This research examines why this is the case and looks to the future by asking: "What's the next spreadsheet? Is there a next killer app for intelligent systems/DSS?" Fifty-eight business professionals were surveyed to help answer these questions. Results suggest that while the spreadsheet is still by far the most used intelligent system, continuing improvements in the ease of use of information technology are allowing some organizations to begin testing and using newer DSS technologies. As additional organizations come to understand the purpose and usefulness of these newer technologies, their long-term impact could be substantial. Statistical results suggest that Knowledge Management and GDSS technologies have the best near-term chance of equaling the impact of spreadsheets.
The End of Slow Networks: It's Time for a Redesign
Next-generation high-performance RDMA-capable networks will require a fundamental rethinking of the design and architecture of modern distributed DBMSs. These systems are commonly designed and optimized under the assumption that the network is the bottleneck: the network is slow and "thin", and thus needs to be avoided as much as possible. Yet this assumption no longer holds true. With InfiniBand FDR 4x, the bandwidth available to transfer data across the network is in the same ballpark as the bandwidth of one memory channel, and it increases even further with the most recent EDR standard. Moreover, with continuing advances in RDMA, latency is improving similarly fast. In this paper, we first argue that the "old" distributed database design is not capable of taking full advantage of the network. Second, we propose architectural redesigns for OLTP, OLAP, and advanced analytical frameworks to take better advantage of the improved bandwidth, latency, and RDMA capabilities. Finally, for each of the workload categories, we show that remarkable performance improvements can be achieved.
Easy designing steps of a local data warehouse for possible analytical data processing
Data warehouses (DW) are used at the local or global level, depending on the usage. Most DWs were designed for online purposes targeting multinational firms, and the majority of local firms directly purchase such ready-made DW applications; customization, maintenance, and enhancement are very costly for them. To provide useful e-services, government departments, academic institutes, firms, telemedicine providers, etc. need a DW of their own. A lack of electricity and internet facilities, especially in rural areas, does not motivate citizens to use the benefits of e-services. In this digital world, every local firm is interested in having its own DW that may support strategic and business decision making. This study highlights the basic technical steps for designing a local DW. It gives several possible solutions to problems that may arise during the design of the Extraction, Transformation and Loading (ETL) process, and it gives detailed steps to develop the dimension tables, the fact table, and the data loading. Data analytics normally answers business questions and suggests future solutions.
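The dimension-table/fact-table/loading steps the abstract mentions can be illustrated in miniature. This sketch is a generic star-schema ETL in plain Python, not the paper's actual design; the source rows, table names, and columns are invented:

```python
# Extract: rows as they might arrive from an operational source (illustrative).
raw_sales = [
    {"product": "Pen", "qty": 3, "price": 1.5},
    {"product": "Book", "qty": 1, "price": 9.0},
    {"product": "Pen", "qty": 2, "price": 1.5},
]

# Transform: derive the dimension table, one row per distinct product,
# assigning a surrogate key independent of the operational identifier.
dim_product = {}
for row in raw_sales:
    if row["product"] not in dim_product:
        dim_product[row["product"]] = len(dim_product) + 1  # surrogate key

# Load: fact rows carry only foreign keys and numeric measures.
fact_sales = [
    {"product_key": dim_product[r["product"]], "revenue": r["qty"] * r["price"]}
    for r in raw_sales
]
print(fact_sales[0])  # {'product_key': 1, 'revenue': 4.5}
```

Separating descriptive attributes (dimension) from measures (fact) is what later makes analytical queries a simple join-and-aggregate over the fact table.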
Multidimensional Range Queries on Modern Hardware
Range queries over multidimensional data are an important part of database workloads in many applications. Their execution may be accelerated by using multidimensional index structures (MDIS), such as kd-trees or R-trees. As for most index structures, the usefulness of this approach depends on the selectivity of the queries, and common wisdom holds that a simple scan beats MDIS for queries accessing more than 15%-20% of a dataset. However, this wisdom is largely based on evaluations that are almost two decades old, performed on data held on disk, applying IO-optimized data structures, and using single-core systems. The question is whether this rule of thumb still holds when multidimensional range queries (MDRQ) are performed on modern architectures with large main memories holding all data, multi-core CPUs, and data-parallel instruction sets. In this paper, we study whether and how much modern hardware influences the performance ratio between index structures and scans for MDRQ. To this end, we conservatively adapted three popular MDIS, namely the R*-tree, the kd-tree, and the VA-file, to exploit features of modern servers and compared their performance to different flavors of parallel scans using multiple (synthetic and real-world) analytical workloads over multiple (synthetic and real-world) datasets of varying size, dimensionality, and skew. We find that all approaches benefit considerably from using main memory and parallelization, yet to varying degrees. Our evaluation indicates that, on current machines, scanning should be favored over parallel versions of classical MDIS even for very selective queries.
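The "simple scan" baseline the abstract contrasts with MDIS is just a per-dimension bounds check over every point. A minimal sketch, with synthetic data (the points and bounds are invented for illustration):

```python
def range_scan(points, lows, highs):
    """Return all points p with lows[d] <= p[d] <= highs[d] in every dimension.
    This is the brute-force baseline an MDIS (kd-tree, R*-tree) tries to beat
    by pruning the comparison work."""
    return [p for p in points
            if all(lo <= x <= hi for x, lo, hi in zip(p, lows, highs))]

points = [(1, 5), (3, 3), (7, 2), (4, 8)]
print(range_scan(points, lows=(2, 2), highs=(5, 6)))  # [(3, 3)]
```

On modern hardware this loop parallelizes trivially (chunk `points` across cores, use SIMD comparisons per chunk), which is why the paper finds the scan competitive even for selective queries.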
Implementation of Multidimensional Databases with Document-Oriented NoSQL
NoSQL (Not Only SQL) systems are becoming popular due to known advantages such as horizontal scalability and elasticity. In this paper, we study the implementation of data warehouses with document-oriented NoSQL systems. We propose mapping rules that transform the multidimensional data model into logical document-oriented models. We consider three different logical models and use them to instantiate data warehouses. We focus on data loading, model-to-model conversion, and OLAP cuboid computation.
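To make the idea of such mapping rules concrete, here is a sketch of two plausible document-oriented targets for one fact with its dimensions: a flat model (dimension attributes prefixed and merged into one document) and a nested model (one embedded sub-document per dimension). These are generic illustrations, not the paper's three specific models, and all field names are invented:

```python
# One fact with its dimension attributes (illustrative star-schema instance).
fact = {"amount": 42.0}
dims = {"time": {"year": 2015, "month": 6}, "store": {"city": "Lyon"}}

# Flat model: dimension attributes are prefixed and merged to the top level.
flat_doc = dict(fact)
for dim_name, attrs in dims.items():
    for attr, value in attrs.items():
        flat_doc[f"{dim_name}_{attr}"] = value

# Nested model: each dimension becomes an embedded sub-document.
nested_doc = dict(fact, **dims)

print(flat_doc["time_year"])        # 2015
print(nested_doc["store"]["city"])  # Lyon
```

The choice matters downstream: the flat model simplifies aggregation pipelines for cuboid computation, while the nested model preserves the dimension structure and eases model-to-model conversion.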
E-Business Security Architectures
By default, the Internet is an open, high-risk environment and also the main place where e-business is growing. As a result, this paper aims to highlight the security aspects of distributed applications [3], with reference to the concept of e-business. In this direction, it analyzes the quality characteristics considered important by the author. Based on these and on existing e-business architectures, a diagram is presented that reflects a new approach to the concept of future e-business. The development of the new architecture is based on technologies used to build the applications of tomorrow. Keywords: e-business, distributed applications, security, architecture, technology.