Cloud BI: Future of business intelligence in the Cloud
In self-hosted environments it was feared that business intelligence (BI) would eventually face a resource crunch, owing to the never-ending expansion of data warehouses and the online analytical processing (OLAP) demands placed on the underlying network. Cloud computing has raised new hope for the future of BI. But how will BI be implemented on the Cloud, and what will the traffic and demand profiles look like? This research attempts to answer these key questions about taking BI to the Cloud. Cloud hosting of BI is demonstrated with an OPNET simulation comprising a Cloud model in which multiple OLAP application servers apply parallel query loads to an array of servers hosting relational databases. The simulation results show that extensible parallel processing by database servers on the Cloud can efficiently serve OLAP application demands.
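The fan-out pattern the abstract describes, where OLAP front ends apply parallel query loads across an array of relational database servers, can be sketched in a few lines. This is a minimal illustration, not the paper's method: the paper models the setup in an OPNET network simulation, whereas here in-memory SQLite databases stand in for the database hosts, and the schema and query are invented for the example.

```python
# Sketch: fan identical OLAP-style aggregate queries out across a pool of
# database "servers" in parallel, then combine the partial results.
# sqlite3 in-memory databases stand in for the relational hosts (assumption).
import sqlite3
from concurrent.futures import ThreadPoolExecutor

def make_server(rows):
    """Create one in-memory database acting as one shard of the warehouse."""
    conn = sqlite3.connect(":memory:", check_same_thread=False)
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return conn

def olap_query(conn):
    """A simple roll-up aggregate, the kind of load OLAP servers generate."""
    return conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]

# Four shards, each holding part of the warehouse data.
servers = [make_server([("east", 10.0), ("west", 5.0)]) for _ in range(4)]

# Apply the query load to all servers in parallel, as the OLAP tier would.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(olap_query, servers))

total = sum(partials)  # combine the per-shard aggregates
```

The point of the pattern is that each shard answers its partial aggregate independently, so the wall-clock cost of the query load scales with the slowest shard rather than the sum of all shards.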
UC Berkeley's Cory Hall: Evaluation of Challenges and Potential Applications of Building-to-Grid Implementation
From September 2009 through June 2010, a team of researchers developed, installed, and tested instrumentation on the energy flows in Cory Hall on the UC Berkeley campus to create a Building-to-Grid testbed. The UC Berkeley team was headed by Professor David Culler, and assisted by members from EnerNex, Lawrence Berkeley National Laboratory, California State University Sacramento, and the California Institute for Energy & Environment. While the Berkeley team mapped the load tree of the building, EnerNex researched types of meters, submeters, monitors, and sensors to be used (Task 1). Next the UC Berkeley team analyzed building needs and designed the network of metering components and data storage/visualization software (Task 2). After meeting with vendors in January, the UCB team procured and installed the components starting in late March (Task 3). Next, the UCB team tested and demonstrated the system (Task 4). Meanwhile, the CSUS team documented the methodology and steps necessary to implement a testbed (Task 5) and Harold Galicer developed a roadmap for the CSUS Smart Grid Center with results from the testbed (Task 5a) and evaluated the Cory Hall implementation process (Task 5b). The CSUS team also worked with local utilities to develop an approach to the energy information communication link between buildings and the utility (Task 6). The UC Berkeley team then prepared a roadmap to outline necessary technology development for Building-to-Grid, and presented the results of the project in early July (Task 7). Finally, CIEE evaluated the implementation, noting challenges and potential applications of Building-to-Grid (Task 8). These deliverables are available at the i4Energy site: http://i4energy.org/
Time Efficient Dynamic Processing of Big Data for Remote Sensing Application
Searching for information on today's web can be likened to dragging a net across the surface of the ocean: a great deal may be caught in the net, but a huge amount of information lies deeper and is missed. The reason is simple: most of the Web's information is buried in dynamically produced pages hidden behind query interfaces, which standard search engines never find, and a direct query is a laborious, one-at-a-time way to retrieve it. Several factors make this problem particularly challenging. The Web changes at a constant pace: new sources are added, and old sources are removed or modified. Remote wireless sensors aboard satellites and aircraft generate massive amounts of real-time data. Technology trends for Big Data embrace open-source software, commodity servers, and massively parallel distributed processing platforms, and analytics is at the core of extracting value from Big Data to produce consumable insights for business and government. This paper presents an architecture for Big Data analytics and explores Big Data technologies including SQL databases, the Hadoop Distributed File System, and MapReduce. The proposed architecture can store incoming raw data and dispatch offline analysis over large stored dumps when required. Finally, a detailed analysis of remotely sensed Earth-observatory Big Data at ground and sea level is offered using Hadoop. The proposed architecture supports partitioning, load balancing, and parallel processing of only the useful data, resulting in efficient analysis of real-time remote sensing Big Data from an Earth observatory system.
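The offline Hadoop pass the abstract describes boils down to the classic map, shuffle, reduce flow over sensor readings. The sketch below runs that flow in plain Python rather than on a cluster; the cell identifiers, the non-negative-value filter (standing in for keeping "only useful data"), and the mean-per-cell analysis are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of a MapReduce pass over remote-sensing readings, in-process.
# Hadoop would shard the map and reduce phases across nodes; the phases
# and data flow are the same.
from collections import defaultdict

# (cell_id, measurement) pairs as a remote sensor might emit them.
readings = [("cell-1", 2.0), ("cell-2", 4.0), ("cell-1", 6.0), ("cell-2", 8.0)]

# Map phase: emit key/value pairs, filtering out unusable readings.
mapped = [(cell, value) for cell, value in readings if value >= 0]

# Shuffle phase: group values by key, as Hadoop does between map and reduce.
groups = defaultdict(list)
for cell, value in mapped:
    groups[cell].append(value)

# Reduce phase: one aggregate per key, e.g. mean measurement per grid cell.
means = {cell: sum(vals) / len(vals) for cell, vals in groups.items()}
# means == {"cell-1": 4.0, "cell-2": 6.0}
```

Because each reduce key is processed independently, the same code parallelizes naturally once the grouped data is partitioned across workers, which is what HDFS and the MapReduce runtime handle at scale.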
An Exploratory-Descriptive Review of Main Big Data Analytics Reference Architectures – an IT Service Management Approach
Big Data Analytics (BDA) aims to create decision-making business value by applying analytical procedures from statistics, operations research, and artificial intelligence to huge internal and external business datasets. However, BDA requires high investment in IT resources (computing, storage, network, software, data, and environment), so selecting a right-sized implementation is a hard managerial decision. In parallel, IT Service Management (ITSM) frameworks have provided best-practice processes for delivering value to end users through the concept of IT services, and the provision of BDA as a Service (BDAaaS) has now emerged. From a dual BDA-ITSM perspective, delivering BDAaaS therefore demands the design and implementation of a concrete BDAaaS architecture. Practitioner and academic literature on BDAaaS architectures is abundant but fragmented, dispersed, and non-standard in its terminology, so ITSM managers and academics working on BDAaaS delivery face a lack of mature practical guidelines and theoretical frameworks. In this research, with an exploratory-descriptive purpose, we contribute an updated review of three main non-proprietary BDAaaS reference architectures for ITSM managers, and a hybrid functional-deployment architectural view for the BDAaaS literature. Given its exploratory status, further conceptual and empirical research is encouraged.
Impliance: A Next Generation Information Management Appliance
…ably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data, unstructured as well as structured, in a uniform way; (c) achieving scale-out through simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises.
Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works, and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA
Driving Forces for Digital Transformation – Case Studies of Q-Commerce
Companies want to leverage emerging technologies to achieve digital transformation, but most are reluctant to take action. This study investigates the driving forces behind digital transformation investment by examining how HKTVmall and Pandamart have invested in new digital technologies to enable quick commerce (q-commerce) in Hong Kong, China. Although Pandamart follows a modular approach and HKTVmall a staged approach to digital transformation, we found that in both cases it has been driven by economic factors (cost reduction and revenue generation), social factors (changing demographics and changing customer behavior), and technological factors (proprietary technology advantages and new digital technology capabilities).