COSPO/CENDI Industry Day Conference
The conference's objective was to provide a forum where government information managers and industry information technology experts could have an open exchange, discuss their respective needs, and compare them to the available, or soon to be available, solutions. Technical summaries and points of contact are provided for the following sessions: secure products, protocols, and encryption; information providers; electronic document management and publishing; information indexing, discovery, and retrieval (IIDR); automated language translators; IIDR - natural language capabilities; IIDR - advanced technologies; IIDR - distributed heterogeneous and large database support; and communications - speed, bandwidth, and wireless.
Growth of relational model: Interdependence and complementary to big data
A database management system is a long-standing application of computer science that provides a platform for the creation, movement, and use of voluminous data. The area has witnessed a series of developments and technological advancements, from the conventional structured database to the recent buzzword, big data. This paper aims to provide a complete model of the relational database, which is still widely used because of its well-known ACID properties, namely atomicity, consistency, isolation, and durability. Specifically, the objective of this paper is to highlight the adoption of relational model approaches by big data techniques. Towards addressing the reason for this incorporation, this paper qualitatively studies the advancements made over time in the relational data model. First, the variations in the data storage layout are illustrated based on the needs of the application. Second, quick data retrieval techniques such as indexing, query processing, and concurrency control methods are reviewed. The paper provides vital insights for appraising the efficiency of the structured database in the unstructured environment, particularly when both consistency and scalability become an issue in the working of a hybrid transactional and analytical database management system.
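As a minimal sketch of the ACID guarantees the abstract refers to, the following Python snippet uses the built-in sqlite3 module with an assumed toy schema (the table and balances are illustrative, not from the paper): a two-statement transfer either commits as a whole or is rolled back when a consistency constraint is violated, so no partial update survives.

```python
import sqlite3

# Assumed toy schema: a CHECK constraint keeps balances non-negative (consistency).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
except sqlite3.IntegrityError:
    print("transfer rejected; no partial update was applied (atomicity)")

# Both rows are unchanged: [(1, 100), (2, 50)]
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
```

Isolation and durability are handled by the engine itself (locking and the write-ahead log in SQLite's case), which is why relational systems can make these guarantees without application-level effort.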
Proliferating Cloud Density through Big Data Ecosystem, Novel XCLOUDX Classification and Emergence of as-a-Service Era
Big Data is permeating ever larger aspects of human life for scientific and commercial purposes, especially for massive-scale data analytics beyond the exabyte magnitude. As the footprint of Big Data applications continuously expands, the reliance on cloud environments is also increasing to obtain appropriate, robust, and affordable services to deal with Big Data challenges. Cloud computing avoids the need to locally maintain an overly scaled computing infrastructure, which includes not only dedicated space but also expensive hardware and software. Several data models to process Big Data have already been developed and a number of such models are still emerging, potentially relying on heterogeneous underlying storage technologies, including cloud computing. In this paper, we investigate the growing role of cloud computing in the Big Data ecosystem. We also propose a novel XCLOUDX {XCloudX, X…X} classification to gauge the intuitiveness of the names of cloud-assisted NoSQL Big Data models and analyze whether XCloudX always uses cloud computing underneath, or vice versa. XCloudX symbolizes those NoSQL Big Data models that embody the term "cloud" in their name, where X is any alphanumeric variable. The discussion is strengthened by a set of important case studies. Furthermore, we study the emergence of the as-a-Service era, motivated by the cloud computing drive, and explore new members beyond the traditional cloud computing stack developed over the last few years.
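The XCloudX naming test described above can be expressed as a simple pattern check. The sketch below is illustrative only; the product names are hypothetical and not taken from the paper's case studies.

```python
import re

# A name matches XCloudX if it embeds the term "cloud" with optional
# alphanumeric prefix and suffix (the "X" parts of the classification).
XCLOUDX = re.compile(r"^[A-Za-z0-9]*cloud[A-Za-z0-9]*$", re.IGNORECASE)

def is_xcloudx(name: str) -> bool:
    """Return True if the model name embodies the term 'cloud'."""
    return bool(XCLOUDX.match(name))

# Hypothetical example names for illustration.
for name in ["CloudDB", "AcmeCloudStore", "DynamoStyleKV"]:
    print(name, is_xcloudx(name))
```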
Data Warehouse Design and Management: Theory and Practice
The need to store data and information permanently, for reuse in later stages, is a very relevant problem in the modern world and now affects a large number of people and economic agents. The storage and subsequent use of data can indeed be a valuable source for decision making or for increasing commercial activity. The step following data storage is the efficient and effective use of information, particularly through Business Intelligence, whose foundation is the implementation of a Data Warehouse. In the present paper we analyze Data Warehouses and their theoretical models, and illustrate a practical implementation in a specific case study on a pharmaceutical distribution company. Keywords: data warehouse, database, data model.
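A common Data Warehouse design pattern is the star schema: one fact table surrounded by dimension tables that Business Intelligence queries aggregate over. The sketch below uses hypothetical table and column names loosely inspired by a pharmaceutical distribution setting; it is not the schema from the paper's case study.

```python
import sqlite3

# Hypothetical star schema: a sales fact table plus product, pharmacy,
# and date dimensions.
ddl = """
CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_pharmacy (pharmacy_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE fact_sales (
    product_id  INTEGER REFERENCES dim_product(product_id),
    pharmacy_id INTEGER REFERENCES dim_pharmacy(pharmacy_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    quantity    INTEGER,
    revenue     REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)

# Typical BI-style aggregation over the star schema: revenue per region and year.
query = """
SELECT p.region, d.year, SUM(f.revenue) AS total_revenue
FROM fact_sales f
JOIN dim_pharmacy p ON p.pharmacy_id = f.pharmacy_id
JOIN dim_date d     ON d.date_id = f.date_id
GROUP BY p.region, d.year;
"""
print(conn.execute(query).fetchall())
```

The design choice is deliberate: denormalized dimensions keep analytical joins shallow and predictable, which is what makes such schemas suitable for decision-support workloads.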
A Cloud Robotics Solution to Improve Social Assistive Robots for Active and Healthy Aging
Technological innovation in robotics and ICT represents an effective solution to the challenge of providing socially sustainable care services for the ageing population. The recent introduction of cloud technologies is opening new opportunities for the provisioning of advanced robotic services based on the cooperation of a number of connected robots, smart environments, and devices, enhanced by the huge computational and storage capability of the cloud. In this context, this paper aims to investigate and assess the potential of a cloud robotic system for the provisioning of assistive services for the promotion of active and healthy ageing. The system comprised two different smart environments, located in Italy and Sweden, where a service robot is connected to a cloud platform for the provisioning of location-based services to the users. The cloud robotic services were tested in the two realistic environments to assess the general feasibility of the solution and demonstrate the ability to provide assistive location-based services in a multiple-environment framework. The results confirmed the validity of the solution but also suggested a deeper investigation into the dependability of the communication technologies adopted in such systems.
Data Warehousing Modernization: Big Data Technology Implementation
Considering the challenges posed by Big Data, the cost to scale traditional data warehouses is high and the performance would be inadequate to meet the growing needs of the volume, variety, and velocity of data. The Hadoop ecosystem addresses both of these shortcomings. Hadoop has the ability to store and analyze large data sets in parallel in a distributed environment, but it cannot replace existing data warehouses and RDBMS systems due to its own limitations, explained in this paper. In this paper, I identify the reasons why many enterprises fail and struggle to adapt to Big Data technologies. A brief outline of two different technologies to handle Big Data is presented: IBM's PureData System for Analytics (Netezza), usually used in reporting, and Hadoop with Hive, which is used in analytics. This paper also covers the enterprise architecture, with Hadoop running alongside a massively parallel processing data warehouse, that successful companies are adopting to analyze, filter, process, and store data. Despite having the technology to support and process Big Data, industries are still struggling to meet their goals due to the lack of skilled personnel to study and analyze the data, in short, data scientists and statisticians.
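On the Hadoop-plus-Hive analytics side of such an architecture, a typical pattern is to pre-aggregate raw data in Hadoop before loading summaries into the MPP warehouse. The sketch below is a hedged illustration using Spark with Hive support; the table and column names are hypothetical, and it assumes a Spark installation configured to read Hive metastore tables.

```python
from pyspark.sql import SparkSession

# Assumes a Spark deployment with Hive support enabled.
spark = (SparkSession.builder
         .appName("warehouse-offload-example")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical Hive table of raw events; aggregate it in Hadoop before the
# summary is loaded into the MPP data warehouse (e.g. the Netezza appliance).
summary = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM raw_events
    GROUP BY event_date
""")
summary.show()
```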
Scaling Reliably: Improving the Scalability of the Erlang Distributed Actor Platform
Distributed actor languages are an effective means of constructing scalable reliable systems, and the Erlang programming language has a well-established and influential model. While the Erlang model conceptually provides reliable scalability, it has some inherent scalability limits and these force developers to depart from the model at scale. This article establishes the scalability limits of Erlang systems and reports the work of the EU RELEASE project to improve the scalability and understandability of the Erlang reliable distributed actor model.
We systematically study the scalability limits of Erlang and then address the issues at the virtual machine, language, and tool levels. More specifically: (1) We have evolved the Erlang virtual machine so that it can work effectively in large-scale single-host multicore and NUMA architectures. We have made important changes and architectural improvements to the widely used Erlang/OTP release. (2) We have designed and implemented Scalable Distributed (SD) Erlang libraries to address language-level scalability issues and provided and validated a set of semantics for the new language constructs. (3) To make large Erlang systems easier to deploy, monitor, and debug, we have developed and made open source releases of five complementary tools, some specific to SD Erlang.
Throughout the article we use two case studies to investigate the capabilities of our new technologies and tools: a distributed hash table based Orbit calculation and Ant Colony Optimisation (ACO). Chaos Monkey experiments show that two versions of ACO survive random process failure and hence that SD Erlang preserves the Erlang reliability model. While we report measurements on a range of NUMA and cluster architectures, the key scalability experiments are conducted on the Athos cluster with 256 hosts (6,144 cores). Even for programs with no global recovery data to maintain, SD Erlang partitions the network to reduce network traffic and hence improves performance of the Orbit and ACO benchmarks above 80 hosts. ACO measurements show that maintaining global recovery data dramatically limits scalability; however, scalability is recovered by partitioning the recovery data. We exceed the established scalability limits of distributed Erlang, and do not reach the limits of SD Erlang for these benchmarks at this scale.
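The scalability argument about recovery data can be illustrated with a back-of-envelope calculation. The sketch below is not SD Erlang code; it is a language-agnostic illustration (written in Python, with assumed numbers) of why replicating recovery data to every host grows quadratically in coordination messages, whereas replicating only within fixed-size partitions keeps the per-host cost constant.

```python
def replication_messages(hosts, updates_per_host, group_size=None):
    # With global replication every update goes to all other hosts;
    # with partitioning it only goes to the other members of its group.
    replicas = (hosts - 1) if group_size is None else (group_size - 1)
    return hosts * updates_per_host * replicas

# Assumed workload: 100 recovery-data updates per host, groups of 8 hosts.
for hosts in (16, 64, 256):
    global_cost = replication_messages(hosts, updates_per_host=100)
    partitioned = replication_messages(hosts, updates_per_host=100, group_size=8)
    print(f"{hosts:4d} hosts: global={global_cost:>10,}  partitioned={partitioned:>10,}")
```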
Performance assessment of real-time data management on wireless sensor networks
Technological advances in recent years have allowed the maturing of Wireless Sensor Networks (WSNs), which aim at performing environmental monitoring and data collection. This sort of network is composed of hundreds, thousands, or possibly even millions of tiny smart computers known as wireless sensor nodes, which may be battery powered and equipped with sensors, a radio transceiver, a Central Processing Unit (CPU), and some memory. However, due to their small size and low-cost requirements, sensor node resources such as processing power, storage, and especially energy are very limited.
Once the sensors perform their measurements of the environment, the problem of data storage and querying arises. In fact, the sensors have restricted storage capacity, and the ongoing interaction between sensors and environment results in huge amounts of data. Techniques for data storage and querying in WSNs can be based on either external storage or local storage. The external storage, called the warehousing approach, is a centralized system in which the data gathered by the sensors are periodically sent to a central database server where user queries are processed. The local storage, on the other hand, called the distributed approach, exploits the computational capabilities of the sensors, which act as local databases. Data can thus be stored in a central database server as well as in the devices themselves, enabling queries over both.
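As a conceptual contrast between the two approaches above, the following Python sketch uses hypothetical interfaces (central_db, node objects) that are not part of the thesis: the warehousing path evaluates the query against a central server that already holds all readings, while the distributed path pushes the query into the network so that only matching readings travel back over the radio.

```python
def query_warehouse(central_db, predicate):
    # Warehousing approach: all readings were shipped to the server beforehand,
    # so the query is evaluated centrally.
    return [r for r in central_db.all_readings() if predicate(r)]

def query_in_network(nodes, predicate):
    # Distributed approach: each node evaluates the query against its local
    # store; only matching readings are returned, saving radio energy.
    results = []
    for node in nodes:
        results.extend(node.query_local(predicate))
    return results
```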
WSNs are used in a wide variety of applications, which may perform various operations on the collected sensor data. For certain applications, however, such as real-time applications, the sensor data must closely reflect the current state of the targeted environment. Yet the environment changes constantly while the data is collected at discrete moments in time. As such, the collected data has a temporal validity and, as time advances, it becomes less accurate, until it no longer reflects the state of the environment. Thus, applications such as industrial automation, aviation, and sensor networks must query and analyze the data within a bounded time in order to make decisions and react efficiently. In this context, the design of efficient real-time data management solutions is necessary to deal with both time constraints and energy consumption.
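The temporal-validity idea can be made concrete with a small freshness check. The sketch below uses assumed parameters (the validity window and the reading layout are illustrative, not taken from the thesis): a reading carries a timestamp, and it is only used while it still falls inside its validity interval.

```python
import time

VALIDITY_INTERVAL_S = 5.0  # assumed validity window for a sensor reading

def is_fresh(reading_timestamp, now=None):
    # A reading is usable only while its age stays within the validity interval.
    now = time.time() if now is None else now
    return (now - reading_timestamp) <= VALIDITY_INTERVAL_S

reading = {"value": 21.4, "timestamp": time.time() - 3.0}
print(is_fresh(reading["timestamp"]))                          # True: still valid
print(is_fresh(reading["timestamp"], now=time.time() + 10.0))  # False: stale
```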
This thesis studies real-time data management techniques for WSNs. In particular, it focuses on the challenges of handling real-time data storage and querying in WSNs and on efficient real-time data management solutions for WSNs.
First, the main specifications of real-time data management are identified, and the real-time data management solutions for WSNs available in the literature are presented. Second, in order to provide an energy-efficient real-time data management solution, the techniques used to manage data and queries in WSNs based on the distributed paradigm are studied in depth. Indeed, many research works argue that the distributed approach is the most energy-efficient way of managing data and queries in WSNs, rather than the warehousing approach. In addition, this approach can provide quasi-real-time query processing because the most current data is retrieved from the network.
Third, based on these two studies and considering the complexity of developing, testing, and debugging such systems, a model for a simulation framework for real-time database management on WSNs using the distributed approach, together with its implementation, is proposed. This helps to explore various real-time database techniques for WSNs before deployment, saving time and money. Moreover, the proposed model can be improved by adding the simulation of protocols or by porting part of this simulator onto another available simulator. To validate the model, a case study considering both real-time and energy constraints is discussed.
Fourth, a new architecture that combines statistical modeling techniques with the distributed approach, together with a query processing algorithm to optimize real-time user query processing, is proposed. This combination enables a query processing algorithm based on admission control that uses the error tolerance and the probabilistic confidence interval as admission parameters. Experiments based on real-world as well as synthetic data sets demonstrate that the proposed solution optimizes real-time query processing, saving energy while meeting low latency requirements.
Kinesthetics eXtreme: An External Infrastructure for Monitoring Distributed Legacy Systems
Autonomic computing - self-configuring, self-healing, self-optimizing applications, systems and networks - is widely believed to be a promising solution to ever-increasing system complexity and the spiraling costs of human system management as systems scale to global proportions. Most results to date, however, suggest ways to architect new software constructed from the ground up as autonomic systems, whereas in the real world organizations continue to use stovepipe legacy systems and/or build 'systems of systems' that draw on a gamut of new and legacy components involving disparate technologies from numerous vendors. Our goal is to retrofit autonomic computing onto such systems, externally, without any need to understand or modify the code, and in many cases even when it is impossible to recompile. We present a meta-architecture implemented as active middleware infrastructure that explicitly adds autonomic services via an attached feedback loop providing continual monitoring and, as needed, reconfiguration and/or repair. Our lightweight design and separation of concerns enable easy adoption of individual components, as well as the full infrastructure, for use with a large variety of legacy systems, new systems, and systems of systems. We summarize several experiments spanning multiple domains.
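The attached feedback loop can be pictured as a small monitor-analyze-repair cycle running entirely outside the legacy system. The sketch below is conceptual and uses hypothetical probe and effector names; it is not the paper's Kinesthetics eXtreme infrastructure, only an illustration of an externally attached loop that observes a legacy service and triggers repair actions without touching its code.

```python
import time

def probe_legacy_service():
    """Hypothetical externally observable health signal, e.g. scraped from a log or an HTTP ping."""
    return {"alive": True, "latency_ms": 120}  # stub measurement for the sketch

def repair(action):
    # Hypothetical effector: issues a corrective action against the legacy system.
    print(f"effector: issuing '{action}' against the legacy system")

def feedback_loop(threshold_ms=500, period_s=5, iterations=3):
    for _ in range(iterations):
        status = probe_legacy_service()          # monitor
        if not status["alive"]:                  # analyze
            repair("restart")                    # reconfigure / repair
        elif status["latency_ms"] > threshold_ms:
            repair("redeploy-replica")
        time.sleep(period_s)

feedback_loop(period_s=0, iterations=1)
```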