
    A NoSQL-Based Framework for Managing Home Services

    Individuals and companies have an increasing need for services delivered by specialized suppliers in their homes or premises. These services can differ widely and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion, and react to any unexpected situation. Various proposals have been formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economies of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.
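The geo-localized queries such a framework relies on map naturally onto MongoDB's `2dsphere` index and `$near` operator. As a minimal, self-contained sketch (the collection shape, service names, and coordinates below are hypothetical, not taken from the paper), the same proximity filter can be expressed in pure Python with a haversine distance:

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometres between two (lon, lat) points."""
    rlon1, rlat1, rlon2, rlat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = rlon2 - rlon1, rlat2 - rlat1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

# Hypothetical activity documents, shaped like MongoDB GeoJSON records.
activities = [
    {"service": "plumbing",  "location": {"type": "Point", "coordinates": [2.35, 48.86]}},
    {"service": "cleaning",  "location": {"type": "Point", "coordinates": [2.29, 48.87]}},
    {"service": "gardening", "location": {"type": "Point", "coordinates": [4.84, 45.76]}},
]

def nearby(docs, lon, lat, max_km):
    """Pure-Python stand-in for a MongoDB $near/$maxDistance query."""
    return [d["service"] for d in docs
            if haversine_km(lon, lat, *d["location"]["coordinates"]) <= max_km]
```

In MongoDB itself the equivalent query would use `$near` with `$maxDistance` on a `2dsphere`-indexed `location` field, letting the index rather than a full scan do the filtering.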

    Efficient HTTP based I/O on very large datasets for high performance computing with the libdavix library

    Remote data access for data analysis in high performance computing is commonly done with specialized data access protocols and storage systems. These protocols are highly optimized for high throughput on very large datasets, multi-stream transfers, high availability, low latency and efficient parallel I/O. The purpose of this paper is to describe how we have adapted a generic protocol, the Hypertext Transfer Protocol (HTTP), to make it a competitive alternative for high performance I/O and data analysis applications in a global computing grid: the Worldwide LHC Computing Grid. In this work, we first analyze the design differences between the HTTP protocol and the most common high performance I/O protocols, pointing out the main performance weaknesses of HTTP. Then, we describe in detail how we solved these issues. Our solutions have been implemented in a toolkit called davix, available through several recent Linux distributions. Finally, we describe the results of our benchmarks, where we compare the performance of davix against an HPC-specific protocol for a data analysis use case. Comment: Presented at Very Large Data Bases (VLDB) 2014, Hangzhou
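One classic weakness of plain HTTP for this workload is that scattered small reads each cost a full request round trip. A common remedy, and the kind of vector-read optimization a library like davix can apply (this sketch is illustrative, not davix's actual code), is to coalesce nearby reads into a single HTTP multirange request:

```python
def coalesce_ranges(reads, gap=0):
    """Merge (offset, length) read requests that overlap or sit within
    `gap` bytes of each other into fewer inclusive byte ranges."""
    spans = sorted((off, off + length - 1) for off, length in reads)
    merged = [spans[0]]
    for start, end in spans[1:]:
        last_start, last_end = merged[-1]
        if start <= last_end + 1 + gap:
            # Extend the previous range instead of starting a new one.
            merged[-1] = (last_start, max(last_end, end))
        else:
            merged.append((start, end))
    return merged

def range_header(reads, gap=0):
    """Build one HTTP/1.1 Range header covering a whole vector read."""
    spans = coalesce_ranges(reads, gap)
    return "Range: bytes=" + ", ".join(f"{s}-{e}" for s, e in spans)
```

Turning many small GETs into one multirange GET trades a little wasted transfer (the `gap` bytes) for far fewer round trips, which is usually the dominant cost on a wide-area grid link.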

    Internet of Things Cloud: Architecture and Implementation

    The Internet of Things (IoT), which enables common objects to be intelligent and interactive, is considered the next evolution of the Internet. Its pervasiveness and its ability to collect and analyze data that can be converted into information have motivated a plethora of IoT applications. For the successful deployment and management of these applications, cloud computing techniques are indispensable, since they provide high computational capabilities as well as large storage capacity. This paper aims at providing insights about the architecture, implementation and performance of the IoT cloud. Several potential application scenarios of the IoT cloud are studied, and an architecture is discussed regarding the functionality of each component. Moreover, the implementation details of the IoT cloud are presented along with the services that it offers. The main contributions of this paper lie in the combination of Hypertext Transfer Protocol (HTTP) and Message Queuing Telemetry Transport (MQTT) servers to offer IoT services in the architecture of the IoT cloud, with various techniques to guarantee high performance. Finally, experimental results are given in order to demonstrate the service capabilities of the IoT cloud under certain conditions. Comment: 19 pages, 4 figures, IEEE Communications Magazine
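On the MQTT side, publish/subscribe routing in such a cloud hinges on topic filters with `+` (matches exactly one level) and `#` (matches all remaining levels) wildcards. A minimal sketch of the matching rule, independent of any particular broker implementation:

```python
def topic_matches(filter_, topic):
    """Check an MQTT topic name against a subscription filter.
    '+' matches exactly one topic level; '#' matches any remaining levels."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                       # multi-level wildcard: accept the rest
            return True
        if i >= len(t_parts):              # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:   # literal level must match exactly
            return False
    return len(f_parts) == len(t_parts)    # no leftover topic levels
```

A broker applies this test between every incoming publication's topic and each stored subscription filter, so e.g. a subscription to `home/+/temperature` receives publications from every room's temperature sensor.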

    ShoalUp: Development of a meeting Android application

    The project aims to develop an Android application to connect people with similar interests. Users register and can create “meetings” or “events”, informing potential attendees of the description, location, and date and time. Other users can discover these “events”, join them, and talk with the people attending. The developed Android application lets people scan a given radius around the user's position to discover potential meetups, filter them by kind of activity (e.g. sports, parties, trips) and choose to join them or comment on the event. The app implements a client-server architecture, using REST services for communication, and a NoSQL database management system. Every decision taken, from the OS to the selected technology stack, is analyzed, justified, and put in context, describing the available technologies that could fit this purpose and reasoning about the choice. Also, this being a big personal project, several future features are mentioned with their corresponding descriptions and selection rationale.
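The client-server split described above typically surfaces as REST endpoints returning JSON. A hypothetical sketch (the route, field names, and data are illustrative, not taken from the project) of filtering events by kind of activity and serializing the response body:

```python
import json

# Hypothetical event documents as the backend might store them.
events = [
    {"id": 1, "kind": "sports", "title": "5-a-side football", "attendees": ["ana", "luis"]},
    {"id": 2, "kind": "party",  "title": "Rooftop party",     "attendees": ["eva"]},
    {"id": 3, "kind": "sports", "title": "Morning run",       "attendees": []},
]

def filter_by_kind(docs, kind):
    """Handler logic for a route like GET /events?kind=sports (illustrative)."""
    return [e for e in docs if e["kind"] == kind]

def to_json(docs):
    """Serialize the filtered list as a compact REST response body."""
    return json.dumps(docs, separators=(",", ":"))
```

Because the documents are already JSON-shaped, a document-oriented NoSQL store can persist and return them without the object-relational mapping layer a SQL backend would need, which is one common argument for this stack in a mobile client-server app.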

    Performance Evaluation of Structured and Unstructured Data in PIG/HADOOP and MONGO-DB Environments

    The exponential growth of data initially posed difficulties for prominent organizations such as Google, Yahoo, Amazon, Microsoft, Facebook and Twitter. The size of the information that needs to be handled by cloud applications is growing significantly faster than storage capacity, and this growth requires new systems for managing and analyzing data. The term Big Data refers to large volumes of unstructured (or semi-structured) and structured data created by different applications, messages, weblogs, and social networks. Big Data is data whose size, variety and uncertainty require new supplementary models, procedures, algorithms, and research to manage it and extract value and hidden knowledge from it. To process more information efficiently, parallelism is used for analysis, and NoSQL databases have been introduced to handle unstructured and semi-structured information. Hadoop serves Big Data analysis requirements well: it is designed to scale up from a single server to a large cluster of machines with a high degree of fault tolerance. Many business and research institutes, such as Facebook, Yahoo and Google, have an increasing need to import, store, and analyze dynamic semi-structured data and its metadata. The significant growth of semi-structured data inside large web-based organizations has prompted the creation of NoSQL data stores for flexible storage and of MapReduce for scalable parallel analysis. These institutes have assessed, used and modified Hadoop, the most popular open-source implementation of MapReduce, to address the needs of various analytics problems. They also use MongoDB, a document-oriented NoSQL store. However, there is limited understanding of the performance trade-offs of using these two technologies.
This paper evaluates the performance, scalability, and fault tolerance of MongoDB and Hadoop, with the goal of identifying the right software environment for scientific data analytics and research. Lately, an increasing number of organizations have developed diverse kinds of non-relational databases (such as MongoDB, Cassandra, Hypertable, HBase/Hadoop, CouchDB and so on), generally referred to as NoSQL databases. The enormous amount of information generated requires an effective system to analyze the data in various scenarios, under various constraints. In this paper, the objective is to find the break-even point of both Hadoop/Pig and MongoDB and develop a robust environment for data analytics.
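Both systems under comparison build on the MapReduce model: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A minimal pure-Python word-count sketch of the three phases (the input records are illustrative):

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every input record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

records = ["big data", "big cluster", "data cluster data"]
counts = reduce_phase(shuffle(map_phase(records)))
```

Hadoop distributes exactly these phases across a cluster, while MongoDB can express the same aggregation over its document collections; the break-even point between them is what the benchmark in the paper sets out to locate.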