14 research outputs found

    Clustering big urban data sets

    Cities are producing and collecting massive amounts of data from various sources such as transportation networks, the energy sector, smart homes, tax records, surveys, LIDAR data, mobile phone sensors, etc. All of the aforementioned data, when connected via the Internet, fall under the Internet of Things (IoT) category. To use such a large volume of data for potential scientific-computing benefits, it is important to store and analyze this urban data using efficient computing resources and algorithms. However, this can be problematic due to many challenges. This article explores some of these challenges and tests the performance of two partitional algorithms for clustering big urban datasets: K-Means and Fuzzy c-Means (FCM). Clustering big urban data into a compact format that represents the information of the whole dataset can help researchers deal with the reorganized data much more efficiently. Our experiments conclude that FCM outperformed K-Means on this type of dataset; however, the latter is lighter on hardware utilization.
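    The two partitional algorithms compared above differ mainly in their assignment step: K-Means makes hard assignments, while fuzzy c-means maintains a soft membership matrix. A minimal NumPy sketch of both (an illustration of the algorithms only, not the paper's big-data implementation) might look like:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Hard partitional clustering: each point belongs to exactly one cluster."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest centroid
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        # recompute centroids; keep the old one if a cluster empties out
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels, centroids

def fcm(X, k, m=2.0, iters=50, seed=0):
    """Fuzzy c-means: each point gets a degree of membership in every cluster."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(1, keepdims=True)                      # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centroids = (W.T @ X) / W.sum(0)[:, None]
        d = np.sqrt(((X[:, None] - centroids) ** 2).sum(-1)) + 1e-12
        inv = d ** (-2.0 / (m - 1))                   # standard FCM membership update
        U = inv / inv.sum(1, keepdims=True)
    return U, centroids
```

    The fuzzifier `m` controls how soft the memberships are; as `m` approaches 1, FCM degenerates into hard K-Means-style assignments.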

    A Scalable Machine Learning Online Service for Big Data Real-Time Analysis

    Proceedings of: IEEE Symposium Series on Computational Intelligence (SSCI 2014), Orlando, FL, USA, December 9-12, 2014. This work describes a proposal for developing and testing a scalable machine learning architecture able to provide real-time predictions or analytics as a service over domain-independent big data, working on top of the Hadoop ecosystem and exposing real-time analytics as a service through a RESTful API. Systems implementing this architecture could provide companies with on-demand tools facilitating the tasks of storing, analyzing, understanding, and reacting to their data, either in batch or stream fashion; they could become a valuable asset for improving business performance and a key market differentiator in this fast-paced environment. To validate the proposed architecture, two systems are developed, each providing classical machine-learning services in a different domain: the first is a recommender system for web advertising, while the second is a prediction system that learns from gamers' behavior and tries to predict future events such as purchases or churn. An evaluation carried out on these systems shows that both services provide fast responses even under many concurrent requests and, in the particular case of the second system, that the computed predictions significantly outperform random guessing. This research work is part of the Memento Data Analysis project, co-funded by the Spanish Ministry of Industry, Energy and Tourism with identifier TSI-020601-2012-99.
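    Reduced to its core, the "analytics as a service" idea above is a model behind an HTTP endpoint. A minimal stdlib-only sketch follows; the `WEIGHTS` table and linear scorer are hypothetical stand-ins for a trained model, not the paper's recommender or churn predictor:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for a trained model: prediction = weighted sum of features.
WEIGHTS = {"sessions": 0.3, "purchases": 0.5}

def predict(features):
    """Score a feature dict with the (stand-in) linear model."""
    return sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the JSON feature payload and return a JSON prediction
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)
        response = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=8000):
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()
```

    A production deployment of the kind the paper describes would sit this endpoint in front of models trained on the Hadoop ecosystem and handle concurrency, which this sketch deliberately omits.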

    Performance characterization and analysis for Hadoop K-means iteration


    A Cloud-Based Framework for Machine Learning Workloads and Applications

    In this paper we propose a distributed architecture to provide machine learning practitioners with a set of tools and cloud services that cover the whole machine learning development cycle: from model creation, training, validation, and testing to model serving as a service, sharing, and publication. In this respect, the DEEP-Hybrid-DataCloud framework allows transparent access to existing e-Infrastructures, effectively exploiting distributed resources for the most compute-intensive tasks in the machine learning development cycle. Moreover, it provides scientists with a set of Cloud-oriented services to make their models publicly available, adopting a serverless architecture and a DevOps approach that allow the developed models to be easily shared, published, and deployed. This work was supported by the project DEEP-Hybrid-DataCloud "Designing and Enabling E-infrastructures for intensive Processing in a Hybrid DataCloud", which has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant 777435. Lopez Garcia, A.; Marco De Lucas, J.; Antonacci, M.; Zu Castell, W.; David, M.; Hardt, M.; Lloret Iglesias, L.; et al. (2020). A Cloud-Based Framework for Machine Learning Workloads and Applications. IEEE Access, 8:18681-18692. https://doi.org/10.1109/ACCESS.2020.2964386

    Performance Evaluation of an Independent Time Optimized Infrastructure for Big Data Analytics that Maintains Symmetry

    Traditional data analytics tools are designed to deal with asymmetrical types of data, i.e., structured, semi-structured, and unstructured. The diverse behavior of data produced by different sources requires the selection of suitable tools. The restriction of resources for dealing with a huge volume of data is a challenge for these tools, affecting their execution time. Therefore, in the present paper we propose a time-optimization model that shares a common HDFS (Hadoop Distributed File System) between three Name-nodes (Master Nodes), three Data-nodes, and one Client-node. These nodes work inside a DeMilitarized Zone (DMZ) to maintain symmetry. Machine learning jobs are explored from an independent platform to realize this model. On the first node (Name-node 1), Mahout is installed with all machine learning libraries through the Maven repositories. On the second node (Name-node 2), R, connected to Hadoop, runs through the Shiny server. Splunk is configured on the third node (Name-node 3) and is used to analyze the logs. Experiments are performed on the proposed and legacy models to evaluate response time, execution time, and throughput. K-means clustering, Naive Bayes, and recommender algorithms are run on three different data sets, i.e., a movie-rating, a newsgroup, and a spam-SMS data set, representing structured, semi-structured, and unstructured data, respectively. The selection of tools defines data independence; e.g., the newsgroup data set runs on Mahout, as the other tools are not compatible with this data. The outcomes establish the hypothesis that our model overcomes the resource limitations of the legacy model. In addition, the proposed model can process any kind of algorithm on different data sets residing in their native formats.
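    As an illustration of the kind of job run in the experiments above, a minimal multinomial Naive Bayes text classifier with Laplace smoothing can be written in a few lines; this is a plain-Python sketch of the algorithm, not the Mahout implementation used in the paper, and the toy spam/ham examples below are invented:

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial Naive Bayes text classifier with Laplace smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        # log class priors from label frequencies
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.word_counts[c].update(doc.lower().split())
        self.vocab = {w for counts in self.word_counts.values() for w in counts}
        self.totals = {c: sum(self.word_counts[c].values()) for c in self.classes}
        return self

    def predict(self, doc):
        def score(c):
            # log posterior up to a constant; +1 is the Laplace smoothing term
            return self.prior[c] + sum(
                math.log((self.word_counts[c][w] + 1) / (self.totals[c] + len(self.vocab)))
                for w in doc.lower().split())
        return max(self.classes, key=score)
```

    Usage on a toy corpus: train on a handful of labelled messages, then classify a new one with `NaiveBayes().fit(docs, labels).predict("free prize")`.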

    Architecture and implementation of a scalable sensor data storage and analysis system using cloud computing and big data technologies

    Sensors are becoming ubiquitous. From almost any type of industrial application to intelligent vehicles, smart city applications, and healthcare applications, we see steady growth in the usage of various types of sensors. The rate of increase in the amount of data produced by these sensors is much more dramatic, since sensors usually produce data continuously. It becomes crucial for these data to be stored for future reference and analyzed to find valuable information, such as fault-diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open-source technologies and runs on a cluster of virtual servers. We use GPS sensors as the data source and run machine-learning algorithms for data analysis.

    Performance analysis and optimization of a distributed processing framework for data mining accelerated with graphics processing units

    In this age, a huge amount of data is generated every day by human interactions with services. Discovering the patterns in these data is very important for making business decisions. Due to its size, this data requires very intensive computation power. Thus, many frameworks have been developed using Central Processing Unit (CPU) implementations to perform this computation, for instance distributed and parallel programming models such as Google's MapReduce. On the other hand, over the last half decade, researchers have started using Graphics Processing Unit (GPU) performance to process these huge data sets. Unlike a CPU, a GPU can execute many tasks in parallel. To measure GPU performance, EURA NOVA implemented two data mining algorithms (K-Means and Naive Bayes) in a framework that executes tasks in a distributed manner, taking the available GPU power in each node into account. Even though the framework was successfully implemented, its performance was very poor compared to a parallel CPU framework, showing that it did not use the GPU effectively; this contradicts the fact that a GPU can execute many tasks in parallel and should thus be faster than a CPU implementation. As a result, this research started with the objective of improving this performance, specifically the performance of the K-Means implementation. We also added a new data mining implementation, Expectation Maximization, to the framework, taking advantage of each GPU node and the distribution across nodes. Furthermore, we address some good practices for implementing data mining on a GPU starting from a sequential design. General-purpose GPU computing is still at a development stage; a well-known library is Thrust, which we used to achieve the above objectives. Finally, we evaluated our solutions by comparing them with existing CPU frameworks. The results show that we improved the K-Means performance by more than 130x and plugged the Expectation Maximization implementation into EURA NOVA's framework.
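    The Expectation Maximization algorithm mentioned above alternates soft assignments (E-step) with weighted parameter updates (M-step). A minimal NumPy sketch for a one-dimensional Gaussian mixture (a sequential CPU illustration of the algorithm, not the Thrust/GPU implementation) might be:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """EM for a 1-D Gaussian mixture model with k components."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means over the data
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = (pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
                / (sigma * np.sqrt(2 * np.pi)))
        resp = dens / dens.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities
        nk = resp.sum(0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(0) / nk)
    return pi, mu, sigma
```

    The E-step is embarrassingly parallel over data points, which is what makes the algorithm a natural fit for the per-node GPU execution the framework targets.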