2,961 research outputs found

    Structured Light-Based 3D Reconstruction System for Plants.

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). The paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
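
    As an illustration of the point-cloud registration step such a pipeline relies on, the sketch below aligns two partial scans taken from different viewing angles using Open3D's point-to-plane ICP. It is a minimal example, not the paper's algorithm; the file names (view_0.ply, view_1.ply), voxel size and correspondence distance are assumed placeholder values.

import open3d as o3d
import numpy as np

def register_views(source_path, target_path, voxel=0.005, max_dist=0.02):
    # Load two partial point clouds captured from different viewing angles.
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)

    # Downsample and estimate normals so point-to-plane ICP can be used.
    source_down = source.voxel_down_sample(voxel)
    target_down = target.voxel_down_sample(voxel)
    for pcd in (source_down, target_down):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 4, max_nn=30))

    # Refine an initial guess (identity here) with point-to-plane ICP.
    result = o3d.pipelines.registration.registration_icp(
        source_down, target_down, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

if __name__ == "__main__":
    T = register_views("view_0.ply", "view_1.ply")
    print("Estimated rigid transform:\n", T)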

    Hadoop Performance Analysis Model with Deep Data Locality

    Background: Hadoop has become the base framework for big data systems, built on the simple premise that moving computation is cheaper than moving data. Hadoop increases data locality in the Hadoop Distributed File System (HDFS) to improve system performance: network traffic among the nodes of a big data system is reduced by increasing the share of data-local tasks on each machine. Previous research increased data locality in only one of the MapReduce stages to improve Hadoop performance, and there is currently no mathematical performance model for data locality in Hadoop. Methods: This study builds a Hadoop performance analysis model with data locality that covers the entire MapReduce process. The paper explains the data locality concept for both the map stage and the shuffle stage, and shows how to apply the analysis model to improve the performance of a Hadoop system through deep data locality. Results: The benefit of deep data locality was demonstrated through three tests: a simulation-based test, a cloud test and a physical test. Across these tests, the authors improved Hadoop performance by over 34% by using deep data locality. Conclusions: Deep data locality improved Hadoop performance by reducing data movement in HDFS.
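
    To make the locality argument concrete, the following toy Python model estimates map-stage time from the mix of node-local, rack-local and remote tasks. It is only an illustrative sketch; the locality fractions, per-task costs and cluster size are invented example numbers, not the authors' model or measurements.

# Illustrative sketch of a locality-aware MapReduce time estimate.
# The locality fractions and per-task costs below are made-up example
# numbers, not the model or measurements from the paper.

def map_stage_time(n_tasks, slots, locality_mix, task_cost):
    """Estimate map-stage time as waves of parallel tasks.

    locality_mix: fraction of tasks that are node-local, rack-local, remote.
    task_cost:    average task duration (seconds) for each locality class.
    """
    avg_task = sum(f * c for f, c in zip(locality_mix, task_cost))
    waves = -(-n_tasks // slots)          # ceiling division: number of task waves
    return waves * avg_task

# Baseline locality vs. improved ("deeper") locality on the same cluster.
baseline = map_stage_time(400, 64, (0.60, 0.30, 0.10), (10.0, 14.0, 20.0))
improved = map_stage_time(400, 64, (0.90, 0.08, 0.02), (10.0, 14.0, 20.0))
print(f"baseline {baseline:.0f}s, improved {improved:.0f}s, "
      f"speedup {baseline / improved:.2f}x")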

    Deep Learning Relevance: Creating Relevant Information (as Opposed to Retrieving it)

    What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing information relevant to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four word clouds (of three existing relevant documents and our deep-learned synthetic document). The synthetic document is, on average, ranked the most relevant of all. Comment: Neu-IR '16 SIGIR Workshop on Neural Information Retrieval, July 21, 2016, Pisa, Italy.
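
    A minimal sketch of the general idea (not the authors' architecture or training setup): a character-level LSTM is fit to the text of documents already judged relevant to a query and then sampled to emit a synthetic document. The placeholder corpus, model size, training steps and sampling length are all assumptions.

import torch
import torch.nn as nn

# Placeholder standing in for the concatenated text of known-relevant documents.
relevant_docs = ["...concatenated text of known-relevant documents..."]
text = " ".join(relevant_docs * 50)
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
    def forward(self, x, state=None):
        h, state = self.rnn(self.emb(x), state)
        return self.out(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
seq = 64
for step in range(200):                      # toy training loop
    i = torch.randint(0, len(data) - seq - 1, (1,)).item()
    x = data[i:i + seq].unsqueeze(0)
    y = data[i + 1:i + seq + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample a short synthetic "document" character by character.
x, state, out = data[:1].unsqueeze(0), None, []
for _ in range(300):
    logits, state = model(x, state)
    p = torch.softmax(logits[0, -1], dim=-1)
    x = torch.multinomial(p, 1).unsqueeze(0)
    out.append(itos[x.item()])
print("".join(out))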

    Evaluating the more suitable ISM frequency band for IoT-based smart grids: a quantitative study of 915 MHz vs. 2400 MHz

    IoT has begun to be employed pervasively in industrial environments and critical infrastructures thanks to its positive impact on performance and efficiency. Among these environments, the Smart Grid (SG) stands out as the perfect host for this technology, mainly due to its potential to become the motor of the rest of the electrically-dependent infrastructures. To make this SG-oriented IoT cost-effective, most deployments employ unlicensed ISM bands, specifically the 2400 MHz band, due to its greater communication bandwidth in comparison with lower bands. This band has been used extensively for years by Wireless Sensor Networks (WSN) and Mobile Ad-hoc Networks (MANET), from which the IoT technologically inherits. However, this work questions and evaluates the suitability of such a "default" communication band in SG environments, compared with the 915 MHz ISM band. A comprehensive quantitative comparison of these bands has been carried out in terms of power consumption, average network delay, and packet reception rate. To enable such a study, a dual-band propagation model specifically designed for the SG has been derived, tested, and incorporated into the well-known TOSSIM simulator. Simulation results reveal that only in the absence of other 2400 MHz interfering devices (such as WiFi or Bluetooth), or in small networks, is the 2400 MHz band the best option. In any other case, SG-oriented IoT performs quantitatively better when operating in the 915 MHz band. This research was supported by the MINECO/FEDER project grants TEC2013-47016-C2-2-R (COINS) and TEC2016-76465-C2-1-R (AIM). The authors would like to thank Juan Salvador Perez Madrid and Domingo Meca (part of the Iberdrola staff) for the support provided during the realization of this work. Ruben M. Sandoval also thanks the Spanish MICINN for an FPU (REF FPU14/03424) pre-doctoral fellowship.
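
    For intuition about why the lower band can win on link quality, the sketch below compares the two bands with a generic log-distance path-loss model and a simple link margin. The path-loss exponent, transmit power and receiver sensitivity are assumed example figures, not the SG-specific propagation model derived in the paper.

import math

def path_loss_db(freq_mhz, distance_m, exponent=2.7, d0=1.0):
    # Free-space loss at the reference distance d0 (in metres),
    # then log-distance attenuation beyond it.
    fspl_d0 = 20 * math.log10(d0) + 20 * math.log10(freq_mhz) - 27.55
    return fspl_d0 + 10 * exponent * math.log10(distance_m / d0)

def link_margin_db(freq_mhz, distance_m, tx_dbm=0.0, sensitivity_dbm=-95.0):
    # Margin above receiver sensitivity; negative means the link likely fails.
    return tx_dbm - path_loss_db(freq_mhz, distance_m) - sensitivity_dbm

for d in (10, 50, 100, 200):
    m915 = link_margin_db(915, d)
    m2400 = link_margin_db(2400, d)
    print(f"{d:>4} m: margin 915 MHz = {m915:6.1f} dB, "
          f"2400 MHz = {m2400:6.1f} dB")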

    Addressing the Challenges in Federating Edge Resources

    This book chapter considers how Edge deployments can be brought to bear in a global context by federating them across multiple geographic regions to create a global Edge-based fabric that decentralizes data center computation. This is currently impractical, not only because of technical challenges, but also because of social, legal and geopolitical issues. In this chapter, we discuss two key challenges in federating Edge deployments: networking and management. Additionally, we consider resource and modeling challenges that will need to be addressed for a federated Edge. Comment: Book chapter accepted to Fog and Edge Computing: Principles and Paradigms; Editors Buyya, Srirama.

    Big Data Analytics for QoS Prediction Through Probabilistic Model Checking

    As competitiveness increases, being able to guarantee the QoS of delivered services is key for business success. The ability to continuously monitor the workflow providing a service and to recognize breaches of the agreed QoS level in a timely manner is thus of paramount importance. The ideal condition would be the ability to anticipate, and thus predict, a breach and act to avoid it, or at least to mitigate its effects. In this paper we propose a model-checking-based approach to predict the QoS of a formally described process. The continuous model checking is enabled by the use of a parametrized model of the monitored system, where the actual values of the parameters are continuously evaluated and updated by means of big data tools. The paper also describes a prototype implementation of the approach and shows its usage in a case study. Comment: EDCC-2014, BIG4CIP-2014, Big Data Analytics, QoS Prediction, Model Checking, SLA compliance monitoring.
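
    As a rough illustration of the idea (not the paper's model checker or its models), the sketch below encodes a tiny parametric discrete-time Markov chain whose failure probability is re-estimated from recent monitoring data, and then computes the probability of reaching a "breach" state within a bounded number of steps. The states and probabilities are illustrative assumptions.

import numpy as np

def transition_matrix(p_fail):
    # States: 0 = OK, 1 = degraded, 2 = QoS breach (absorbing).
    return np.array([
        [1.0 - p_fail, p_fail,        0.0],
        [0.4,          0.6 - p_fail,  p_fail],
        [0.0,          0.0,           1.0],
    ])

def breach_probability(p_fail, steps):
    # Propagate the state distribution for a bounded number of steps.
    P = transition_matrix(p_fail)
    dist = np.array([1.0, 0.0, 0.0])      # start in the OK state
    for _ in range(steps):
        dist = dist @ P
    return dist[2]

# Model parameter continuously re-estimated from monitored events
# (the counts here are placeholders for values supplied by big data tools).
recent_failures, recent_requests = 12, 1000
p_fail = recent_failures / recent_requests
print(f"P(breach within 50 steps) = {breach_probability(p_fail, 50):.3f}")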