
    State-Of-The-Art Convolutional Neural Networks for Smart Farms: A Review

    Farming has seen a number of technological transformations in the last decade, becoming more industrialized and technology-driven. This includes the use of the Internet of Things (IoT), Cloud Computing (CC), Big Data (BD) and automation to gain better control over the farming process. As the use of these technologies in farms has grown exponentially, producing massive amounts of data, there is a need to develop and use state-of-the-art tools to gain more insight from the data within a reasonable time. In this paper, we present an initial understanding of Convolutional Neural Networks (CNNs), the recent state-of-the-art CNN architectures and their underlying complexities. We then propose a classification taxonomy tailored for agricultural applications of CNNs. Finally, we present a comprehensive review of research dedicated to applications of state-of-the-art CNNs in agricultural production systems. Our contribution is twofold. First, for end users of agricultural deep learning tools, our benchmarking findings can serve as a guide to selecting an appropriate architecture. Second, for developers of agricultural deep learning software, our in-depth analysis explains the complexities of state-of-the-art CNNs and points out possible future directions to further optimize running performance.
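The architectural complexities such reviews analyze are commonly summarized by two proxies: parameter count and multiply-accumulate (MAC) operations. A minimal sketch of how these are computed for a single convolution layer; the layer shape below is illustrative (a typical first VGG-style layer), not a figure taken from the paper:

```python
# Complexity proxies for one 2-D convolution layer: parameters and MACs.
# These are the standard quantities compared when choosing a CNN for a
# resource-constrained deployment such as an on-farm edge device.

def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Parameters and multiply-accumulates of a k x k convolution
    mapping c_in channels to c_out channels over an h_out x w_out output."""
    params = c_out * (c_in * k * k + 1)          # weights plus one bias per filter
    macs = c_in * k * k * c_out * h_out * w_out  # one MAC per weight per output pixel
    return params, macs

# Example: 3x3 kernels, 3 input channels (RGB), 64 output channels,
# 'same' padding on a 224x224 image.
params, macs = conv2d_cost(c_in=3, c_out=64, k=3, h_out=224, w_out=224)
print(params, macs)  # 1792 parameters, 86,704,128 MACs (~86.7M)
```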

    Optimum Selection of DNN Model and Framework for Edge Inference

    This paper describes a methodology to select the optimum combination of deep neural network and software framework for visual inference on embedded systems. As a first step, benchmarking is required. In particular, we have benchmarked six popular network models running on four deep learning frameworks implemented on a low-cost embedded platform. Three key performance metrics have been measured and compared across the resulting 24 combinations: accuracy, throughput, and power consumption. Then, application-level specifications come into play. We propose a figure of merit enabling the evaluation of each network/framework pair in terms of the relative importance of the aforementioned metrics for a targeted application. We prove through numerical analysis and meaningful graphical representations that only a reduced subset of the combinations must actually be considered for real deployment. Our approach can be extended to other networks, frameworks, and performance parameters, thus supporting system-level design decisions in the ever-changing ecosystem of embedded deep learning technology.
    Ministerio de Economía y Competitividad (TEC2015-66878-C3-1-R), Junta de Andalucía (TIC 2338-2013), European Union Horizon 2020 (Grant 765866).
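The paper's exact figure of merit is not reproduced in this abstract. As an illustration only, one plausible formulation is a weighted sum of min-max-normalized metrics, with power consumption inverted so that higher is always better; the weights and the sample accuracy/throughput/power values below are invented for the example, not measurements from the paper:

```python
# Hypothetical figure of merit for ranking network/framework pairs by
# accuracy, throughput (fps) and power draw (W). Each metric is min-max
# normalized across the candidate set; power is inverted (lower is better).

def rank_pairs(candidates, weights=(0.5, 0.3, 0.2)):
    """candidates: dict name -> (accuracy, throughput_fps, power_w)."""
    accs = [v[0] for v in candidates.values()]
    thrs = [v[1] for v in candidates.values()]
    pows = [1.0 / v[2] for v in candidates.values()]  # invert: less power is better

    def norm(x, xs):
        lo, hi = min(xs), max(xs)
        return (x - lo) / (hi - lo) if hi > lo else 1.0

    w_a, w_t, w_p = weights
    scores = {
        name: w_a * norm(a, accs) + w_t * norm(t, thrs) + w_p * norm(1.0 / p, pows)
        for name, (a, t, p) in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented sample measurements for three hypothetical pairs.
pairs = {
    "MobileNet/TensorFlow": (0.70, 12.0, 4.5),
    "SqueezeNet/Caffe":     (0.58, 20.0, 3.8),
    "ResNet-50/PyTorch":    (0.76,  3.0, 6.0),
}
ranking = rank_pairs(pairs)
```

With these weights the balanced pair ranks first even though it wins no single metric outright, which is the kind of trade-off such a figure of merit is designed to expose.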

    Introduction to Deep Learning: a practical point of view

    [EN] Deep Learning is currently used for numerous Artificial Intelligence applications, especially in the computer vision field for image classification and recognition tasks. Thanks to its increasing popularity, several tools have been created to take advantage of the potential benefits of this new technology. Although a wide range of benchmarks already offer evaluations of hardware architectures and Deep Learning software tools, these projects do not usually deal with specific performance aspects, and they do not consider a complete set of popular models and datasets at the same time. Moreover, valuable metrics, such as GPU memory and power usage, are not typically measured and efficiently compared for a deeper analysis. This report aims to provide a complete discussion of the recent progress of Deep Learning techniques, by evaluating various hardware platforms and by highlighting the key trends of the main Deep Learning frameworks. It also reviews the most popular development tools that allow users to get started in this field, and it underlines important benchmarking metrics and designs that should be used for the evaluation of the increasing number of Deep Learning projects. Furthermore, the data obtained from the comparison and the testing results are presented in order to assess the performance of the Deep Learning environments examined.
    The following points are covered in depth: a general Deep Learning study to acquire the main state-of-the-art concepts of the subject; an attentive examination of benchmarking methods and standards for the evaluation of intelligent environments; the approach taken and the project realised to carry out experiments and tests; and considerations extracted and discussed from the obtained results.
    [ES, translated] Machine learning systems have become popular in recent times thanks to the availability of compute accelerators such as graphics cards (Graphics Processing Units, or GPUs). These systems use artificial intelligence techniques known as Deep Learning, which are based on neural networks. Such networks, which require large amounts of computing power, are not new; they have been known for many years, but their use has accelerated recently thanks to GPUs. These learning systems allow computers to perform tasks such as automatic image recognition. On a more industrial level, they are, for example, one of the foundations of the self-driving cars so present in the news in recent months. In short, Deep Learning technology together with the massive use of CUDA-compatible GPUs has enabled an important qualitative leap in many areas. However, given its recent emergence, and given that the content taught in the four years of the degree must be selected very carefully, the current curriculum does not cover these technologies in the depth that some students would like. This final-year project therefore proposes a practical exploration of these technologies: the student will install and use several Deep Learning environments and then evaluate their performance on different GPU-based hardware systems, using a cluster equipped with latest-generation GPUs whose nodes each include several GPUs.
    Piccione, A. (2018). Introduction to Deep Learning: a practical point of view. http://hdl.handle.net/10251/107792
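One of the benchmarking metrics such evaluations rely on, inference throughput, can be sketched with a simple timing harness. The `run_inference` callable and the dummy workload below are placeholders standing in for any framework's forward pass, not code from the thesis:

```python
# Minimal throughput measurement: time repeated batch inferences and
# report images per second, excluding warm-up iterations.
import time

def measure_throughput(run_inference, batch_size, n_batches=10, warmup=2):
    """Return images/second. Warm-up batches are excluded because the
    first iterations often carry one-off costs (JIT compilation,
    memory allocation, GPU context setup)."""
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(n_batches):
        run_inference()
    elapsed = time.perf_counter() - start
    return (n_batches * batch_size) / elapsed

# Dummy CPU workload standing in for a real model's forward pass.
fps = measure_throughput(lambda: sum(i * i for i in range(10_000)), batch_size=32)
```

The same harness generalizes to GPU memory or power sampling by polling the relevant counters inside the timed loop.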

    ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments

    This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system for knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. That also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data-sets and framework to improve the design and deployment of Big Data applications.
    This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051.
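As an illustration of the kind of predictive modeling a repository of observed executions enables (this is not the actual ALOJA-ML code), a k-nearest-neighbour estimate of execution time for an unseen configuration might look like the following; the configuration features and runtimes are invented:

```python
# Predict a Hadoop job's runtime for a new configuration as the mean
# runtime of the k most similar previously observed configurations.

def predict_runtime(history, config, k=3):
    """history: list of (config_vector, runtime_s) pairs; config: tuple of
    numeric settings, e.g. (mappers, memory_gb, disks). Similarity is
    Euclidean distance over the raw feature vectors."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda cr: dist(cr[0], config))[:k]
    return sum(r for _, r in nearest) / len(nearest)

# Four observed executions: (mappers, memory_gb, disks) -> seconds.
runs = [((4, 8, 1), 620.0), ((8, 16, 2), 410.0),
        ((16, 32, 4), 300.0), ((8, 8, 1), 550.0)]
est = predict_runtime(runs, (8, 16, 1), k=2)  # averages the 2 closest runs
```

A production system would normalize features and cross-validate k, but the principle of forecasting new configurations from observed executions is the same.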

    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers, representing current work in the community organized across four process axes of traceability practice. The sessions covered topics including Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of Traceability, and Traceability Datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community, and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of Software and Systems traceability researchers as we move forward into the next decade of research.
