
    ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments

    This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance and tuning data. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of the cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment in which costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters, and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of the modeling procedures to allow analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system for knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behavior, predicting execution times for new configurations and hardware choices. This also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data-sets and framework to improve the design and deployment of Big Data applications.

    This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051.

    Peer Reviewed. Postprint (published version).
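    The following sketch illustrates the kind of modeling ALOJA-ML automates: learning to predict execution time from configuration and hardware features. It is a minimal example only; the file name, column names, and the choice of a random forest are assumptions, not ALOJA's actual schema or algorithm.

```python
# Minimal sketch of execution-time prediction from benchmark records.
# The file name and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("aloja_executions.csv")  # hypothetical repository export

features = ["mappers", "reducers", "io_buffer_mb", "disk_type", "vm_size"]
X = pd.get_dummies(df[features])  # one-hot encode categorical choices
y = df["exec_time_s"]             # observed execution time in seconds

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.1f} s")
```

    A model of this kind can then rank untried configurations by predicted time, which is how model-based benchmark guidance and anomaly detection (flagging runs that deviate far from the prediction) become possible.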

    The Hierarchic treatment of marine ecological information from spatial networks of benthic platforms

    Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for improving our understanding of how marine ecosystems function and for conserving their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at spatial scales capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is undergoing an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at the appropriate spatiotemporal scales. At this stage, one of the main obstacles to the effective application of these technologies is the processing, storage, and treatment of the complex ecological information acquired. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of the biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline for ecological data acquisition and processing, broken into steps amenable to automation. We also give an example of computing population biomass, community richness, and biodiversity (as indicators of ecosystem functioning) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for such automated data processing at the level of cyber-infrastructures, including sensor calibration and control, data banking, and ingestion into large data portals.

    Peer Reviewed. Postprint (published version).
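    As a hedged illustration of the indicator-computation step mentioned above, the snippet below derives species richness and the Shannon diversity index from per-survey organism counts. The counts and taxon names are invented; in the pipeline described here they would come from automated classification of the crawler's imagery.

```python
# Sketch: biodiversity indicators from image-derived organism counts.
import math

counts = {"taxon_a": 34, "taxon_b": 12, "taxon_c": 5, "taxon_d": 2}  # illustrative

richness = len(counts)  # number of taxa observed
total = sum(counts.values())
# Shannon index H' = -sum(p_i * ln(p_i)) over the taxa proportions p_i.
shannon = -sum((n / total) * math.log(n / total) for n in counts.values())

print(f"Richness: {richness}, Shannon H': {shannon:.3f}")
```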

    A 5G-based eHealth monitoring and emergency response system: experience and lessons learned

    5G is being deployed in major cities across the globe. Although the new 5G air interface will bring numerous benefits, 5G is more than an evolution of radio technology. New concepts, such as the application of network softwarization and programmability paradigms to the overall network design, the reduced latency promised by edge computing, and network slicing, to cite just a few, will open the door to new vertical-specific services, some of them even capable of saving lives. This article discusses the implementation and validation of an eHealth service tailored to the Emergency Services of the Madrid Municipality. This new vertical application makes use of the novel characteristics of 5G, enabling dynamic instantiation of services at the edge, federation of domains, and execution of real on-the-field augmented reality. The article explains the design of the use case and its real-life implementation and demonstration in collaboration with the Madrid emergency response team. The major outcome of this work is a real-life proof of concept of this system, which can cut the time required to respond to an emergency by minutes and supports more efficient triage, increasing the chances of saving lives.

    This work was supported in part by the EU H2020 5GROWTH Project under Grant 856709, in part by the Madrid Government (Comunidad de Madrid-Spain) through the Multiannual Agreement with University Carlos III of Madrid (UC3M) in the line of Excellence of University Professors (Grant EPUC3M21), and in part by the Regional Program of Research and Technological Innovation (V PRICIT).
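    To make "dynamic instantiation of services at the edge" concrete, here is an illustrative sketch of one placement decision such a system must take: choosing the edge site with the lowest measured latency that still has capacity. The site names, fields, and selection rule are assumptions for illustration, not the 5GROWTH platform's actual orchestration API.

```python
# Sketch: latency-driven placement of an edge service instance.
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    latency_ms: float  # measured RTT from the incident location
    free_cpus: int     # remaining compute capacity

def pick_site(sites: list[EdgeSite], cpus_needed: int) -> EdgeSite:
    """Return the lowest-latency site that can still host the service."""
    candidates = [s for s in sites if s.free_cpus >= cpus_needed]
    if not candidates:
        raise RuntimeError("no edge site can host the service")
    return min(candidates, key=lambda s: s.latency_ms)

sites = [EdgeSite("edge-1", 4.2, 8),
         EdgeSite("edge-2", 6.8, 16),
         EdgeSite("regional-dc", 18.5, 64)]
print(pick_site(sites, cpus_needed=4).name)  # -> edge-1
```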

    Optimization and Prediction Techniques for Self-Healing and Self-Learning Applications in a Trustworthy Cloud Continuum

    The current IT market is increasingly dominated by the "cloud continuum". In the "traditional" cloud, computing resources are typically homogeneous in order to facilitate economies of scale. In edge computing, by contrast, computational resources are widely diverse, commonly scarce, and must be managed very efficiently due to battery constraints or other limitations. A combination of resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) is needed, forming a trusted cloud continuum. This requires novel solutions for the creation, optimization, management, and automatic operation of such infrastructure through new approaches such as infrastructure as code (IaC). In this paper, we analyze how artificial intelligence (AI)-based techniques and tools can enhance the operation of complex applications and support the broad, multi-stage heterogeneity of the infrastructural layer in the "computing continuum" through IaC optimization, IaC self-learning, and IaC self-healing. To this end, the presented work proposes a set of tools, methods, and techniques that let application operators seamlessly select, combine, configure, and adapt computation resources along the data path, supporting the complete service lifecycle: (1) optimized distributed application deployment over heterogeneous computing resources; (2) real-time monitoring of execution platforms, including continuous control and trust of the infrastructural services; (3) application deployment and adaptation while optimizing the execution; and (4) application self-recovery to avoid compromising situations that may lead to an unexpected failure.

    This research was funded by the European project PIACERE (Horizon 2020 research and innovation programme, grant agreement No 101000162).
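    Point (4) above, application self-recovery, can be pictured as a small control loop: probe the application's health and re-apply its IaC description after repeated failures. The sketch below assumes a plain HTTP health endpoint and a placeholder redeploy step; PIACERE's actual monitoring components and IaC executor are not specified here.

```python
# Sketch: a self-healing loop that redeploys after repeated failed probes.
import time
import urllib.request

def check_health(url: str) -> bool:
    """Probe a health endpoint; any network error counts as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def redeploy(service: str) -> None:
    # Placeholder: re-apply the service's IaC description here.
    print(f"redeploying {service} ...")

def self_healing_loop(service: str, health_url: str,
                      interval_s: float = 30.0) -> None:
    failures = 0
    while True:
        if check_health(health_url):
            failures = 0
        else:
            failures += 1
            if failures >= 3:  # tolerate transient blips before acting
                redeploy(service)
                failures = 0
        time.sleep(interval_s)
```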

    The Many Faces of Edge Intelligence

    Edge Intelligence (EI) is an emerging computing and communication paradigm that enables Artificial Intelligence (AI) functionality at the network edge. In this article, we present EI as an emerging and important field of research, discuss the state of the art, analyze research gaps, and highlight key research challenges, with the objective of serving as a catalyst for research and innovation in this emerging area. We take a multidisciplinary view to reflect on current research in AI, edge computing, and communication technologies, and we analyze how EI relates to existing research in these fields. We also introduce representative examples of application areas that benefit from, or even demand, the use of EI.

    Peer Reviewed.

    Exploring digital, augmented, and data-driven modeling tools in product engineering and design

    Tools are indispensable to all diligent professional practice. New concepts and possibilities for paradigm shifts are emerging with recent computational developments in digital tools. However, new tools built on key concepts such as "Big Data", "Accessibility", and "Algorithmic Design" are fundamentally changing the input and position of the Product Engineer and Designer. After a contextual introduction, this dissertation extracts three pivotal criteria from an analysis of the state of the art in Product Design Engineering. For each criterion, the most relevant emerging and paradigm-shifting concepts are explored and then positioned and compared within the Product Lifecycle Management wheel, where potential risks and gaps are identified for exploration in the experimental part. The empirical experiments are of two kinds. The first consists of case studies from architecture and urban planning, drawn from the student's professional experience, which served as a pretext and inspiration for the experiments carried out directly for Product Design Engineering. The second begins with a set of isolated explorations and analyses, continues with a hypothetical experiment derived from them, and ends with a deliberative section that culminates in a list of the risks and changes inferred from all the previous work. The urgency of reflecting on what will change in that role and position, and on what kind of ethical and/or conceptual reformulations the profession needs in order to maintain its intellectual integrity and, ultimately, to survive, is plainly evident.

    Master's in Product Engineering and Design (Mestrado em Engenharia e Design de Produto).