
    Armstrong Flight Research Center Research Technology and Engineering Report 2015

    I am honored to endorse the 2015 Neil A. Armstrong Flight Research Center's Research, Technology, and Engineering Report. The talented researchers, engineers, and scientists at Armstrong are continuing a long, rich legacy of creating innovative approaches to solving some of the difficult problems and challenges facing NASA and the aerospace community. Projects at NASA Armstrong advance technologies that will improve aerodynamic efficiency, increase fuel economy, reduce emissions and aircraft noise, and enable the integration of unmanned aircraft into the national airspace. The work represented in this report highlights the Center's agility in developing technologies that support each of NASA's core missions and, more importantly, technologies that are preparing us for the future of aviation and space exploration. We are excited about our role in NASA's mission to develop transformative aviation capabilities and open new markets for industry. One of our key strengths is the ability to rapidly move emerging techniques and technologies into flight evaluation so that we can quickly identify their strengths, shortcomings, and potential applications. This report presents a brief summary of the technology work of the Center. It also contains contact information for the technologists responsible for the work. Don't hesitate to contact them for more information or for collaboration ideas.

    Digital supply chain transformation: emerging technologies for sustainable growth

    This book shows how organisations can leverage emerging digital technologies to achieve operational effectiveness, build new capabilities, and develop innovative business models to underpin transformation in their supply chains. The contributing authors provide deep insights not only into how these emerging technologies work, but also into how to use them to create value for multiple stakeholders and deliver sustainable supply chain outcomes. The book will be of great value to practitioners, students, and academics who want to learn about state-of-the-art digital developments in the supply chain field. It helps readers to appreciate how various digital paradigms and tools can be used to create innovative products and services while maintaining supply chain viability. It also develops the reader's critical ability to assess the range of technological solutions used to address contemporary supply chain issues and problems.

    Analysing and Reducing Costs of Deep Learning Compiler Auto-tuning

    Deep Learning (DL) is significantly impacting many industries, including automotive, retail, and medicine, enabling autonomous driving, recommender systems, and genomics modelling, amongst other applications. At the same time, demand for complex and fast DL models is continually growing. The most capable models tend to exhibit the highest operational costs, primarily due to their large computational footprint and the inefficient utilisation of the computational resources employed by DL systems. In an attempt to tackle these problems, DL compilers and auto-tuners have emerged, automating the traditionally manual task of DL model performance optimisation. While auto-tuning improves model inference speed, it is a costly process, which limits its wider adoption within DL deployment pipelines. The high operational costs associated with DL auto-tuning have multiple causes. During operation, DL auto-tuners explore large search spaces consisting of billions of tensor programs to propose candidates that may improve DL model inference latency. Subsequently, DL auto-tuners measure candidate performance in isolation on the target device, which constitutes the majority of auto-tuning compute time. Suboptimal candidate proposals, combined with their serial measurement on an isolated target device, lead to prolonged optimisation time and reduced resource availability, ultimately reducing the cost-efficiency of the process. In this thesis, we investigate the reasons behind prolonged DL auto-tuning and quantify their impact on optimisation costs, revealing directions for improved DL auto-tuner design. Based on these insights, we propose two complementary systems: Trimmer and DOPpler. Trimmer improves tensor program search efficacy by filtering out poorly performing candidates, and controls end-to-end auto-tuning using cost objectives that monitor optimisation cost. Simultaneously, DOPpler breaks long-held assumptions about serial candidate measurement by successfully parallelising measurements intra-device, with minimal penalty to optimisation quality. Through extensive experimental evaluation of both systems, we demonstrate that they significantly improve the cost-efficiency of auto-tuning (by up to 50.5%) across a range of tensor operators, DL models, auto-tuners, and target devices.
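
    The candidate-filtering and cost-budget ideas described above can be pictured with a short sketch. The Python below is illustrative only and is not the thesis's implementation; the names (Candidate, predict_latency, measure_on_device) are hypothetical, and it assumes a learned cost model that cheaply predicts candidate latency before any on-device measurement.

    # Illustrative sketch only, in the spirit of the filtering and
    # cost-objective control described in the abstract. All names here
    # are hypothetical, not the actual Trimmer API.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Candidate:
        program_id: int
        features: List[float]  # tensor-program features fed to the cost model

    def filter_candidates(
        candidates: List[Candidate],
        predict_latency: Callable[[Candidate], float],
        keep_ratio: float = 0.2,
    ) -> List[Candidate]:
        """Rank candidates by predicted latency and keep only the most
        promising fraction, so that expensive on-device measurement is
        spent on candidates likely to improve inference latency."""
        ranked = sorted(candidates, key=predict_latency)
        keep = max(1, int(len(ranked) * keep_ratio))
        return ranked[:keep]

    def tune_with_budget(candidates, predict_latency, measure_on_device, cost_budget_s):
        """Stop tuning once the measurement-time budget is exhausted.
        measure_on_device is assumed to return (latency, measure_time)."""
        spent, best = 0.0, None
        for cand in filter_candidates(candidates, predict_latency):
            latency, measure_time = measure_on_device(cand)
            spent += measure_time
            if best is None or latency < best[1]:
                best = (cand, latency)
            if spent >= cost_budget_s:
                break
        return best

    Filtering before measurement targets the observation above that isolated on-device measurement dominates auto-tuning compute time: the fewer poor candidates reach the device, the less budget is wasted.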

    Design and implementation of a telemetry platform for high-performance computing environments

    A new generation of high-performance and distributed computing applications and services relies on adaptive and dynamic architectures and execution strategies to run efficiently, resiliently, and at scale in today's HPC environments. These architectures require insights into their execution behaviour and the state of their execution environment at various levels of detail in order to make context-aware decisions. HPC telemetry provides this information: it describes the continuous stream of time-series and event data that is generated on HPC systems by the hardware, operating systems, services, runtime systems, and applications. Current HPC ecosystems do not provide the conceptual models, infrastructure, and interfaces to collect, store, analyse, and integrate telemetry in a structured and efficient way. Consequently, applications and services largely depend on one-off solutions and custom-built technologies to achieve these goals, introducing significant development overheads that inhibit portability and mobility. To facilitate a broader mix of applications, more efficient application development, and swift adoption of adaptive architectures in production, a comprehensive framework for telemetry management and analysis must be provided as part of future HPC ecosystem designs. This thesis provides the blueprint for such a framework: it proposes a new approach to telemetry management in HPC, the Telemetry Platform concept. Departing from the observation that telemetry data and the corresponding analysis and integration patterns on modern multi-tenant HPC systems closely resemble the patterns observed in large-scale data analytics or “Big Data” platforms, the telemetry platform concept takes the data platform paradigm and architectural approach and applies them to HPC telemetry. The result is the blueprint for a system that provides services for storing, searching, analysing, and integrating telemetry data in HPC applications and other HPC system services. It allows users to create and share telemetry-data-driven insights using everything from simple time-series analysis to complex statistical and machine learning models, while hiding many of the inherent complexities of data management, such as data transport, clean-up, storage, cataloguing, and access management, and while providing appropriate and scalable analytics and integration capabilities. The main contributions of this research are (1) the application of the data platform concept to HPC telemetry data management and usage; (2) a graph-based, time-variant telemetry data model that captures the structures and properties of the platform and its applications and in which telemetry data can be organized; (3) an architecture blueprint and a prototype implementation and integration architecture for the telemetry platform; and (4) a proposal for decoupled HPC application architectures, separating telemetry data management and feedback-control-loop logic from the core application code. First experimental results with the prototype implementation suggest that the telemetry platform paradigm can reduce overhead and redundancy in the development of telemetry-based application architectures and lower the barrier for HPC systems research and the provisioning of new, innovative HPC system services.
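
    As a rough illustration of what a graph-based, time-variant telemetry data model might look like, the sketch below attaches time-series metrics to platform and application entities and records relations between them with validity intervals. It is a minimal sketch under assumed names (TelemetryNode, TelemetryGraph), not the platform's actual interface.

    # Illustrative sketch only: one possible shape for a graph-based,
    # time-variant telemetry data model. Class and field names are
    # hypothetical, not the thesis's actual API.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class TelemetryNode:
        """A platform or application entity (compute node, job, service)."""
        node_id: str
        kind: str                                   # e.g. "compute_node", "job"
        properties: Dict[str, str] = field(default_factory=dict)
        # Each metric is a time series of (unix_timestamp, value) samples.
        metrics: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)

        def record(self, metric: str, ts: float, value: float) -> None:
            self.metrics.setdefault(metric, []).append((ts, value))

    @dataclass
    class TelemetryGraph:
        nodes: Dict[str, TelemetryNode] = field(default_factory=dict)
        # Time-variant edges: (src, dst, relation, valid_from, valid_until).
        edges: List[Tuple[str, str, str, float, float]] = field(default_factory=list)

        def link(self, src: str, dst: str, relation: str,
                 valid_from: float, valid_until: float = float("inf")) -> None:
            """Relate two entities for a time interval, e.g. a job running
            on a compute node; the interval makes the topology time-variant."""
            self.edges.append((src, dst, relation, valid_from, valid_until))

    In a shape like this, a question such as "all utilisation series for jobs that ran on a given node during an interval" becomes a graph traversal followed by time-series operations, which is the kind of integrated analysis the platform concept aims to make routine.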

    Technologies and Applications for Big Data Value

    This open access book explores cutting-edge solutions and best practices for big data and data-driven AI applications in the data-driven economy. It provides the reader with a basis for understanding how technical issues can be overcome to offer real-world solutions to major industrial areas. The book starts with an introductory chapter that provides an overview of the book by positioning the following chapters in terms of their contributions to technology frameworks that are key elements of the Big Data Value Public-Private Partnership and the upcoming Partnership on AI, Data and Robotics. The remainder of the book is arranged in two parts. The first part, “Technologies and Methods”, contains horizontal contributions of technologies and methods that enable data value chains to be applied in any sector. The second part, “Processes and Applications”, details experience reports and lessons from using big data and data-driven approaches in processes and applications. Its chapters are co-authored with industry experts and cover domains including health, law, finance, retail, manufacturing, mobility, and smart cities. Contributions emanate from the Big Data Value Public-Private Partnership and the Big Data Value Association, which have acted as the nucleus of the European data community, bringing together businesses and leading researchers to harness the value of data for the benefit of society, business, science, and industry. The book is of interest to two primary audiences: first, undergraduate and postgraduate students and researchers in various fields, including big data, data science, data engineering, and machine learning and AI; and second, practitioners and industry experts engaged in data-driven systems and software design and deployment projects who are interested in employing these advanced methods to address real-world problems.

    Next Generation Internet of Things – Distributed Intelligence at the Edge and Human-Machine Interactions

    This book provides an overview of the next generation Internet of Things (IoT), ranging from research, innovation, and development priorities to enabling technologies in a global context. It is intended as a standalone volume in a series covering the activities of the Internet of Things European Research Cluster (IERC), including research, technological innovation, validation, and deployment. The following chapters build on the ideas put forward by the European Research Cluster, the IoT European Platform Initiative (IoT-EPI), the IoT European Large-Scale Pilots Programme, and the IoT European Security and Privacy Projects, presenting global views and state-of-the-art results regarding the next generation of IoT research, innovation, development, and deployment. The IoT and the Industrial Internet of Things (IIoT) are evolving towards the next generation of Tactile IoT/IIoT, bringing together hyperconnectivity (5G and beyond), edge computing, Distributed Ledger Technologies (DLTs), virtual and augmented reality (VR/AR), and artificial intelligence (AI) transformation. Following the wider adoption of consumer IoT, the next generation of IoT/IIoT innovation for business is driven by industries, addressing interoperability issues and providing new end-to-end security solutions to face continuous threats. The advances of AI technology in vision, speech recognition, natural language processing, and dialogue are enabling the development of end-to-end intelligent systems that encapsulate multiple technologies and deliver services in real time using limited resources. These developments focus on designing and delivering embedded and hierarchical AI solutions in IoT/IIoT and edge computing, using distributed architectures, DLT platforms, and distributed end-to-end security, which provide real-time decisions using less data and fewer computational resources, while accessing each type of resource in a way that enhances the accuracy and performance of models in the various IoT/IIoT applications. The convergence and combination of IoT, AI, and other related technologies to derive insights, decisions, and revenue from sensor data provide new business models and sources of monetization. Meanwhile, scalable, IoT-enabled applications have become part of larger business objectives, enabling digital transformation with a focus on new services and applications. Serving the next generation of Tactile IoT/IIoT real-time use cases over 5G and Network Slicing technology is essential for consumer and industrial applications, supporting reduced operational costs, increased efficiency, and additional capabilities for real-time autonomous systems. New IoT distributed architectures, combined with system-level architectures for edge/fog computing, are evolving IoT platforms, including AI and DLTs, with embedded intelligence in the hyperconnectivity infrastructure. The next generation of IoT/IIoT technologies is highly transformational, enabling innovation at scale and autonomous decision-making in various application domains such as healthcare, smart homes, smart buildings, smart cities, energy, agriculture, transportation and autonomous vehicles, the military, logistics and supply chain, retail and wholesale, manufacturing, mining, and oil and gas.

    Weiterentwicklung analytischer Datenbanksysteme (Advancing Analytical Database Systems)

    This thesis contributes to the state of the art in analytical database systems. First, we identify and explore extensions to better support analytics on event streams. Second, we propose a novel polygon index to enable efficient geospatial data processing in main memory. Third, we contribute a new deep learning approach to cardinality estimation, which is the core problem in cost-based query optimization.
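
    Learned cardinality estimation of the kind mentioned above can be pictured with a small sketch: a regression model maps an encoded query to an estimated result size. The Python below is illustrative only and is not the thesis's model; it assumes queries are already encoded as fixed-length feature vectors (e.g. one-hot table flags and normalised predicate bounds), and it predicts log2 of the cardinality to keep the regression target well-scaled.

    # Illustrative sketch only: a generic learned cardinality estimator,
    # not the thesis's actual model. Query encoding is assumed to be
    # done elsewhere and produce fixed-length feature vectors.
    import torch
    import torch.nn as nn

    class CardinalityEstimator(nn.Module):
        def __init__(self, n_features: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),          # predicts log2(cardinality)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def train_step(model, optimiser, features, true_cardinalities):
        """One gradient step on mean-squared error in log space."""
        optimiser.zero_grad()
        pred_log = model(features).squeeze(-1)
        loss = nn.functional.mse_loss(pred_log, torch.log2(true_cardinalities))
        loss.backward()
        optimiser.step()
        return loss.item()

    Predicting in log space is a common choice in this line of work because true cardinalities span many orders of magnitude, and the optimizer consumes estimates on a relative rather than absolute scale.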