401 research outputs found

    2022 Review of Data-Driven Plasma Science

    Data-driven science and technology offer transformative tools and methods to science. This review article highlights the latest developments and progress in the interdisciplinary field of data-driven plasma science (DDPS), i.e., plasma science whose progress is driven strongly by data and data analyses. Plasma is considered to be the most ubiquitous form of observable matter in the universe. Data associated with plasmas can, therefore, cover extremely large spatial and temporal scales, and often provide essential information for other scientific disciplines. Thanks to the latest technological developments, plasma experiments, observations, and computation now produce a large amount of data that can no longer be analyzed or interpreted manually. This trend now necessitates a highly sophisticated use of high-performance computers for data analysis, making artificial intelligence and machine learning vital components of DDPS. This article contains seven primary sections, in addition to the introduction and summary. Following an overview of fundamental data-driven science, five other sections cover widely studied topics of plasma science and technologies, i.e., basic plasma physics and laboratory experiments, magnetic confinement fusion, inertial confinement fusion and high-energy-density physics, space and astronomical plasmas, and plasma technologies for industrial and other applications. The final section before the summary discusses plasma-related databases that could significantly contribute to DDPS. Each primary section starts with a brief introduction to the topic, discusses the state-of-the-art developments in the use of data and/or data-scientific approaches, and presents a summary and outlook. Despite recent impressive progress, DDPS is still in its infancy. This article attempts to offer a broad perspective on the development of this field and to identify where further innovations are required.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS Think-Tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.

    Fundamentals

    Volume 1 establishes the foundations of this new field. It covers all the steps, from data collection, summarization, and clustering to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Data bases and data base systems related to NASA's Aerospace Program: A bibliography with indexes

    This bibliography lists 641 reports, articles, and other documents introduced into the NASA scientific and technical information system during the period January 1, 1981 through June 30, 1982. The directory was compiled to assist in the location of numerical and factual data bases and data base handling and management systems.

    FESTA. Handbook version 2

    In Japan and the United States, Field Operational Tests (FOTs) were introduced several years ago as an evaluation method for driver support systems and other functions, with the aim of proving that such systems can deliver real-world benefits. In Europe too, FOTs have been conducted at a national or regional level, particularly on speed support systems and lane departure warning systems. These FOTs have proven to be highly valuable. Recently, FOTs have been identified as an important means of verifying the real-world impacts of new systems at a European level and, in particular, of verifying that European R&D has the potential to deliver identifiable benefits. This Handbook is the result of a joint effort by several research institutes, OEMs and other stakeholders from across Europe to prepare a common methodology for European FOTs. It is also highly relevant, and it is hoped useful, for FOTs conducted at a regional or national level within Europe, as well as outside Europe.

    Intelligent watermarking of long streams of document images

    Digital watermarking has numerous applications in the imaging domain, including (but not limited to) fingerprinting, authentication, and tampering detection. Because of the trade-off between watermark robustness and image quality, the heuristic parameters associated with digital watermarking systems need to be optimized. A common strategy for this optimization formulation of digital watermarking, known as intelligent watermarking (IW), is to employ evolutionary computing (EC) to optimize these parameters for each image, but at a computational cost that is infeasible for practical applications. However, in industrial applications involving streams of document images, one can expect instances of problems to reappear over time. Therefore, computational cost can be saved by preserving the knowledge of previous optimization problems in a separate archive (memory) and employing that memory to speed up or even replace optimization for future similar problems. That is the basic principle behind the research presented in this thesis. Although similarity in the image space can lead to similarity in the problem space, there is no guarantee of this, and for that reason knowledge about the image space is not employed. Therefore, in this research, strategies to appropriately represent, compare, store and sample from problem instances are investigated. The objective behind these strategies is to allow for a comprehensive representation of a stream of optimization problems in a way that avoids re-optimization whenever a previously seen problem provides solutions as good as those that would be obtained by re-optimization, but at a fraction of its cost. Another objective is to provide IW systems with a predictive capability that allows costly fitness evaluations to be replaced with cheaper regression models whenever re-optimization cannot be avoided. To this end, IW of streams of document images is first formulated as the optimization of a stream of recurring problems, and a Dynamic Particle Swarm Optimization (DPSO) technique is proposed to tackle this problem. This technique is based on a two-tiered memory of static solutions. Memory solutions are re-evaluated for every new image, and the re-evaluated fitness distribution is then compared with the stored fitness distribution as a means of measuring the similarity between both problem instances (change detection). In simulations involving homogeneous streams of bi-tonal document images, the proposed approach resulted in a 95% decrease in computational burden with little impact on watermarking performance. Optimization cost was severely decreased by replacing re-optimizations with recall of previously seen solutions. After that, the problem of representing the stream of optimization problems in a compact manner is addressed. With that, new optimization concepts can be incorporated into previously learned concepts in an incremental fashion. The proposed strategy to tackle this problem is based on a Gaussian Mixture Model (GMM) representation, trained with parameter and fitness data of all intermediate (candidate) solutions of a given problem instance. GMM sampling replaces the selection of individual memory solutions during change detection. Simulation results demonstrate that such a memory of GMMs is more adaptive and can thus better tackle the optimization of embedding parameters for heterogeneous streams of document images when compared to the approach based on a memory of static solutions.
Finally, the knowledge provided by the memory of GMMs is employed to decrease the computational cost of re-optimization. To this end, the GMM is employed in regression mode during re-optimization, replacing part of the costly fitness evaluations in a strategy known as surrogate-based optimization. Optimization is split into two levels: the first relies primarily on regression, while the second relies primarily on exact fitness values and provides a safeguard for the whole system. Simulation results demonstrate that the use of surrogates allows for better adaptation in situations involving significant variations in problem representation, such as when the set of attacks employed in the fitness function changes. Overall, the intelligent watermarking system proposed in this thesis is well suited to the optimization of streams of recurring optimization problems. The quality of the resulting solutions for both homogeneous and heterogeneous image streams is comparable to that obtained through full optimization, but at a fraction of its computational cost. More specifically, the number of fitness evaluations is 97% smaller than that of full optimization for homogeneous streams and 95% smaller for highly heterogeneous streams of document images. The proposed method is general and can be easily adapted to other applications involving streams of recurring problems.
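
    The change-detection step described in this abstract (re-evaluating memory solutions on a new image and comparing the re-evaluated fitness distribution with the stored one) can be illustrated with a minimal Python sketch. The Kolmogorov-Smirnov test, the significance threshold, the dictionary-shaped memory, and the evaluate/reoptimize callbacks used here are illustrative assumptions rather than the thesis's exact procedure:

    # Minimal sketch of memory-based change detection for a stream of
    # recurring optimization problems (test choice and threshold are
    # assumptions; higher fitness is assumed to be better).
    import numpy as np
    from scipy.stats import ks_2samp

    def change_detected(stored_fitness, memory_solutions, evaluate, alpha=0.05):
        """Re-evaluate stored solutions on the new image and compare fitness
        distributions; a significant difference suggests the problem changed."""
        new_fitness = np.array([evaluate(s) for s in memory_solutions])
        p_value = ks_2samp(stored_fitness, new_fitness).pvalue
        return p_value < alpha, new_fitness

    def process_image(memory, evaluate, reoptimize):
        """Recall a memory solution when the new problem looks familiar;
        otherwise re-optimize (e.g., with DPSO) and update the memory."""
        changed, new_fitness = change_detected(memory["fitness"],
                                               memory["solutions"], evaluate)
        if not changed:
            # Recall: reuse the best previously seen solution, skipping re-optimization.
            return memory["solutions"][int(np.argmax(new_fitness))]
        best, solutions, fitness = reoptimize()  # costly evolutionary run
        memory["solutions"], memory["fitness"] = solutions, fitness
        return best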

    New product development portfolio management : a systematic literature review

    Product innovation is a key driver of any company’s growth. The biggest challenge in managing product innovation is determining the most promising new product development (NPD) projects from the many ideas generated, a task known as portfolio management. In practice, NPD portfolio management still suffers from several problems, including a focus mainly on portfolio selection rather than on managing the entire process, vague links between the process and business strategy, and a lack of formal process. Therefore, a study that looks at NPD portfolio management through different perspectives is required. NPD portfolio management deals with dynamic decision-making processes, involving not only selection decisions but also decisions to delay, continue or even terminate projects. To understand this integrative process, a systematic literature review was carried out across four knowledge domains, i.e., NPD portfolio management, decision-making, strategy and organisational routines. It involved 40 articles published between 1981 and 2012. The review focused on revealing how decision-making processes in NPD portfolio management are conducted and how they relate to the strategy process and organisational routines. The key findings show that decisions in the NPD portfolio management process are made through the interaction of cognitive and political factors, while organisational factors in the process are overlooked. Furthermore, the extant literature does not explicitly explain how to link the NPD portfolio management process to the strategy process. Also, the findings indicate that the concept of organisational routines has not been used when investigating NPD portfolio management. These are the research gaps that led to the three research questions: 1) How are organisational factors involved with the cognitive and political factors in the decision-making processes in NPD portfolio management?; 2) How do the decision-making processes in NPD portfolio management link to the business strategy?; and 3) To what extent are organisational routines related to the decision-making processes in NPD portfolio management?

    Advanced Location-Based Technologies and Services

    Since the publication of the first edition in 2004, advances in mobile devices, positioning sensors, WiFi fingerprinting, and wireless communications, among others, have paved the way for developing new and advanced location-based services (LBSs). This second edition provides up-to-date information on LBSs, including WiFi fingerprinting, mobile computing, geospatial clouds, geospatial data mining, location privacy, and location-based social networking. It also includes new chapters on application areas such as LBSs for public health, indoor navigation, and advertising. In addition, the chapter on remote sensing has been revised to address advancements.

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation.