
    Generalised Decision Level Ensemble Method for Classifying Multi-media Data

    In recent decades, multimedia data have been widely generated and used in domains such as healthcare and social media because of their ability to capture rich information. But as such data are unstructured and scattered across media types, how to fuse and integrate multimedia datasets and learn from them effectively has been a major challenge for machine learning. We present a novel generalised decision level ensemble method (GDLEM) that combines multimedia datasets at the decision level. After extracting features from each multimedia dataset separately, the method trains models independently on each media dataset and then employs a generalised selection function to choose appropriate models for a heterogeneous ensemble. The selection function is defined as a weighted combination of two criteria: the accuracy of the individual models and the diversity among them. The framework is tested on multimedia data and compared with other heterogeneous ensembles. The results show that the GDLEM is more flexible and effective.
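    The abstract gives only the shape of the selection function: a weighted combination of model accuracy and ensemble diversity. A minimal sketch, assuming a convex combination w * accuracy + (1 - w) * diversity with pairwise disagreement as the diversity measure and greedy selection (the names, the greedy strategy, and the value of w are illustrative assumptions, not taken from the paper):

        # Hypothetical GDLEM-style selection; the paper's exact weighting and
        # diversity measure are not given in the abstract.
        import numpy as np

        def pairwise_disagreement(preds_a, preds_b):
            """Fraction of validation samples on which two models disagree."""
            return np.mean(preds_a != preds_b)

        def select_models(models, val_preds, val_labels, k=3, w=0.7):
            """Greedily pick k models scoring highest on
            w * accuracy + (1 - w) * mean disagreement with those already chosen.
            val_preds: dict mapping model name -> predictions on a shared validation set."""
            accs = {m: np.mean(val_preds[m] == val_labels) for m in models}
            selected = [max(models, key=accs.get)]  # seed with the most accurate model
            while len(selected) < k:
                def score(m):
                    div = np.mean([pairwise_disagreement(val_preds[m], val_preds[s])
                                   for s in selected])
                    return w * accs[m] + (1 - w) * div
                remaining = [m for m in models if m not in selected]
                selected.append(max(remaining, key=score))
            return selected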

    Feature Level Ensemble Method for Classifying Multi-Media Data

    Multimedia data consist of several different types of data, such as numbers, text, images and audio, which usually need to be fused or integrated before analysis. This study investigates a feature-level aggregation approach that combines multimedia datasets to build heterogeneous ensembles for classification. It first aggregates the multimedia datasets at the feature level to form a normalised big dataset, then uses parts of it to generate classifiers with different learning algorithms. Finally, it applies three rules to select appropriate classifiers, based on their accuracy and/or diversity, to build heterogeneous ensembles. The method is tested on a multimedia dataset and the results show that the heterogeneous ensembles outperform both the individual classifiers and homogeneous ensembles. It should be noted, however, that in some cases the combined dataset does not produce better results than a single media dataset.
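    As an illustration of the feature-level step, a minimal sketch assuming per-modality feature matrices with aligned rows, per-modality normalisation, and column-wise concatenation; the random matrices stand in for real extracted features, and the choice of learners is arbitrary:

        # Illustrative feature-level fusion; the study's actual pipeline,
        # extractors and learners are not reproduced here.
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier

        def fuse_features(*modality_features):
            """Normalise each modality's feature matrix, then concatenate column-wise."""
            scaled = [StandardScaler().fit_transform(X) for X in modality_features]
            return np.hstack(scaled)

        rng = np.random.default_rng(0)
        X_text, X_image, X_audio = (rng.random((100, d)) for d in (50, 30, 20))
        y = rng.integers(0, 2, 100)

        X_fused = fuse_features(X_text, X_image, X_audio)  # the "normalised big dataset"
        pool = [clf.fit(X_fused, y) for clf in
                (DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier())]
        # 'pool' is the candidate set from which accuracy/diversity rules would select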

    Decision level ensemble method for classifying multi-media data

    In the digital era, the data for a given analytical task can be collected in different formats, such as text, images and audio; data with multiple formats are called multimedia data. Integrating and fusing multimedia datasets has become a challenging task in machine learning and data mining. In this paper, we present a heterogeneous ensemble method that combines multimedia datasets at the decision level. Our method consists of several components: extracting features from the multimedia datasets that are not already represented by features, modelling independently on each multimedia dataset, selecting models based on their accuracy and diversity, and building the ensemble at the decision level. Hence our method is called the decision level ensemble method (DLEM). The method is tested on multimedia data and compared with other heterogeneous ensemble based methods. The results show that the DLEM outperforms these methods significantly.
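    A minimal sketch of the decision-level step, assuming one classifier per modality combined by majority vote; DLEM's own selection and combination rules may differ, and all names here are illustrative:

        # Illustrative decision-level fusion: each model votes on its own
        # modality and the majority label wins (labels assumed to be
        # non-negative integers).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def decision_level_predict(models, modality_inputs):
            """Stack per-modality predictions and take a per-sample majority vote."""
            votes = np.stack([m.predict(X) for m, X in zip(models, modality_inputs)])
            return np.array([np.bincount(col).argmax() for col in votes.T])

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 100)
        modalities = [rng.random((100, d)) for d in (50, 30, 20)]  # text/image/audio stand-ins
        models = [LogisticRegression().fit(X, y) for X in modalities]
        fused = decision_level_predict(models, modalities)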

    Profiling relational data: a survey

    Profiling data to determine metadata about a given dataset is an important and frequent activity of IT professionals and researchers, and is necessary for various use cases. It encompasses a vast array of methods to examine datasets and produce metadata. Among the simpler results are statistics, such as the number of null values and distinct values in a column, its data type, or the most frequent patterns of its data values. Metadata that are more difficult to compute involve multiple columns, namely correlations, unique column combinations, functional dependencies, and inclusion dependencies. Further techniques detect conditional properties of the dataset at hand. This survey provides a classification of data profiling tasks and comprehensively reviews the state of the art for each class. In addition, we review data profiling tools and systems from research and industry. We conclude with an outlook on the future of data profiling beyond traditional profiling tasks and beyond relational databases.
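    As an illustration of the simpler single-column statistics listed above (null counts, distinct values, data type, frequent value patterns), a minimal profiler sketch using pandas; multi-column metadata such as functional or inclusion dependencies is out of scope here:

        # Minimal single-column profiler; the pattern abstraction (letters -> 'a',
        # digits -> '9') is one common convention, not the only one.
        import re
        import pandas as pd

        def value_pattern(v):
            """Abstract a value into a pattern string."""
            return re.sub(r"[A-Za-z]", "a", re.sub(r"\d", "9", str(v)))

        def profile_column(series, top_k=3):
            """Collect basic profiling metadata for one column."""
            non_null = series.dropna()
            return {
                "dtype": str(series.dtype),
                "null_count": int(series.isna().sum()),
                "distinct_count": int(non_null.nunique()),
                "frequent_patterns":
                    non_null.map(value_pattern).value_counts().head(top_k).to_dict(),
            }

        df = pd.DataFrame({"zip": ["12345", "98765", None, "AB-12"]})
        print(profile_column(df["zip"]))
        # {'dtype': 'object', 'null_count': 1, 'distinct_count': 3,
        #  'frequent_patterns': {'99999': 2, 'aa-99': 1}}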

    GTTC Future of Ground Testing Meta-Analysis of 20 Documents

    National research, development, test, and evaluation ground testing capabilities in the United States are at risk. There is a lack of vision and consensus on what is and will be needed, contributing to a significant threat that ground test capabilities may not be able to meet the national security and industrial needs of the future. To support future decisions, the AIAA Ground Testing Technical Committee's (GTTC) Future of Ground Test (FoGT) Working Group selected and reviewed 20 seminal documents related to the application and direction of ground testing. Each document was reviewed, with its main points collected and organized into sections in the form of a gap analysis: current state, future state, major challenges/gaps, and recommendations. This paper includes key findings and selected commentary by an editing team.

    Innovation in manufacturing through digital technologies and applications: Thoughts and Reflections on Industry 4.0

    The rapid pace of developments in digital technologies offers many opportunities to increase the efficiency, flexibility and sophistication of manufacturing processes, including the potential for easier customisation, lower volumes and rapid changeover of products within the same manufacturing cell or line. A number of initiatives on this theme have been proposed around the world to support national industries, under names such as Industry 4.0 (Industrie 4.0 in Germany, Made-in-China in China and Made Smarter in the UK).

    This book presents an overview of the state of the art and upcoming developments in digital technologies pertaining to manufacturing. The starting point is an introduction on Industry 4.0 and its potential for enhancing the manufacturing process. The book then moves to the design of smart (that is, digitally driven) business processes, which rely on sensing all relevant parameters and on gathering, storing and processing the data from these sensors, using computing power and intelligence at the most appropriate points in the digital workflow, including the application of edge computing and parallel processing. A key component of this workflow is the application of Artificial Intelligence, and particularly techniques in Machine Learning, to derive actionable information from the data; be it real-time automated responses such as actuating transducers, informing human operators to follow specified standard operating procedures, or providing management data for operational and strategic planning.

    Further consideration also needs to be given to the properties and behaviours of the particular machines that are controlled and the materials that are transformed during the manufacturing process; this is sometimes referred to as Operational Technology (OT), as opposed to IT. The digital capture of these properties and behaviours can then be used to define so-called Cyber Physical Systems. Given the power of these digital technologies, it is of paramount importance that they operate safely and are not vulnerable to malicious interference. Industry 4.0 brings unprecedented cybersecurity challenges to manufacturing and the overall industrial sector, and the case is made here that new codes of practice are needed for the combined Information Technology and Operational Technology worlds, within a framework that is native to Industry 4.0.

    Current computing technologies can also go in directions other than supporting the digital ‘sense to action’ process described above. One of these is to use digital technologies to enhance the abilities of the human operators who remain essential to the manufacturing process. One such technology, which has recently become accessible for widespread adoption, is Augmented Reality, which provides operators with additional real-time information, in situ with the machines they interact with in their workspace, in a hands-free mode.

    Finally, two linked chapters discuss the specific application of digital technologies to High Pressure Die Casting (HPDC) of magnesium components. Optimizing the HPDC process is a key task for increasing productivity and reducing defective parts, and the first chapter provides an overview of the HPDC process with attention to the most common defects and their sources. It does this by first looking at real-time process control mechanisms, understanding the various process variables and assessing their impact on end product quality. This understanding drives the choice of sensing methods and the associated smart digital workflow to allow real-time control and mitigation of variation in the identified variables. Data from this workflow can also be captured and used for the design of optimised dies and associated processes.
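    As a toy illustration of the ‘sense to action’ loop described above: a sensor reading feeds a predictive model whose output drives an actuator or operator alert. The sensor, model, and set points below are hypothetical placeholders, not drawn from the book:

        # Toy sense-to-action control cycle: measure, predict, act.
        import random

        def read_sensor():
            """Stand-in for acquiring a process measurement (e.g. die temperature, degC)."""
            return 180 + random.gauss(0, 15)

        def predict_defect_risk(temperature):
            """Stand-in ML model: flag readings far from the nominal set point."""
            return abs(temperature - 180) > 25

        def actuate(alarm):
            """Stand-in transducer command or standard-operating-procedure alert."""
            print("corrective action" if alarm else "within tolerance")

        for _ in range(5):  # one cycle per sensor reading
            actuate(predict_defect_risk(read_sensor()))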

    Development of a Prototype Model-Form Uncertainty Knowledge Base

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, occurring either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that, among the choices to be made during a design process, there are different forms of the analysis, each of which gives different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structural analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design, and an initial attempt has been made to assemble its results into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of the model-form uncertainty KB are described, key operations within the KB are illustrated, and current limitations of the KB and possible workarounds are explained.
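    A minimal sketch of how such a KB record and lookup might be structured; the field names, example entries, and numbers are hypothetical illustrations, not taken from the prototype itself:

        # Hypothetical record structure for a searchable model-form uncertainty KB.
        from dataclasses import dataclass

        @dataclass
        class UncertaintyRecord:
            discipline: str    # e.g. "aerodynamics", "structures"
            model_choice: str  # e.g. "grid density", "solver type"
            quantity: str      # affected output, e.g. "drag coefficient"
            spread: float      # illustrative variation across model forms (fraction)
            source: str        # literature reference the estimate came from

        KB = [
            UncertaintyRecord("aerodynamics", "grid density", "drag coefficient", 0.03, "ref. A"),
            UncertaintyRecord("structures", "element type", "first bending mode", 0.05, "ref. B"),
        ]

        def query(kb, discipline):
            """Return all recorded model-form uncertainties for one discipline."""
            return [r for r in kb if r.discipline == discipline]

        print(query(KB, "aerodynamics"))  # lookup during a probabilistic design session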