
    Wireless Data Acquisition for Edge Learning: Data-Importance Aware Retransmission

    By deploying machine-learning algorithms at the network edge, edge learning can leverage the enormous real-time data generated by billions of mobile devices to train AI models that enable intelligent mobile applications. In this emerging research area, one key direction is to efficiently utilize radio resources for wireless data acquisition so as to minimize the latency of executing a learning task at an edge server. Along this direction, we consider the specific problem of the retransmission decision in each communication round, to ensure both the reliability and the quantity of the training data for accelerating model convergence. To solve the problem, a new retransmission protocol called data-importance aware automatic-repeat-request (importance ARQ) is proposed. Unlike classic ARQ, which focuses merely on reliability, importance ARQ selectively retransmits a data sample based on its uncertainty, which reflects how much the sample helps learning and can be measured using the model under training. Underpinning the proposed protocol is an elegant communication-learning relation derived between two corresponding metrics, i.e., signal-to-noise ratio (SNR) and data uncertainty. This relation facilitates the design of a simple threshold-based policy for importance ARQ. The policy is first derived for the classic support vector machine (SVM) classifier, where the uncertainty of a data sample is measured by its distance to the decision boundary. The policy is then extended to the more complex model of convolutional neural networks (CNN), where data uncertainty is measured by entropy. Extensive experiments have been conducted for both the SVM and the CNN using real datasets with balanced and imbalanced distributions. Experimental results demonstrate that importance ARQ effectively copes with channel fading and noise in wireless data acquisition to achieve faster model convergence than conventional channel-aware ARQ.
    Comment: This is an updated version: 1) extension to general classifiers; 2) consideration of imbalanced classification in the experiments. Submitted to IEEE Journal for possible publication.
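    The retransmission rule lends itself to a compact illustration. The following is a minimal sketch of an importance-ARQ style decision, not the authors' implementation: a sample keeps being retransmitted while its received SNR falls below a threshold that grows with its measured uncertainty (here, the entropy of a classifier's output distribution). The function names, the linear form of the threshold and the base_threshold constant are illustrative assumptions.

    import numpy as np

    def entropy(probs: np.ndarray) -> float:
        """Uncertainty of one sample: entropy of the classifier's
        predictive distribution (eps guards against log(0))."""
        eps = 1e-12
        return float(-np.sum(probs * np.log(probs + eps)))

    def should_retransmit(received_snr: float,
                          probs: np.ndarray,
                          base_threshold: float = 5.0) -> bool:
        """Importance-ARQ style decision (illustrative form): demand a
        higher SNR before accepting an uncertain, i.e. informative, sample."""
        importance = entropy(probs)                          # higher = more useful for training
        snr_threshold = base_threshold * (1.0 + importance)  # assumed scaling of the SNR threshold
        return received_snr < snr_threshold                  # retransmit until reliable enough

    # Example: at the same SNR, a confident sample is accepted while an
    # ambiguous (high-entropy) sample is requested again.
    confident = np.array([0.97, 0.01, 0.02])
    ambiguous = np.array([0.40, 0.35, 0.25])
    print(should_retransmit(8.0, confident))  # False: low uncertainty, accept
    print(should_retransmit(8.0, ambiguous))  # True: high uncertainty, retransmit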

    Machine learning and its applications in reliability analysis systems

    In this thesis, we explore some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating several ML paradigms and their techniques, go on to discuss possible applications of ML in improving RAs performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode and effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. Based on the results of our survey, we suggest that the best current design of RAs is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, further improvements can be made through the application of Machine Learning: by embedding a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.

    Monitoring land use changes using geo-information : possibilities, methods and adapted techniques

    Monitoring land use with geographical databases is widely used to support decision-making. This report presents the possibilities, methods and adapted techniques for using geo-information to monitor land use changes. The municipality of Soest was chosen as the study area, and three national land use databases, viz. Top10Vector, CBS land use statistics and LGN, were used. The restrictions of geo-information for monitoring land use changes are indicated. New methods and adapted techniques improve the monitoring result considerably. Providers of geo-information, however, should coordinate update frequencies, semantic content and spatial resolution to enable better monitoring of land use by combining data sets.

    Robust Mission Design Through Evidence Theory and Multi-Agent Collaborative Search

    In this paper, the preliminary design of a space mission is approached by introducing uncertainties on the design parameters and formulating the resulting reliable design problem as a multiobjective optimisation problem. Uncertainties are modelled through evidence theory, and the belief, or credibility, in the successful achievement of mission goals is maximised along with the reliability of constraint satisfaction. The multiobjective optimisation problem is solved through a novel algorithm based on the collaboration of a population of agents searching for the set of highly reliable solutions. Two typical problems in mission analysis are used to illustrate the proposed methodology.
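    The evidential quantities involved have simple definitions. Below is a small sketch of the standard belief and plausibility measures of evidence theory, computed from a body of evidence over interval focal elements; the focal elements, masses and event are made-up numbers, not values from the paper.

    from typing import List, Tuple

    Interval = Tuple[float, float]
    BodyOfEvidence = List[Tuple[Interval, float]]  # (focal element, mass)

    def belief(event: Interval, bpa: BodyOfEvidence) -> float:
        """Bel(A): total mass of focal elements fully contained in A."""
        lo, hi = event
        return sum(m for (a, b), m in bpa if lo <= a and b <= hi)

    def plausibility(event: Interval, bpa: BodyOfEvidence) -> float:
        """Pl(A): total mass of focal elements that intersect A."""
        lo, hi = event
        return sum(m for (a, b), m in bpa if b >= lo and a <= hi)

    # Made-up body of evidence on a design margin; masses sum to 1.
    bpa = [((0.0, 2.0), 0.3), ((1.0, 3.0), 0.5), ((2.5, 4.0), 0.2)]
    goal = (0.0, 3.0)  # event: "the margin stays below 3"
    print(belief(goal, bpa), plausibility(goal, bpa))  # 0.8 1.0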

    Principles in Patterns (PiP) : Evaluation of Impact on Business Processes

    The innovation and development work conducted under the auspices of the Principles in Patterns (PiP) project is intended to explore and develop new technology-supported approaches to curriculum design, approval and review. An integral component of this innovation is the use of business process analysis and process change techniques, and their instantiation within the C-CAP system (Class and Course Approval Pilot), to improve the efficacy of curriculum approval processes. Improvements to approval process responsiveness and overall process efficacy can help institutions to review or update curriculum designs to enhance pedagogy. Such improvements also assume greater significance in a globalised HE environment, in which institutions must adapt or create curricula quickly in order to reflect rapidly changing academic contexts and to respond to the demands of employment marketplaces and the expectations of professional bodies. This is increasingly an issue for disciplines within the sciences and engineering, where new skills or knowledge need to be rapidly embedded in curricula in response to emerging technological or environmental developments. All of this must be achieved while simultaneously maintaining high standards of academic quality, adding a further layer of complexity to the way in which HE institutions engage in "responsive curriculum design" and approval. This strand of the PiP evaluation therefore entails an analysis of the business process techniques used by PiP, their efficacy, and the impact of process changes on the curriculum approval process, as instantiated by C-CAP. More generally, the evaluation contributes to a wider understanding of technology-supported process improvement initiatives within curriculum approval and their potential to render such processes more transparent, efficient and effective. Partly owing to limitations in the data required to facilitate comparative analyses, this evaluation adopts a mixed approach, making use of qualitative and quantitative methods as well as theoretical techniques. Combined, these approaches enable a comparative evaluation of the curriculum approval process under the "new state" (i.e. using C-CAP) and under the "previous state". This report summarises the methodology used to enable comparative evaluation and presents an analysis and discussion of the results. As the report explains, the impact of C-CAP and its ability to support improvements in process and document management has resolved numerous process failings. C-CAP has also demonstrated potential for improvements in approval process cycle time, process reliability, process visibility, process automation and process parallelism, and for a reduction in transition delays within the approval process, thus contributing to considerable process efficiencies; although it is acknowledged that enhancements and redesign may be required to take full advantage of C-CAP's potential. Other aspects pertaining to C-CAP's impact on process change, improvements to document management and the curation of curriculum designs are also discussed.

    Preliminary space mission design under uncertainty

    This paper proposes a way to model uncertainties and to introduce them explicitly into the design process of a preliminary space mission. Traditionally, a system margin approach is used in order to take them into account. In this paper, Evidence Theory is proposed to crystallise the inherent uncertainties. The design process is then formulated as an optimisation under uncertainty (OUU). Three techniques are proposed to solve the OUU problem: (a) an evolutionary multi-objective approach, (b) a step technique consisting of maximising the belief for different levels of performance, and (c) a clustering method that first identifies feasible regions. The three methods are applied to the Bepi Colombo mission and their effectiveness at solving the OUU problem is compared.
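    The step technique (b) can be pictured as sweeping a performance threshold and, for each level, searching for the design that maximises the belief of meeting it. The following sketch illustrates that idea with a toy one-dimensional design variable, an invented performance model, interval focal elements for the uncertain parameter and a brute-force grid search; all of these are assumed stand-ins for the paper's actual models and optimiser.

    import numpy as np

    def performance(design: float, u: float) -> float:
        """Toy performance model: lower is better; u is the uncertain parameter."""
        return (design - 1.0) ** 2 + u

    # Made-up body of evidence on u: interval focal elements, masses sum to 1.
    focal_elements = [((-0.5, 0.0), 0.4), ((0.0, 0.5), 0.4), ((0.5, 1.0), 0.2)]

    def belief_of_meeting(design: float, level: float) -> float:
        """Bel(performance <= level): mass of the focal elements whose
        worst-case performance still meets the required level."""
        return sum(m for (lo, hi), m in focal_elements
                   if max(performance(design, lo), performance(design, hi)) <= level)

    # Step technique: for each performance level, pick the design maximising belief.
    designs = np.linspace(0.0, 2.0, 201)
    for level in [0.25, 0.5, 1.0, 1.5]:
        beliefs = [belief_of_meeting(d, level) for d in designs]
        best = int(np.argmax(beliefs))
        print(f"level {level:4.2f}: design {designs[best]:.2f}, belief {beliefs[best]:.2f}")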

    CoFeD: A visualisation framework for comparative quality evaluation

    Evaluation for the purpose of selection can be a challenging task, particularly when there is a plethora of choices available. Short-listing, comparisons and eventual choice(s) can be aided by visualisation techniques. In this paper we use Feature Analysis, Tabular and Tree Representations, and Composite Features Diagrams (CFDs) for profiling user requirements and for top-down profiling and evaluation of the items (methods, tools, techniques, processes and so on) under evaluation. The resulting framework, CoFeD, enables efficient visual comparison and initial short-listing. The second phase uses bottom-up quantitative evaluation, which aids the elimination of the weakest items and hence the effective selection of the most appropriate item. The versatility of the framework is illustrated by a case-study comparison and evaluation of two agile methodologies. The paper concludes with limitations and indications of further work.
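    The bottom-up phase amounts to scoring the short-listed items against weighted requirements and discarding the weakest. The sketch below shows one plausible form of such a quantitative comparison, a generic weighted-sum scoring rather than CoFeD's actual scheme; the features, weights and scores are invented for illustration.

    # Generic weighted-sum scoring for comparative evaluation (illustrative only).
    weights = {"tool support": 0.3, "documentation": 0.2,
               "team fit": 0.3, "scalability": 0.2}

    # Invented per-feature scores (0-10) for two candidate methodologies.
    candidates = {
        "Methodology A": {"tool support": 7, "documentation": 9, "team fit": 6, "scalability": 5},
        "Methodology B": {"tool support": 8, "documentation": 6, "team fit": 8, "scalability": 7},
    }

    def total_score(scores: dict) -> float:
        """Weighted sum of per-feature scores."""
        return sum(weights[f] * s for f, s in scores.items())

    # Rank candidates; the weakest would be eliminated in the bottom-up phase.
    for name, scores in sorted(candidates.items(), key=lambda kv: -total_score(kv[1])):
        print(f"{name}: {total_score(scores):.1f}")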