Review of the mathematical foundations of data fusion techniques in surface metrology
The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve benefits that a single sensor cannot provide, mainly in terms of enhanced spatial frequency bandwidth. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised and potential fusion algorithms are discussed.
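As a rough illustration of one of the mathematical building blocks commonly discussed in this area (not a method taken from the review itself), the sketch below fuses two co-registered height measurements by inverse-variance weighting, the minimum-variance linear combination of two noisy estimates; the sensor variances and sample values are assumed for the example.

    import numpy as np

    def fuse_inverse_variance(z1, var1, z2, var2):
        """Fuse two co-registered surface height maps by inverse-variance weighting.

        z1, z2     : height values from two sensors (same grid, same units)
        var1, var2 : per-point (or scalar) noise variances of each sensor
        Returns the fused heights and the variance of the fused estimate.
        """
        w1, w2 = 1.0 / var1, 1.0 / var2            # weights favour the less noisy sensor
        z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)  # minimum-variance linear combination
        var_fused = 1.0 / (w1 + w2)                # fused variance <= min(var1, var2)
        return z_fused, var_fused

    # Example: a low-noise sensor combined with a noisier one (illustrative values)
    z_a = np.array([1.00, 1.10, 1.20])
    z_b = np.array([1.02, 1.08, 1.25])
    z, v = fuse_inverse_variance(z_a, 0.01, z_b, 0.04)
    print(z, v)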
Potentials and Limits of Bayesian Networks to Deal with Uncertainty in the Assessment of Climate Change Adaptation Policies
Bayesian networks (BNs) have been increasingly applied to support management and decision-making processes under conditions of environmental variability and uncertainty, providing logical and holistic reasoning in complex systems, since they succinctly and effectively translate causal assertions between variables into patterns of probabilistic dependence. Through a theoretical assessment of the features and the statistical rationale of BNs, and a review of specific applications to ecological modelling, natural resource management, and climate change policy issues, the present paper analyses the effectiveness of the BN model as a synthesis framework that would allow the user to manage the uncertainty characterising the definition and implementation of climate change adaptation policies. The review highlights the potential of the model to characterise, incorporate and communicate uncertainty, with the aim of providing efficient support for an informed and transparent decision-making process. The possible drawbacks arising from the implementation of BNs are also analysed, and potential solutions to overcome them are provided.

Keywords: Adaptation to Climate Change, Bayesian Network, Uncertainty
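To make the probabilistic-dependence idea concrete, the following minimal sketch shows the defining factorisation of a Bayesian network on three hypothetical variables (ClimateScenario, FloodRisk, PolicySuccess) with illustrative numbers; it is not a model from the paper.

    # Hypothetical chain ClimateScenario -> FloodRisk -> PolicySuccess, with the joint
    # probability factorised as P(C, F, S) = P(C) * P(F | C) * P(S | F),
    # which is the defining property of a Bayesian network.
    p_c = {"dry": 0.4, "wet": 0.6}                    # P(ClimateScenario)
    p_f_given_c = {"dry": {"low": 0.8, "high": 0.2},  # P(FloodRisk | ClimateScenario)
                   "wet": {"low": 0.3, "high": 0.7}}
    p_s_given_f = {"low": {"ok": 0.9, "fail": 0.1},   # P(PolicySuccess | FloodRisk)
                   "high": {"ok": 0.5, "fail": 0.5}}

    # Marginal probability that the adaptation policy succeeds, obtained by summing
    # the factorised joint over all scenario/risk combinations.
    p_ok = sum(p_c[c] * p_f_given_c[c][f] * p_s_given_f[f]["ok"]
               for c in p_c for f in ("low", "high"))
    print(f"P(PolicySuccess = ok) = {p_ok:.3f}")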
Why (and How) Networks Should Run Themselves
The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications suggest that these increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance based on high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
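A minimal sketch of the measure-learn-act loop the paper argues for is given below; the telemetry and control hooks (collect_performance, apply_config) and the choice of a gradient-boosted regressor are assumptions for illustration, not the authors' system.

    # Closed-loop sketch: learn a model of application performance from measurements,
    # use it to pick a configuration, then feed the observed outcome back as training data.
    from sklearn.ensemble import GradientBoostingRegressor

    model = GradientBoostingRegressor()   # learned model of end-to-end performance
    history_X, history_y = [], []         # (network state + config) -> observed performance

    def control_step(state, candidate_configs, collect_performance, apply_config):
        # state and each config are flat feature lists; the hooks are operator-specific.
        if history_X:
            model.fit(history_X, history_y)
            # Pick the configuration the model predicts will perform best for this state.
            best = max(candidate_configs,
                       key=lambda cfg: model.predict([state + cfg])[0])
        else:
            best = candidate_configs[0]    # no data yet: fall back to a default
        apply_config(best)
        perf = collect_performance()       # closed loop: measure the outcome...
        history_X.append(state + best)     # ...and feed it back as training data
        history_y.append(perf)
        return best, perf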
Knowledge-based systems and geological survey
This personal and pragmatic review of the philosophy underpinning methods of geological surveying suggests that important influences of information technology have yet to make their impact. Early approaches took existing systems as metaphors, retaining the separation of maps, map explanations and information archives, organised around map sheets of fixed boundaries, scale and content. But system design should look ahead: a computer-based knowledge system for the same purpose can be built around hierarchies of spatial objects and their relationships, with maps as one means of visualisation, and information types linked as hypermedia and integrated in mark-up languages. The system framework and ontology, derived from the general geoscience model, could support consistent representation of the underlying concepts and maintain reference information on object classes and their behaviour. Models of processes and historical configurations could clarify the reasoning at any level of object detail and introduce new concepts such as complex systems. The up-to-date interpretation might centre on spatial models, constructed with explicit geological reasoning and evaluation of uncertainties. Assuming (at a future time) full computer support, the field survey results could be collected in real time as a multimedia stream, hyperlinked to and interacting with the other parts of the system as appropriate. Throughout, the knowledge is seen as human knowledge, with interactive computer support for recording and storing the information and processing it by such means as interpolating, correlating, browsing, selecting, retrieving, manipulating, calculating, analysing, generalising, filtering, visualising and delivering the results. Responsibilities may have to be reconsidered for various aspects of the system, such as: field surveying; spatial models and interpretation; geological processes, past configurations and reasoning; standard setting, system framework and ontology maintenance; training; storage, preservation, and dissemination of digital records
What May Visualization Processes Optimize?
In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
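The sketch below illustrates the kind of quantity involved: the reduction of maximal entropy when an input alphabet is transformed into a smaller output alphabet, divided by a notional processing cost. The paper's full measure also accounts for the distortion a transformation may introduce, which this sketch omits; the alphabet sizes and cost are assumed for the example.

    import math

    def max_entropy(alphabet_size):
        # Maximal (Shannon) entropy of an alphabet: log2 of the number of distinct letters.
        return math.log2(alphabet_size)

    def benefit_per_cost(input_alphabet, output_alphabet, processing_cost):
        # Alphabet compression: reduction of maximal entropy from input to output space,
        # divided by a (hypothetical) processing cost to give a simple benefit/cost ratio.
        compression = max_entropy(input_alphabet) - max_entropy(output_alphabet)
        return compression / processing_cost

    # Example: binning 4096 raw values into 16 colour classes at some notional cost.
    print(benefit_per_cost(4096, 16, processing_cost=2.0))  # (12 - 4) / 2 = 4 bits per unit cost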
A Saliency-based Clustering Framework for Identifying Aberrant Predictions
In machine learning, classification tasks serve as the cornerstone of a wide range of real-world applications. Reliable, trustworthy classification is particularly intricate in biomedical settings, where the ground truth is often inherently uncertain and relies on high degrees of human expertise for labeling. Traditional metrics such as precision and recall, while valuable, are insufficient for capturing the nuances of these ambiguous scenarios. Here we introduce the concept of aberrant predictions, emphasizing that the nature of classification errors is as critical as their frequency. We propose a novel, efficient training methodology aimed at both reducing the misclassification rate and discerning aberrant predictions. Our framework demonstrates a substantial improvement in model performance, achieving a 20% increase in precision. We apply this methodology to the less-explored domain of veterinary radiology, where the stakes are high but have not been as extensively studied compared to human medicine. By focusing on the identification and mitigation of aberrant predictions, we enhance the utility and trustworthiness of machine learning classifiers in high-stakes, real-world scenarios, including new applications in the veterinary world.
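One plausible reading of a saliency-based clustering framework (a sketch of the general idea, not the authors' exact algorithm) is to cluster per-prediction saliency maps and flag predictions whose maps fall in rare clusters; how the saliency maps are computed is model-specific and assumed given here.

    import numpy as np
    from sklearn.cluster import KMeans

    def flag_aberrant(saliency_maps, n_clusters=5, min_fraction=0.05):
        """Cluster flattened saliency maps and flag predictions falling in rare clusters.

        saliency_maps: array of shape (n_samples, n_features), one flattened map per
                       prediction. Returns a boolean mask marking predictions whose
                       saliency pattern is unusual.
        """
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(saliency_maps)
        counts = np.bincount(labels, minlength=n_clusters)
        rare = counts < min_fraction * len(saliency_maps)   # clusters with few members
        return rare[labels]                                  # True where the prediction is aberrant

    # Example with random stand-in "saliency maps"
    maps = np.random.rand(200, 64)
    print(flag_aberrant(maps).sum(), "potentially aberrant predictions")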
Data-driven nonparametric Li-ion battery ageing model aiming at learning from real operation data – Part A: storage operation
Conventional Li-ion battery ageing models, such as electrochemical, semi-empirical and empirical models, require a significant amount of time and experimental resources to provide accurate predictions under realistic operating conditions. At the same time, there is significant interest from industry in the introduction of new data collection telemetry technology. This implies the forthcoming availability of a significant amount of real-world battery operation data. In this context, the development of ageing models able to learn from in-field battery operation data is an interesting solution to mitigate the need for exhaustive laboratory testing.
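As one example of such a data-driven nonparametric learner (an illustration, not necessarily the model developed in the paper), Gaussian-process regression can map storage conditions to remaining capacity directly from data; the features, kernel choice and numbers below are assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Hypothetical storage-ageing records: [temperature degC, state of charge %, storage time days]
    X = np.array([[25, 50, 100], [25, 100, 100], [45, 50, 100],
                  [45, 100, 100], [25, 50, 300], [45, 100, 300]], dtype=float)
    y = np.array([0.99, 0.97, 0.96, 0.92, 0.97, 0.85])  # remaining capacity (fraction, illustrative)

    # Nonparametric regression: an RBF kernel plus a noise term learns capacity fade
    # directly from the data, without assuming a semi-empirical ageing law.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[10.0, 30.0, 100.0]) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X, y)

    # Predict remaining capacity (with uncertainty) for an unseen storage condition.
    mean, std = gp.predict(np.array([[35.0, 80.0, 200.0]]), return_std=True)
    print(f"predicted capacity: {mean[0]:.3f} +/- {std[0]:.3f}")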
propnet: A Knowledge Graph for Materials Science
Discovering the ideal material for a new application involves determining its numerous properties, such as electronic, mechanical, or thermodynamic properties, to match those required by the desired application. The rise of high-throughput computation has meant that large databases of material properties are now accessible to scientists. However, these databases contain far more information than might appear at first glance, since many relationships exist in the materials science literature to derive, or at least approximate, additional properties. propnet is a new computational framework designed to help scientists automatically calculate additional information from their datasets. It does this by constructing a network graph of relationships between different materials properties and traversing this graph. Initially, propnet contains a catalog of over 100 property relationships, but the hope is for this to expand significantly in the future, and contributions from the community are welcomed.
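The toy sketch below illustrates the property-graph idea (it does not use propnet's actual API): each rule derives one property from others, and the rules are applied repeatedly until no new properties can be derived; the two relationships and input values are illustrative.

    # Toy property graph: (output property, required inputs, derivation function)
    RULES = [
        ("molar_volume", ("molar_mass", "density"),
         lambda p: p["molar_mass"] / p["density"]),
        ("bulk_modulus", ("youngs_modulus", "poisson_ratio"),
         lambda p: p["youngs_modulus"] / (3 * (1 - 2 * p["poisson_ratio"]))),
    ]

    def derive(properties):
        props = dict(properties)
        changed = True
        while changed:                 # keep traversing until the graph yields nothing new
            changed = False
            for out, inputs, fn in RULES:
                if out not in props and all(k in props for k in inputs):
                    props[out] = fn(props)
                    changed = True
        return props

    # Example: starting from measured properties, further properties are derived automatically.
    print(derive({"molar_mass": 58.44, "density": 2.165,
                  "youngs_modulus": 40.0, "poisson_ratio": 0.25}))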