Data granulation by the principles of uncertainty
Research in granular modeling has produced a variety of mathematical models,
such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets,
which are all suitable to characterize the so-called information granules.
Modeling of the input data uncertainty is recognized as a crucial aspect in
information granulation. Moreover, uncertainty is a well-studied concept in
many mathematical settings, such as those of probability theory, fuzzy set
theory, and possibility theory. This fact suggests that an appropriate
quantification of the uncertainty expressed by the information granule model
could be used to define an invariant property, to be exploited in practical
situations of information granulation. In this perspective, a procedure of
information granulation is effective if the uncertainty conveyed by the
synthesized information granule is in a monotonically increasing relation with
the uncertainty of the input data. In this paper, we present a data granulation
framework that elaborates over the principles of uncertainty introduced by
Klir. Since uncertainty is a mesoscopic descriptor of systems and data, these
principles can be applied regardless of the input data type and the
specific mathematical setting adopted for the information granules. The
proposed framework is conceived (i) to offer a guideline for the synthesis of
information granules and (ii) to build a groundwork for comparing and
quantitatively judging different data granulation procedures. To provide a
suitable case study, we introduce a new data granulation technique based on the
minimum sum of distances, which is designed to generate type-2 fuzzy sets. We
analyze the procedure by performing different experiments on two distinct data
types: feature vectors and labeled graphs. Results show that the uncertainty of
the input data is suitably conveyed by the generated type-2 fuzzy set models.
Comment: 16 pages, 9 figures, 52 references
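The monotonicity criterion above (more uncertain input data should yield a more uncertain granule) can be illustrated with a simple uncertainty measure. The following is a minimal sketch using the classical de Luca-Termini fuzziness of a type-1 fuzzy set, not the paper's own type-2 measure; the helper name `fuzzy_entropy` and the example membership vectors are illustrative assumptions:

```python
import math

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzziness of a type-1 fuzzy set, normalized to [0, 1]:
    0 for crisp sets (all memberships 0 or 1), 1 when all memberships are 0.5."""
    def h(u):
        if u in (0.0, 1.0):
            return 0.0
        return -(u * math.log2(u) + (1 - u) * math.log2(1 - u))
    return sum(h(u) for u in memberships) / len(memberships)

crisp = [0.0, 1.0, 1.0, 0.0]     # no uncertainty
vague = [0.4, 0.6, 0.5, 0.55]    # memberships near 0.5: high uncertainty
assert fuzzy_entropy(crisp) == 0.0
assert fuzzy_entropy(vague) > fuzzy_entropy(crisp)
```

A granulation procedure satisfying the criterion would map inputs of growing dispersion to granules whose measured uncertainty grows accordingly.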
An empirical methodology for developing stockmarket trading systems using artificial neural networks
Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey
Growing progress in sensor technology has constantly expanded the number and
range of low-cost, small, and portable sensors on the market, increasing the
number and type of physical phenomena that can be measured with wirelessly
connected sensors. Large-scale deployments of wireless sensor networks (WSN)
involving hundreds or thousands of devices and limited budgets often constrain
the choice of sensing hardware, which generally has reduced accuracy,
precision, and reliability. Therefore, it is challenging to achieve good data
quality and maintain error-free measurements during the whole system lifetime.
Self-calibration or recalibration in ad hoc sensor networks to preserve data
quality is essential, yet challenging, for several reasons, such as the
existence of random noise and the absence of suitable general models.
Calibration performed in the field, without accurate and controlled
instrumentation, is said to be in an uncontrolled environment. This paper
provides current and fundamental self-calibration approaches and models for
wireless sensor networks in uncontrolled environments.
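A minimal sketch of one common recalibration step surveyed in this setting: fitting a linear (gain/offset) correction for a drifted sensor against a co-located reference instrument. The function name `fit_gain_offset` and the drift values are illustrative assumptions, not any specific surveyed method:

```python
def fit_gain_offset(raw, reference):
    """Ordinary least-squares fit of reference ~ gain * raw + offset."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(reference) / n
    sxx = sum((x - mean_x) ** 2 for x in raw)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, reference))
    gain = sxy / sxx
    offset = mean_y - gain * mean_x
    return gain, offset

# Hypothetical drifted sensor: it reads 0.9 * truth + 3 instead of the truth.
truth = [10.0, 20.0, 30.0, 40.0]
raw = [0.9 * t + 3.0 for t in truth]
gain, offset = fit_gain_offset(raw, truth)
corrected = [gain * r + offset for r in raw]  # recovers the reference values
```

After fitting on a short co-location period, the recovered gain and offset can be applied to subsequent raw readings to preserve data quality in the field.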
Open source R for applying machine learning to RPAS remote sensing images
The increase in the number of remote sensing platforms, ranging from satellites to close-range Remotely Piloted Aircraft Systems (RPAS), is leading to a growing demand for new image processing and classification tools. This article presents a comparison of the Random Forest (RF) and Support Vector Machine (SVM) machine-learning algorithms for extracting land-use classes in an RPAS-derived orthomosaic using open source R packages.
The camera used in this work captures the reflectance of the Red, Blue, Green and Near Infrared channels of a target. The full dataset is therefore a 4-channel raster image. The classification performance of the two methods is tested at varying sizes of training sets. The SVM and RF are evaluated using the Kappa index, classification accuracy and classification error as accuracy metrics. The training sets are randomly obtained as subsets of 2 to 20% of the total number of raster cells, with stratified sampling according to the land-use classes. Ten runs are done for each training set to calculate the variance in results. The control dataset consists of an independent classification obtained by photointerpretation. The validation is carried out (i) using K-fold cross-validation, (ii) using the pixels from the validation test set, and (iii) using the pixels from the full test set.
Validation with K-fold and with the validation dataset shows that SVM gives better results, but RF proves to perform better when the training set is larger. Classification error and classification accuracy follow the trend of the Kappa index.
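The Kappa index used as the headline accuracy metric above is computed from a confusion matrix of reference versus predicted classes. A minimal sketch in Python (the article itself works with R packages; the function name `cohen_kappa` and the example matrix are illustrative assumptions):

```python
def cohen_kappa(confusion):
    """Cohen's Kappa from a square confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of cells on the diagonal.
    observed = sum(confusion[i][i] for i in range(k)) / total
    # Expected agreement by chance, from the row and column marginals.
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical 2-class result: 90 of 100 validation pixels classified correctly.
cm = [[45, 5],
      [5, 45]]
```

Kappa discounts the agreement expected by chance, which is why it can diverge from raw classification accuracy on imbalanced land-use classes.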
Benchmarking the Operation of Quantum Heuristics and Ising Machines: Scoring Parameter Setting Strategies on Optimization Applications
We discuss guidelines for evaluating the performance of parameterized
stochastic solvers for optimization problems, with particular attention to
systems that employ novel hardware, such as digital quantum processors running
variational algorithms, analog processors performing quantum annealing, or
coherent Ising Machines. We illustrate through an example a benchmarking
procedure grounded in the statistical analysis of the expectation of a given
performance metric measured in a test environment. In particular, we discuss
the necessity and cost of setting parameters that affect the algorithm's
performance. The optimal value of these parameters could vary significantly
between instances of the same target problem. We present an open-source
software package that facilitates the design, evaluation, and visualization of
practical parameter-tuning strategies for complex uses of the heterogeneous
components of the solver. We examine in detail an example that uses parallel
tempering together with a simulator of a photonic Coherent Ising Machine, and
display the scoring of an illustrative baseline family of parameter-setting
strategies that feature an exploration-exploitation trade-off.
Comment: 13 pages, 6 figures
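As a hedged illustration of the kind of parameterized stochastic solver discussed above, the following is a toy parallel tempering routine for a small Ising instance, in pure Python for exposition only; the temperature ladder, sweep budget, and 6-spin ferromagnet are illustrative assumptions, not the paper's benchmark setup:

```python
import math
import random

def ising_energy(spins, J):
    """E = -sum_{i<j} J[i][j] * s_i * s_j for spins s_i in {-1, +1}."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def parallel_tempering(J, temps, sweeps, seed=0):
    """Toy parallel tempering for an Ising problem with symmetric couplings J.
    Returns the lowest energy seen and the corresponding configuration."""
    rng = random.Random(seed)
    n = len(J)
    replicas = [[rng.choice((-1, 1)) for _ in range(n)] for _ in temps]
    energies = [ising_energy(s, J) for s in replicas]
    best_e, best_s = min(zip(energies, (list(s) for s in replicas)))
    for _ in range(sweeps):
        # Metropolis single-spin updates inside each replica.
        for k, (s, T) in enumerate(zip(replicas, temps)):
            for i in range(n):
                dE = 2 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i] = -s[i]
                    energies[k] += dE
                    if energies[k] < best_e:
                        best_e, best_s = energies[k], list(s)
        # Replica-exchange moves between neighboring temperatures.
        for k in range(len(temps) - 1):
            d_beta = 1.0 / temps[k] - 1.0 / temps[k + 1]
            if rng.random() < min(1.0, math.exp(d_beta * (energies[k] - energies[k + 1]))):
                replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
                energies[k], energies[k + 1] = energies[k + 1], energies[k]
    return best_e, best_s

# Hypothetical 6-spin ferromagnet (all couplings +1): the ground state is
# all-aligned, with energy -C(6, 2) = -15.
n = 6
J = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
best_e, best_s = parallel_tempering(J, temps=[0.2, 0.5, 1.0, 2.0], sweeps=200)
```

The temperature ladder and sweep budget here are exactly the kind of instance-dependent parameters whose setting strategies such a benchmarking procedure would score.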