The ethics of uncertainty for data subjects
Modern health data practices come with many practical uncertainties. In this paper, I argue that data subjects' trust in the institutions and organizations that control their data, and their ability to know their own moral obligations in relation to their data, are undermined by significant uncertainties regarding the what, how, and who of mass data collection and analysis. I conclude by considering how proposals for managing situations of high uncertainty might be applied to this problem. These emphasize increasing organizational flexibility, knowledge, and capacity, and reducing hazards.
Data granulation by the principles of uncertainty
Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, all of which are suitable for characterizing so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as those of probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. From this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, it is possible to apply such principles regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models.
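The minimum-sum-of-distances idea can be illustrated with a short sketch. The following Python fragment is a simplified, hypothetical reading of the technique for one-dimensional feature data only: it picks the medoid (the sample minimizing the sum of distances to all others) as the granule prototype and derives an interval-valued, type-2-style membership whose footprint widens with input dispersion. The function name and the membership shapes are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def granulate_min_sum_of_distances(data):
        """Illustrative sketch: choose the medoid as prototype, then derive an
        interval-valued membership whose width reflects input dispersion."""
        data = np.asarray(data, dtype=float)
        # Pairwise distances between 1-D samples (the paper also covers graphs).
        dists = np.abs(data[:, None] - data[None, :])
        medoid = data[np.argmin(dists.sum(axis=1))]   # minimum sum of distances
        spread = np.abs(data - medoid)
        scale = spread.max() if spread.max() > 0 else 1.0
        # Upper membership decays with distance from the prototype; the lower
        # membership shrinks it, leaving a footprint of uncertainty whose area
        # grows with the dispersion of the input data.
        upper = 1.0 - spread / scale
        lower = upper * (1.0 - spread.std() / scale)
        return medoid, lower, upper

    medoid, lo, up = granulate_min_sum_of_distances([1.0, 1.2, 1.1, 3.0])
    print(medoid, up - lo)  # footprint width tracks input uncertainty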
Uncertainty Analysis for Data-Driven Chance-Constrained Optimization
In this contribution, our previously developed framework for data-driven chance-constrained optimization is extended with an uncertainty analysis module. The module quantifies uncertainty in the output variables of rigorous simulations. It chooses the most accurate parametric continuous probability distribution model, minimizing the deviation between model and data. A constraint is added to favour less complex models with a minimal required quality of fit. The module builds on over 100 probability distribution models provided by the SciPy package in Python; a rigorous case study is conducted to select the four most relevant models for the application at hand. The applicability and precision of the uncertainty analysis module are investigated for an impact factor calculation in life cycle impact assessment, to quantify the uncertainty in the results. Furthermore, the extended framework is verified with data from a first-principles process model of a chlor-alkali plant, demonstrating the increased precision of the uncertainty description of the output variables and resulting in a 25% increase in accuracy in the chance-constraint calculation.
Funding: BMWi, 0350013A, ChemEFlex (feasibility analysis for load flexibilisation of electrochemical processes in industry; subproject: modelling of the chlor-alkali electrolysis and other processes and their assessment regarding economic viability and possible barriers); DFG, 414044773, Open Access Publizieren 2019-2020 / Technische Universität Berlin.
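As a sketch of the distribution-selection step, the following Python fragment fits a handful of SciPy distributions by maximum likelihood and keeps the one with the best Kolmogorov-Smirnov statistic, subject to a crude parameter-count constraint. The candidate list, the KS criterion, and the max_params bound are assumptions standing in for the paper's deviation measure and complexity constraint.

    import numpy as np
    from scipy import stats

    def pick_distribution(samples,
                          candidates=("norm", "lognorm", "gamma", "weibull_min"),
                          max_params=3):
        """Fit each candidate and keep the best KS statistic, favouring
        less complex models via a simple parameter-count constraint."""
        best_name, best_ks, best_params = None, np.inf, None
        for name in candidates:
            dist = getattr(stats, name)
            params = dist.fit(samples)        # maximum-likelihood fit
            if len(params) > max_params:      # favour less complex models
                continue
            ks = stats.kstest(samples, name, args=params).statistic
            if ks < best_ks:
                best_name, best_ks, best_params = name, ks, params
        return best_name, best_params, best_ks

    rng = np.random.default_rng(0)
    name, params, ks = pick_distribution(rng.lognormal(size=500))
    print(name, ks)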
Uncertainty-dependent data collection in vehicular sensor networks
Vehicular sensor networks (VSNs) are built on top of vehicular ad-hoc networks (VANETs) by equipping vehicles with sensing devices. These new technologies create a huge opportunity to extend the sensing capabilities of existing road traffic control systems and improve their performance. Efficient utilisation of the wireless communication channel is one of the basic issues in vehicular network development. This paper presents and evaluates data collection algorithms that use uncertainty estimates to reduce data transmission in a VSN-based road traffic control system.
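A minimal sketch of an uncertainty-gated transmission policy, an assumption in the spirit of the paper rather than its published algorithm: the sender lets a receiver-side prediction variance grow between updates and transmits only when it crosses a bound, spending channel capacity only where the receiver's estimate would otherwise be too uncertain.

    def uncertainty_gated_stream(measurements, process_noise_var=1.0, max_var=4.0):
        """Transmit a sample only when the receiver-side prediction
        uncertainty, grown since the last transmission, exceeds a bound."""
        transmitted = []
        var_since_update = 0.0
        for t, z in enumerate(measurements):
            var_since_update += process_noise_var  # uncertainty grows each step
            if var_since_update >= max_var:        # receiver estimate too uncertain
                transmitted.append((t, z))         # spend channel capacity
                var_since_update = 0.0             # receiver is re-synchronised
        return transmitted

    sent = uncertainty_gated_stream([12, 13, 15, 14, 16, 18, 17, 19])
    print(f"{len(sent)} of 8 samples transmitted:", sent)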
Data Uncertainty in Real Estate Forecasting
The rapid expansion of the TMT sector in the late 1990s and the more recent growing regulatory and corporate focus on business continuity and security have raised the profile of data centres. Data centres offer a unique blend of occupational, physical and technological characteristics compared to conventional real estate assets. Limited trading and the heterogeneity of data centres also cause higher levels of appraisal uncertainty. In practice, the application of conventional discounted cash flow approaches requires information about a wide range of inputs that is difficult to derive from limited market signals or to estimate analytically. This paper proposes an approach that uses pricing signals from similar traded cash flows. Based upon 'the law of one price', the method draws on the premise that two identical future cash flows must have the same value now. Given the difficulties of estimating exit values, an alternative is to analyse the expected cash flows of a data centre over the life cycle of the building, with corporate bond yields used as a proxy for the appropriate discount rates for lease income. Since liabilities are quite diverse, a number of proxies are suggested as discount and capitalisation rates, including index-linked, fixed-interest and zero-coupon bonds. Although there are rarely assets that have identical cash flows, and some approximation is necessary, the level of appraiser subjectivity is dramatically reduced.
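A minimal worked sketch of the discounting step, with entirely hypothetical numbers: each year of lease income is discounted at the bond yield of the matching maturity rather than at a single subjective rate, which is the 'law of one price' premise in miniature.

    def present_value(cash_flows, spot_yields):
        """Discount each year's cash flow at the zero-coupon yield of the
        matching maturity, rather than a single subjective discount rate."""
        return sum(cf / (1.0 + y) ** t
                   for t, (cf, y) in enumerate(zip(cash_flows, spot_yields), start=1))

    lease_income = [1.0e6] * 10                          # 10 years of rent (assumed)
    bond_yields = [0.04 + 0.002 * t for t in range(10)]  # assumed yield curve
    print(f"PV of lease stream: {present_value(lease_income, bond_yields):,.0f}")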
FSS++ Workshop Report: Handling Uncertainty for Data Quality Management
This report describes the results of the eSCF Awareness Workshop on Handling Uncertainty for Data Quality Management - Challenges from Transport and Supply Chain Management, held on June 5, 2018 in Heeze, The Netherlands. The goal of this workshop was to create and enhance awareness of the data quality management issues encountered in practice by business organizations that aim to integrate a data-analytical mindset into their operations.
Uncertainty Estimates for Theoretical Atomic and Molecular Data
Sources of uncertainty are reviewed for calculated atomic and molecular data that are important for plasma modeling: atomic and molecular structure and cross sections for electron-atom, electron-molecule, and heavy-particle collisions. We concentrate on model uncertainties due to approximations to the fundamental many-body quantum mechanical equations, and we aim to provide guidelines for estimating uncertainties as a routine part of computations of data for structure and scattering.
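One routine recipe in this spirit, sketched here under the assumption of a convergent hierarchy of approximations and not taken verbatim from the paper, is to recompute the same quantity at successive model levels (for example, successively larger close-coupling expansions or basis sets) and quote the final change as the uncertainty estimate:

    def convergence_uncertainty(values_by_level):
        """values_by_level: the same cross section computed at increasing
        levels of approximation; quote the last refinement as the error."""
        best = values_by_level[-1]
        last_step = abs(values_by_level[-1] - values_by_level[-2])
        return best, last_step  # the final change serves as a 1-sigma-like bound

    # Hypothetical cross sections (arbitrary units) at four model levels:
    value, sigma = convergence_uncertainty([3.9, 3.4, 3.25, 3.21])
    print(f"{value} +/- {sigma:.2f}")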
Identification of individual demands from market data under uncertainty
We show that, even under incomplete markets, the equilibrium manifold identifies individual demands everywhere in their domains. Under partial observation of the manifold, we determine maximal subsets of the domains on which identification holds. For this, we assume conditions of smoothness, interiority and regularity. It is crucial that there be date-zero consumption. As a by-product, we develop some duality theory under incomplete markets.
Estimating the uncertainty of areal precipitation using data assimilation
We present a method to estimate the spatially and temporally variable uncertainty of areal precipitation data. The aim of the method is to merge measurements from different sources, remote sensing and in situ, into a combined precipitation product and to provide an associated dynamic uncertainty estimate. This estimate should provide an accurate representation of uncertainty in both time and space, an adjustment to additional observations merged into the product through data assimilation, and flow dependency. Such a detailed uncertainty description is important, for example, to generate precipitation ensembles for probabilistic hydrological modelling or to specify accurate error covariances when using precipitation observations for data assimilation into numerical weather prediction models. The presented method uses the Local Ensemble Transform Kalman Filter and an ensemble nowcasting model. The model provides information about the precipitation displacement over time and is continuously updated by assimilation of observations. In this way, the precipitation product and its uncertainty estimate provided by the nowcasting ensemble evolve consistently in time and become flow-dependent. The method is evaluated in a proof-of-concept study focusing on weather radar data from four precipitation events. The study demonstrates that the dynamic areal uncertainty estimate outperforms a constant benchmark uncertainty value in all cases for one of the evaluated scores, and in half of the cases for the other score. Thus, the flow dependency introduced by the coupling of data assimilation and nowcasting enables a more accurate spatial and temporal distribution of uncertainty. The mixed results on the second score point to the importance of a good probabilistic nowcasting scheme for the performance of the method.
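To make the ensemble update concrete, here is a minimal perturbed-observation ensemble Kalman update in Python. It is a simpler stand-in for the Local Ensemble Transform Kalman Filter used in the paper, and the gamma-distributed nowcast ensemble and single-gauge setup are illustrative assumptions; assimilating one rain-gauge value shrinks the ensemble spread, i.e. the uncertainty estimate, near the gauge.

    import numpy as np

    def enkf_update(ensemble, obs, obs_idx, obs_var, rng):
        """Perturbed-observation EnKF update for one scalar observation.
        ensemble: (n_members, n_gridpoints) forecast of a precipitation field."""
        n, _ = ensemble.shape
        H = ensemble[:, obs_idx]                  # forecast in observation space
        P_hh = H.var(ddof=1) + obs_var            # innovation variance
        P_xh = ((ensemble - ensemble.mean(0)) *
                (H - H.mean())[:, None]).sum(0) / (n - 1)
        gain = P_xh / P_hh                        # Kalman gain, one column
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
        return ensemble + gain[None, :] * (perturbed - H)[:, None]

    rng = np.random.default_rng(1)
    ens = rng.gamma(2.0, 1.5, size=(20, 8))       # nowcast ensemble (mm/h)
    updated = enkf_update(ens, obs=2.0, obs_idx=3, obs_var=0.1, rng=rng)
    print(ens[:, 3].std(), "->", updated[:, 3].std())  # spread shrinks at the gauge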
