A prescriptive approach to qualify and quantify customer value for value-based requirements engineering
Customer-based product development has recently become a popular paradigm. Customer expectations and needs can be identified and transformed into requirements for product design with the help of various methods and tools. In many cases, however, these models fail to focus on the perceived value that is crucial when customers decide whether to purchase a product. In this paper, a prescriptive approach to support value-based requirements engineering (RE) is proposed, describing its foundations, procedures and initial applications in the context of RE for commercial aircraft. An integrated set of techniques, such as means-ends analysis, part-whole analysis and multi-attribute utility theory, is introduced in order to understand customer values in depth and breadth. Technically, this enables identifying implicit value, logically structuring collected statements of customer expectations, and performing value modelling and simulation. Additionally, it helps to put in place a system, derived from the proposed approach, for measuring customer satisfaction. The approach offers significant potential to develop effective value creation strategies for the development of new products.
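The multi-attribute utility theory mentioned in the abstract can be illustrated with a minimal additive-utility sketch. The attributes, weights, and scores below are hypothetical illustrations for an aircraft purchase decision, not values from the paper:

```python
# Minimal sketch of additive multi-attribute utility theory (MAUT).
# Overall value U(x) = sum_i w_i * u_i(x_i), with weights summing to 1.
# All attribute names, weights, and utilities here are invented examples.

def additive_utility(weights, utilities):
    """Weighted sum of single-attribute utilities."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[a] * utilities[a] for a in weights)

# Hypothetical customer attributes for comparing two aircraft options.
weights = {"fuel_efficiency": 0.5, "cabin_comfort": 0.3, "range": 0.2}
option_a = {"fuel_efficiency": 0.9, "cabin_comfort": 0.6, "range": 0.7}
option_b = {"fuel_efficiency": 0.6, "cabin_comfort": 0.9, "range": 0.8}

print(round(additive_utility(weights, option_a), 2))  # 0.77
print(round(additive_utility(weights, option_b), 2))  # 0.73
```

Under these assumed weights, option A is preferred; the paper's actual value model for commercial aircraft is considerably richer than this additive form.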
On-the-fly Data Assessment for High Throughput X-ray Diffraction Measurement
Investment in brighter sources and larger and faster detectors has accelerated the speed of data acquisition at national user facilities. The accelerated data acquisition offers many opportunities for discovery of new materials, but it also presents a daunting challenge. The rate of data acquisition far exceeds the current speed of data quality assessment, resulting in less than optimal data and data coverage, which in extreme cases forces recollection of data. Herein, we show how this challenge can be addressed through development of an approach that makes routine data assessment automatic and instantaneous. By extracting and visualizing customized attributes in real time, data quality and coverage, as well as other scientifically relevant information contained in large datasets, are highlighted. Deployment of such an approach not only improves the quality of data but also helps optimize usage of expensive characterization resources by prioritizing measurements of highest scientific impact. We anticipate that our approach will become a starting point for a sophisticated decision tree that optimizes data quality and maximizes scientific content in real time through automation. With these efforts to integrate more automation in data collection and analysis, we can truly take advantage of the accelerating speed of data acquisition.
Data sets and data quality in software engineering
OBJECTIVE - to assess the extent and types of techniques used to manage quality within software engineering data sets. We consider this a particularly interesting question in the context of initiatives to promote sharing and secondary analysis of data sets.
METHOD - we perform a systematic review of available empirical software engineering studies.
RESULTS - only 23 of the many hundreds of studies assessed explicitly considered data quality.
CONCLUSIONS - first, the community needs to consider the quality and appropriateness of the data set being utilised; not all data sets are equal. Second, we need more research into means of identifying, and ideally repairing, noisy cases. Third, it should become routine to use sensitivity analysis to assess conclusion stability with respect to the assumptions that must be made concerning noise levels.
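The sensitivity analysis the conclusions recommend can be sketched as a small Monte Carlo experiment: inject label noise at several assumed rates and check whether the derived statistic stays in a range that supports the original conclusion. The data and noise levels below are illustrative assumptions, not from the review:

```python
# Toy sensitivity analysis: re-derive a summary statistic under injected
# label noise and inspect how far it can move. The dataset (30 defective
# of 100 modules) and the noise rates are invented for illustration.
import random

def defect_rate(labels):
    """Fraction of modules labelled defective (1)."""
    return sum(labels) / len(labels)

random.seed(1)
labels = [1] * 30 + [0] * 70          # observed defect rate: 0.30

for noise in (0.05, 0.10, 0.20):
    rates = []
    for _ in range(200):               # Monte Carlo draws of the noise
        noisy = [1 - y if random.random() < noise else y for y in labels]
        rates.append(defect_rate(noisy))
    lo, hi = min(rates), max(rates)
    print(f"noise={noise:.2f}: defect rate in [{lo:.2f}, {hi:.2f}]")
```

If a conclusion (say, "defect rate exceeds 0.25") survives across all plausible noise levels, it is stable; if the interval at a realistic noise level crosses the decision threshold, the conclusion rests on the noise assumption.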
Outlier detection techniques for wireless sensor networks: A survey
In the field of wireless sensor networks, measurements that significantly deviate from the normal pattern of sensed data are considered outliers. The potential sources of outliers include noise and errors, events, and malicious attacks on the network. Traditional outlier detection techniques are not directly applicable to wireless sensor networks due to the nature of sensor data and the specific requirements and limitations of these networks. This survey provides a comprehensive overview of existing outlier detection techniques specifically developed for wireless sensor networks. Additionally, it presents a technique-based taxonomy and a comparative table to be used as a guideline for selecting a technique suitable for the application at hand, based on characteristics such as data type, outlier type, outlier identity, and outlier degree.
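A minimal example of the statistics-based family of detectors the survey classifies is a median-absolute-deviation (MAD) filter over a sensor's readings. The threshold and the sample readings below are illustrative assumptions, not drawn from the survey:

```python
# Statistics-based outlier detection sketch using the modified z-score
# (0.6745 * |x - median| / MAD). The 3.5 cutoff is a common rule of
# thumb and the temperature readings are invented for illustration.
import statistics

def mad_outliers(readings, threshold=3.5):
    """Flag readings whose modified z-score exceeds the threshold."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:
        return []  # all readings (nearly) identical; nothing to flag
    return [x for x in readings if 0.6745 * abs(x - med) / mad > threshold]

temps = [21.1, 21.3, 20.9, 21.2, 35.0, 21.0, 21.1]
print(mad_outliers(temps))  # [35.0]
```

Real deployments face the constraints the survey emphasizes: such a filter must run on-node with limited memory and energy, and must distinguish sensor faults from genuine events, which a single-attribute threshold cannot do.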
Marginal Release Under Local Differential Privacy
Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals.
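The local model of differential privacy that the paper works in can be illustrated with the classic randomized-response mechanism for a one-way binary marginal. This is only a sketch of the local model itself, not the paper's Fourier-based mechanism, and the epsilon, population, and true marginal below are invented for illustration:

```python
# Local DP sketch: each user perturbs one private bit with randomized
# response, and the untrusted aggregator debiases the noisy sum to
# estimate the one-way marginal (fraction of 1s).
import math
import random

def perturb(bit, eps):
    """Report the true bit with prob e^eps / (e^eps + 1), else flip it."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < p else 1 - bit

def estimate_marginal(reports, eps):
    """Unbiased estimate of the true fraction of 1s from noisy reports."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

random.seed(0)
eps = 1.0                              # arbitrary privacy budget
truth = [1] * 700 + [0] * 300          # true marginal: 0.7
reports = [perturb(b, eps) for b in truth]
print(round(estimate_marginal(reports, eps), 2))  # close to 0.7
```

The estimator's error grows as the number of marginals does, which is the regime where the paper's result (Fourier-transform-based releases beating direct marginal releases) matters.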
Identifying Mislabeled Training Data
This paper presents a new approach to identifying and eliminating mislabeled training instances for supervised learning. The goal of this approach is to improve classification accuracies produced by learning algorithms by improving the quality of the training data. Our approach uses a set of learning algorithms to create classifiers that serve as noise filters for the training data. We evaluate single algorithm, majority vote and consensus filters on five datasets that are prone to labeling errors. Our experiments illustrate that filtering significantly improves classification accuracy for noise levels up to 30 percent. An analytical and empirical evaluation of the precision of our approach shows that consensus filters are conservative at throwing away good data at the expense of retaining bad data and that majority filters are better at detecting bad data at the expense of throwing away good data. This suggests that for situations in which there is a paucity of data, consensus filters are preferable, whereas majority vote filters are preferable for situations with an abundance of data.
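The majority-vote versus consensus distinction in the abstract can be sketched directly on the filters' voting rule. For brevity this sketch assumes the per-classifier predictions are already available; in the paper they would come from classifiers trained via cross-validation. The predictions and labels below are invented:

```python
# Sketch of majority-vote vs. consensus noise filters: an instance is
# flagged as mislabeled when classifiers disagree with its given label.
# 'consensus' requires ALL classifiers to disagree (conservative);
# 'majority' requires more than half (aggressive).

def filter_mislabeled(predictions, labels, mode="consensus"):
    """Return indices of instances flagged as mislabeled.

    predictions: one prediction list per classifier.
    """
    n_clf = len(predictions)
    flagged = []
    for i, label in enumerate(labels):
        disagree = sum(1 for clf in predictions if clf[i] != label)
        if mode == "consensus" and disagree == n_clf:
            flagged.append(i)
        elif mode == "majority" and disagree > n_clf / 2:
            flagged.append(i)
    return flagged

# Three hypothetical classifiers' predictions for five instances.
preds = [
    [0, 1, 1, 0, 1],
    [0, 1, 1, 1, 1],
    [0, 1, 1, 0, 1],
]
labels = [0, 1, 0, 1, 1]

print(filter_mislabeled(preds, labels, "majority"))   # [2, 3]
print(filter_mislabeled(preds, labels, "consensus"))  # [2]
```

The toy output mirrors the abstract's finding: the majority filter flags more instances (catching more bad data but risking good data), while the consensus filter only discards instances every classifier rejects.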