163,763 research outputs found
Service validity and service reliability of search, experience and credence services: A scenario study
The purpose of this research is to add to our understanding of the antecedents of customer satisfaction by examining the effects of service reliability (Is the service "correctly" produced?) and service validity (Is the "correct" service produced?) of search, experience and credence services.
Design/methodology/approach – Service validity and service reliability were manipulated in scenarios describing service encounters with different types of services. Customer satisfaction was measured using questionnaires.
Findings – Service validity and service reliability independently affect customer satisfaction with search services. For experience services, service validity and service reliability are necessary conditions for customer satisfaction. For credence services, no effects of service validity were found, but the effects of service reliability on customer satisfaction were profound.
Research limitations/implications – Scenarios provided a useful method to investigate customer evaluation of different types of service situations. A limitation of this method is that the participants were not observed in a real service situation but had to give their opinion on hypothetical scenarios.
Practical implications – For search and credence services, it is possible to compensate for low service validity by providing a highly reliable service. However, managers of experience services should be aware that little can be gained when either service validity or service reliability is faulty.
Originality/value – The present study provides empirical data on the effects of service reliability and the thus far neglected effects of service validity, and integrates these (new) concepts in the model of information verification
Implementing Toyota Production System (TPS) concept in a small automotive parts manufacturer
This study investigates the consequences of implementing the Toyota Production System (TPS) in a local automotive parts manufacturer's production line. The production line consists of three different processes and two inter-process buffers. A verified base model was created using the WITNESS™ computer simulation software. The primary objective of the study is to reduce work in progress (WIP) by varying the sizes of the inter-process buffers. Results generated from the simulation indicate that reducing the inter-process buffers simultaneously produces a significantly larger reduction in WIP than reducing each buffer independently
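To make the buffer-sizing comparison concrete, the following is a minimal, hypothetical sketch of a three-stage line with two finite inter-process buffers. It is not the WITNESS™ model from the study; the cycle times, buffer capacities and time-stepped logic are all illustrative assumptions.

```python
import random

def simulate(buffer_caps, cycle_times=(4, 5, 4), horizon=10_000, seed=0):
    """Toy three-stage line with two finite inter-process buffers.

    buffer_caps: capacities of buffer 1-2 and buffer 2-3
    cycle_times: mean processing time of each stage (arbitrary units)
    Returns the time-averaged work in progress (WIP) held in the buffers.
    """
    rng = random.Random(seed)
    buf = [0, 0]                    # current contents of the two buffers
    busy_until = [0.0, 0.0, 0.0]    # time at which each stage becomes free
    wip_area = 0.0
    t = 0.0
    while t < horizon:
        # Each stage completes at most one part per cycle; it blocks when its
        # output buffer is full or its input buffer is empty.
        if t >= busy_until[0] and buf[0] < buffer_caps[0]:
            buf[0] += 1
            busy_until[0] = t + rng.expovariate(1 / cycle_times[0])
        if t >= busy_until[1] and buf[0] > 0 and buf[1] < buffer_caps[1]:
            buf[0] -= 1
            buf[1] += 1
            busy_until[1] = t + rng.expovariate(1 / cycle_times[1])
        if t >= busy_until[2] and buf[1] > 0:
            buf[1] -= 1                      # finished part leaves the system
            busy_until[2] = t + rng.expovariate(1 / cycle_times[2])
        wip_area += sum(buf)                 # accumulate WIP over the time step
        t += 1.0
    return wip_area / horizon

# Shrinking both buffers together vs. shrinking only one of them.
print("both buffers 10 :", simulate((10, 10)))
print("both buffers  3 :", simulate((3, 3)))
print("only first    3 :", simulate((3, 10)))
print("only second   3 :", simulate((10, 3)))
```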
Multispectral Palmprint Encoding and Recognition
Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins; making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate the
viability of the palmprint as a reliable and promising biometric. All source code
is publicly available.
Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral
Palmprint Encoding for Human Recognition", International Conference on
Computer Vision, 2011. MATLAB code available:
https://sites.google.com/site/zohaibnet/Home/code
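As an illustration of the general idea of binary feature codes matched by Hamming distance (the actual Contour Code encoding and binary hash table layout are defined in the paper and are not reproduced here), a minimal sketch with made-up feature maps might look as follows:

```python
import numpy as np

def encode(feature_maps):
    """Toy orientation-style encoding: for each pixel, mark the index of the
    dominant filter response as a one-hot binary code. This is an assumed
    stand-in, not the paper's exact encoding scheme."""
    responses = np.stack(feature_maps, axis=0)                    # (n_filters, H, W)
    winner = responses.argmax(axis=0)                             # dominant filter index
    onehot = np.eye(responses.shape[0], dtype=np.uint8)[winner]   # (H, W, n_filters)
    return np.packbits(onehot.flatten())                          # compact binary template

def hamming_distance(code_a, code_b):
    """Normalised Hamming distance between two packed binary templates."""
    diff = np.bitwise_xor(code_a, code_b)
    return np.unpackbits(diff).sum() / (8 * diff.size)

# Made-up multispectral response maps for two palmprint samples.
rng = np.random.default_rng(0)
sample1 = [rng.random((64, 64)) for _ in range(4)]
sample2 = [m + 0.05 * rng.random((64, 64)) for m in sample1]      # noisy copy

genuine = hamming_distance(encode(sample1), encode(sample2))
impostor = hamming_distance(encode(sample1),
                            encode([rng.random((64, 64)) for _ in range(4)]))
print(f"genuine distance  = {genuine:.3f}")
print(f"impostor distance = {impostor:.3f}")
```

A genuine pair should yield a much smaller distance than an impostor pair; thresholding this distance gives a simple verification decision.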
Checking Computations of Formal Method Tools - A Secondary Toolchain for ProB
We present the implementation of pyB, a predicate and expression checker
for the B language. The tool is to be used in a secondary toolchain for data
validation and data generation, with ProB being used in the primary toolchain.
Indeed, pyB is an independent cleanroom implementation which is used to
double-check solutions generated by ProB, an animator and model checker for B
specifications. One of the major goals is to use ProB together with pyB to
generate reliable outputs for high-integrity, safety-critical applications.
Although pyB is still work in progress, the ProB/pyB toolchain has already been
successfully tested on various industrial B machines and data validation tasks.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578
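The underlying double-checking idea (two independently implemented evaluators must agree on every checked result before it is trusted) can be sketched roughly as follows; the evaluators and the example predicate are hypothetical stand-ins, not pyB's or ProB's actual interfaces:

```python
# Minimal sketch of cross-checking: a result is accepted only if two
# independently written evaluators agree on it. Real pyB/ProB evaluate full B
# predicates; here the "predicates" are plain Python callables for illustration.

def primary_eval(pred, env):      # stand-in for the primary toolchain (ProB)
    return pred(env)

def secondary_eval(pred, env):    # stand-in for the secondary checker (pyB)
    return pred(env)

def cross_check(pred, env):
    a = primary_eval(pred, env)
    b = secondary_eval(pred, env)
    if a != b:
        raise RuntimeError(f"toolchains disagree on {env}: {a} vs {b}")
    return a

# Example "predicate": every configured track segment has a positive length.
data = {"segments": [12.5, 3.0, 7.25]}
pred = lambda env: all(length > 0 for length in env["segments"])
print(cross_check(pred, data))    # accepted only if both evaluators agree
```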
Functional Verification of Power Electronic Systems
This project is the final work of the degree in Industrial Electronics and
Automatic Engineering. It covers general concepts of electronics but focuses
on power electronic systems.
There is a need for reliable testing systems to ensure the correct functionality of power electronic systems. The constant evolution of these products
requires the development of new testing techniques. This project aims to develop a new testing system to accomplish the functional verification of a new
power electronic system manufactured by a company in the power
electronics sector. This test system consists of two test bed platforms, one
to test the control part of the systems and the other to test their functionality. Software to perform the tests is also designed. Finally, the testing
protocol is presented.
This design is validated and then implemented on a buck converter and
an inverter that are manufactured at the company. The results show that
the test system is reliable and capable of carrying out the functional verification
of the two power electronic systems successfully.
In summary, this design can be introduced into the power electronic production process to test the two products, ensuring their reliability in the
market
A practical experience with independent verification and validation
One approach to reducing software cost and increasing reliability is the use of an independent verification and validation (IV&V) methodology. The Software Engineering Laboratory (SEL) applied the IV&V methodology to two medium-size flight dynamics software development projects. Then, to measure the effectiveness of the IV&V approach, the SEL compared these two projects with two similar past projects, using measures such as productivity, reliability, and maintainability. Results indicated that the use of the IV&V methodology neither helped the overall process nor improved the product in these cases
Validation of hardware events for successful performance pattern identification in High Performance Computing
Hardware performance monitoring (HPM) is a crucial ingredient of performance
analysis tools. While there are interfaces like LIKWID, PAPI or the kernel
interface perf_event which provide HPM access with some additional features,
many higher level tools combine event counts with results retrieved from other
sources like function call traces to derive (semi-)automatic performance
advice. However, although HPM has been available on x86 systems since the early 1990s,
only a small subset of the HPM features is used in practice. Performance
patterns provide a more comprehensive approach, enabling the identification of
various performance-limiting effects. Patterns address issues like bandwidth
saturation, load imbalance, non-local data access in ccNUMA systems, or false
sharing of cache lines. This work defines HPM event sets that are best suited
to identify a selection of performance patterns on the Intel Haswell processor.
We validate the chosen event sets for accuracy in order to arrive at a reliable
pattern detection mechanism and point out shortcomings that cannot be easily
circumvented due to bugs or limitations in the hardware
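As a rough illustration of what validating an event count involves (checking a counter against a workload whose behaviour is known), here is a sketch that drives the Linux `perf` CLI from Python and sanity-checks that the instruction counter scales with problem size. The workload, event choice and tolerance are illustrative assumptions, not the paper's benchmark suite or event sets:

```python
import subprocess

def measured_count(event, command):
    """Run `command` under `perf stat` and return the raw count for `event`.
    Uses perf's CSV output (-x,); requires Linux with perf installed."""
    result = subprocess.run(
        ["perf", "stat", "-x,", "-e", event, "--"] + command,
        capture_output=True, text=True, check=True)
    for line in result.stderr.splitlines():      # perf stat reports on stderr
        fields = line.split(",")
        if event in fields:
            return int(fields[0])
    raise RuntimeError(f"event {event} not found in perf output")

# Sanity check: 10x more work should give roughly 10x more retired
# instructions (interpreter startup adds a constant offset). A counter that
# fails even this coarse check cannot support reliable pattern detection.
small = measured_count("instructions", ["python3", "-c", "sum(range(10**6))"])
large = measured_count("instructions", ["python3", "-c", "sum(range(10**7))"])
print(f"instruction-count ratio for 10x work: {large / small:.2f}")
```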
Quantile forecast discrimination ability and value
While probabilistic forecast verification for categorical forecasts is well
established, some of the existing concepts and methods have not found their
equivalent for the case of continuous variables. New tools dedicated to the
assessment of forecast discrimination ability and forecast value are introduced
here, taking quantile forecasts as the base product for the continuous
case (hence in a nonparametric framework). The relative user characteristic
(RUC) curve and the quantile value plot allow analysing the performance of a
forecast for a specific user in a decision-making framework. The RUC curve is
designed as a user-based discrimination tool and the quantile value plot
translates forecast discrimination ability in terms of economic value. The
relationship between the overall value of a quantile forecast and the
respective quantile skill score is also discussed. The application of these new
verification approaches and tools is illustrated based on synthetic datasets,
as well as for the case of global radiation forecasts from the high resolution
ensemble COSMO-DE-EPS of the German Weather Service
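The quantile score underlying such evaluations is the standard pinball loss, from which a skill score against a reference forecast can be formed. A brief sketch follows; the RUC curve and quantile value plot themselves follow the paper's definitions and are not reproduced here, and the synthetic data below are made up for illustration:

```python
import numpy as np

def quantile_score(q_forecast, obs, tau):
    """Mean pinball (quantile) loss of forecasts `q_forecast` at quantile
    level `tau`, averaged over observations `obs`. Lower is better."""
    diff = obs - q_forecast
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

def quantile_skill_score(q_forecast, q_reference, obs, tau):
    """Skill relative to a reference: 1 is perfect, 0 is no improvement over
    the reference, negative is worse than the reference."""
    return 1.0 - quantile_score(q_forecast, obs, tau) / quantile_score(q_reference, obs, tau)

# Synthetic example: observations and two competing 0.9-quantile forecasts.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=10_000)        # made-up data
climatology = np.full_like(obs, np.quantile(obs, 0.9))    # reference forecast
forecast = obs + rng.normal(1.0, 1.0, obs.size)           # made-up forecast that tracks obs

print("QSS vs climatology:",
      round(quantile_skill_score(forecast, climatology, obs, tau=0.9), 3))
```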
- …