What information and the extent of information research participants need in informed consent forms: a multi-country survey
Background: The use of lengthy, detailed, and complex informed consent forms (ICFs) is of paramount concern in biomedical research, as such forms may not truly promote the rights and interests of research participants. The extent of information in ICFs has been the subject of debate for decades; however, no clear guidance has been given. The objective of this study was therefore to determine research participants' perspectives on the type and extent of information they need when invited to participate in biomedical research. Methods: This multi-center, cross-sectional, descriptive survey was conducted at 54 study sites in seven Asia-Pacific countries. A modified Likert-scale questionnaire was used to determine the importance of each element in the ICF among research participants of a biomedical study, with an anchored rating scale from 1 (not important) to 5 (very important). Results: Of the 2484 questionnaires distributed, 2113 (85.1%) were returned. The majority of respondents considered most elements required in the ICF to be 'moderately important' to 'very important' for their decision making (mean scores ranging from 3.58 to 4.47). Major foreseeable risk, direct benefit, and common adverse effects of the intervention were considered the elements of greatest concern in the ICF (mean scores = 4.47, 4.47, and 4.45, respectively). Conclusions: Research participants would like to be informed of the ICF elements required by ethical guidelines and regulations; however, the importance of each element varied, e.g., the risks and benefits to research participants were considered more important than the general nature or technical details of the research. A participant-oriented approach that provides more detail on the elements participants care about most, while avoiding unnecessarily lengthy treatment of less important elements, would enhance the quality of the ICF.
On quantizer design for Distributed Source Coding of Gaussian vector data with packet loss
Distributed Source Coding (DSC) has been widely studied in applications such as video coding and distributed sensor networks. However, DSC has not been widely explored for low-delay, low-bit-rate applications such as quantization of speech Line Spectral Frequencies (LSFs). This is due to the difficulty of modeling and analyzing the effects of imperfect side information resulting from previous packet losses, quantization noise, and decoding errors. In this paper, we present methods for modeling, analyzing, and designing Wyner-Ziv (WZ) quantizers for jointly Gaussian vector data with imperfect side information. In particular, we show how the quantizer design problem for the vector data decomposes into independent scalar design subproblems. We then demonstrate analytical techniques to compute the optimum step size and bit allocation for each scalar dimension to minimize the expected Mean Squared Error (MSE) at the decoder. The simulation results verify the analytical results obtained in this paper.
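The step-size/bit-allocation idea can be illustrated with a small sketch. The following is a minimal illustration only, assuming a high-rate 2^(-2b) distortion model and a greedy integer bit allocation; the variances, loss probability, and function names are hypothetical and are not the paper's actual design:

```python
import numpy as np

def expected_distortion(sigma2_x, sigma2_err, bits, p_loss):
    """High-rate distortion proxy for one scalar dimension.

    When side information is usable (probability 1 - p_loss) the coder works
    on the prediction error (variance sigma2_err); when the previous packet
    was lost it effectively falls back to coding the signal itself
    (variance sigma2_x).
    """
    d_side = sigma2_err * 2.0 ** (-2 * bits)
    d_loss = sigma2_x * 2.0 ** (-2 * bits)
    return (1.0 - p_loss) * d_side + p_loss * d_loss

def greedy_bit_allocation(sigma2_x, sigma2_err, total_bits, p_loss):
    """Assign integer bits one at a time to the scalar dimension whose
    expected MSE drops the most -- the per-dimension subproblems are
    independent, so a greedy allocation is straightforward."""
    dims = len(sigma2_x)
    bits = np.zeros(dims, dtype=int)
    for _ in range(total_bits):
        gains = [expected_distortion(sigma2_x[i], sigma2_err[i], bits[i], p_loss)
                 - expected_distortion(sigma2_x[i], sigma2_err[i], bits[i] + 1, p_loss)
                 for i in range(dims)]
        bits[int(np.argmax(gains))] += 1
    return bits

# Toy usage: a 10-dimensional LSF-like vector, 30 bits, 5% packet loss.
rng = np.random.default_rng(0)
sigma2_x = rng.uniform(0.5, 2.0, 10)   # per-dimension signal variances
sigma2_err = 0.2 * sigma2_x            # residual variances given side information
print(greedy_bit_allocation(sigma2_x, sigma2_err, total_bits=30, p_loss=0.05))
```

In this toy model the expected per-dimension distortion simply blends the with- and without-side-information cases by the packet loss probability, which is what lets each scalar dimension be treated independently.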
Gaussian Mixture Kalman Predictive Coding of Line Spectral Frequencies
Gaussian mixture model (GMM)-based predictive coding of line spectral frequencies (LSFs) has gained wide acceptance. In such coders, each mixture of a GMM can be interpreted as defining a linear predictive transform coder. In this paper, we use Kalman filtering principles to model each of these linear predictive transform coders to present GMM Kalman predictive coding. In particular, we show how suitable modeling of quantization noise leads to an adaptive a posteriori GMM that defines a signal-adaptive predictive coder providing improved coding of LSFs in comparison with the baseline recursive GMM predictive coder. Moreover, we show how running the GMM Kalman predictive coders to convergence can be used to design a stationary GMM Kalman predictive coding system which again provides improved coding of LSFs but now with only a modest increase in run-time complexity over the baseline. In packet loss conditions, this stationary GMM Kalman predictive coder provides much better performance than the recursive GMM predictive coder, and in fact has comparable mean performance to a memoryless GMM coder. Finally, we illustrate how one can utilize Kalman filtering principles to design a postfilter that enhances decoded vectors from a recursive GMM predictive coder without any modifications to the encoding process.
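As a rough illustration of the core idea, that quantization noise can be treated as measurement noise in a Kalman update so the predictor adapts to how coarsely it quantizes, here is a minimal scalar sketch. It assumes a single AR(1) "mixture" and uniform residual quantization; the parameters and function name are illustrative, and the actual system operates per GMM mixture on LSF vectors:

```python
import numpy as np

def kalman_predictive_code(x, a, q, step):
    """Scalar sketch of Kalman predictive coding.

    Model: x_k = a * x_{k-1} + w_k, w_k ~ N(0, q).
    The encoder quantizes the innovation against the shared predictor; the
    quantization noise (variance step**2 / 12) is treated as measurement
    noise, so the Kalman gain reflects the quantizer coarseness.
    """
    r = step ** 2 / 12.0            # quantization-noise variance proxy
    x_hat, p = 0.0, 1.0             # shared encoder/decoder state and variance
    decoded = []
    for xk in x:
        x_pred = a * x_hat          # time update
        p_pred = a * a * p + q
        innov = xk - x_pred                      # encoder-side innovation
        u = step * np.round(innov / step)        # quantized innovation
        k_gain = p_pred / (p_pred + r)           # gain accounts for quantization noise
        x_hat = x_pred + k_gain * u              # measurement update at the decoder
        p = (1.0 - k_gain) * p_pred
        decoded.append(x_hat)
    return np.array(decoded)

# Toy usage: code an AR(1) sequence and report the reconstruction MSE.
rng = np.random.default_rng(1)
x = np.zeros(200)
for k in range(1, len(x)):
    x[k] = 0.9 * x[k - 1] + rng.normal(0.0, 0.1)
xd = kalman_predictive_code(x, a=0.9, q=0.01, step=0.05)
print("MSE:", float(np.mean((x - xd) ** 2)))
```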
A Kalman filtering approach to GMM predictive coding of LSFs for packet loss conditions
Gaussian Mixture Model (GMM)-based vector quantization of Line Spectral Frequencies (LSFs) has gained wide acceptance in speech coding. In predictive coding of LSFs, the GMM approach that utilizes Kalman filtering principles to account for quantization noise has been shown to perform better than baseline GMM recursive coder approaches for both clean and packet loss conditions at roughly the same complexity. However, the GMM Kalman-based predictive coder was not specifically designed for operation in packet loss conditions. In this paper, we show how an initial GMM Kalman predictive coder can be used to obtain a robust GMM predictive coder specifically designed to operate under packet loss. In particular, we demonstrate how one can define sets of encoding and decoding modes and design special Kalman encoding and decoding gains for each set. Within this framework, GMM predictive coder design can be viewed as determining the special Kalman gains that minimize the expected minimum mean squared error at the decoder in packet loss conditions. The simulation results demonstrate that the proposed robust Kalman predictive coder achieves better performance than the baseline GMM predictive coders.
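To make the "one gain per decoding mode" idea concrete, here is a hedged toy sketch with a scalar AR(1) source and only two decoding modes (previous packet received vs. lost). The per-mode gains are found by a brute-force grid search over a simulated lossy channel rather than by the paper's analytical design, and all names and parameters below are illustrative:

```python
import numpy as np

def decoder_mse(gains, x, p_loss, a, step, seed):
    """Simulate encoder and decoder for one choice of per-mode decoder gains.

    Mode 0: previous packet received; mode 1: previous packet lost.
    The encoder quantizes the innovation against its own loss-free predictor;
    the decoder applies a mode-dependent gain to each received innovation and
    simply holds its prediction when the current packet is lost.
    """
    rng = np.random.default_rng(seed)    # fixed seed: same loss pattern per candidate
    enc_hat = dec_hat = 0.0
    prev_lost, sse = False, 0.0
    for xk in x:
        innov = xk - a * enc_hat
        u = step * np.round(innov / step)    # quantized innovation
        enc_hat = a * enc_hat + u
        if rng.random() < p_loss:
            dec_hat = a * dec_hat            # current packet lost: hold prediction
            prev_lost = True
        else:
            dec_hat = a * dec_hat + gains[int(prev_lost)] * u
            prev_lost = False
        sse += (xk - dec_hat) ** 2
    return sse / len(x)

# Toy design: grid-search one gain per decoding mode to minimize decoder MSE.
rng = np.random.default_rng(2)
x = np.zeros(2000)
for k in range(1, len(x)):
    x[k] = 0.9 * x[k - 1] + rng.normal(0.0, 0.1)
grid = np.linspace(0.2, 1.2, 11)
best = min(((g0, g1) for g0 in grid for g1 in grid),
           key=lambda g: decoder_mse(g, x, p_loss=0.1, a=0.9, step=0.05, seed=3))
print("per-mode gains:", best)
```

The point of the two modes is that the decoder's prediction is much less reliable right after a loss, so the gain that minimizes expected error in that mode differs from the clean-channel Kalman gain.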
Gaussian Mixture Kalman predictive coding of LSFs
Gaussian Mixture Model (GMM)-based predictive coding of line spectral frequencies (LSFs) has gained wide acceptance. In such coders, each mixture of a GMM can be interpreted as defining a linear predictive transform coder. In this paper, we optimize each of these linear predictive transform coders using Kalman predictive coding techniques to present GMM Kalman predictive coding. In particular, we show how suitable modeling of quantization noise leads to an adaptive a posteriori GMM that defines a signal-adaptive predictive coder providing superior coding of LSFs in comparison with the baseline GMM predictive coder. Moreover, we show how running the Kalman predictive coders to convergence can be used to design a stationary predictive coding system which again provides superior coding of LSFs but now with no increase in run-time complexity over the baseline.
Using Association Rules for Classification from Databases Having Class Label Ambiguities: A Belief Theoretic Method
This chapter introduces a belief theoretic method for classification from databases having class label ambiguities. It uses a set of association rules extracted from such a database. It is assumed that a training data set with an adequate number of pre-classified instances is available, where each instance is assigned an integer class label. We use a modified association rule mining (ARM) technique to extract the interesting rules from the training data set and use a belief theoretic classifier based on the extracted rules to classify the incoming feature vectors. The ambiguity modelling capability of belief theory enables our classifier to perform better in the presence of class label ambiguities. It can also address the issue of the training data set being unbalanced or highly skewed by ensuring that an approximately equal number of rules are generated for each class. All these capabilities make our classifier ideally suited for those applications where (1) different experts may have conflicting opinions about the class label to be assigned to a specific training data instance; and (2) the majority of the training data instances are likely to represent a few classes, giving rise to highly skewed databases. Therefore, the proposed classifier would be extremely useful in security monitoring and threat classification environments where conflicting expert opinions about the threat level are common and only a few training data instances would be considered to pose a heightened threat level. Several experiments are conducted to evaluate our proposed classifier. These experiments use several databases from the UCI data repository and data sets collected from the airport terminal simulation platform developed at the Distributed Decision Environments (DDE) Laboratory at the Department of Electrical and Computer Engineering, University of Miami. The experimental results show that, while the proposed classifier's performance is comparable to some existing classifiers when the databases have no class label ambiguities, it provides superior classification accuracy and better efficiency when class label ambiguities are present.
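As a rough illustration of how belief-theoretic combination of rules can cope with class label ambiguity, the sketch below assigns rule masses to subsets of classes, combines them with Dempster's rule, and picks the class with the highest pignistic probability. The rule predicates, mass values, and threat-level class names are invented for illustration and are not the chapter's actual rules or data:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys are
    frozensets of class labels and whose values sum to 1."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to disjoint sets
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def classify(feature, rules, classes):
    """Fire every rule whose antecedent matches, combine the rule masses,
    then pick the class with the highest pignistic probability (mass on an
    ambiguous set is shared equally among its members)."""
    m = {frozenset(classes): 1.0}        # start from total ignorance
    for antecedent, mass in rules:
        if antecedent(feature):
            m = combine(m, mass)
    pignistic = {c: 0.0 for c in classes}
    for s, v in m.items():
        for c in s:
            pignistic[c] += v / len(s)
    return max(pignistic, key=pignistic.get)

# Toy rules: each pairs a predicate with a mass function, including mass on
# an ambiguous set {"elevated", "high"} to model conflicting expert labels.
classes = ["normal", "elevated", "high"]
rules = [
    (lambda f: f["bags_unattended"] > 0,
     {frozenset(["elevated", "high"]): 0.6, frozenset(classes): 0.4}),
    (lambda f: f["in_restricted_area"],
     {frozenset(["high"]): 0.7, frozenset(classes): 0.3}),
]
print(classify({"bags_unattended": 1, "in_restricted_area": True}, rules, classes))
```

Letting a rule place mass directly on a set of classes is what allows ambiguously labeled training instances (e.g., experts split between "elevated" and "high") to contribute evidence without forcing an arbitrary single label.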