Do instructional attributes pose multicollinearity problems? An empirical exploration
It is commonly perceived that variables ‘measuring’ different dimensions of teaching (construed as instructional attributes) used in student evaluation of teaching (SET) questionnaires are so highly correlated that they pose a serious multicollinearity problem for quantitative analysis, including regression analysis. Using nearly 12,000 individual student responses to SET questionnaires, covering ten key dimensions of teaching and 25 courses at various undergraduate and postgraduate levels over multiple years at a large Australian university, this paper investigates whether this is indeed the case and, if so, under what circumstances. The paper tests this proposition first by examining variance inflation factors (VIFs) across courses, levels and over time using individual responses, and secondly by using class averages. In the first instance, the paper finds no sustained evidence of multicollinearity: while there were one or two isolated cases of VIFs marginally exceeding the conservative threshold of 5, in no case did the VIF for any instructional attribute come anywhere close to the high threshold value of 10. In the second instance, however, the paper finds that the attributes are highly correlated, as all the VIFs exceed 10. These findings have two implications: (a) given the ordinal nature of the data, ordered probit analysis using individual student responses can be employed to quantify the impact of instructional attributes on the TEVAL score; (b) data based on class averages cannot be used for probit analysis. An illustrative exercise using level 2 undergraduate course data suggests that higher TEVAL scores depend first and foremost on improving the explanation, presentation, and organization of lecture materials.
Keywords: multicollinearity, variance inflation factor, instructional attributes, threshold, Australia
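The VIF screen the abstract applies is a standard textbook diagnostic and can be sketched in a few lines. The data below are synthetic and the function is a generic VIF computation, not the paper's code; it regresses each column on the others and reports 1/(1 − R²), the quantity compared against the 5 and 10 thresholds.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j on all the other columns (with an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    vifs = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

# Nearly orthogonal columns -> VIFs near 1 (no multicollinearity).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
print(vif(X))
# A column that is almost a copy of another -> VIFs well above 10,
# the pattern the paper reports for class-average data.
X2 = np.column_stack([X, X[:, 0] + 0.01 * rng.normal(size=500)])
print(vif(X2))
```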
Towards Validating Risk Indicators Based on Measurement Theory (Extended version)
Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This way of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), then they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators that they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method specially developed for assessing confidentiality risks in networks of organizations
A survey of clustering methods
In this paper, I describe a large variety of clustering methods within a single framework. This paper unifies work across different fields, from biology (numerical taxonomy) to machine learning (concept formation). An important objective for this paper is to show that one can benefit from knowledge of research across different disciplines. After describing the task from a set of different viewpoints or paradigms, I begin by describing the similarity measures or evaluation functions that form the basis of any clustering technique. Next, I describe a number of different algorithms that use these measures, and I close with a brief discussion of ways to evaluate different approaches to clustering
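The abstract's framing, a similarity or evaluation function plus an algorithm that optimises it, can be made concrete with a minimal k-means sketch. The data are synthetic and this is a generic illustration, not a method taken from the survey itself.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: squared Euclidean distance is the
    dissimilarity measure; within-cluster sum of squares is the
    evaluation function being minimised."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated synthetic blobs: the algorithm recovers the grouping.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, centers = kmeans(X, 2)
```

Swapping in a different dissimilarity (or a different evaluation function) yields a different member of the family the survey organises.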
Towards Validating Risk Indicators Based on Measurement Theory
Due to the lack of quantitative information and for cost-efficiency, most risk assessment methods use partially ordered values (e.g. high, medium, low) as risk indicators. In practice it is common to validate risk scales by asking stakeholders whether they make sense. This way of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), then they may lead system owners to distribute security investments inefficiently. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk scales that they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a particular risk assessment method specially developed for assessing confidentiality risks in networks of organizations
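The measurement-theoretic notion of meaningfulness the abstract relies on can be illustrated briefly: a statement about ordinal values is meaningful only if it survives any order-preserving recoding of the scale. The two combination rules below are illustrative, not the indicators the paper actually analyzes.

```python
# One admissible coding of an ordinal risk scale, and an
# order-preserving recoding of it.
LOW, MEDIUM, HIGH = 0, 1, 2
ALT = {0: 1, 1: 10, 2: 100}

def combine_max(l, i):
    return max(l, i)        # uses only the order -> ordinal-safe

def combine_mean(l, i):
    return (l + i) / 2      # treats the codes as interval data

# Two hypothetical (likelihood, impact) pairs.
a, b = (LOW, HIGH), (MEDIUM, MEDIUM)

# max: the comparison comes out the same under both codings -> meaningful.
same_max = (combine_max(*a) > combine_max(*b)) == \
           (combine_max(ALT[a[0]], ALT[a[1]]) > combine_max(ALT[b[0]], ALT[b[1]]))

# mean: the comparison flips under the recoding -> not meaningful
# for an ordinal scale.
same_mean = (combine_mean(*a) > combine_mean(*b)) == \
            (combine_mean(ALT[a[0]], ALT[a[1]]) > combine_mean(ALT[b[0]], ALT[b[1]]))
print(same_max, same_mean)
```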
Union Mediation and Adaptation to Reciprocal Loyalty Arrangements
This study assesses the industrial relations application of the “loyalty-exit-voice” proposition. The loyalty concept is linked to reciprocal employer-employee arrangements and examined as a job attribute in a vignette questionnaire distributed to low- and medium-skilled employees. The responses provided by employees in three European countries indicate that reciprocal loyalty arrangements, which involve the exchange of higher effort for job security, are one of the most desirable job attributes. This attribute exerts a higher impact on the job evaluations provided by unionised workers, compared to their non-union counterparts. This pattern is robust to a number of methodological considerations. It appears to be an outcome of adaptation to union-mediated cooperation. Overall the evidence suggests that the loyalty-job evaluation profiles of unionised workers are receptive to repeated interaction and negative shocks, such as unemployment experience. This is not the case for the non-union workers. Finally, unionised workers appear to “voice” a lower job satisfaction, but exhibit low “exit” intentions, compared to the non-unionised labour.
Funding: EPICURUS, a project supported by the European Commission through the 5th Framework Programme “Improving Human Potential” (contract number: HPSE-CT-2002-00143)
A fuzzy set preference model for market share analysis
Consumer preference models are widely used in new product design, marketing management, pricing, and market segmentation. The success of new products depends on accurate market share prediction and design decisions based on consumer preferences. The vague linguistic nature of consumer preferences and product attributes, combined with the substantial differences between individuals, creates a formidable challenge to marketing models. The most widely used methodology is conjoint analysis. Conjoint models, as currently implemented, represent linguistic preferences as ratio or interval-scaled numbers, use only numeric product attributes, and require aggregation of individuals for estimation purposes. It is not surprising that these models are costly to implement, are inflexible, and have a predictive validity that is not substantially better than chance. This affects the accuracy of market share estimates. A fuzzy set preference model can easily represent linguistic variables either in consumer preferences or product attributes with minimal measurement requirements (ordinal scales), while still estimating overall preferences suitable for market share prediction. This approach results in flexible individual-level conjoint models which can provide more accurate market share estimates from a smaller number of more meaningful consumer ratings. Fuzzy sets can be incorporated within existing preference model structures, such as a linear combination, using the techniques developed for conjoint analysis and market share estimation. The purpose of this article is to develop and fully test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. 
The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation), and how much to make (market share prediction)
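The core mechanism the abstract describes, linguistic terms represented as fuzzy sets feeding an individual-level linear-combination preference model, can be sketched minimally. The attribute, term parameters, and weights below are illustrative placeholders, not values from the article.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for a price attribute (illustrative
# parameters): membership degrees in [0, 1] replace crisp categories.
cheap = lambda p: triangular(p, 0, 10, 25)
expensive = lambda p: triangular(p, 15, 40, 60)

def preference(price, w_cheap=0.7, w_expensive=-0.3):
    """Individual-level linear combination of fuzzy memberships,
    in the spirit of a conjoint-style preference model; the weights
    are illustrative importances for one consumer."""
    return w_cheap * cheap(price) + w_expensive * expensive(price)

# A mid-priced option scores higher than an expensive one for
# this hypothetical consumer.
print(preference(12), preference(50))
```

Only the order of the inputs matters for the linguistic terms themselves, which is why the approach gets by with ordinal measurement, as the abstract claims.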