
    Evaluating readability as a factor in information security policies

    This thesis was previously held under moratorium from 26/11/19 to 26/11/21.

    Policies should be treated as rules or principles that individuals can readily comprehend and follow, as a prerequisite to any organisational requirement to obey and enact regulations. This dissertation highlights one of the important factors to consider before issuing any policy that staff members are required to follow. Presently, there is no ready mechanism for estimating the likely efficacy of such policies across an organisation. One factor with a plausible impact upon the comprehensibility of policies is their readability. Researchers have designed a number of software-implemented readability metrics that evaluate how difficult a passage is to comprehend; yet little is known about the impact of readability on the interpretation of information security policies, or whether readability analysis may prove a useful tool. This thesis describes the first study to investigate the feasibility of applying readability metrics as an indicator of policy comprehensibility, using a seven-phase sequential exploratory fully mixed methods design in which each phase was established in light of the outcomes of the previous one. The methodological approach, one of the distinguishing characteristics of the thesis, was as follows:
    * eight policies were selected (from a combination of academic and industry-sector institutions);
    * specialists were asked for their insights on key policy elements;
    * focus group interviews were conducted;
    * comprehension tests (Cloze tests) were developed;
    * a pilot study of the comprehension tests was organised (preceded by a small-scale test);
    * a main study of the comprehension tests was performed with 600 participants, reduced to 396 for validation;
    * the comprehension results were compared against the readability metrics.
    The results reveal that the traditional readability metrics are ineffective in predicting human comprehension. Nevertheless, readability, as measured using a bespoke readability metric, may yield useful insight into the likely difficulty that end-users face in comprehending a written text. Our study thereby aims to provide an effective approach to enhancing the comprehensibility of information security policies and to afford a basis for future research in this area. The research contributes to our understanding of readability in general, and offers an optimal technique for measuring it in particular. We recommend immediate corrective actions to enhance the ease of comprehension of information security policies. In part, this may reduce instances where users avoid fully reading the policies, and may also increase the likelihood of user compliance. We suggest that appropriately selected readability assessment may assist policy makers in testing their draft policies for ease of comprehension before release. Indeed, there may be grounds for a readability compliance test that future information security policies must satisfy.
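    The Cloze procedure used for the comprehension tests is straightforward to automate. Below is a minimal sketch assuming the standard construction (every nth word deleted, exact-word scoring); the thesis's actual test materials and scoring rules are not given here, so the function names and details are illustrative only.

```python
def make_cloze(text: str, n: int = 5) -> tuple[str, list[str]]:
    """Delete every nth word, returning the gapped text and the answer key."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def score_cloze(responses: list[str], answers: list[str]) -> float:
    """Exact-word scoring: the fraction of blanks filled with the original word."""
    correct = sum(
        r.strip().lower() == a.strip(".,;:!?").lower()
        for r, a in zip(responses, answers)
    )
    return correct / len(answers) if answers else 0.0
```

    Commonly cited Cloze criteria treat roughly 57% exact-word accuracy or above as independent comprehension, which is the kind of cut-off a pre-release policy check could apply.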

    Framework to Automatically Determine the Quality of Open Data Catalogs

    Data catalogs play a crucial role in modern data-driven organizations by facilitating the discovery, understanding, and utilization of diverse data assets. However, ensuring their quality and reliability is complex, especially in open and large-scale data environments. This paper proposes a framework to automatically determine the quality of open data catalogs, addressing the need for efficient and reliable quality assessment mechanisms. Our framework can analyze various core quality dimensions, such as accuracy, completeness, consistency, scalability, and timeliness; it offers several alternatives for assessing compatibility and similarity across such catalogs, as well as implementations of a set of non-core quality dimensions such as provenance, readability, and licensing. The goal is to empower data-driven organizations to make informed decisions based on trustworthy and well-curated data assets. The source code that illustrates our approach can be downloaded from https://www.github.com/jorge-martinez-gil/dataq/
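    As a concrete illustration of one core dimension, completeness can be scored as the fraction of expected metadata fields that each catalog entry populates. The sketch below is hypothetical: the field list and function names are assumptions made for illustration and are not taken from the dataq repository.

```python
# Hypothetical completeness scoring for open data catalog entries.
# REQUIRED_FIELDS is an assumed list, not taken from the dataq code.
REQUIRED_FIELDS = ["title", "description", "license", "publisher", "modified"]

def entry_completeness(entry: dict) -> float:
    """Fraction of required metadata fields that are present and non-empty."""
    filled = sum(bool(entry.get(field)) for field in REQUIRED_FIELDS)
    return filled / len(REQUIRED_FIELDS)

def catalog_completeness(entries: list[dict]) -> float:
    """Average entry completeness across the whole catalog."""
    if not entries:
        return 0.0
    return sum(entry_completeness(e) for e in entries) / len(entries)
```

    The other core dimensions could plausibly be scored in the same per-entry, aggregate-over-catalog pattern.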

    A proposed conceptual basis for mode 2 business and management research and development projects based on design science research principles

    Due to progressing digitalisation and automation, the disciplines of Information Systems and Business/Management will increasingly merge. It is assumed, therefore, that in Business and Management Research (BMR) there will be a greater demand for artefacts such as conceptual models, particularly in collaborative, Mode 2 research and development projects. Such endeavours require adequate conceptual frameworks that cater for diverse, creative and iterative steps, including complementary (multi-)method application, in order to handle complexity, uncertainty, user engagement and differing assumptions in a fast-paced environment. They need to do this while rigorously addressing questions in the professionals' field or organisation. Design Science Research (DSR) has been suggested as a suitable approach to fulfil these needs. While numerous examples of applying DSR principles have been reported in Information Systems Research (ISR), their application in BMR has so far been rather modest. This article presents a conceptual basis of DSR principles to apply in Mode 2 BMR artefact development projects, accompanied by a framework for systematic quality evaluation. In doing so, the article contributes to the emerging convergence of BMR and ISR by presenting guidelines embracing iterative and systematic procedures for BMR and ISR researchers.

    A framework for the analysis and evaluation of enterprise models

    The purpose of this study is the development and validation of a comprehensive framework for the analysis and evaluation of enterprise models. The study starts with an extensive literature review of modelling concepts and an overview of the various reference disciplines concerned with enterprise modelling. This overview is more extensive than usual in order to accommodate readers from different backgrounds. The proposed framework is based on the distinction between the syntactic, semantic and pragmatic model aspects and is populated with evaluation criteria drawn from an extensive literature survey. In order to operationalise and empirically validate the framework, an exhaustive survey of enterprise models was conducted. From this survey, an XML database of more than twenty relatively large, publicly available enterprise models was constructed. A strong emphasis was placed on the interdisciplinary nature of this database, and models were drawn from ontology research, linguistics and analysis patterns as well as the traditional fields of data modelling, data warehousing and enterprise systems. The resultant database forms the test bed for the detailed framework-based analysis, and its public availability should constitute a useful contribution to the modelling research community.

    The bulk of the research is dedicated to implementing and validating specific analysis techniques to quantify the various model evaluation criteria of the framework. The aim for each technique is that it can, where possible, be automated and generalised to other modelling domains. The syntactic measures and analysis techniques originate largely from systems engineering, graph theory and computer science. Various metrics to measure model hierarchy, architecture and complexity are tested and discussed; many are found not to be particularly useful or valid for enterprise models. Hence some new measures are proposed to assist with model visualisation, and an original "model signature" consisting of three key metrics is put forward.

    Perhaps the most significant contribution of the research lies in the development and validation of a significant number of semantic analysis techniques, drawing heavily on current developments in lexicography, linguistics and ontology research. Some novel and interesting techniques are proposed to measure, inter alia, domain coverage, model genericity, quality of documentation, perspicuity and model similarity. Model similarity in particular is explored in depth by means of various similarity and clustering algorithms as well as ways to visualise the similarity between models. Finally, a number of pragmatic analysis techniques are applied to the models: face validity, degree of use, authority of the model author, availability, cost, flexibility, adaptability, model currency, maturity and degree of support. This analysis relies mostly on searching for and ranking certain specific information details, often involving a degree of subjective interpretation, although more specific quantitative procedures are suggested for some of the criteria. To aid future researchers, a separate chapter lists some promising analysis techniques that were investigated but found to be problematic from a methodological perspective. More interestingly, this chapter also presents a strong conceptual case for how the proposed framework and the analysis techniques associated with its various criteria can be applied to many other information systems research areas. The case rests on the underlying isomorphism between the various research areas and is illustrated by suggesting the application of the framework to evaluate web sites, algorithms, software applications, programming languages, system development methodologies and user interfaces.
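    To make the similarity analysis concrete, one simple baseline is set overlap between two models' concept names. The Jaccard coefficient below is offered purely as an illustrative stand-in; the thesis's own similarity and clustering algorithms are more elaborate.

```python
def jaccard_similarity(model_a: set[str], model_b: set[str]) -> float:
    """Jaccard coefficient over two models' concept names (0 = disjoint, 1 = identical)."""
    if not model_a and not model_b:
        return 1.0
    return len(model_a & model_b) / len(model_a | model_b)

# Example with two tiny, made-up enterprise models described by entity names.
erp = {"customer", "order", "invoice", "product", "payment"}
crm = {"customer", "contact", "lead", "order", "campaign"}
print(f"similarity = {jaccard_similarity(erp, crm):.2f}")  # 2 shared / 8 total = 0.25
```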

    Quality in Journalism: Perceptions and Practice in an Indian context

    This thesis explores the concept of quality in journalism from an Indian perspective, with the aim of identifying its elements and the factors influencing it. It is framed in a mixed methods paradigm and uses ‘surface structures’ and ‘story boxes’ as tools to study the perceptions and practice of quality in Indian journalism. Qualitative semi-structured interviews with 22 Indian newspaper journalists and quantitative content analysis of 108 newspaper pages and 569 news items are used to identify an ideal-practice gap between journalists' perceptions of quality and the evidence of it in news content. The research methods are informed by normative assumptions of quality, based on journalism's democratic role and functions. Findings are derived using the principles of applied thematic analysis to identify core themes and sub-themes in the qualitative data, and from descriptive statistical analysis of the quantitative data. The thesis identifies the core elements of quality, which are closely linked to and influenced by the shared professional values of Indian journalists, such as autonomy, objectivity and public service. The content analysis shows little evidence of these idealistic perceptions of quality: notions of quality at the journalists' level converge with content in only four minor aspects, and differ in the six critical aspects of accuracy, balance, context, good writing, and the informative and investigative roles of journalism.

    Towards Design Theory for Accessible IT Artefacts

    Accessibility in the use of information technology (IT) artefacts, such as websites, applications and user interfaces, means that they are designed in such a way that people with the broadest range of abilities can use them. However, although accessibility is a human right, IT artefacts often remain inaccessible. Beyond the available accessibility guidelines, we need design theories that explicitly state how accessibility should be addressed in design in order to develop IT artefacts that are accessible to all users. This dissertation summarises four articles that address this problem. The studies were conducted with qualitative approaches: a narrative literature review, a systematic literature review and a design science method comprising participatory design and interviews. The first article develops an explanatory theory of accessibility to clarify the construct, showing possible variables of human abilities, tasks and contexts, and their relationships, in IT use. The second article identifies factors in management, development, users and IT artefact features, including the roles and actions of these domains and how they affect the realisation of accessibility. The other two articles contribute accessibility guidance that supports content creators in producing and writing accessible online text in the web context. The dissertation underscores three key determinants of the knowledge of accessibility: (1) assumptions about users' abilities; (2) users' actual needs; and (3) factors in the development chain. These factors contribute to the knowledge of accessibility and should help researchers, particularly design scientists, form prescriptive knowledge that enables practitioners to achieve accessible IT artefacts. Researchers could thus better identify the variables, relationships and contributing factors in human abilities, management, development, content creation, tasks and contexts that need to be addressed when designing IT artefacts for particular tasks and use contexts.

    Towards a new model of readability

    This thesis attempts to develop a new model for a renewed concept of readability. The thesis begins by discussing the rationale for carrying out this research. Next, the extensive literature around the topic of readability is reviewed. The literature suggests that most research into readability has stemmed from a positivist paradigm and has used quantitative methods to assess text comprehensibility. This approach has been widely criticised and, more recently, qualitative methods stemming from an interpretive paradigm have been employed. Both quantitative and qualitative methods have strengths and limitations; the research I have carried out therefore explores the concept of readability by combining the two approaches. The data collection methods include readability formulae, text feature analyses, miscue analyses, retellings and interviews. The research was conducted in the United Kingdom and involved 16 male and 16 female pupils aged 6 to 11, all of them fluent readers. Data were analysed using: (1) six online readability formulae, namely ATOS (1997), Dale-Chall (1948), Flesch-Kincaid (1948), FOG (1952), SMOG (1969) and Spache (1953); (2) the Reading Miscue Inventory (Goodman, Watson & Burke, 2005); (3) Judging Richness of Retellings (Irwin & Mitchell, 1983); (4) text feature analysis forms; and (5) a cross-interview analysis approach. Two software packages, SPSS 17 (Statistical Package for the Social Sciences) and the qualitative data analysis package NVivo 7, were used to organise and analyse the quantitative and qualitative data. The findings suggest that the concept of readability is influenced by both reader and text factors. The reader factors involve a complex relationship of nine embedded elements within the reader, namely interest, prior knowledge, attitude, reading ability, motivation, purpose of reading, engagement, age and gender. The text factors include eight elements: the physical features of the text, genre, content, author, linguistic difficulties, legibility, illustrations and organisation of the text. The research concludes that readability is a complex matching process involving the dynamic interaction between reader and text factors, bound by particular contexts.
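    Of the formulae listed, the Flesch-Kincaid grade level has a simple closed form: 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. The sketch below uses a crude vowel-group syllable counter; production implementations rely on pronunciation dictionaries or better heuristics, so treat the counts as approximate.

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: one syllable per run of consecutive vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

    The other formulae differ mainly in which surface features they weight (familiar-word lists for Dale-Chall and Spache, polysyllabic word counts for SMOG and FOG), which is why they can disagree on the same text.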

    An investigation of the English language demands of mathematical texts on data handling used in intermediate phase mathematics

    In the Intermediate Phase, the majority of South African learners are transitioning from learning in their mother tongue to learning in English, as well as from learning to read to reading to learn, and this is a major challenge. Textbooks are a key mediating artefact in the learning of mathematics, and they present a challenge to the learner in terms of their language comprehension demands. The data handling sections of mathematics textbooks are particularly dense in text. Data handling is an important part of the mathematics curriculum, as it is the beginning of statistical literacy learning: we need to be able to question and evaluate claims based on data, create arguments we can defend, and use data meaningfully, so it is crucial that learners acquire statistical literacy. This research examines the text in the data handling sections of four Intermediate Phase Mathematics book series in order to answer the following research question: What are the language comprehension demands of English mathematical texts on data handling that are used in South African Intermediate Phase Mathematics? The theories framing the study are Vygotsky’s sociocultural theory and Cummins’ second language acquisition theory. It is an interpretivist mixed method case study that takes the form of a document analysis. The findings indicate that many units in the books analysed have a readability level higher than the grade level and will thus challenge learners' ability to access the mathematical content. An analysis of the linguistic complexity revealed that the features contributing most to the complexity of the texts included words with seven or more letters, prepositional phrases, infinitives, complex verbs and complex/compound sentences. An examination of the non-textual elements revealed that most of them are accurate, connected, concise and contextual, and these add to the comprehensibility, although a few could possibly be distractors. It is hoped that the empirical findings of this study will sensitise educators and publishers involved with the design of textbooks and workbooks to the type of language currently found, and that they might give attention to the needs of English language learners when developing these texts.
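    Feature counts like these are straightforward to approximate mechanically. Below is a minimal sketch of two of the named indicators (words of seven or more letters, and mean sentence length); the tokenisation rules are assumptions, and the other features (prepositional phrases, infinitives, complex verbs) would need a syntactic parser.

```python
import re

def complexity_indicators(text: str) -> dict[str, float]:
    """Approximate two indicators from the study: long words and sentence length."""
    words = re.findall(r"[A-Za-z]+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    long_words = [w for w in words if len(w) >= 7]
    return {
        "long_word_ratio": len(long_words) / len(words) if words else 0.0,
        "mean_sentence_length": len(words) / sentences,
    }
```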