136,454 research outputs found

    Investigating information systems with mixed-methods research

    Mixed-methods research, which comprises both quantitative and qualitative components, is widely perceived as a means to resolve the inherent limitations of traditional single-method designs and is thus expected to yield richer and more holistic findings. Despite these distinctive benefits and continuous advocacy from Information Systems (IS) researchers, uptake of the mixed-methods approach in the IS field has remained low. This paper discusses some of the key reasons for this low application rate of mixed-methods designs in the IS field, ranging from confusion of the term with multiple-methods research to practical difficulties in design and implementation. Two previous IS studies are used as examples to illustrate the discussion. The paper concludes by recommending that, in order to apply a mixed-methods design successfully, IS researchers need to plan and consider thoroughly how the quantitative and qualitative components (from data collection to data analysis to reporting of findings) can be genuinely integrated and can supplement one another, in relation to the predefined research questions and the specific research context.

    Understanding the perception of very small software companies towards the adoption of process standards

    This paper is concerned with understanding the issues that affect the adoption of software process standards by Very Small Entities (VSEs), their needs from process standards, and their willingness to engage with the new ISO/IEC 29110 standard in particular. To achieve this goal, a series of industry data collection studies was undertaken with a collection of VSEs. A twin-track approach of qualitative data collection (interviews and focus groups) and quantitative data collection (a questionnaire) was adopted, with data analysis completed separately and the results finally merged using the coding mechanisms of grounded theory. This paper serves as a roadmap both for researchers wishing to understand the issues of process standards adoption by very small companies and for the software process standards community.

    Statistical methods for tissue array images - algorithmic scoring and co-training

    Recent advances in tissue microarray technology have allowed immunohistochemistry to become a powerful medium-to-high-throughput analysis tool, particularly for the validation of diagnostic and prognostic biomarkers. However, as study size grows, the manual evaluation of these assays becomes a prohibitive limitation: it vastly reduces throughput and greatly increases variability and expense. We propose an algorithm, Tissue Array Co-Occurrence Matrix Analysis (TACOMA), for quantifying cellular phenotypes based on textural regularity summarized by local inter-pixel relationships. The algorithm can be easily trained for any staining pattern, has no sensitive tuning parameters, and can report the salient pixels in an image that contribute to its score. Pathologists' input via informative training patches is an important aspect of the algorithm that allows training for any specific marker or cell type. With co-training, the error rate of TACOMA can be reduced substantially for a very small training sample (e.g., of size 30). We give theoretical insights into the success of co-training via thinning of the feature set in a high-dimensional setting when there is "sufficient" redundancy among the features. TACOMA is flexible, transparent and provides a scoring process that can be evaluated with clarity and confidence. In a study based on an estrogen receptor (ER) marker, we show that TACOMA is comparable to, or outperforms, pathologists' performance in terms of accuracy and repeatability. (Published in the Annals of Applied Statistics, http://dx.doi.org/10.1214/12-AOAS543, by the Institute of Mathematical Statistics, http://www.imstat.org/aoas/.)
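
    To make the texture-summary idea concrete, the sketch below computes a gray-level co-occurrence matrix and a few standard summaries of local inter-pixel relationships, the kind of regularity features the abstract describes. The quantization level, the single nearest-neighbour offset and the chosen summaries are illustrative assumptions, not the published TACOMA implementation.

    # Minimal sketch: gray-level co-occurrence matrix (GLCM) texture features.
    # Parameters and summaries are illustrative assumptions only.
    import numpy as np

    def cooccurrence_matrix(patch, levels=8, offset=(0, 1)):
        """Count how often gray level i sits next to gray level j at `offset`
        (only non-negative row/column offsets are handled here)."""
        # Quantize the patch into `levels` gray levels.
        q = np.floor(patch / max(patch.max(), 1e-12) * (levels - 1)).astype(int)
        dr, dc = offset
        rows, cols = q.shape
        glcm = np.zeros((levels, levels), dtype=float)
        for r in range(rows - dr):
            for c in range(cols - dc):
                glcm[q[r, c], q[r + dr, c + dc]] += 1
        total = glcm.sum()
        return glcm / total if total > 0 else glcm

    def texture_features(glcm):
        """A few standard co-occurrence summaries: contrast, homogeneity, energy."""
        i, j = np.indices(glcm.shape)
        contrast = float(np.sum((i - j) ** 2 * glcm))
        homogeneity = float(np.sum(glcm / (1.0 + np.abs(i - j))))
        energy = float(np.sum(glcm ** 2))
        return {"contrast": contrast, "homogeneity": homogeneity, "energy": energy}

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        patch = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in image patch
        print(texture_features(cooccurrence_matrix(patch)))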

    Exploiting Qualitative Information for Decision Support in Scenario Analysis

    The use of scenario analysis (SA) to assist decision makers and stakeholders has grown over the last few years, mainly by exploiting qualitative information provided by experts. In this study, we present SA based on the use of qualitative data for strategy planning. We discuss the potential of SA as a decision-support tool, provide a structured approach for the interpretation of SA data, and present an empirical validation of expert evaluations that can help to measure the consistency of the analysis. An application to a specific case study is provided, with reference to the European organic farming business.
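
    As a rough illustration of how the consistency of expert evaluations might be quantified, the sketch below computes the mean pairwise rank correlation across a small panel of experts scoring scenarios. The rating matrix and the choice of correlation are hypothetical; the paper's own consistency measure is not spelled out in the abstract.

    # Minimal sketch of one possible consistency indicator for expert evaluations:
    # mean pairwise Spearman-style rank correlation across experts (assumption).
    import numpy as np

    def ranks(values):
        """Rank values in ascending order (1 = lowest); ties broken by position."""
        order = np.argsort(values)
        r = np.empty_like(order, dtype=float)
        r[order] = np.arange(1, len(values) + 1)
        return r

    def expert_consistency(ratings):
        """ratings: experts x scenarios matrix of scores.
        Returns the mean pairwise rank correlation (1 = perfect agreement)."""
        ranked = np.apply_along_axis(ranks, 1, ratings)
        n_experts = ranked.shape[0]
        corrs = []
        for i in range(n_experts):
            for j in range(i + 1, n_experts):
                corrs.append(np.corrcoef(ranked[i], ranked[j])[0, 1])
        return float(np.mean(corrs))

    if __name__ == "__main__":
        # Four hypothetical experts scoring five scenarios on a 1-9 scale.
        ratings = np.array([
            [7, 3, 5, 8, 2],
            [6, 4, 5, 9, 1],
            [8, 2, 6, 7, 3],
            [5, 4, 6, 8, 2],
        ])
        print(f"mean pairwise rank correlation: {expert_consistency(ratings):.2f}")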

    A methodology for analysing and evaluating narratives in annual reports: a comprehensive descriptive profile and metrics for disclosure quality attributes

    There is a consensus that the business reporting model needs to expand to serve the changing information needs of the market and provide the information required for enhanced corporate transparency and accountability. Worldwide, regulators view narrative disclosures as the key to achieving the desired step-change in the quality of corporate reporting. In recent years, accounting researchers have increasingly focused their efforts on investigating disclosure, and it is now recognised that there is an urgent need to develop disclosure metrics to facilitate research into voluntary disclosure and quality [Core, J. E. (2001). A review of the empirical disclosure literature. Journal of Accounting and Economics, 31(3), 441–456]. This paper responds to this call and contributes in two principal ways. First, the paper introduces to the academic literature a comprehensive four-dimensional framework for the holistic content analysis of accounting narratives and presents a computer-assisted methodology for implementing this framework. This procedure provides a rich descriptive profile of a company's narrative disclosures based on the coding of topic and three type attributes. Second, the paper explores the complex concept of quality, and the problematic nature of quality measurement. It makes a preliminary attempt to identify some of the attributes of quality (such as relative amount of disclosure and topic spread), suggests observable proxies for these, and offers a tentative summary measure of disclosure quality.
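
    As a rough illustration of the quality proxies mentioned above (relative amount of disclosure and topic spread), the sketch below computes two simple indicators from coded narrative data. The word-count proxy and the entropy-based spread measure are assumptions for illustration, not the paper's exact metrics.

    # Minimal sketch of two disclosure-quality proxies computed from coded
    # narrative data; both measures are illustrative assumptions.
    import math
    from collections import Counter

    def disclosure_metrics(coded_sentences, benchmark_words):
        """coded_sentences: list of (topic, word_count) tuples for one annual report.
        benchmark_words: expected narrative volume for a comparable firm (assumed given)."""
        total_words = sum(w for _, w in coded_sentences)
        # Relative amount of disclosure: narrative volume against the benchmark.
        relative_amount = total_words / benchmark_words if benchmark_words else float("nan")
        # Topic spread: normalized Shannon entropy of word counts across topics
        # (0 = all disclosure on one topic, 1 = evenly spread over the topics coded).
        by_topic = Counter()
        for topic, words in coded_sentences:
            by_topic[topic] += words
        probs = [w / total_words for w in by_topic.values()]
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        spread = entropy / math.log(len(by_topic)) if len(by_topic) > 1 else 0.0
        return {"relative_amount": relative_amount, "topic_spread": spread}

    if __name__ == "__main__":
        coded = [("strategy", 420), ("risk", 310), ("environment", 150), ("governance", 95)]
        print(disclosure_metrics(coded, benchmark_words=1000))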

    Compressive Sensing for Dynamic XRF Scanning

    X-Ray Fluorescence (XRF) scanning is a widespread technique of high importance and impact, since it provides chemical composition maps crucial for many scientific investigations. There is a continuous demand for larger, faster and more highly resolved acquisitions in order to study complex structures. Among the scientific applications that would benefit, some, such as wide-scale brain imaging, are prohibitively difficult due to time constraints. Typically, the overall XRF imaging performance is improved through technological progress on XRF detectors and X-ray sources. This paper suggests an additional approach in which XRF scanning is performed sparsely, by skipping specific points or by dynamically varying the acquisition time or other scan settings in a conditional manner. This paves the way for Compressive Sensing in XRF scans, where data are acquired in a reduced manner, allowing for challenging experiments that are currently not feasible with traditional scanning strategies. A series of different Compressive Sensing strategies for dynamic scans is presented here. A proof-of-principle experiment was performed at the TwinMic beamline of the Elettra synchrotron. The outcome demonstrates the potential of Compressive Sensing for dynamic scans, suggesting its use in challenging scientific experiments while proposing a technical solution for beamline acquisition software. (16 pages, 7 figures, 1 table.)
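
    As a rough illustration of the conditional, sparse acquisition idea, the sketch below simulates a scan in which a quick coarse pass decides where a subsequent fine pass spends full dwell time. The simulated specimen, dwell times and threshold rule are assumptions for illustration, not the TwinMic beamline implementation.

    # Minimal sketch of a conditional sparse-scanning strategy: short coarse pass,
    # then full dwell time only where the coarse estimate is informative.
    # All numerical settings here are illustrative assumptions.
    import numpy as np

    def conditional_scan(sample, coarse_dwell=0.1, full_dwell=1.0, threshold=0.2):
        """sample: 2D array of true XRF intensities (simulated here).
        Returns the acquired map and the fraction of full-dwell points skipped."""
        rng = np.random.default_rng(1)
        # Coarse pass: short dwell, noisy Poisson estimate at every pixel.
        coarse = rng.poisson(sample * coarse_dwell) / coarse_dwell
        # Fine pass: spend full dwell time only where the coarse signal is strong enough.
        measure = coarse > threshold * coarse.max()
        acquired = coarse.copy()
        acquired[measure] = rng.poisson(sample[measure] * full_dwell) / full_dwell
        skipped_fraction = 1.0 - measure.mean()
        return acquired, skipped_fraction

    if __name__ == "__main__":
        # Simulated specimen: a bright blob on a weak background.
        y, x = np.mgrid[0:128, 0:128]
        sample = 5.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 15 ** 2)) + 0.1
        acquired, skipped = conditional_scan(sample)
        print(f"skipped full-dwell acquisition at {skipped:.0%} of scan points")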