8,651 research outputs found

    How people find videos

    Get PDF
    At present very little is known about how people locate and view videos 'in the wild'. This study draws a rich picture of everyday video seeking strategies and video information needs, based on an ethnographic study of New Zealand university students. These insights into the participants' activities and motivations suggest potentially useful facilities for a video digital library.

    Finding video on the web

    Get PDF
    At present very little is known about how people locate and view videos. This study draws a rich picture of everyday video seeking strategies and video information needs, based on an ethnographic study of New Zealand university students. These insights into the participants’ activities and motivations suggest potentially useful facilities for a video digital library.

    Hearing the Hidden Agenda: The Ethnographic Investigation of Procedure

    Get PDF
    Laser Doppler flowmetry (LDF) is virtually the only non-invasive technique, apart from other laser speckle based techniques, that enables estimation of microcirculatory blood flow. The technique was introduced into the field of biomedical engineering in the 1970s, and rapid development followed during the 1980s with fiber based systems and improved signal analysis. The first imaging systems were presented in the beginning of the 1990s. Conventional LDF, although unique in many aspects and elegant as a method, suffers from a number of limitations that may have reduced its clinical impact. The analysis model published by Bonner and Nossal in 1981, which is the basis for conventional LDF, yields measurements in arbitrary and relative units, an unknown and non-constant measurement volume, non-linearities at increased blood tissue fractions, and only a relative average velocity estimate.

    In this thesis a new LDF analysis method, quantitative LDF, is presented. The method is based on recent models for light-tissue interaction, incorporating current knowledge of tissue structure and optical properties, which makes it fundamentally different from the Bonner and Nossal model. Most importantly, it eliminates or greatly reduces the limitations mentioned above. Central to quantitative LDF are Monte Carlo (MC) simulations of light transport in tissue models, including multiple Doppler shifts by red blood cells (RBCs). MC was used in a first proof-of-concept study in which the principles of quantitative LDF were tested on plastic flow phantoms. An optically and physiologically relevant skin model suitable for MC was then developed. MC simulations of that model, as well as of homogeneous tissue-relevant models, were used to evaluate the measurement depth and volume of conventional LDF systems. Moreover, a variance reduction technique was presented that reduces simulation times by orders of magnitude for imaging-based MC setups.

    The principle of the quantitative LDF method is to solve the inverse problem of matching measured and calculated Doppler power spectra at two different source-detector separations. The forward problem of calculating the Doppler power spectra from a model is solved by mixing optical Doppler spectra, based on the scattering phase functions and the velocity distribution of the RBCs, from the various layers in the model and for various numbers of Doppler shifts. The Doppler shift distribution is calculated from the scattering coefficient of the RBCs and the path length distribution of the photons in the model, where the latter is given by a few baseline MC simulations. When a proper spectral match is found, via iterative updates of the model parameters, the absolute measures are read directly from the model. Concentration is given in g RBC/100 g tissue, velocities in mm/s, and perfusion in g RBC/100 g tissue × mm/s. The RBC perfusion is separated into three velocity regions: below 1 mm/s, between 1 and 10 mm/s, and above 10 mm/s. Furthermore, the measures are given for a fixed output volume of a 3 mm³ half-sphere, i.e. within 1.13 mm of the light-emitting fiber of the measurement probe.

    The quantitative LDF method was used in a study on microcirculatory changes in type 2 diabetes. It was concluded that the perfusion response to a local increase in skin temperature, a response that is reduced in diabetes, is a process involving only intermediate and high flow velocities and thus relatively large vessels in the microcirculation. The increased flow at higher velocities was expected, but could not previously be demonstrated with conventional LDF. The lack of an increase in low-velocity flow indicates a normal metabolic demand during heating. Furthermore, a correlation between the perfusion at low and intermediate flow velocities and diabetes duration was found. Interestingly, these correlations had opposite signs (negative for the low velocity region and positive for the intermediate velocity region). This finding is well in line with the increased shunt flow and reduced nutritive capillary flow that have previously been observed in diabetes.
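    The inverse-fitting step described above can be illustrated with a minimal, self-contained sketch. Everything in it is an assumption chosen for illustration: the forward_spectrum placeholder, the parameter names, the two source-detector separations and the Lorentzian-like spectral shape are invented stand-ins, not the thesis model, which mixes Monte-Carlo-derived optical Doppler spectra over tissue layers and Doppler-shift orders. The sketch only shows the overall structure: iteratively update model parameters until calculated and measured Doppler power spectra agree at two separations, then read concentration, speed, and perfusion directly from the fitted model. (As a consistency check, a 3 mm³ half-sphere has radius (3·3/(2π))^(1/3) ≈ 1.13 mm, matching the stated measurement volume.)

# Minimal sketch (assumptions only, not the thesis implementation): fit a
# two-parameter tissue model so that calculated Doppler power spectra match
# "measured" spectra at two source-detector separations, then derive perfusion.
import numpy as np
from scipy.optimize import least_squares

freqs = np.logspace(1, 4.5, 200)      # Doppler frequency axis in Hz (illustrative)
separations_mm = (0.25, 1.2)          # assumed source-detector separations

def forward_spectrum(params, separation_mm):
    # Hypothetical forward model: a Lorentzian-like spectrum whose cut-off
    # scales with RBC speed and separation. A real model would mix optical
    # Doppler spectra from MC-simulated tissue layers and shift orders.
    conc, speed = params              # g RBC/100 g tissue, mm/s
    f0 = 500.0 * speed * (1.0 + 0.3 * separation_mm)
    return conc * f0 / (f0**2 + freqs**2)

def residuals(params, measured):
    # Spectral mismatch (log power) stacked over both separations.
    return np.concatenate([
        np.log(forward_spectrum(params, d) + 1e-12) - np.log(measured[d] + 1e-12)
        for d in separations_mm
    ])

# Synthetic "measured" spectra from known parameters, for demonstration only.
true_params = (0.6, 2.5)
measured = {d: forward_spectrum(true_params, d) for d in separations_mm}

fit = least_squares(residuals, x0=(0.1, 1.0), args=(measured,),
                    bounds=([0.0, 0.0], [10.0, 50.0]))
conc, speed = fit.x
print(f"concentration ~ {conc:.2f} g RBC/100 g tissue, speed ~ {speed:.2f} mm/s")
print(f"perfusion ~ {conc * speed:.2f} (g RBC/100 g tissue) x mm/s")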

    Reflecting on E-Recruiting Research Using Grounded Theory

    Get PDF
    This paper presents a systematic review of the e-Recruiting literature through a grounded theory lens. The large number and increasing diversity of publications on e-Recruiting research, the most studied area within e-HRM (Electronic Human Resource Management), call for a synthesis of e-Recruiting research. We show interconnections between achievements, research gaps and future research directions in order to advance both e-Recruiting research and practice. Moreover, we provide a definition of e-Recruiting. The use of grounded theory enabled us to reach across sub-disciplines, methods used, perspectives studied, themes discussed and stakeholders involved. We demonstrate that the grounded theory approach led to a better understanding of the interconnections that lie buried in the disparate e-Recruiting literature.

    Data-Seeking Behaviour in the Social Sciences

    Get PDF
    Purpose: Publishing research data for reuse has become good practice in recent years. However, not much is known about how researchers actually find such data. In this exploratory study, we observe the information-seeking behaviour of social scientists searching for research data to reveal impediments and identify opportunities for data search infrastructure. Methods: We asked 12 participants to search for research data and observed them in their natural environment. The sessions were recorded. Afterwards, we conducted semi-structured interviews to gain a thorough understanding of how they search. From the recordings, we extracted the participants' interaction behaviour and analysed the spoken words from both the search task and the interview by creating affinity diagrams. Results: We found that literature search is more closely intertwined with dataset search than previous literature suggests. Both the search itself and the relevance assessment are very complex, and many different strategies are employed, including the creative "misuse" of existing tools, since appropriate tools either do not exist or are unknown to the participants. Conclusion: Many of the issues we found relate directly or indirectly to the application of the FAIR principles, but some, like a greater need for dataset search literacy, go beyond that. Both the infrastructure and the tools offered for dataset search could be tailored more closely to the observed work processes, particularly by offering more interconnectivity between datasets, literature, and other relevant materials.

    Evaluating Generative Ad Hoc Information Retrieval

    Full text link
    Recent advances in large language models have enabled the development of viable generative information retrieval systems. A generative retrieval system returns grounded, generated text in response to an information need instead of the traditional document ranking. Quantifying the utility of these responses is essential for evaluating generative retrieval systems. As the established evaluation methodology for ranking-based ad hoc retrieval may be unsuitable for generative retrieval, new approaches for reliable, repeatable, and reproducible experimentation are required. In this paper, we survey the relevant information retrieval and natural language processing literature, identify search tasks and system architectures in generative retrieval, develop a corresponding user model, and study its operationalization. This theoretical analysis provides a foundation and new insights for the evaluation of generative ad hoc retrieval systems.
    Comment: 14 pages, 5 figures, 1 table
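    As a purely illustrative aside, and not the user model developed in the paper, the kind of operationalization discussed above can be sketched as a toy statement-level utility: a reader scans the generated response in order with geometrically decreasing patience, and only statements that are both relevant to the information need and grounded in a cited source contribute gain. The Statement fields, the patience discount, and the aggregation are assumptions chosen for this sketch.

# Toy sketch (assumptions only): a patience-discounted utility over the
# statements of a generated, grounded response.
from dataclasses import dataclass

@dataclass
class Statement:
    relevant: bool   # does the statement address the information need?
    grounded: bool   # is it supported by a cited source document?

def toy_utility(statements: list[Statement], patience: float = 0.8) -> float:
    # A reader inspects statements in order; the probability of reading the
    # next one decays geometrically. Only relevant AND grounded statements
    # contribute gain, weighted by the probability they are read at all.
    utility, read_prob = 0.0, 1.0
    for s in statements:
        if s.relevant and s.grounded:
            utility += read_prob
        read_prob *= patience
    return utility

# Example: three relevant statements, the second one unsupported by a source.
response = [Statement(True, True), Statement(True, False), Statement(True, True)]
print(toy_utility(response))  # 1.0 + 0.8**2 = 1.64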