9,228 research outputs found

    Improving the normalization of complex interventions: measure development based on normalization process theory (NoMAD): study protocol

    Background: Understanding implementation processes is key to ensuring that complex interventions in healthcare are taken up in practice and thus maximize intended benefits for service provision and, ultimately, care to patients. Normalization Process Theory (NPT) provides a framework for understanding how a new intervention becomes part of normal practice. This study aims to develop and validate simple generic tools derived from NPT, to be used to improve the implementation of complex healthcare interventions. Objectives: The objectives of this study are to: develop a set of NPT-based measures and formatively evaluate their use for identifying implementation problems and monitoring progress; conduct a preliminary evaluation of these measures across a range of interventions and contexts, and identify factors that affect this process; explore the utility of these measures for predicting outcomes; and develop an online users' manual for the measures. Methods: A combination of qualitative (workshops, item development, user feedback, cognitive interviews) and quantitative (survey) methods will be used to develop the NPT measures and to test their utility in six healthcare intervention settings. Discussion: The measures developed in the study will be available for use by those involved in planning, implementing and evaluating complex interventions in healthcare, and have the potential to enhance the chances of implementation, leading to sustained changes in working practices.

    MEPs online: Understanding communication strategies for remote representatives

    This article explores the use of the Internet by Members of the European Parliament (MEPs), assessing the adoption of online communication as well as its strategic uses. In particular, we analysed the websites, weblogs and social networking site profiles of all MEPs who linked to an online presence from the European Parliament homepage, a total of 440 MEPs representing all 27 member nations. Through a content analysis using a scheme designed to record the presence and functionality of 103 specific features and tools, as well as the recency of updates, we assess how MEPs use the Internet to connect with a range of audiences, from journalists to loyal supporters. We find MEPs embracing features that appeal to a wide range of different visitors. There is a minor generational divide among MEPs, based both on their age and on the length of time their country has been a member of the European Union. Overall, however, we suggest there is an ebb and flow of innovation within the online political communication of these parliamentarians.
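
    A hedged illustration of the coding approach described above: the study records the presence and functionality of 103 features, plus recency of updates, for each MEP's web presence. The minimal Python sketch below shows how such coded presence/absence data might be tallied; the feature names and records are hypothetical and are not taken from the article's instrument.

    # Illustrative only: aggregating hypothetical feature-coding records of the
    # kind a content analysis of MEP web presences might produce.
    from collections import Counter

    # One record per MEP: which coded features were observed on their web presence
    coded_records = [
        {"mep": "MEP 001", "features": {"biography", "weblog", "rss_feed"}},
        {"mep": "MEP 002", "features": {"biography", "social_network_profile"}},
        {"mep": "MEP 003", "features": {"biography", "weblog", "contact_form"}},
    ]

    # Count how many MEPs display each coded feature
    feature_counts = Counter(f for rec in coded_records for f in rec["features"])

    for feature, count in feature_counts.most_common():
        share = count / len(coded_records)
        print(f"{feature:25s} {count:3d} MEPs ({share:.0%})")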

    Towards a non-hierarchical campaign? Testing for interactivity as a tool of election campaigning in France, the US, Germany and the UK.

    Interest in the Internet and its role within political communication and election campaigning now has an established body of theoretical and empirical work behind it, with mixed predictions and findings. The bulk of the empirical research has been conducted in single countries, and where there has been comparative research it has tended to use a range of methodologies applied by different authors. Largely, empirical studies have agreed with the politics-as-usual thesis, namely that political communication online is similar, if not identical, in style to offline communication: top-down, information-heavy and designed to persuade rather than consult with voters. The mass take-up of Web 2.0 tools and platforms challenges this approach, however. Internet users now have opportunities to interact with a range of individuals and organisations, and it is argued that such tools reduce societal hierarchies and allow symmetrical relationships to build. Theoretically, democratic politics is a fertile environment for exploring the opportunities offered by Web 2.0, in particular the notion of interactivity between the campaign (candidate, party and staff) and its audiences (activists, members, supporters and potential voters). Conceptually, Web 2.0 encourages the co-production of content. This research examines the extent to which interactivity is encouraged through the use of Web 2.0 tools and platforms across a four-year period, focusing on four discrete national elections; it determines take-up and its link to national context, as well as assessing lesson learning between nations. Using the Gibson and Ward coding scheme, adapted to include Web 2.0, we operationalise the models of interactivity proposed by McMillan (2002) and Ferber, Foltz and Pugliese (2007). This methodology allows us to assess whether election campaigns show evidence of adopting co-created campaigns based around conversations with visitors to their websites or online presences, or whether websites remain packaged to persuade, offering interactivity with site features (hyperlinks, web feeds, search engines) only. Indications are that the French election was largely politics as usual; the Obama campaign, however, took a clear step towards a more co-produced and interactive model. There may well be a clear Obama effect within the German and UK contests, or parties may adopt the look, if not the practice, of the US election. This paper will assess the extent to which an interactive model of campaigning is emerging, as well as detailing a methodology which can capture and rate the levels and types of interactivity used across the Internet. Whilst specific political, cultural and systemic factors will shape the use of Web technologies in each election, we suggest that an era of Web 2.0 is gradually replacing that of Web 1.0, and within this era there is some evidence that campaigners learn from previous elections about how best to utilise the technology.
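
    To make the coding task concrete, the sketch below illustrates one way coded website features could be mapped onto broad interactivity categories in the spirit of the models the paper operationalises (interactivity with site features versus two-way communication versus co-production). The feature lists and category labels are a simplification invented for illustration, not the authors' adapted Gibson and Ward instrument.

    # Simplified illustration: map coded features to an interactivity level.
    # Feature lists and labels are invented, not the authors' coding scheme.
    FEATURE_LEVEL = {
        "hyperlinks": "site_feature",       # interaction with the site only
        "web_feed": "site_feature",
        "search_engine": "site_feature",
        "comment_enabled_blog": "two_way",  # visitor-to-campaign communication
        "discussion_forum": "two_way",
        "user_generated_content": "co_production",
    }

    def classify_site(observed_features):
        """Return the highest interactivity level found among coded features."""
        order = ["site_feature", "two_way", "co_production"]
        levels = {FEATURE_LEVEL[f] for f in observed_features if f in FEATURE_LEVEL}
        for level in reversed(order):
            if level in levels:
                return level
        return "none"

    print(classify_site({"hyperlinks", "search_engine"}))     # site_feature
    print(classify_site({"hyperlinks", "discussion_forum"}))  # two_way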

    Broadcasting to the masses or building communities: Polish political parties online communication during the 2011 election

    The professionalisation of political communication is an evolutionary process (Lilleker & Negrine, 2002), a process that adapts to trends in communication in order to better engage and persuade the public. One of the most dramatic developments in communication has been the move towards social communication via the Internet, which is argued to affect every area of public communication, from commercial advertising and public relations to education (Macnamara, 2010). It is no longer sufficient simply to have an online presence; we are now in an age of i-branding, with the 'i' standing for interactive. Yet trends in online political campaigning over recent years indicate a shallow adoption of Web 2.0 tools, features and platforms; limited interactivity; and managed co-production. The Internet is now embedded as a campaigning tool; however, the technologies are largely adapted to the norms of political communication rather than reshaping internal organisational structures, parties' relationships with members and supporters, or the content and style of their communication. We examine these themes in more detail, developing them through a focus on the targeting and networking strategies of political parties, in the context of the Polish parliamentary election of 2011. Through a detailed content analysis and coding scheme, our paper examines the extent to which parties use features designed to inform, engage, mobilise or allow interaction, which audiences they seek to communicate with, and how these choices fit their communication strategies. Comparing these findings with maps built from webcrawler analysis, we build a picture of the parties' strategies and the extent to which these link to short- and long-term political goals. This paper first develops our rationale for studying party and candidate use of the Internet during elections within the Polish context. Secondly, we develop a conceptual framework which contrasts the politics-as-usual thesis (Margolis & Resnick, 2000) with arguments surrounding the social shaping of technologies (Lievrouw, 2006), the organisational adoption of communication technologies, and post-Obama trends in Internet usage (Lilleker & Jackson, 2011), and posit that, despite the threats an interactive strategy poses (Stromer-Galley, 2000), such a strategy would be expected within the context of a networked society (Van Dijk, 2006). Following an overview of our methodology and analysis strategy, we present our data, which focus on three key elements. First, we examine the extent to which party and candidate websites inform, engage, mobilise or permit interaction (Lilleker et al., 2011). Secondly, we assess the extent to which websites attract different visitor groups (Lilleker & Jackson, 2011) and build communities (Lilleker & Koc-Michalska, 2012). Thirdly, we assess the reach strategies of the websites using webcrawler technology, analysing the use of hyperlinks to see whether parties lock themselves within cyberghettoes (Sunstein, 2007) or attempt to harness the power of the network (Benkler, 2006).
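
    As a rough illustration of the webcrawler-based reach analysis mentioned above, the sketch below measures how much of a party's outgoing linking stays inside its own sphere versus pointing to the wider network. It is a minimal sketch assuming a crawler has already produced an edge list of hyperlinks; the site names are invented, and networkx is used purely for convenience, without implying it was the tool used in the study.

    # Minimal sketch: summarise a crawled hyperlink map. Edge list is invented.
    import networkx as nx

    # (source site, target site) pairs as a crawler might report them
    edges = [
        ("party_a.pl", "party_a_youth.pl"),
        ("party_a.pl", "party_a_leader_blog.pl"),
        ("party_a.pl", "national_news.pl"),
        ("party_b.pl", "ngo_watchdog.org"),
        ("party_b.pl", "national_news.pl"),
        ("party_b.pl", "facebook.com/party_b"),
    ]
    graph = nx.DiGraph(edges)

    def external_link_share(graph, party_prefix):
        """Share of a party sphere's outgoing links that point outside it."""
        party_nodes = [n for n in graph if n.startswith(party_prefix)]
        out_edges = [e for n in party_nodes for e in graph.out_edges(n)]
        if not out_edges:
            return 0.0
        external = [e for e in out_edges if not e[1].startswith(party_prefix)]
        return len(external) / len(out_edges)

    for prefix in ("party_a", "party_b"):
        print(prefix, round(external_link_share(graph, prefix), 2))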

    Analysis of responses to HEFCE 2007/34, the Research Excellence Framework consultation


    An Investigation on Text-Based Cross-Language Picture Retrieval Effectiveness through the Analysis of User Queries

    Purpose: This paper describes a study of the queries generated from a user experiment for cross-language information retrieval (CLIR) from a historic image archive. Italian-speaking users generated 618 queries for a set of known-item search tasks. The queries generated by users' interaction with the system have been analysed, and the results used to suggest recommendations for the future development of cross-language retrieval systems for digital image libraries. Methodology: A controlled lab-based user study was carried out using a prototype Italian-English image retrieval system. Participants were asked to carry out searches for 16 images provided to them, a known-item search task. Users' interactions with the system were recorded, and queries were analysed manually, both quantitatively and qualitatively. Findings: Results highlight the diversity in requests for similar visual content and the weaknesses of Machine Translation for query translation. Through the manual translation of queries we show the benefits of using high-quality translation resources. The results show the individual characteristics of users whilst performing known-item searches and the overlap obtained between query terms and structured image captions, highlighting users' use of search terms for objects within the foreground of an image. Limitations and Implications: This research looks in depth at one case of interaction and one image repository. Despite this limitation, the discussed results are likely to be valid across other languages and image repositories. Value: The growing quantity of digital visual material in digital libraries offers the potential to apply techniques from CLIR to provide cross-language information access services. However, developing effective systems requires studying users' search behaviours, particularly in digital image libraries. The value of this paper is in the provision of empirical evidence to support recommendations for effective cross-language image retrieval system design.
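
    The overlap between query terms and structured image captions mentioned in the findings can be illustrated with a small sketch. The function below computes the fraction of a (translated) query's terms that also appear in a caption; the tokenisation, example query and caption are hypothetical and not drawn from the study's data.

    # Illustrative only: lexical overlap between query terms and a caption.
    import re

    def tokenize(text):
        """Lowercase a string and split it into simple word tokens."""
        return set(re.findall(r"\w+", text.lower()))

    def query_caption_overlap(query, caption):
        """Fraction of query terms that also appear in the caption."""
        q_terms = tokenize(query)
        if not q_terms:
            return 0.0
        return len(q_terms & tokenize(caption)) / len(q_terms)

    # Hypothetical example: an Italian query translated into English, compared
    # against a structured English caption for a known-item search target.
    translated_query = "children playing in the snow"
    caption = "Black and white photograph of children playing in deep snow, 1923"
    print(f"overlap = {query_caption_overlap(translated_query, caption):.2f}")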

    The metric tide: report of the independent review of the role of metrics in research assessment and management

    This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration. This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture. The report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.

    Interactivity is Evil: a critical investigation of Web 2.0

    Central to Web 2.0 is the requirement for interactive systems that enable the participation of users in production and social interaction. Consequently, in order to critically explore the Web 2.0 phenomenon, it is important to examine the relationship of interactivity to social power. This study first characterises interactivity in these media using Barry’s (2001) framework, which differentiates interactive technologies from the disciplining technologies defined by Foucault. Contrary to Barry’s model, however, the analysis goes on to explore how interactivity may indeed function as a disciplining technology within the framework of a neoliberal political economy.