
How explicable are differences between reviews that appear to address a similar research question? A review of reviews of physical activity interventions

Abstract

Background: Systematic reviews are promoted as important for informing decision-making. However, when presented with a set of reviews in a complex area, how easy is it to understand how and why they differ from one another?

Methods: An analysis of eight reviews reporting evidence on the effectiveness of community interventions to promote physical activity. We assessed review quality and investigated the overlap of included studies, citation of relevant reviews, consistency in reporting, and reasons why specific studies may have been excluded.

Results: There were 28 included studies. The majority (n = 22; 79%) were included in only one review. There was little cross-citation between reviews (4 of 28 possible citations; 14%). Where studies appeared in multiple reviews, results were reported consistently, except for complex studies with multiple publications. Review conclusions were similar. For most reviews (6 of 8; 75%), we could explain why primary data were not included; this was usually due to the scope of the reviews. Most reviews tended to be narrow in focus, making it difficult to gain an understanding of the field as a whole.

Conclusions: In areas where evaluating impact is known to be difficult, review findings often relate to uncertainty about data and methodologies rather than providing substantive findings for policy and practice. Systematic ‘maps’ of research can help identify where existing research is robust enough for multiple in-depth syntheses and can show where new reviews are needed. To ensure quality and fidelity, review authors should systematically search for all publications from complex studies. Other relevant reviews should also be searched for and cited to facilitate knowledge-building.
