
    Speaking the Same Language within the International Organizations; A Proposal for an Enhanced Evaluation Approach to Measure and Compare Success of International Organizations

It is currently difficult for Member States to assess and compare the success or performance of UN organizations, despite recent movements towards results-based approaches. Efforts to implement logical frameworks have been independent and uncoordinated, left to the discretion of individual agencies. This has led to divergent and deficient implementations of the same theoretical approach, making it almost impossible to draw any conclusions. The lack of a common approach is perceptible across agencies in the diversity of evaluation standards and of the terminology used to describe the same concepts, in the unevenness of staff training, and in the way intentions and results are presented. The myriad of organizations with some form of evaluation role may be seen as an additional symptom of the lack of coordination within the UN system.

The establishment of a useful and reliable evaluation process in the UN system requires three main elements: (1) a common and enhanced evaluation framework, (2) the human and organizational capacity to ensure the accurate implementation of that framework, and (3) the commitment of Member States and agencies to implement the approach. This report mainly discusses the common evaluation framework and its methodological issues, although it also provides significant insight into how to build the human and organizational capacity of the UN to carry out this approach.

Assessing the success of an organization entails determining three elements: mandate or mission relevance, effectiveness, and efficiency. The report provides insight into all three components of success, but its primary focus is on effectiveness. Measuring effectiveness entails establishing precise targets to be reached by agencies and collecting actual results in order to assess whether those targets are being met; in other words, it compares intentions (expressed as targets) to actual achievements (collected through monitoring). The UN Secretariat itself does not provide targets to be met by the organization; additionally, it over-emphasizes outputs (output implementation rates) and disregards the “big picture” provided by outcomes.

Under the proposed approach, subprograms meeting most of their targets are the most effective, and programs (agencies) with a large share of effective subprograms (programs) may be considered effective themselves. To simplify and give an intuitive sense of effectiveness, subprograms could be assigned a category or color following a “traffic light” methodology (green for satisfactory, amber for average, red for below expectations) according to the share of targets satisfactorily met; the same could be done for programs according to their share of satisfactory subprograms. Program and subprogram performance data for every agency could be centralized (by a coordinating body) in a comprehensive webpage that would facilitate comparison between similar functions or themes across the UN system [please refer to pg. 27 for an elaborate illustration].

The report also suggests complementing this objective approach with a perception survey. Despite the significant limitations of such a subjective approach, it is still widely used and gives an idea of which organizations are best regarded by their peers. Contrasting actual performance data with perception indicators could be revealing and could shed light on areas where the objective methodology falls short.
One of the most important recommendations concerns the organizational capacity needed to ensure the accurate implementation of the evaluation approach. This capacity should be embodied by a centralizing coordinating body (perhaps under the CEB) that would (1) ensure common evaluation training and support for UN staff and uniformity of standards (terminology, methods, etc.), (2) centralize performance data gathered from agencies in a common database and present the results in a user-friendly manner in which programs and agencies can be compared, and (3) verify the validity of the data submitted by the agencies (performance auditing).
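To make the traffic-light scoring concrete, the following Python sketch classifies subprograms by the share of targets they have met and rolls those ratings up to a program-level rating. It is only an illustration of the approach described in the abstract: the Subprogram structure, the 0.8/0.5 colour cut-offs, and the demo data are assumptions, since the abstract specifies neither thresholds nor a data model.

from dataclasses import dataclass


@dataclass
class Subprogram:
    """Hypothetical record of one subprogram's targets for a reporting cycle."""
    name: str
    targets_met: int      # targets satisfactorily met (from monitoring data)
    targets_total: int    # targets set at the start of the cycle

    def share_met(self) -> float:
        return self.targets_met / self.targets_total if self.targets_total else 0.0


def traffic_light(share: float, green: float = 0.8, amber: float = 0.5) -> str:
    """Map a share of satisfied targets to a colour.

    The 0.8 / 0.5 cut-offs are assumed for illustration only.
    """
    if share >= green:
        return "green"    # satisfactory
    if share >= amber:
        return "amber"    # average
    return "red"          # below expectations


def program_rating(subprograms: list[Subprogram]) -> str:
    """Rate a program by the share of its subprograms rated green."""
    if not subprograms:
        return "red"
    green_share = sum(
        traffic_light(sp.share_met()) == "green" for sp in subprograms
    ) / len(subprograms)
    return traffic_light(green_share)


if __name__ == "__main__":
    demo = [
        Subprogram("Capacity building", targets_met=9, targets_total=10),
        Subprogram("Policy advice", targets_met=4, targets_total=8),
        Subprogram("Field operations", targets_met=2, targets_total=6),
    ]
    for sp in demo:
        print(sp.name, traffic_light(sp.share_met()))
    print("Program rating:", program_rating(demo))

Running the sketch prints green, amber, and red for the three demo subprograms and an overall program rating of red, since only one of the three subprograms reaches the assumed green threshold; an agency-level rating could be computed the same way from its programs' colours.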

    A multicenter survey of temporal changes in chemotherapy-induced hair loss in breast cancer patients.

Purpose: Many breast cancer patients suffer from chemotherapy-induced hair loss. Accurate information about temporal changes in chemotherapy-induced hair loss is important for supporting patients scheduled to receive chemotherapy, because it helps them to prepare. However, accurate information on issues such as the frequency of hair loss after chemotherapy, when regrowth starts, the condition of regrown hair, and the frequency of incomplete hair regrowth is lacking. This study aimed to clarify the long-term temporal changes in chemotherapy-induced hair loss using patient-reported outcomes.

Methods: We conducted a multicenter, cross-sectional questionnaire survey. Disease-free patients who had completed adjuvant chemotherapy consisting of anthracyclines and/or taxanes for breast cancer within the prior 5 years were enrolled from 47 hospitals and clinics in Japan. Descriptive statistics were obtained. The study is reported according to the STROBE criteria.

Results: The response rate was 81.5% (1511/1853), yielding 1478 questionnaires. Hair loss occurred in 99.9% of patients, and the mean time from chemotherapy until hair loss was 18.0 days. Regrowth of scalp hair occurred in 98% of patients, and the mean time from the completion of chemotherapy to the beginning of regrowth was 3.3 months. Two years after chemotherapy completion, the scalp-hair recovery rate was

Conclusions: Our survey focused on chemotherapy-induced hair loss in breast cancer patients. We believe these results will be useful for patients scheduled to receive chemotherapy.