
Expert Panels [ARCHIVED] Integrating M&E for Health Systems Strengthening

When: April 2, 2012 - April 6, 2012 | Where: Virtual, online panel at GHDonline.org Community: Site-wide

Expert Panel moderators

Marie Connelly - GHDonline

This Expert Panel is Archived.

While this Expert Panel is no longer active, we invite you to review and recommend past replies and resources. Membership for this Expert Panel is closed, but we hope you'll join us in one of the many communities on GHDonline.

Panelists of Integrating M&E for Health Systems Strengthening and GHDonline staff

Strengthening local health systems is an implicit goal of many global health initiatives and programs, but ensuring this goal is achieved alongside competing priorities and deliverables can be incredibly challenging, particularly in resource-poor settings.

This GHDonline.org Expert Panel discussion, organized in collaboration with Partners In Health in conjunction with the Program Management Guide (http://www.pih.org/pmg), will address the ways that Monitoring and Evaluation (M&E) can be used to assess whether, and how well, programs are being implemented as planned and achieving their goals and objectives.

Our Expert Panelists will share their thoughts on how to integrate M&E into programs to support health systems strengthening and address the following questions:
     1. How do we balance the need for evaluation (“Did it work?”) with the need for monitoring and using the data to improve programs in real-time to ensure the question, “How did it work?” is also addressed?
     2. How do we define and measure successful and effective M&E in the context of programs focused on care delivery and health system strengthening?
     3. What are the major pitfalls in collecting data for M and for E activities and how can we overcome them?
     4. How can we build internal capacity and ownership for carrying out M&E activities?

Joining us as Expert Panelists for this discussion are:
     * Dr. Pierre Barker, Senior Vice President, Institute for Healthcare Improvement
     * Dr. Paulin Basinga, Lecturer at the National University of Rwanda School of Public Health
     * Dr. Lisa Hirschhorn, Director for Monitoring, Evaluation and Quality at Partners In Health, Senior Clinical Advisor, JSI Research and Training, and Assistant Clinical Professor of Medicine at Harvard Medical School
     * Dr. Wesler Lambert, Director of Monitoring and Evaluation for Zanmi Lasante
     * Dr. Kenny Sherr, Director of Implementation Science for Health Alliance International and Assistant Professor at the Department of Global Health at the University of Washington

We look forward to your questions and comments, and hope you will share your experiences on these issues.

Please note that this Expert Panel discussion will start on April 2nd; any comments or questions added before then will be addressed by panelists once the discussion begins.

 
A/Prof. Terry HANNAN
Replied at 6:24 PM, 20 Mar 2012

This topic has been at the forefront of recent discussions on GHDonline (see Hamish Fraser), so I am looking forward to how the dedicated Expert Panel evolves this discussion. It fits very clearly with Don Berwick's quote, "to improve care you have to measure it".

Alain Labrique, PhD, MHS, MS
Replied at 8:27 PM, 20 Mar 2012

Agreed, Dr. Hannan. In fact, one must first define "it" and "work" in the statements above. It is critical to have the e-/m-Health strategies clearly defined in order to understand what the necessary benchmarks of "success" should be. We could also delve into the distinction between "research" that seeks to establish efficacy - which the emergent, untethered paradigm of mobile health implementations allows by permitting randomization of individuals or clusters to receive, or not receive, an intervention - and "M&E" of deployed systems, which are sometimes 'tethered' to facilities and require measurement, often without the luxury of efficacy data, delving straight into the question of effectiveness. One challenge, then, is understanding where to set the goalposts that define success without knowing how well an intervention might have worked under ideal conditions of delivery and coverage, where the risk of bias or ecologic fallacy is minimized through randomization and temporally matched controls.

Some useful reference documents are attached.

Attached resources:

evaezi OKPOKORO
Replied at 5:08 AM, 21 Mar 2012

This is a timely discussion. An M&E plan is a necessity and should be institutionalised in all facilities/programs. However, how do you standardize indicators across funders/programs so as to reduce the burden of data collection at the facility level, especially in resource-limited settings? Is it possible to develop a "one-size-fits-all" model for vertical program indicators, irrespective of the funder? Looking forward to this meeting.

Sandhya Ahuja
Replied at 5:39 AM, 21 Mar 2012

It has been three years since the HMIS reform started under NRHM in India. Data entry and uploading from 642 districts into the national web portal has stabilised and is happening without interruption. Attention has now shifted to data quality. I suggest you visit http://www.nhsrcindia.org/thematic_data.php?thematic_resources_id=3 and http://www.nrhm-mis.nic.in/. You will find them very useful, and if there is any query, please do not hesitate to contact me.
Dr Sandhya Ahuja
Sr Consultant, HMIS Data Management & Analysis
NHSRC
India

Clive Shiff
Replied at 8:09 AM, 21 Mar 2012

Certainly for malaria, but as with all issues of public health there needs to be a local infrastructure, and advisors should work to encourage Ministries of Health to employ scientific personnel in key positions. To do this, there need to be career opportunities available in the Civil Service. Years ago I was employed as such in Rhodesia (now Zimbabwe). Scientists had the same status as medics in the civil service, and in my lab there were entomologists, biologists, etc. (epidemiology had not been designated at that time!), but we need such personnel as well. So I would like to hear what others say; this concept must first of all be accepted. From here we can extend to M&E. Clive Shiff

Dr Shanta Ghatak
Replied at 11:56 AM, 22 Mar 2012

The quality of M and E depends on the quality of the data. Absolutely. Capacity building and motivation for truly reliable data are the need of the hour.

Henry Kilonzo
Replied at 12:45 PM, 22 Mar 2012

The motivation for data use could come from using the data for decision making by those who collect it and those who provided it (the source), rather than leaving data use as the responsibility of decision makers alone.


Jean Pierre Nyemazi
Replied at 2:55 PM, 22 Mar 2012

The institutionalization of feedback mechanisms to data providers/collectors is a motivating factor that substantially improves data quality.

--
Nyemazi

Marie Connelly
Replied at 12:24 PM, 23 Mar 2012

Thank you all for joining us for this Expert Panel discussion. Just as a reminder, the Expert Panel will begin on Monday, April 2nd, but we hope you'll continue to share your thoughts and questions over the next few days - many thanks to those who have already chimed in!

We're very excited to announce that we'll have a fifth panelist, Dr. Pierre Barker of the Institute for Healthcare Improvement (IHI), joining us for this discussion. Dr. Barker's work focuses on improving health systems, specifically around maternal and child health and HIV care. In addition to his work at IHI, Dr. Barker also advises the WHO on health systems strengthening and redesign of HIV care and infant feeding guidelines.

Attached is a paper recently published by Dr. Barker and colleagues in the WHO Bulletin, "Improving public health information: a data quality intervention in KwaZulu-Natal, South Africa".

Thank you all again - we're looking forward to a rich discussion!

Attached resource:

Lisa Hirschhorn
Replied at 12:52 PM, 23 Mar 2012

Greetings to all - we are delighted with the excitement this has already generated. One other addition: my affiliation was incomplete; I am also Senior Clinical Advisor for HIV/AIDS at JSI Research and Training. I welcome my colleagues.

Yatin Dholakia
Replied at 7:15 AM, 24 Mar 2012

Any meaningful evaluation of health systems functioning involves truthfulness and correctness in the data collation.

The target-oriented approach adopted by many program managers in implementing programs, and performance assessments based on whether those targets are achieved or not, lead to manipulation of data at the field level and affect the monitoring and evaluation process.

Thus in addition to motivation and capacity building of the health staff, health managers need to be oriented to the adverse effect of laying undue emphasis on target achievement and its use as a staff performance evaluation tool.

Dr. Yatin Dholakia
Hon. Technical Adviser
The Maharashtra State Anti TB Association
Mumbai, India

Sandhya Ahuja
Replied at 7:56 AM, 24 Mar 2012

It is very important that:
1. The data reported match the data recorded, and that the data are actually recorded. Sometimes the data required are not present in the registers of a health facility.
2. If data are collected from different facilities, we know where and how they get consolidated (there is every chance of manipulation of data at this level), whether the data from all facilities are collected within a specified timeframe, and whether all the data are being reported.

Thomas Moulding
Replied at 1:35 PM, 24 Mar 2012

Thomas Moulding M.D.

My contribution to this topic will be limited to the issue of preventing resistance to TB drugs by monitoring the medication ingestion of TB patients and selecting poorly adherent patients who require Directly Observed Therapy. For those who are interested in this topic, my thoughts are covered in the attached article.

I also have thoughts regarding how this information can be transmitted by mobile phones to the patient’s clinic and national database relatively inexpensively, which I will e-mail to anyone who is interested.

Attached resource:

Stephanie M. Topp, MPH, MPhil (Oxon)
Replied at 2:57 PM, 24 Mar 2012

Jean Pierre Nyemazi's comment is a good reminder that M&E, and data collection more generally, cannot be cordoned off from the rest of the health system. Institutionalisation of good data collection practices, for example, suggests more than the introduction of efficient mechanisms (i.e. mechanistic processes) but also behavioural and managerial changes, necessarily occurring within multiple domains. This requires us to conceptualise the system holistically from the start.

A good resource for this kind of 'systems' thinking, with a view to capturing the complexity of monitoring (and strengthening) health systems, is the attached work by Don de Savigny and colleagues.

Attached resource:

K. Rivet Amico, PhD
Replied at 6:24 AM, 26 Mar 2012

These resources are great. Thank you!

I am thrilled that this critical topic is going to be covered by such an expert team of panelists. I am particularly interested in opinions about how monitoring and evaluation can be used on the small scale - individual clinics versus networks of clinics. Many areas are now dominated by evidence-based practice requirements, but whether a strategy identified as effective in a given research project, or as effective overall, is necessarily a benefit in a given practice is less well understood or evaluated. Moreover, I look forward to hearing about the use of monitoring and evaluation applied to building evidence for strategies developed in practice (a bottom-up approach). Disseminating the process of research, rather than a particular outcome of a given research study (e.g., a discrete intervention approach), is very exciting and in my view a critical missing piece in many areas. I look forward to this discussion!

Henry Kilonzo
Replied at 10:37 AM, 26 Mar 2012

The under-staffing in most health facilities, especially in developing countries, is a major contributor to compromised/manipulated data. This is because health workers are overwhelmed with serving patients and recording/clerking on a daily basis. Most of the time, for the health worker, the patient is the priority, and data recording may be an afterthought. Data may be recorded because of upward accountability (because they are required to do it), but not from a motivation to share and use quality data. Hence addressing the workload at the facility level may contribute to data quality.


Pierre Barker
Replied at 10:52 AM, 26 Mar 2012

Understaffing is here to stay in low- and middle-income countries, so all the more reason to adopt a systems approach. The paper we posted (see Mphatswe) shows that a core issue for improved quality of data is to increase the value of the data being collected. By feeding the data back to the frontlines on a continual basis, and working with data aggregators and reporters to show the importance of their work, it is possible to make major improvements in data completeness, accuracy and timeliness without having to pull in more resources. There are of course many instances where resources are an absolute bottleneck that you can't overcome with system improvement.

John Gamba B
Replied at 1:14 PM, 27 Mar 2012

Wonderful articles and data - thanks for sharing. Very useful for rethinking our health systems. What about health care models / insurance efficiency? This point is very important for all systems but very difficult to measure. Regards

Usman Raza
Replied at 2:08 PM, 27 Mar 2012

I am interested in hearing ideas about how the cost of collecting data can be reduced without compromising too much on detail and quality. I believe under-staffing is very much a hurdle, and I also believe that feeding information back to those who collect data won't help much unless they can link it clearly to their own efforts. And on top of all that is probably the incentive design applied to the workforce.

Sarah Gimbel
Replied at 2:13 PM, 27 Mar 2012

Collect less data. We already don’t use the data we have. Less is definitely more--for quality and cost.

Sarah

Ron Hebert
Replied at 5:15 PM, 27 Mar 2012

Further to Usman Raza's comment "about how the cost of collecting data can be reduced without compromising too much on detail and quality", I can comment from 14 years of first-hand experience working with health IT in developing countries. The answer is to collect the data - ONCE - in electronic format at the point of care, edit it against the established patient database and other relevant databases, then distribute those data electronically to those who need to receive them. Getting rid of the manual paper-based forms costs MUCH less and presents the opportunity to enhance quality exponentially. The manual paper-based forms are the major impediment to progress in eHealth in all of the developing countries.

Marie Connelly
Replied at 6:08 PM, 27 Mar 2012

Partners In Health's Program Management Guide is another relevant resource for our upcoming discussion. Please find links below to both the full guide, as well as the unit focused specifically on M&E.

Attached resources:

Usman Raza
Replied at 12:14 PM, 28 Mar 2012

I agree with Ron that, once established, the cost of electronic data collection systems is lower than that of hard-copy forms. However, reaching that stage would entail major changes to current systems that have been in place for a long time. In my experience, behavioural change and acceptance by the data collectors and users is a much bigger issue than the cost of the physical infrastructure required for electronic systems. Yet some technologies (like cell phones) continue to penetrate even the remotest communities, and people happily adapt to them.

Ron Hebert
Replied at 2:20 PM, 28 Mar 2012

Usman Raza makes a good point about behavioral change, and acceptance of change, among users moving from manual paper-based systems to eHealth, which is a valid concern and applies everywhere. However, the change must be made to enable the efficiencies that all of the developed countries started to achieve in the early 1970s, and which a few developing countries have undertaken in the past 10 years or so. Jamaica - a mid-level developing country - made the change to a computerized PAS (Patient Administration System) in 1999, and did a study that proved the efficiencies of the PAS. All Commonwealth countries have had the same manual paper-based systems since the 1850s, so I say: why delay the inevitable?

Gultineh Kebede
Replied at 2:38 PM, 28 Mar 2012

It's a very interesting point. Electronic medical records (EMR) are being tried in a number of developing countries. I hear of some problems from colleagues in developing countries that are implementing EMR systems. I would like to hear about experience in different countries in overcoming challenges such as:
• Infrastructure, such as internet connection, electricity availability, etc.
• Data transfer to higher levels (from health facilities to the central level)
• Antivirus updates
• Trained manpower turnover at the facility level, which makes it expensive.

Christopher Spitters
Replied at 4:26 PM, 28 Mar 2012

With regard to Sarah Gimbel's thoughtful contribution, please also consider the following quotes from DA Henderson:

"Real People, Real Solutions
A public health preoccupation today seems to be the creation of ever-more elaborate technologies that harvest hitherto unimaginable quantities of data. ... It seems to me that it would be far more effective to invest in the training and support of such professional staff. They are what we need to develop real solutions."
(Full editorial available for download from JHSPH Magazine, Special Edition 2012, p. 11; http://magazine.jhsph.edu/2012/technology/index.html)

And regarding M&E with respect to the smallpox eradication program...
"A word of caution, however, should be said about goals. We endeavored to keep the number to not more than five operational ones. Obviously, there were hundreds of possible measurements of progress that could have been requested and compiled. Our experience, however, was that when the number got beyond four or five, key staff became so involved in submitting and compiling data that few used the data for the purpose for which it was intended - in monitoring the strengths and weaknesses in program implementation."
Henderson DA. The challenge of eradication: lessons from past eradication campaigns. Int J Tuberc Lung Dis 1998; 2(9):S4-S8.

Respectfully,
Chris Spitters

Yahya Ipuge
Replied at 11:02 PM, 28 Mar 2012

In countries like Tanzania, with a very extensive primary health care system, the use of dual (manual and EMR) systems is a reasonable approach for the immediate future. Paper-based systems are used to collect patient information and service delivery data in health facilities. A summary form is used to send reports to the district level, where a computerized system based on the District Health Information System software (DHIS) is used to collect, store and analyse data. Use of computers has increased data quality, use and reporting in the three regions and 27 sentinel panel of districts (SPD) where DHIS has been introduced at the district level, and the MOHSW is now rolling this out to the remaining 18 regions.

In the three regions and SPD districts, data from paper-based summary forms are entered into DHIS at the district level. Some districts use an online system, which means that the regional and national levels can see and access the data immediately once they are captured. In districts that have unreliable power and slow internet, data are transferred to the regional and national levels by export files.
At the same time, mHealth is being piloted, with mobile phones used for reporting selected information and data. This information will also be captured by DHIS at the district level.
The main challenges faced are similar to those raised by Gultineh Kebede:
*Lack of staff dedicated to HMIS and M&E at the district and regional levels. Currently, there are HMIS focal persons who have other duties as well. The government has introduced the positions of district and regional M&E officers, but these are yet to be filled for various reasons, including lack of qualified candidates, lack of budget, or simply the will to change.
*Infrastructure such as internet connection and electricity availability. Use of internet modems issued by mobile phone companies has increased access to the internet.
*Data transfer to higher levels (from health facilities to the central level) is a problem. This has been addressed by the use of internet modems and an online data entry system.
*Antivirus updates are handled by using the Linux operating system instead of Windows, and by setting automatic updates of antivirus definitions.
*Computer maintenance is still a challenge due to lack of expertise in rural districts and lack of funds to contract private firms.


Ipuge

Janvier MUNGARULIRE
Replied at 2:51 AM, 29 Mar 2012

It is similar in our country (Rwanda): data related to HIV are captured at the district level directly into web-based software, but we need to strengthen the capacity of data collectors for analysis before the data are entered into the database. Data quality audits are also required for formative supervision and improving data collection.

Payel Gupta
Replied at 6:21 AM, 29 Mar 2012

Has anyone ever used scanners to get data, at least in ER settings where the paperwork is not as heavy as in the inpatient setting? Currently we are sending patients home with their ER paperwork at discharge, which means that we do not have any record of what was actually done and what needs to be improved. Any thoughts?

We are meeting some resistance, but we hope that once we have a working computer and scanner we can do a trial and the staff may be amenable. Later we would send the scans via Dropbox to interns in the US who could do data collection if needed. Any thoughts?

Bradley Dreifuss
Replied at 9:52 PM, 29 Mar 2012

Re: Payel Gupta's inquiry about data collection in the rural Emergency Department setting of resource-limited countries.

Your situation is typical and reflects what we have experienced in rural Uganda.

Global Emergency Care Collaborative, www.globalemergencycare.org (the website is being updated but still functional; also check out the promo video there), is currently switching from MS Excel to an Access database. Paper charts are used, and at the time of Emergency Department discharge (to home or inpatient) the information from the paper chart is entered into the quality improvement database AND the chart is scanned with a desktop scanner for local archiving and future reference, in case data in the database are unclear or incomplete. This is also helpful for quality improvement efforts, as charts otherwise disappear and there are no other medical records to reference when working to improve quality of care.

We are currently working on building a secure server (in the USA) to transfer and store the data. This model will enable expansion to additional ED sites and collection of data all day without access to the internet, by syncing when we do the data backup once per day. Bandwidth is an issue in SSA, especially in rural areas, as cell coverage is hit or miss and seems to vary depending on time of day and weather.
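
A minimal sketch of this kind of store-and-forward approach, with visits queued in a local SQLite file during the day and pushed in a single batch when the daily backup runs (the endpoint URL and field handling are hypothetical, not GECC's actual system; assumes Python with the requests library available):

    # Illustrative store-and-forward sync: record visits offline, push once daily.
    import json
    import sqlite3
    import requests

    DB = "ed_register.db"
    SYNC_URL = "https://example.org/api/ed-visits"   # hypothetical endpoint

    def init_db():
        with sqlite3.connect(DB) as conn:
            conn.execute("""CREATE TABLE IF NOT EXISTS visits (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                payload TEXT NOT NULL,
                synced INTEGER NOT NULL DEFAULT 0)""")

    def record_visit(visit):
        """Called at ED discharge; works with or without connectivity."""
        with sqlite3.connect(DB) as conn:
            conn.execute("INSERT INTO visits (payload) VALUES (?)", (json.dumps(visit),))

    def daily_sync():
        """Run alongside the daily backup; rows are marked synced only on success."""
        with sqlite3.connect(DB) as conn:
            rows = conn.execute("SELECT id, payload FROM visits WHERE synced = 0").fetchall()
            if not rows:
                return
            batch = [json.loads(payload) for _, payload in rows]
            resp = requests.post(SYNC_URL, json=batch, timeout=60)
            if resp.ok:
                conn.executemany("UPDATE visits SET synced = 1 WHERE id = ?",
                                 [(row_id,) for row_id, _ in rows])

The local file doubles as the queue between syncs, so a failed upload simply leaves the rows waiting for the next day's attempt.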

We recognize that this may not be the ideal situation for the long term, but given the electricity and technology limitations in a rural Ugandan district hospital, we are also trying to ensure sustainability as well as data integrity and security. In the long term, we hope to train our Emergency Care Practitioners and Ugandan support staff to perform computer data entry and scan the charts, but we have not found that to be reliable yet. We have a ways to go with computer literacy before that happens reliably.

Any suggestions are also welcomed by GECC....

Brad Dreifuss, MD
Global Emergency Care Collaborative
Member of the Board of Directors
Research Co-Director
http://globalemergencycare.org
http://vimeo.com/17141360

Payel Gupta
Replied at 5:11 AM, 30 Mar 2012

Thank you, Dr. Dreifuss. GECC looks like an impressive organization! Your comments are very helpful, and I think for archiving purposes scanning seems to be a good idea (and environmentally friendly!). Again, thank you for your comments and suggestions.

Lisa Hirschhorn
Replied at 7:32 AM, 30 Mar 2012

Also, to respond to Sarah Gimbel's excellent point, and to quote a colleague from Partners In Health, "use it or lose it": often we collect data beyond what is required and do not use what we have, all at the cost of data quality and hence the utility of the data. There have been some innovative approaches to decrease the volume of monitoring data while still providing insight into where to focus limited resources for program improvement. One example has been LQAS (Lot Quality Assurance Sampling), which has been used to target where more in-depth monitoring or evaluation may be needed. Some examples (of many) include work in Uganda (http://uphold.jsi.com/Docs/Resources/Conferences/using_lqas_roll_back_malaria...) and health and nutrition using Large Country-Lot Quality Assurance Sampling (http://siteresources.worldbank.org/HEALTHNUTRITIONANDPOPULATION/Resources/281...); both references are free online. We are also using this approach to look at data quality at the facility and CHW levels to help make the data we collect more useful for program evaluation and rapid improvement.
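
A minimal sketch of how an LQAS decision rule can flag facilities for more in-depth follow-up, assuming the classic sample size of 19, an 80% coverage target, a 50% lower threshold, and misclassification risks capped at 10% (illustrative only, not the actual tooling used at PIH or JSI; assumes Python with scipy):

    # Illustrative LQAS classifier: derive the decision rule d from binomial
    # probabilities, then classify each facility from a small random sample.
    from scipy.stats import binom

    def choose_decision_rule(n, p_upper, p_lower, max_risk=0.10):
        """Smallest d keeping both misclassification risks at or below max_risk."""
        for d in range(n + 1):
            alpha = binom.cdf(d - 1, n, p_upper)      # good facility wrongly flagged
            beta = 1 - binom.cdf(d - 1, n, p_lower)   # weak facility wrongly passed
            if alpha <= max_risk and beta <= max_risk:
                return d, alpha, beta
        return None

    def classify(successes, d):
        """Facilities falling below the decision rule get in-depth M&E attention."""
        return "meets target" if successes >= d else "flag for in-depth review"

    n = 19                                            # classic LQAS sample size
    d, alpha, beta = choose_decision_rule(n, p_upper=0.80, p_lower=0.50)
    print(f"n={n}, decision rule d={d} (alpha={alpha:.2f}, beta={beta:.2f})")
    for facility, successes in {"Clinic A": 17, "Clinic B": 11}.items():
        print(facility, classify(successes, d))

With these inputs the rule works out to at least 13 of 19 sampled records meeting the standard, so the hypothetical Clinic B would be targeted for deeper review while Clinic A would not.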

Sandhya Ahuja
Replied at 9:30 AM, 30 Mar 2012

It is very important that we first decide which data are to be collected. Data elements should be selected such that each contributes to at least two indicators. We also need to decide the number of indicators needed at different levels. At the block level (in the Indian context) the number of indicators required is greater than at the district level, and it decreases as we move upwards to the state and national levels. The frequency of data collection is also very important. Some data (like service data) may be needed monthly; other data, like training of ANMs, doctors and other staff in different skills, may be collected quarterly; and HR/infrastructure status can be collected annually. Some data, like eligible couples in an area or SC/ST population, are best collected through an annual survey.
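
A toy sketch of how such choices can be written down as a small indicator registry, with each data element carrying its collection frequency and reporting level, plus a check that every element feeds at least two indicators (the element and indicator names are made up for illustration; Python assumed):

    # Illustrative registry: each data element records its frequency, the lowest
    # level at which it is reported, and the indicators it feeds.
    DATA_ELEMENTS = {
        "anc_first_visits": {"frequency": "monthly", "level": "block",
                             "indicators": ["ANC coverage", "early ANC registration"]},
        "anms_trained_sba": {"frequency": "quarterly", "level": "district",
                             "indicators": ["SBA training coverage", "HR readiness"]},
        "functional_phcs":  {"frequency": "annual", "level": "state",
                             "indicators": ["infrastructure index"]},  # only one use
    }

    def underused_elements(registry):
        """Flag elements that do not contribute to at least two indicators."""
        return [name for name, meta in registry.items() if len(meta["indicators"]) < 2]

    print(underused_elements(DATA_ELEMENTS))   # -> ['functional_phcs']

Elements that show up in the flagged list are candidates either for dropping or for attaching to a second indicator before they are added to the routine collection burden.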

Tess Panizales, MSN, RN
Replied at 9:50 AM, 30 Mar 2012

I truly agree with your points. We should think through the What's and Why's, and complement them with the Who, Where, When and How, when we create the infrastructure of the plan to conduct M and E. This will be more beneficial, and with limited resources, structural efficiency is very important at the first stage. I always remember the code KISS - keeping it simple (yet meaningful enough to use in driving a program forward).

Integrated participation is always vital for the success of any effort - that includes the local community who are stakeholders in their health care delivery system.

Paulin Basinga
Replied at 7:19 PM, 30 Mar 2012

Thank you all for these great contributions to the debate. I look forward to joining the panel next week and learning from you all. Paulin

Balsama Andriantseheno
Replied at 9:13 AM, 2 Apr 2012

I would like to continue the discussion on how much data to collect for M and for E! As Lisa Hirschhorn states, LQAS might be an answer for determining where we need more in-depth monitoring and/or evaluation. But the opposite also needs to be examined: is there already a method to help us identify where we need less monitoring and/or evaluation? This happens very often in developing countries, where ALL possible existing data are collected but not all are used or analyzed, and even less so for decision making!

Lori DiPrete Brown
Replied at 10:02 AM, 2 Apr 2012

I have observed that donors are requiring more data collection (utilization and outputs, if not outcomes) from programs they fund. However, the ability to use these data for program improvement is limited. One key focus for improvement could be building skills in storing and analyzing the data you have, as well as using it to make decisions. I agree that LQAS has potential, both because the small random samples can be used at the local/unit level as a screen for more in-depth study, BUT ALSO AND IMPORTANTLY because they can be pooled and weighted to create a robust random sample for larger regions/national data.

Lisa Hirschhorn
Replied at 11:17 AM, 2 Apr 2012

In the week leading up to the official start of the Expert Panel discussion today, there have already been a number of valuable posts and interchanges of ideas and resources. Therefore, I wanted to highlight some themes which have emerged from the discussion so far. I hope over the next five days these will continue as areas for further discussion and sharing of potential solutions, as new themes also emerge. Some which emerged as we reviewed the posts included:

1. The need to ensure data quality and feedback loops within a system to facilitate this goal

Several participants have commented on the need for ensuring that data are high quality, and that ensuring their use can help achieve that goal. These posts highlighted the need for a robust feedback loop which includes the data provider, "a motivating factor that substantially improves data quality", and a focus on measuring and ensuring data quality. Capacity building for data collection has also been discussed. Dr Barker also posted a relevant article published in the WHO Bulletin, "Improving public health information: a data quality intervention in KwaZulu-Natal, South Africa", which emphasized that the "core issue for improved quality of data is to increase the value of the data being collected".

The need for a system to ensure data quality was also highlighted: "Institutionalisation of good data collection practices, for example, suggests more than the introduction of efficient mechanisms (i.e. mechanistic processes) but also behavioural and managerial changes, necessarily occurring within multiple domains. This requires us to conceptualise the system holistically from the start." Challenges mentioned included workloads due to staffing patterns, explicit or implicit pressures to manipulate the data, the cost of collecting data, and lack of harmonization of data demands and data sources.

2. How to keep the amount of data collected and analyzed as part of M and E feasible but effective

This theme was discussed by a number of participants as an ongoing challenge. One discussant reflected on lessons learned from the smallpox eradication efforts, referencing DA Henderson: "A word of caution, however, should be said about goals. We endeavored to keep the number to not more than five operational ones. Obviously, there were hundreds of possible measurements of progress that could have been requested and compiled. Our experience, however, was that when the number got beyond four or five, key staff became so involved in submitting and compiling data that few used the data for the purpose for which it was intended - in monitoring the strengths and weaknesses in program implementation." Innovations mentioned in past posts, including the use of LQAS, were also noted, as well as the older adage "KISS" (Keep It Simple) and reminders of the importance of engaging the community.

3. The importance of using data to improve care and services

The need to ensure that data collected are also used was emphasized: "to improve care, you have to measure it". However, one respondent cautioned about potential pitfalls, especially with target-oriented approaches: "The target-oriented approach adopted by many program managers in implementing programs, and performance assessments based on whether those targets are achieved or not, lead to manipulation of data at the field level and affect the monitoring and evaluation process."

4. What are the role and challenges of eHealth and other electronic data capture initiatives in reducing the cost and improving the quality and use of M and E data?

Discussions ranged from the use of paper-based systems to feed summary data into an HMIS system in Tanzania, to other models which rely only on electronic approaches, along with some concerns about the infrastructure needed to accomplish this goal. Other interesting approaches for data capture in more emergency-based settings included the use of scanners. The potential role of M and E in identifying new and promising practices and models developed in the field was also discussed, and we hope participants will share examples of the use of monitoring and evaluation being "applied to building evidence for strategies developed in practice."


We look forward to hearing your thoughts on these important topics, as well as new issues which are important or which arise from other postings during the Expert Panel this week. Please continue sharing your experiences, questions, and possible or implemented solutions.

Lisa Hirschhorn
Replied at 11:30 AM, 2 Apr 2012

Here is a brief summary of some of our thoughts as a panel on the issues raised in the first question of this discussion. I hope members and my fellow panelists will continue to offer suggestions and ideas on this topic:

1. How do we balance the need for evaluation (“Did it work?”) with the need for monitoring and using the data to improve programs in real-time to ensure the question, “How did it work?” is also addressed?

This raises the questions already articulated by discussants about the need to limit what we measure, to ensure that the data are of high quality and are used, versus the detail needed to evaluate the impact of the program. We have been viewing these as related but not 100% overlapping activities, and we start by thinking strategically about what a program manager needs weekly/monthly/quarterly to run and improve the program, what can be used, and what is needed in addition to measure whether the longer-term aims have been achieved. Critical here is a model of integrated monitoring and QI. We also discuss what level of 'proof' we want, recognizing that we cannot afford probability, or even plausibility, designs for every activity, but do need some measure of whether we did what we said we were going to do and whether goals were met. Then mapping out the data needed and estimating resources leads to a realistic plan in terms of what we can actually do effectively to support the program's implementation.

Capturing the "how did it work" requires more of a qualitative approach as well, and we are interested in participants' experience in doing this without "drowning in the data."

Kenny Sherr
Replied at 12:00 PM, 2 Apr 2012

Along with Lisa's response, I wanted to add a couple of thoughts on the first question for the panel:

1. On balancing evaluation and monitoring.

This is a perennial question for those implementing health programs (either in the domain of HSS or more widely). In my experience I’ve seen much more of an emphasis on the ‘M’ rather than the ‘E’, though I have noted a shift towards (or at least calls for) putting in place more rigorous evaluations. There are a number of reasons for the emphasis on program monitoring that I’d like to note. One is that the data are generally more available through routine information systems, which better fits with the program monitoring approach, at less cost in a more timely fashion. Second, many organizations are working on short project timelines with limited funding for costly evaluations or time to collect, analyze, and present data on appropriate outcomes that stretch well beyond the project horizons. A third reason is that rigorous evaluations are difficult to do, with numerous methodological challenges that may lead to flat results (an anathema to donors and implementing agencies), or for which there may be limited capacity to conduct. Finally, many funding initiatives do not require rigorous evaluations and instead put much more emphasis on the monitoring function. As a result, in-house capacity is built for monitoring and not evaluation, and the imbalance perpetuates.

Because so much of institutional structure and project design is driven by funding, striking a balance between monitoring and evaluation requires a structural response. If donors and governments required a strong evaluation component built into the initial design of activities they fund, if they set aside sufficient funds for evaluation, and if projects were funded over a sufficient timeline to enable the interventions to have the impact on outcomes measured as part of the evaluation approach, then there would be a better balance between monitoring and evaluation. If we follow the current structure, the focus will continue to be on collecting, collating and sending up routine program data (from largely disease specific areas), and institutional capacity will be built in this area (at the expense of capacity to evaluate impact).

Because this discussion focuses on M&E for health systems strengthening, I think it's also worth highlighting the tension between programs and projects. Funding channeled through Ministries of Health focuses more on building and sustaining programs that are often national in scope, while funding channeled through NGOs often focuses on projects. There is a need for M&E independent of the project/program divide; however, the content and approach for M&E will differ depending on the aims of the program or project. Why is this important? Mostly because the opportunities for striking a balance between M and E are different for program vs. project mentalities. Projects are often set up and dismantled around specific funding opportunities, and I'd say more often than not projects have their own sub-structures (which means dedicated leadership, program and M&E staff). Programs share technical assets independent of funding source. The efficiencies gained through a program approach would also have an impact on the M&E balance.

Finally, when thinking about M&E (particularly the ‘E’) for HSS, it is worthwhile to consider that what we measure will have an impact on the activities we carry out. So, if we’re assessing impact on health outcomes, the type of outcomes we measure will have an impact on the HSS approach (which is often what we want to avoid). If we’re interested in evaluating the impact on child survival, then the HSS interventions naturally start to focus on supply chains, HR, HIS, etc that are related to interventions likely to have an impact on child survival. If the focus of the evaluation will be on HIV/AIDS-related mortality, then the HSS interventions will start to naturally shift towards supply chain systems, data collection systems, etc in related areas.

Paulin Basinga
Replied at 12:42 PM, 2 Apr 2012

I'm Paulin Basinga, an impact evaluation and operational research practitioner. I'm now with the Gates Foundation, in the Efficiency and Effectiveness unit of the HIV division. For the past 10 years I have been teaching at the National University of Rwanda School of Public Health. The views that I will be providing as a panel expert are my own and do not necessarily reflect the views of the institutions I'm affiliated with.
I'm more than happy to join this panel this week and look forward to a fruitful discussion.
In terms of this first question, I think it's also important to note that not all programs will need an impact evaluation ("Did it work? What was the impact of the program on a particular outcome of interest?").
But all programs implemented will need an M&E plan to track what is happening within the program and to use the real-time data to inform program implementation and day-to-day management.
Also, when an impact evaluation of a new or existing program is planned, a detailed M&E plan will be nested within the impact evaluation plan to inform on how the program is being implemented. The interpretation of the impact evaluation will rely on the M&E data.
A rigorous impact evaluation is very costly and requires time and expertise. It is important to decide when it is appropriate to implement an impact evaluation.
Paul Gertler and colleagues, in a book entitled "Impact Evaluation in Practice" (see reference below), propose key questions to consider before undertaking an impact evaluation:
1. What are the stakes of the program being implemented? The answer to that question will depend on both the budget that is involved and the number of people who are, or will eventually be, affected by the program. So if a program does not require a very large budget and does not serve many people, it is not worth implementing a rigorous impact evaluation; regular monitoring of the program will be sufficient.
2. If the stakes of the program are high, the second question is whether any evidence exists to show that the program works in similar circumstances. If no evidence exists, it is worth considering an impact evaluation.
3. But if the evidence exists, an impact evaluation may be justified only if it can address an important and new policy question.
The authors list key characteristics of a program that should be evaluated: innovative, replicable, strategically relevant, untested and influential.
Another use of impact evaluation is when one is interested in the effectiveness of an intervention at the population level compared to no program. A good example of this is the impact evaluations of Voluntary Medical Male Circumcision (VMMC) implemented in Uganda, Kenya and South Africa late last decade. Those evaluations provided evidence on the impact of MC in reducing the transmission of HIV, and now MC is being promoted and scaled up in many high HIV prevalence countries in Africa. No further impact evaluation is required for male circumcision, but instead good monitoring to make sure that the programs are being implemented as planned.

Reference: Paul Gertler and colleagues, "Impact Evaluation in Practice"; the book is available on Amazon and as an interactive textbook at http://siteresources.worldbank.org/EXTHDOFFICE/Resources/54857261295455628620...

Marco Gomes
Replied at 2:38 PM, 2 Apr 2012

At the Centre for Health Policy and Innovation we have adopted an approach in which we view health systems as a means, developed by societies, to help achieve ends such as those mentioned by various other respondents. Health systems can be a vehicle for accelerating progress on health-related goals, but they can also be a source of constraints, impeding progress. Health system performance can be thought of as the results produced by health systems - the ends societies seek to achieve.

A stream of work relevant to health systems analysis places more emphasis on performance measurement and assessment and the use of statistical models to associate health system characteristics with performance. In its 2000 World Health Report, “Health Systems: Improving Performance” (WHO 2000), WHO proposed a conceptual framework with three broad health system objectives: health status, responsiveness to people’s nonmedical expectations, and fair financial contribution. It also developed specific metrics to measure country performance on these outcomes and ranked countries individually on their relative performance for each outcome and calculated a composite index of all the outcomes. The report also addressed four major health system functions—stewardship, resource creation, service delivery, and financing—and critically reviewed available evidence regarding policies in these areas and their implications for health system performance.

In a subsequent more detailed report, Health Systems Performance Assessment: Debates, Methods and Empiricism (WHO 2003), WHO elaborated this approach to health systems performance assessment by detailing methods for quantifying the inputs used in health systems; using specific indicators for assessing performance for the four functions of health systems; applying metrics for quantifying the three goals of health systems (health, responsiveness, and fairness in financial contribution); and detailing ways of deriving aggregate measures of health system performance. WHO also attempted to establish causality between policy interventions and the resulting outcomes in the area of health financing.

Best regards,

Marco Gomes

Pierre Barker
Replied at 3:36 PM, 2 Apr 2012

Lisa Hirschhorn makes a great set of points. The core issue here is: let's be clear about the difference between monitoring and evaluation, and understand whether a primary intention of the project is to improve data systems.

Unless we are primarily setting out to improve a data system (e.g. introducing a new technology), or we are in complete control of the data systems associated with the project, most of us are trying to grapple with the question... "can we use the current real life data system where our project is located to accurately monitor the project?" Unless you are using a completely parallel data collection system, most health system implementation projects will need a reliable local data system to be able to monitor progress, and often much attention needs to be directed to monitoring and evaluating the local data systems themselves before they can be used to monitor the effects of the project intervention. Improving data systems can and should be part of the intervention to strengthen the health system.

For evaluation, we can use a combination of trustworthy local data systems, as well as external data collection systems to answer Lisa's question "did it work".

Either way - investment in local data systems is crucial. Typically, unless you are specifically testing a new, scalable data system, projects will not be in a position (and should avoid the temptation) to introduce new or parallel data systems, since new systems will likely not be compatible with the rest of the health system's data structure, and will be difficult to sustain after the life of the project. Also, there will be a missed opportunity to strengthen the existing system.

While exciting innovations are being tested for creative uses of existing technologies (e.g. cell phones), introduction of new technologies is risky unless you can show that the system is scalable within the resource and technology constraints and competencies of the environment. Much can be learned from the few examples (e.g. the Baobab system in Malawi, http://baobabhealth.org/) where very simple, locally maintainable systems have been deployed in low-resource settings.

Pierre Barker

Balsama Andriantseheno
Replied at 3:48 PM, 2 Apr 2012

Dear panelists,
Thanks for these first thoughts on this first question!
When we talk about balancing the need for evaluation ("Did it work?") with the need for monitoring, we should not forget that an evaluation should come at the right time, when it is needed! If it comes during the course of the program/project, it should come at a time when managers need to know whether the program/project is effectively going in the right direction and whether the expected results can be foreseen on the horizon; if not, the evaluation should tell them what to correct or adjust! So balancing the need for evaluation with the need for monitoring (despite the data availability discussions) also involves good implementation programming from the start, as managers should know what the main data they will need for an upcoming evaluation could be, so that their monitoring plan prepares them for it! And when the evaluation comes at the end of a program/project, it is clearer for managers what they needed to have monitored! And I don't want to get into the data criteria yet!

Bali ANDRIANTSEHENO
M&E Specialist
African Strategies for Health (ASH)

Paulin Basinga
Replied at 11:23 AM, 3 Apr 2012

Following Balsama Andriantseheno's comment, I couldn't agree more that it is important to think about the evaluation of a program at the planning stage. Monitoring of any program should be planned prior to program implementation, with well-defined indicators for all inputs going into the program, the processes, the outputs of the program and the outcomes. Monitoring is a continuous process that tracks what is happening within a program and provides real-time data, and, as Balsama said, the results will be used to inform program implementation and day-to-day management and decisions.
The evaluation component will be implemented periodically to answer specific questions related to program implementation and results. The evaluation will be periodic (starting with a baseline before the program starts, for example) and should use an objective approach to respond to well-defined questions.
Impact evaluation is one form of evaluation which seeks to answer the question "does this particular program cause the change in outcomes?". The most important challenge in implementing a rigorous impact evaluation is to define a situation which allows a comparison: "what would have happened if the program had not been implemented?" Hence the importance of defining the methods to be used if one wants to go for an impact evaluation.
Most of the time, evaluation results, even if not rigorous, will be very valuable for the program.
But impact evaluation, when well implemented, provides strong and credible evidence that a particular program works. This is very important when a Minister of Health is negotiating a budget with the Minister of Finance (fiscal space negotiation) or with other donors to ensure the sustainability of a program.
The results also inform others who are interested in implementing the same program.

Lorna McPherson
Replied at 12:10 PM, 3 Apr 2012

Firstly, I approach M & E like a scientific experiment. If I am testing the effect of, say, 'x' on the growth of 'y', then BEFORE starting the feeding process, in addition to having all of my inputs in place and knowing all of the outputs, I must also know what data I am collecting and how often. How often do I collect food consumption data? How long do I need to collect the data? How often do I collect weight data? I also need to develop my tables and decide what analyses I will do. Data collection is, after all, related to the objective of the exercise.

Secondly, not all of the data have to be analysed and reported on by us, since it depends on what we want to find out. For example, although I agree that parents should have all of their children's data on their MCH cards and that those data should be analysed and discussed with them, might it not be equally useful to periodically draw and analyse a random sample to note trends? Also, does the Health Visitor at the health center have to do it herself? Might it not be worth having a special person (who, like myself, loves analysing data) focus on this for a group of health centers or a district? Just a few thoughts.

Lisa Hirschhorn
Replied at 12:44 PM, 3 Apr 2012

SUMMARY FROM DAY 1 (and apologies for tardiness!) and some questions for discussion members
Some themes arose from day 1:
1. The challenge of balancing Monitoring and Evaluation
Discussants agreed that monitoring, and particularly evaluation, are important in HSS, but also raised important points about the relative data burden and costs of evaluation versus monitoring. As Kenny Sherr wrote, "data are generally more available through routine information systems, which better fits with the program monitoring approach, at less cost in a more timely fashion… many organizations are working on short project timelines with limited funding for costly evaluations or time to collect, analyze, and present data on appropriate outcomes that stretch well beyond the project horizons... (and) rigorous evaluations are difficult to do". An important distinction was raised between programs (integrated through the MOH, national in scope and longer term) and projects, which are more NGO-based and shorter term, and how that can drive the M and E balance and the resources needed. Programs may have less ability to individualize the M and E, but more efficiencies due to scale and integration into a national structure. The budgets and timelines of projects often dictate a greater emphasis on monitoring rather than on more rigorous evaluation (and particularly impact evaluation).
The importance of funders also recognizing the cost of required (or valuable, even if not required!) evaluation was raised, as the resources needed to plan and implement an evaluation are generally greater than for routine, shorter-term monitoring.
Echoing comments from the pre-panel postings, a warning was again raised about how evaluation and measurement may influence where the HSS focus falls: "if we're assessing impact on health outcomes, the type of outcomes we measure will have an impact on the HSS approach (which is often what we want to avoid)".
2. The role of monitoring and the role of evaluation
Many discussants agreed that there was a role for both, but not all activities warranted the level of in-depth data analysis required for extensive (and particularly impact) evaluation. Dr Basinga wrote, "I think it's also important to note that not all programs will need an impact evaluation... A rigorous impact evaluation is very costly and requires time and expertise. It is important to decide when it is appropriate to implement an impact evaluation. What was the impact of the program on a particular outcome of interest?", and provided an excellent resource for further information and guidance on how to reach this decision. Factors included how high the "stakes" of the program are, key characteristics of a program (including whether it is innovative, replicable, strategically relevant, untested and influential), and the need to understand population impact. Another discussant highlighted the need to understand why we do the M and the E, and the timing: "So balancing the need for evaluation with the need for monitoring (despite the data availability discussions) also involves good implementation programming from the start, as managers should know what the main data they will need for an upcoming evaluation could be, so that their monitoring plan prepares them for it! And when the evaluation comes at the end of a program/project, it is clearer for managers what they needed to have monitored!"
3. How can we measure more with less?
Dr Barker focused on how we can use existing data, and the relative emphasis on introducing new data systems versus utilizing what is available: "Unless we are primarily setting out to improve a data system (e.g. introducing a new technology), or we are in complete control of the data systems associated with the project, most of us are trying to grapple with the question... 'can we use the current real life data system where our project is located to accurately monitor the project?'... most health system implementation projects will need a reliable local data system to be able to monitor progress, and often much attention needs to be directed to monitoring and evaluating the local data systems themselves before they can be used to monitor the effects of the project intervention. Improving data systems can and should be part of the intervention to strengthen the health system.

For evaluation, we can use a combination of trustworthy local data systems, as well as external data collection systems to answer Lisa's question 'did it work'."
4. What do we measure in HSS?
While M and E OF HSS was not an explicit goal of this discussion, it remains important to keep in mind as we discuss the role of M and E in strengthening these systems. One discussant described health systems as a "vehicle for accelerating progress on health-related goals, but they can also be a source of constraints, impeding progress"; therefore choosing what to measure (performance) should focus on "the results produced by health systems - the ends societies seek to achieve." Work by the WHO on health systems analysis "places more emphasis on performance measurement and assessment and the use of statistical models to associate health system characteristics with performance". Areas of focus included stewardship, resource creation, service delivery, and financing.
Focusing on these issues, I would ask discussants to share experiences related to the following:
1. How they have managed "doing more with what we have" in terms of strengthening data systems to use for monitoring and to contribute to evaluation
2. How they have negotiated the balance between the M and the E
3. How they have been able to ensure that evaluation benefits the health system-related program (or project) and the community served, as well as the broader audience

Morgan Michael Bailey
Replied at 1:17 PM, 3 Apr 2012

One of the primary difficulties in M & E is the lack of sincere system-wide support. Design, implementation, data collection, and evaluation need support at all tiers of involvement, from those who develop policy to those who collect data. I am sure all of us agree on the crucial need for M & E and can attest, to some degree, to the lack of general support/funding for this area. Balancing the need for success with impartial evaluation is often quite challenging.

This aside, the elements needed to carry out effective M & E exist and, if acting in concert with one another, could provide extremely useful, targeted information. Touching on and expanding on some of the earlier posts, I can see several necessary components:
• System-wide (including possible donors) support for impartial evaluation
• Defining realistic, measurable goals, starting at the program planning stage, with evidence-based metrics.
• Dynamic M & E design, capable of buffering system-wide changes. Such designs should be tailored to some degree to location, as no blanket design exists; however, metrics should be comparable on a larger scale to provide a broader view of efficacy across projects.
• Integration of cleverly designed technology that not only collects relevant data, but also incorporates the understanding that individuals only use technology that is relevant to them. This is evidenced by how pervasive cell phones now are (you need to do very little to justify why using a cell phone is beneficial). We need to find similar ways to justify M & E and the integration of technology for its purpose.
• Piggybacking off data already being collected and use of passive data collection
• Building momentum through capacity building and demonstrating the true need for, and benefits of, M & E.
• Communication (such as this very panel). There are many disjointed efforts to address the same concerns, and overlap between organizations is often overlooked. Communication among policy makers, donors, academics, health workers and so on is needed to move forward efficiently.

Lisa Hirschhorn
Replied at 1:19 PM, 3 Apr 2012

To follow up on the excellent points from Lorna McPherson, we have been working across the PIH sites to try to balance the data we COULD collect (or already have) with what we should focus on, to ensure data quality and to support use through a robust feedback loop, one which presents enough data but not so much that the measuring becomes the end rather than the means. We have been holding multidisciplinary meetings which include M and E, the end users, as well as HMIS when electronic data are included. These often start with a simple logic model and, from that, an indicator list, as well as drafting mock reports so people can see what the data might look like (monthly, quarterly, etc.). One additional approach has been to start with what we suspect may be too much, but test it for a month or two, see which indicators are actually used in program monitoring and improvement (either to improve from a gap or to celebrate as a success) and, equally important, how much M and E resource it takes (to collect, clean, analyze, and put into usable formats such as dashboards, reports, graphs, etc.). I would be interested to hear how others manage this negotiation...

Lorna McPherson
Replied at 2:44 PM, 3 Apr 2012

I think that what we need is "M & E for Dummies", since M & E and its importance are not clearly understood at the level of front-line workers. At every level of the health system we need to make it clear what data are collected, why, and to be used by whom.

Balsama Andriantseheno
Replied at 3:28 PM, 3 Apr 2012

We can all feel that you are a strong advocate of impact evaluation, Paulin Basinga, and so am I! But allow me to say this: I have encountered many people who don't much appreciate impact evaluation because, to them, it's another tool to sell "RCTs"! My question is this: is there any method other than an RCT to conduct a rigorous IE? The main problem, of course, is when you have to deal with too many qualitative indicators in your project monitoring plan!

Bali ANDRIANTSEHENO

Balsama Andriantseheno
Replied at 4:17 PM, 3 Apr 2012

I tend to agree with what Lorna McPherson says below, but let's not forget that we should not stop at "what data is collected, why and to be used by whom". We also need the end data users (analysts? decision makers?) to send feedback to the data providers to tell them how effective and useful the data they produced were! If that dialogue between the two parties is not there, the data-providing side will always feel like they are working for nothing, not knowing if they did right or not! Only that kind of dialogue ensures that the right data (in quantity and quality) are collected and fed into the M&E system!

Bali ANDRIANTSEHENO

Paulin Basinga
Replied at 9:06 PM, 3 Apr 2012

Thanks for the exchange, Bali; you make a great point. While I strongly support the use of randomized controlled trials (RCTs) when the objective of the exercise is to determine the causal relationship between a program and an outcome, I also recommend using an RCT only when the randomization will not jeopardize the good implementation of the program. It is advisable to randomize when the implementation and research teams both agree that randomization will be the most efficient way to ensure fair and transparent assignment of program benefits, especially when one is planning a national scale-up of a program and is wondering which districts/provinces/regions will go first and which will go after. The randomization can provide a rationale for phasing the program while providing the opportunity to learn later.
But coming back to your question about methods other than RCTs that can be used to assess the impact of a program: there are of course other methods than RCTs that can be used for program evaluations. The two most commonly used are the before-and-after comparison and the comparison of those who enrolled in the program with those who did not. But both methods, while informative, have different biases that jeopardize causal inference.
The good news is that many researchers are now looking at ways of using existing monitoring data and other available population-level data (DHS, MICS, ...) to mimic randomization. You can find a good description of one of these methods (the platform approach) here: Victora CG et al. Measuring impact in the Millennium Development Goal era and beyond: a new approach to large-scale effectiveness evaluations. Lancet. 2011 Jan 1;377(9759):85-95. Epub 2010 Jul 9.
I will be happy to share with you our experience in trying to work with routinely collected data for program evaluation.
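As a minimal, hypothetical illustration of the trade-offs Paulin describes (a sketch in Python with invented facility data and column names, not drawn from any of the programs discussed), the snippet below computes the two naive comparisons and, as a labelled extension beyond what the post proposes, a difference-in-differences estimate that combines them:

    # Illustrative sketch only: naive "before/after" and "enrolled vs. not enrolled"
    # effect estimates from routine monitoring data. All data and column names are
    # hypothetical; both naive estimates carry the biases noted above.
    import pandas as pd

    df = pd.DataFrame({
        "facility": ["A", "A", "B", "B", "C", "C", "D", "D"],
        "period":   ["before", "after"] * 4,
        "enrolled": [1, 1, 1, 1, 0, 0, 0, 0],
        "outcome":  [0.42, 0.58, 0.38, 0.55, 0.40, 0.46, 0.37, 0.44],
    })

    # 1) Before/after among enrolled facilities (confounded by secular trends)
    enr = df[df["enrolled"] == 1]
    before_after = (enr.loc[enr["period"] == "after", "outcome"].mean()
                    - enr.loc[enr["period"] == "before", "outcome"].mean())

    # 2) Enrolled vs. not enrolled, after period only (confounded by selection)
    aft = df[df["period"] == "after"]
    enrolled_vs_not = (aft.loc[aft["enrolled"] == 1, "outcome"].mean()
                       - aft.loc[aft["enrolled"] == 0, "outcome"].mean())

    # 3) Difference-in-differences: subtract the comparison group's own
    #    before/after change, removing bias that is constant over time.
    ctl = df[df["enrolled"] == 0]
    did = before_after - (ctl.loc[ctl["period"] == "after", "outcome"].mean()
                          - ctl.loc[ctl["period"] == "before", "outcome"].mean())

    print(before_after, enrolled_vs_not, did)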

Lorna McPherson
Replied at 9:19 PM, 3 Apr 2012

And to add to what Balsama says, M & E must be used to inform a new round of planning.

Morgan Michael Bailey
Replied at 1:51 AM, 4 Apr 2012

Stepping back from the nuts and bolts of M & E for a moment, I was wondering if anyone could shed some light on their experiences defining the initial goal(s) under evaluation. Specifically:

(1) How dynamic are the goals themselves and how do they encompass both health and social aspects of the intervention/project?

(2) Do the goals originate purely from a health perspective and how often is M & E considered in this planning phase?

(3) To what extent are the end recipients considered in terms of what they might define as a successful intervention/project? This may be more along the lines of qualitative information, but it is nevertheless useful in understanding the subtle nuances and impacts at an individual level. In terms of data collection, if metrics initially defined by the target individuals or community are included, there might be greater momentum for M & E among those who may not understand the relevance of the hard data. Though I feel it should be noted that target communities or individuals often have trouble defining the success of a solution to a problem they may not see as existing.

Sandhya Ahuja
Replied at 2:21 AM, 4 Apr 2012

Approach to Evaluation of HMIS:
a. Understand the objectives of HMIS evaluation
b. Understand the approaches to HMIS evaluation
c. Understand what HMIS applications are evaluated for
d. Using evaluation to improve HMIS design
What do we evaluate HMIS for?
HMIS need to be evaluated to determine whether they serve the larger purpose of improving programme outcomes and health impacts. Such evaluation helps improve their design and functioning.
The contribution of HMIS to the overall health impact can be measured by four questions about HMIS outputs:
• Is the information that the system captures the most relevant?
• Is the information made available reliable in terms of quality (completeness, consistency, accuracy) and timeliness?
• Is the information user-friendly enough to support action (ease of access, ease of interpretation for the programme manager)?
• Is there the capacity to act on the information provided?

Kenny Sherr
Replied at 11:45 AM, 4 Apr 2012

How can we build internal capacity and ownership for carrying out M&E activities?
This is a fundamental question that we need to be considering at all stages of our work, and I think there are probably a few key considerations that can lead to more effective, sustainable and locally-driven approaches (including M&E). I’d like to consider this question in two parts—the first focusing on capacity building, and the second on building local ownership.

Effective capacity building starts with effective training approaches, and in the area of M&E I’ve found that many technical health cadres leave pre-service training with little understanding of data and surveillance systems. This includes the nuts and bolts (such as which registries and reports are the responsibility of front-line health workers and district managers), as well as how to use M&E to support decision making (problem identification, solution generation and testing). Placing M&E competencies into pre-service training is not easy, both because of competition with other competencies and because health training center faculty often have limited capacity in M&E. So, much more work needs to be done in this area, and it would be interesting to hear whether participants in this forum can identify best practices for improving M&E training efforts at the pre-service level.

Realizing that training alone does not build sustainable capacity, approaches need to be considered to ensure that skills in M&E continue to be developed post-training. This is an area that we at Health Alliance International have been working on in Mozambique, focusing on provincial, district and facility-level capacity to strengthen data systems and decision making linked with routinely available data. To strengthen data quality, we have been coupling annual data quality assessments across a sample of facilities with a focus on data quality through routine supervisions and ongoing cross-checking of data to identify and resolve possible data errors. One lesson learned from our work so far is that it is essential to focus on a diverse range of personnel: to engage both M&E personnel and health systems managers (in both overall management and management of technical areas such as MCH, malaria, EPI, etc.). By engaging all staff in assessing and improving data quality, and creating routine habits and norms for a wide group of health workers from different levels of the health system, the responsibility to strengthen data quality is widely shared and sustained. A second lesson learned has been that ongoing mentoring is essential for building skills post-training.
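As a minimal sketch of the kind of routine cross-check described above (assuming hypothetical facilities, an invented indicator, and an arbitrary 10% tolerance; this is an illustration, not HAI's actual procedure):

    # Illustrative sketch only: compare counts re-tallied from facility registers
    # against the figures reported upward, and flag discrepancies for follow-up
    # during supervision visits. Data and threshold are hypothetical.
    import pandas as pd

    checks = pd.DataFrame({
        "facility":       ["A", "B", "C", "D"],
        "indicator":      ["ANC first visits"] * 4,
        "register_count": [112, 86, 140, 95],
        "reported_count": [112, 97, 139, 70],
    })

    checks["verification_ratio"] = checks["register_count"] / checks["reported_count"]
    TOLERANCE = 0.10  # assumed tolerance; flag anything more than 10% off
    checks["flag"] = (checks["verification_ratio"] - 1).abs() > TOLERANCE

    print(checks.loc[checks["flag"], ["facility", "indicator", "verification_ratio"]])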

Linked with building skills to improve data quality is building capacity for using data for decision making. There are a couple of points that I would like to make here. The first is that the caveats described above for data quality capacity building also apply to ensuring that M&E efforts link with decision making. Working across a diverse range of health workers, and building relationships of trust through ongoing mentoring, are effective approaches for building competencies. A second strategy to link data with decision making is to develop and apply simple tools that assist in problem identification, solution generation and testing solutions on an iterative basis. For example, data dashboards and continuous quality improvement are both approaches that help to disaggregate data to facility/district levels and look at secular trends: two key strategies to take the pulse of efforts (i.e., identify where there are problems or where things are going well), and then launch into the process of going beyond the data to understand the why and the what next. A question for the group: can you speak to experiences (successful and less so) with linking M&E to change processes? It would also be good to hear about experiences with tools that can facilitate this data-to-decision-making linkage.
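For the dashboard point, a minimal sketch of disaggregating a routine indicator by district and month to expose secular trends (hypothetical data and column names; a real dashboard would sit on top of the HMIS rather than hand-entered values):

    # Illustrative sketch only: pivot a routine indicator by district and month
    # so that a sudden change in volume or rate prompts the "why" and "what next".
    import pandas as pd

    rows = pd.DataFrame({
        "district": ["North"] * 3 + ["South"] * 3,
        "month":    ["2012-01", "2012-02", "2012-03"] * 2,
        "tested":   [420, 455, 470, 510, 480, 300],
        "positive": [38, 40, 41, 51, 47, 29],
    })
    rows["positivity"] = rows["positive"] / rows["tested"]

    trend = rows.pivot(index="month", columns="district", values="positivity").round(3)
    print(trend)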

Now on to building ownership. Many of us work within broader health systems as technical support rather than implementing agencies, and there are strategies that can help build ownership, and therefore effective and sustainable M&E activities. First, it is essential not to work in isolation, but as much as possible to integrate into the broader system. For example, our staff in Mozambique (with Health Alliance International) are based in the provincial health directorate offices, rather than a separate project office. By working side by side with health managers, the ‘project staff’ are integrated into the provincial apparatus, and by being part of the same team their activities become shared with health system managers. It may take more time to implement activities compared with approaches working more peripherally to the health system, but ultimately activities will achieve broader scale, and with more likely sustainability, than if done in relative isolation. A second approach is to think big. Health systems are big (and often unwieldy), and it’s fine to begin small. But having the goal of reaching scale, and building this into the design of activities from the beginning, will facilitate ownership by the health system and ultimately help achieve scale. Finally, think systemically. To build ownership in M&E we need to work across multiple program areas and functional units of the health system. Focusing on M&E for one disease-specific area, for example, will not build up the entire M&E system, and as a result will lead to fragmented capacity (with potentially distortive and less sustainable results). There is a balance to achieve here, and this balance requires compromise (such as not including all the indicators that may be seen as essential to some, but not to everyone), but the end product will be more sustainable. Depending on the structure of the health system, ensuring ownership may include engaging potentially non-traditional units (such as planning and cooperation, which in the case of Mozambique is where the HIS is based), but working with a broader group of stakeholders is essential.

Sandra Irbe
Replied at 12:56 PM, 4 Apr 2012

I would like to echo Kenny's points. From the perspective of the Global Fund, as a donor agency coming with its own Monitoring and Evaluation Toolkit and requiring that grant-related indicators be monitored as part of the national M&E effort, I would like to point out the following: 1) there is still limited understanding or ownership of meaningful M&E among decision makers; 2) M&E efforts are often isolated and perceived as a science apart; and 3) donor agencies tend to build their own M&E frameworks that are not necessarily integrated with the mainstream HIS and health-related data analysis at the national level. For solutions to these issues, I fully agree with Kenny's points. We need to make M&E simple, understandable to all and analyzable for strategic decision making. Donor agencies have to work hand in hand with national entities and strengthen M&E efforts at the national level.

Kind regards

Sandra Irbe
Fund Portfolio Manager, Eastern Europe and Central Asia team, the Global Fund
(Message sent from a portable device)

Lisa Hirschhorn
Replied at 2:46 PM, 4 Apr 2012

So, as a follow-up, my request to participants is to share any examples where this simpler model has been effectively implemented. Are there tools which have been useful, and which you can share, in negotiating the "less is more" approach with stakeholders, in ensuring data quality, or in use?

Kenny Sherr
Replied at 5:01 PM, 4 Apr 2012

I’d like to briefly respond to Morgan and Sandhya’s posts.
Morgan raises questions about defining the goal(s) of evaluation. It would be interesting to hear from participants concerning the scope of evaluations, particularly the interplay between health system and non-health system determinants of health impact measures. I know that we struggle with this question, particularly in demonstrating improvements in health status when there are so many extra-health-system determinants of health status that we won’t affect through our health systems-focused interventions. Morgan also asked about engaging stakeholders in evaluation, and in my experience this is a common and necessary component of a comprehensive evaluation. I believe that there are limitations to what we can quantifiably understand, and a mixed-methods approach is essential for asking fundamentally important questions (such as how/why the intervention worked or did not work, and what the core components of an intervention are that can be generalized to other geographic or intervention areas).
Sandhya raises the point of why and how we should evaluate HMIS. Generally, HMIS is seen as a means to an end, and assessing and improving HMIS often focuses on how well these systems provide data and less so on the subsequent phase: how data are used and what can be done to improve data utilization. Sandhya notes that the capacity to use data, and how to package HMIS outputs to improve data utilization, are two questions to consider in evaluating HMIS.
I would like to follow up on these two points (how to package data to facilitate data utilization and how to build capacity for data utilization) and a question raised by K. Amico early on in this panel—‘how M&E can be used on the small scale – individual clinics versus networks of clinics’ and pose a question for the group. I’d be interested in hearing from participants about successful approaches to improve data consumption at different levels of the health system – facilities, networks of facilities, and administrative units (from district up to national). What has worked at small scale? What has worked at large scale? What worked at a pilot level (with intensive resource inputs) but failed to achieve scale? What was scaled up and sustained effectively?

Lisa Hirschhorn
Replied at 9:49 PM, 4 Apr 2012

Some key themes and some questions
Looking over the last two days, some themes come through, but, equally important, a number of questions remain.

1. The importance of strategic decisions about when to do impact evaluations, and when RCTs are needed (and, importantly, when they are not feasible)
2. Collected data can be used for different purposes: “not all data collected needs to be analyzed”, recognizing that the level of detail needed for delivering individual services does not mean all of it has to be analyzed for program M and E
3. Key components include defining realistic, measurable goals and metrics; utilization of existing data; capacity building; and ensuring understanding of the benefits of M and E
4. The need for tools and work to teach M and E to front-line workers (feasible and affordable); the role of pre-service training and post-service support
a. The HAI example included “coupling annual data quality assessments across a sample of facilities with a focus on data quality through routine supervisions and ongoing cross-checking of data to identify and resolve possible data errors” and the point that “it is essential to focus on a diverse range of personnel”
b. This needs to include building capacity for using data for decision making
5. The importance of building ownership (also linked with supporting data understanding and use), which includes the need to make M and E simple, understandable to all and analyzable for strategic decision making
6. Better dialogue between the data folks and front-line providers on how data can be more useful
7. The importance of HMIS as a means to M and E, but also the need to do M and E of HMIS

Lisa Hirschhorn
Replied at 9:50 PM, 4 Apr 2012

Some questions based on the last few days for which it would be great to hear from discussants include:
1. What are examples of successful design and implementation of simpler but effective M and E?
2. Are there tools which have been useful and which you can share in negotiating the "less is more" approach with stakeholders, in ensuring data quality or in use?
3. Are there programs which have been successful in doing similar work to HAI's in building capacity for data use for program improvement and building ownership?
4. How do you ensure that recipients of data are considered in terms of what they might define as a successful intervention/project?

Itete Karagire
Replied at 3:07 AM, 5 Apr 2012

We can also think about data demand and information use. We have to improve the use of data to strengthen our programs. We have many systems and sources of information (HMIS, RTT, DHS, ...). These systems often exist in countries with highly decentralized planning and service delivery structures, so we need to introduce data demand and information use for evidence-based decision making.
 


Itete KARAGIRE
C/O : CCM/Global Fund

Lambert Wesler
Replied at 7:05 AM, 5 Apr 2012

I'd like to provide a few comments on investment in data systems, data utilization, and how data can help improve the quality of care and services.
Pierre Barker made a critical point about investing in local data systems. It is a critical component of strengthening M&E. I have to acknowledge that the PEPFAR initiative in Haiti was a booster, encouraging three major partners, Partners In Health, ITECH and GHESKIO, to improve their data systems in order to provide reliable, accurate and meaningful information.
PEPFAR also spearheaded another initiative in which data are used to improve quality of care (HIVQUAL, now HEALTHQUAL).

Zanmi Lasante, the largest sister organization of Partners In Health, took on this initiative.
Prior to that, we used data mostly for program reports, as required by donors and the Ministry of Health.
In the past four years, with the HIVQUAL initiative (a QI methodology), the organization has shifted the paradigm toward a more practical approach: using the data generated from our activities for program evaluation, in order to address the gaps identified across the whole system for delivering quality care. Using a set of ten indicators to look at key factors that may facilitate or jeopardize our success in delivering standardized, quality HIV services to the people we serve has allowed us to show staff members at sites that we are paying attention to the overall performance of the HIV program and to provide both positive and negative feedback. This feedback is provided through phone calls, emails, site visits and monthly all-sites meetings.
This initiative has led staff toward a different way of handling the data they have to submit. Staff members understood that they need to pay attention not only to the quality of the data they are producing, but also to the quality of care that the data show they are delivering.
Other initiatives that leverage the use of data across the ZL sites are the PIH cross-site indicators and the QI activities being implemented, which further emphasize the importance of having reliable data to measure performance in different areas, in order to assess whether at ZL we have put in place systems that allow us to deliver the level of quality of care we aim to provide to people in need.
With all these new initiatives going on, we are able to take a closer look at our programs/projects and to identify areas of weakness that need to be addressed in order to strengthen and sustain our work.
With the QI activities we want to make staff members understand how important it is for them to measure/evaluate their work in order to improve the quality of services. The M&E activities are in place to support this effort.
The question remains: how do we maintain a high level of motivation in the face of additional program demands for data and other competing priorities?

Marie Connelly
Replied at 10:23 AM, 5 Apr 2012

I'd like to thank everyone for the fantastic contributions to the Expert Panel so far - we're thrilled to see such a rich discussion taking place here.

While we'll continue to address some of the key questions outlined at the start of this discussion, I wanted to invite everyone in the community to share their own questions for the panelists while we have them here for the next two days. Please let us know what challenges you have faced, or are facing, when it comes to integrating M&E in your projects. We're also eager to hear success stories and hope you'll share strategies that you've found particularly effective.

Thank you all again for joining us in this Expert Panel discussion - we look forward to hearing your questions!

Orhan Morina
Replied at 1:54 PM, 5 Apr 2012

I found the input and comments from all the participants very useful. So many relevant topics and experiences from the field were shared and discussed. I have an additional one to contribute, related to the importance of continuous monitoring and evaluation at the local facility level.
During almost a decade of involvement in managing and providing technical support to health programs in Africa, there were (and still are) chronic issues related to healthcare providers (and support staff) being overworked and not being able to complete their tasks. This was also seen within the M&E sector and affected data collection, analysis and use for decision making. One of the interventions to address this issue is to make sure that data are easily generated and analyzed at the local health facility level. It is also important to make sure that the same data feed into the higher levels of the system (district, regional and national).
To support easier data collection and processing, simple and user-friendly electronic tools that help capture core quality-of-care indicators at the local facility level were developed. These are tools that meet the basic facility requirements for monitoring, help with continuous evaluation and also address the ownership issue. Being able to easily enter the data and automatically see the results in visual form can help both M&E staff and managers notice trends and recommend changes during the actual program implementation phase. The tools can also provide for data consolidation and the provision of data to higher levels for relevant decision making and policy development.
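As a minimal sketch of the consolidation step (hypothetical facilities and indicator; the point is that the same facility-level entry serves local monitoring and the district and national roll-up without re-entry):

    # Illustrative sketch only: roll facility-level monthly entries up to
    # district and national totals. Structure and names are hypothetical.
    import pandas as pd

    entries = pd.DataFrame({
        "facility": ["Clinic 1", "Clinic 2", "Clinic 3", "Clinic 4"],
        "district": ["East", "East", "West", "West"],
        "month":    ["2012-03"] * 4,
        "deliveries_attended": [34, 41, 52, 28],
    })

    district_totals = entries.groupby(["district", "month"], as_index=False)["deliveries_attended"].sum()
    national_total  = entries.groupby("month", as_index=False)["deliveries_attended"].sum()
    print(district_totals)
    print(national_total)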

Dr. Orhan Morina, Senior Technical Advisor for HSS, Catholic Relief Services

Ron Hebert
Replied at 2:47 PM, 5 Apr 2012

Dr. Morina raises the very valid point that simple, user-friendly electronic tools that capture core quality-of-care indicators at the local facility level should be developed. This suggested approach has been in place in developed countries since the mid-1970s, and in a few developing countries, like Jamaica, since 1997. Data quality is far superior when data are edited at the point of patient contact, and the data can be instantly distributed to those with a need to know. The tool to use is called a PAS (Patient Administration System), and the e-approach costs much less than using manual paper-based systems. I covered this earlier in this forum (points #21 and #24), and the lower cost was confirmed by Osman Raza, of Pakistan, and now with the Global Fund (point #23). Hopefully all readers of this interesting forum will quickly move away from the archaic manual paper-based approach to data collection, for the betterment of health, and the economy, in their respective countries.

Morgan Michael Bailey
Replied at 6:46 PM, 5 Apr 2012

Ron has a great point. The amount and quality of digital data collected worldwide is increasing at an amazing rate. Even beyond the use of electronic devices by interviewers and health care personnel, the explosion of handheld mobile devices in developing countries is going to close the digital divide at a much faster rate than the health divide. In the coming years there will be rapidly increasing opportunities to use individuals' personal electronic devices (taking into account all privacy and ethical considerations) to help measure the utilization and effectiveness of health programs. I would be curious whether anyone has investigated this end of digital monitoring.

Balsama Andriantseheno
Replied at 6:48 PM, 5 Apr 2012

I would like to share a common situation with big donors' programs. For the last two decades, donors have tended to design big programs and split their implementation into components across more than one implementing agency. In the program design proposal a good program monitoring outline is written down and agreed upon! But when it comes to the implementing agencies, each has its own "M&E culture" and practice, which in most cases leads to as many M&E systems as implementing agencies! The problem is when the donor wants an overview of their achievements: in most cases it appears fragmented, with hardly any continuity or clear links between the pieces! What to do? We experienced this in a stocktaking exercise conducted for USAID in Madagascar, and the main recommendations we came up with were that: (1) the program M&E system has to be built in a top-down process FIRST, and not the opposite; and (2) there has to be a strong interactive process between the donor and the implementing agencies, and between the implementing agencies themselves, so that everybody is clear on the NECESSARY links between their respective objectives and the program's own overall goals and objectives.
Any thoughts and experiences on this subject?
Any thoughts and experiences on this subject?

Bali ANDRIANTSEHENO

Tony Roche
Replied at 8:28 PM, 5 Apr 2012

A big thanks to the panelists and contributors to the great discussion.
I was just wondering if anybody had experience with M&E in assessing safety and quality in LMIC surgical programs. As a collaborative capacity-building group in surgical and anesthesia care, we have struggled to find reliable, sustainable models for benchmarking and QI/QA in such settings. The goal is to implement effective tools to measure the most relevant and important surgical quality indicators in an ongoing manner, and to follow up by providing regular benchmarking data back to host sites in LMICs.

David Beran
Replied at 12:48 PM, 6 Apr 2012

Sorry for joining this excellent discussion so late. Very interesting and important points raised. Just wanted to highlight a few points as well as share my experience from a model I used in Mozambique to assess the health system (baseline), then implement specific projects (with consistent monitoring) and then an overall evaluation.

I just wanted to highlight the challenges of data collection in some settings and echo most of the comments already made. That said, so much routine data is collected at different levels of the health system that it should be considered how those data and processes can be used for M&E purposes. If specific data need to be collected for the purpose of a given project, it should be considered how they can be integrated into the routine data collection process without too much disruption to "business as usual" and, when possible, using the existing infrastructure.

Also, much data is often collected but not used in decision making or for any purpose other than collection itself. Data need to be used and their importance highlighted so people understand why they are being collected. In parallel, people need to be trained in data analysis and in using data to improve what they do. M&E should not be viewed as something external to the project, but as an integral part of it.

With regard to the data that are already being collected, it should be considered how they can be used as a baseline.

I could not agree more with Comment 54. With regard to Comment 62, I would like to share my experience in Mozambique, where I carried out an initial assessment of the health system (Ref 1) using a health system assessment tool (Ref 2). Following this, specific interventions were designed with consistent monitoring (Refs 3 and 4). At the end, an overall assessment using the same tool was carried out and identified areas of success and failure (Ref 5).

The lessons from this were that the initial assessment using a standardised tool allowed for an understanding of the problems, how they could be addressed, and a baseline. It also helped establish some indicators that could be used in M&E. Most importantly, this process of M&E allowed lessons to be learnt as the project was developed and implemented. Having clear and distinct indicators allowed for an assessment of progress in different areas and also showed where things were not as successful as expected.

Thanks to everyone for such an interesting discussion which I hope we can continue. Thanks to GHDOnline.

Attached resources:

Lisa Hirschhorn
Replied at 3:34 PM, 6 Apr 2012

Thanks, all, for the ongoing discussion. To respond to Tony Roche re: M&E in assessing safety and quality in LMIC surgical programs, and in particular relevant to "a collaborative capacity-building group": measuring capacity building is an area in which I think new and relevant indicators are still needed. We are working on trying to measure the impact of training and support using Kirkpatrick's model for training evaluation, but focused more on level 3 (are they doing what was taught) and even level 4 (are patients doing better), as well as from a systems perspective (if we train in a procedure, is it still being done, and being done well, in 6 months or a year?). In terms of specific performance measures, we have focused more on things we can measure simply as a routine (some from the surgical checklist), but I am interested in how other groups have done this. Observation? Surgical outcomes?

Maysa Alkhateeb
Replied at 4:04 PM, 6 Apr 2012

Thank you very much for this excellent discussion. A question similar to Roche's is how to do continuous M&E in a public primary health care setting, given that it's not mandatory for the employees to follow your instructions when you are a consultant to them from an NGO. As an NGO employee: what about the data we collect, who owns it, and to what extent can we use it?

Lisa Hirschhorn
Replied at 4:06 PM, 6 Apr 2012

Posting for Bethany Hedt: a reference on a more strategic approach to measuring data quality using LQAS, done with the Malawi national HIV program, which takes fewer resources than full monitoring and so could leave more time for improvement.
- http://www.who.int/bulletin/volumes/86/4/07-044685/en/index.html
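For readers unfamiliar with LQAS, a minimal sketch of the decision-rule idea (illustrative sample size, decision rule and thresholds only; these are not the parameters used in the Malawi study):

    # Illustrative sketch only: an LQAS-style rule for data accuracy. Sample n
    # records per facility ("lot"), count verification failures, and classify
    # the facility against a decision rule d. Parameters below are assumptions.
    from scipy.stats import binom

    n, d = 19, 3               # sample 19 records; accept if 3 or fewer fail verification
    upper, lower = 0.90, 0.70  # "good" accuracy vs. unacceptably low accuracy

    # Standard LQAS misclassification risks implied by (n, d):
    alpha = 1 - binom.cdf(d, n, 1 - upper)  # chance a 90%-accurate facility is wrongly rejected
    beta = binom.cdf(d, n, 1 - lower)       # chance a 70%-accurate facility is wrongly accepted
    print(f"alpha={alpha:.3f}, beta={beta:.3f}")

    failures_found = 5  # hypothetical count from the sampled records
    print("meets standard" if failures_found <= d else "needs follow-up")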

Kenny Sherr
Replied at 5:48 PM, 6 Apr 2012

I’d like to comment on Maysa’s questions about institutionalizing M&E in the primary healthcare setting from the perspective of an agency that does not have the implicit mandate to manage service in this setting (ie: as a NGO employee). Maysa points out that it’s not mandatory for health workers in a public sector health system to be responsive to a NGO that has funding to support M&E activities. While that may be true, I think most of us would argue that it is (or should be) mandatory for health workers at the health facility, management, research, policy (basically everyone) to pay attention to their data and use these data as a starting point for improving (and measuring improvements) in service delivery.

As part of a NGO working to support the creation of the habits around data consumption and utilization, I believe that what makes the difference between an effective and sustainable approach and one that is less effective (or effective but not sustainable) can often come down to the interface between the NGO and the apparatus they are supporting. NGOs that work humbly and in true collaboration with the public sector are going to have more success with building continuous M&E in that sector. What does it mean to ‘work collaboratively’? In my mind collaboration comes down to setting up a structure, mindset and approach that leads to shared goals and objectives. Recognize and try and minimize the resource disparities, show flexibility in meeting the needs of personnel in the public sector, jointly plan, and be transparent (to name a few).

Maysa also raised the question about data ownership and to what extent can data be used? If support for M&E activities is truly collaborative, then the question of data ownership and use becomes less relevant. Though the public sector will always ‘own’ the data, if the project is truly integrated into the primary health care system, then the decisions about ‘who’ can use data and ‘for what’ can be discussed and decided on jointly by the public sector and NGO management team. But the first step is to build up trust between the two groups, which takes time, dedication, and commitment.

Lambert Wesler
Replied at 6:13 PM, 6 Apr 2012

My comment is to echo Kenny Sherr's response to Maysa's questions about institutionalizing M&E in primary care. I would add that it is in the interest of NGOs working within and for the public sector to make strengthening M&E in the public sector part of their interventions. Oftentimes we tend to use data and information from public facilities for research, policy and so on.
Most if not all NGO programs aim to serve the public, and the target populations are the same. M&E should not be separated from clinical interventions, as data of good quality will help improve care and services for the public at large.
Partners In Health's experience in Haiti shows a good example of collaborative work and dual data ownership. All interventions are provided following the MOH strategic and operational plan. As a matter of fact, NGO employees, including M&E staff, work in a collaborative spirit to generate good-quality data for donor-focused programs and primary care as well.

I agree that partnership, trust and engagement are built at the planning phase of programs.

Ghulam Qader
Replied at 12:21 AM, 7 Apr 2012

I entirely agree with Morgan on the involvement of end beneficiaries in the evaluation process. This crucial component often seems to be missed during evaluation. However, the successful accomplishment of project gains is conditional on their engagement in the entire process, from design to implementation and M&E. This will help adapt the project to the local context and create ownership among end users. Thus, the project will be well implemented and ultimately achieve its objectives and impacts.

Attached resource:

shahidul haque
Replied at 12:33 AM, 7 Apr 2012

Dear moderator, I agree with this monitoring and evaluation process, but I am a little confused about evaluation: who listens to and respects it? If something is found not to be working at the grassroots level, it is recommended to be closed; but when something shows signs that it needs to be continued, for its sustainability or for a better understanding of the society, who listens, since the decision is not in the hands of the beneficiaries?

Who listens, and for whom is this being done?

Regarding monitoring of behavioral or attitudinal issues versus monitoring of products, what types of indicators will show whether we are on the right track and keep us moving in the right direction?

Finally, what are we looking for from all this, whose capacity are we trying to build, what impact do we expect to see in society, and how will it be judged, at the end of the day, by the people and society?

shahidul haque

Paulin Basinga
Replied at 6:11 PM, 8 Apr 2012

As we get to the end of this interesting debate on M&E, I would like to thank all the contributors to these very rich exchanges. I have learned a lot over the past days.
I would like to comment on a point widely discussed last week, one also raised by Shahidul Haque: capacity building. We all agree that health professionals who are at the forefront of implementing M&E activities should be the first to understand why these activities are important for their everyday work. Thus, capacity building that provides the knowledge and skills necessary to design an M&E plan, implement it, maintain it, ensure quality, and analyze the data from M&E is crucial. I also believe that we can ensure strong and sustainable M&E activities when health professionals are empowered to dig into their routine data, analyze the data in Excel (or any other statistical package), and detect gaps to be addressed by an adaptation of their program.
I would like to share some examples of capacity building implemented in Rwanda through collaboration between the Ministry of Health, the School of Public Health of the National University of Rwanda and different development partners and universities working in Rwanda.
The Ministry of Health has implemented the TRAC-net system, an electronic M&E system that has been capturing data from ART services (prevention services were later added), though only at the national level. With support from Voxiva, the SPH developed and implemented training sessions for M&E staff working with the system. Because most of the national-level M&E staff were pursuing their Master of Public Health at SPH, most of them acquired data analysis skills and used the data from their everyday work for their final-year dissertation. When the academic committee of SPH allowed students to do secondary analysis of any data sets they had been collecting as part of their work with partners or the Ministry of Health, we saw an increased interest of M&E officers in data analysis and focused follow-up of the implementation of the recommendations from their analyses.
A recent grant from the Doris Duke Foundation to the Ministry of Health, Partners In Health (Brigham and Women's Hospital, Harvard University) and the School of Public Health generated a great capacity-building opportunity. When the Minister of Health was negotiating the grant, a 5-year grant aiming to implement and evaluate comprehensive, innovative health system strengthening activities in two districts of Rwanda, she insisted that district health officers benefit from the grant through formal training. The NUR School of Public Health, with support from Harvard University, has been able to start formal Master of Science and PhD training for officers involved in the grant, using data generated by the project. This has strengthened the quality of the data and created a sense of ownership among the students involved in the project.
Any other example of collaboration between Ministry of Health, national and international NGO and local academic institutions in capacity building?

LORENZO DORR
Replied at 2:12 AM, 11 Jul 2012

Dear Marie;

Thank you very kindly for sharing this to generate discussion amongst community members. While my opinion may not provide all the answers, or the right answers, as would those specializing in M and E, kindly allow me to share some thoughts on the matter with the community.

M and E is core to the successful implementation of programs and projects to derive intended outcomes. According to Stephen R. Covey,
''People and their managers are working so hard to be sure things are done right that they hardly have time to decide if they are doing the right things.''
In this regard, therefore, to be able to balance whether things are done right and whether we are doing the right things, we must distinguish monitoring from evaluation, as the two terms, though similar in context, are often confusingly employed. Moreover, a few things must be considered.
1. What kind of evaluation do we want to carry out?
2. What do we want to evaluate? Is it the progress or implementation of the activities, the cost efficiency of the project, its effectiveness, impact, or the sustainability model/mechanism?

This is a very serious challenge for most projects and programs in resource-poor settings. Funding agencies are eager to see an elaborate M and E framework laid out in proposals, but more often than not there are no funds in the total budget to support the activity. In view of this, to conduct effective M and E as a key component of health system strengthening, the framework must be supported by funding.

A successful and effective evaluation, when conducted, can be measured by whether it addresses achieved against planned activities; whether the method used adds value to the process (the reliability of the method employed); whether it provides information that allows for informed decisions; and whether it provides lessons for future planning.

There is no process without challenges and so it is with Monitoring and Evaluation.
1. Staff collecting the data may not be adequately trained to do so, in cases where local staff are hired to assist with the process.
2. M and E is time consuming.
3. The activity is costly, and organizations most often do not have enough money to conduct the exercise. More often than not, evaluations of projects are carried out by external staff (consultants) who are paid large sums of money.
4. People are usually disenchanted by the report generated, which is often critical of the project's failures, and so the report is often not read.

To build the internal capacities of staff, there is a need to provide formal and informal training for national staff in M and E as a way of strengthening the health system. While it is important to invite external evaluators for transparency or to satisfy donor regulations/policies, it is all the more crucial to conduct periodic internal evaluations so as to fast-track the identification of constraints and seek quick and appropriate interventions.
