This Expert Panel is Archived.


Panelists of Practical Evaluation Designs for Improving the Quality of Health Care Implementation and GHDonline staff

Practical Evaluation Designs for Improving the Quality of Health Care Implementation

Posted: 18 Jan, 2016   Recommendations: 25   Replies: 83

GHDonline and Health Systems Global (HSG) are teaming up to bring you an exciting Expert Panel on evaluation approaches to implementing health care improvements. “Implementation Science” is an increasingly hot topic for those interested in understanding what it takes to close the gap between clinical evidence and what we are doing to improve health and health care systems. As systems-based approaches gain popularity, implementers, policy makers, and researchers are increasingly interested in ways to evaluate the effectiveness and findings of these interventions.

The Quality in Universal Health and Health Care Thematic Working Group (TWG) is organizing a series of events, beginning with a live webinar on January 25 at 7am EST, followed by a facilitated GHDonline discussion the week of January 25–29. We are hoping to engage health system policy makers, leaders, implementers, and researchers who are interested in evaluating health system interventions.

We’re pleased to welcome the following panelists for the webinar and facilitated discussion on GHDonline:

  • Rohit Ramaswamy, UNC Gillings School of Global Public Health
  • Lisa Hirschhorn, Harvard Medical School and Ariadne Labs
  • Gareth Parry, Institute for Healthcare Improvement (IHI)

The webinar will include a panel discussion moderated by Pierre Barker of IHI and UNC, chair of the Quality TWG. Following a case presentation on a hand hygiene project being implemented in sub-Saharan African hospitals, the panelists will discuss key evaluation issues. Spaces for the webinar are limited, so we encourage you to register now: https://attendee.gotowebinar.com/register/2287421063797641474

Following the webinar, GHDonline will host a week-long discussion facilitated by the panelists. A key evaluation question will be posed for discussion each day, with daily synthesis and commentary from the panelists. A recorded synthesis of the discussion and findings will be posted to the GHDonline and HSG communities following the week-long discussion.

We look forward to a rich discussion next week – please join the conversation and share your questions or comments!

Replies

 

Sr.Evangeline Castillo Replied at 8:45 AM, 22 Jan 2016

Thank you so much for this invitation. Here's looking forward to learning and understanding better what "Implementation Science" is and how it is being communicated and implemented in the mainstream health care system, including for those of us here in a third world country (the Philippines). I am interested to know more and learn from the panelists.
Hoping to be part of the engagement soon.

Erlyn Rachelle Macarayan Replied at 11:10 AM, 22 Jan 2016

Glad to see collaborative efforts between HSG and GHD. I'm definitely excited for this panel!

Sara Canavati Replied at 11:31 AM, 22 Jan 2016

Thank you very much for this exciting invitation. Looking very much forward to the discussions. Best of wishes, Sara

Abha Aggarwal Replied at 12:17 PM, 22 Jan 2016

Thank you for the invite! Am excited to hear and learn from the panelists about Implementation Science.

Isabelle Celentano Replied at 4:54 PM, 22 Jan 2016

Thanks to all who have expressed interest in this panel! We're looking forward to a great discussion next week. Don't forget to register for the live webinar as spaces are limited: https://attendee.gotowebinar.com/register/2287421063797641474

Following the webinar on Monday, January 25 at 7:00am EST, the week-long discussion will begin here on GHDonline. Panelists will pose questions to the community beginning Tuesday, January 26. Feel free to post your quality improvement evaluation thoughts or questions before then!

Milan Gautam Replied at 7:15 PM, 22 Jan 2016

Thank you very much for this exciting invitation. Looking very much forward to the discussions.

Rachel Jean-Baptiste Replied at 7:42 PM, 22 Jan 2016

Thank you for the invite! Looking forward to a lively discussion. I hope one of the topics we will discuss is the value of real-time monitoring of programs and how much more impact we could actually have if we had quicker access to information.

Blessing Masunungure Replied at 10:54 AM, 23 Jan 2016

Thank you for the invite! looking forward to a lively discussion.

Janet A DEWAN PhD CRNA Replied at 6:20 AM, 25 Jan 2016

Looking forward to hearing how to evaluate the effects of health system design on care delivery.

Sarah Weber Replied at 8:26 AM, 25 Jan 2016

Great training! Thanks.

Milan Gautam Replied at 8:46 AM, 25 Jan 2016

Great training! Thanks.

Yolanda Cachomba Replied at 2:14 PM, 25 Jan 2016

Great training! Thanks.

Lisa Hirschhorn Panelist Replied at 3:28 PM, 25 Jan 2016

On behalf of the panelists and Health Systems Global, I want to thank all the participants who attended the panel and the individuals who have already posed a number of important questions related to the topic. We are looking forward to an exciting week with lots of exchange of knowledge and experience. Each day we will pose one of the core questions from the webinar and also work through the questions already posed by participants and the GHDonline community, and new ones as they arise.
Lisa Hirschhorn, MD MPH

Timons Sigo Replied at 5:10 PM, 25 Jan 2016

Thanks too...Such an insightful training!

Pierre Barker Moderator Replied at 5:20 PM, 25 Jan 2016

Here are some links on hand hygiene that are relevant to today's case history
WHO link for tools and resources http://www.who.int/gpsc/5may/tools/en/
WHO link for hand hygiene implementation guide http://www.who.int/gpsc/5may/Guide_to_Implementation.pdf

Lisa Hirschhorn Panelist Replied at 6:50 AM, 26 Jan 2016

I am writing to start off the follow-on discussion from the webinar. One of the main questions we raised was “What are the key evaluation questions that we need to address - generally and with specific reference to this case?”

Some initial thoughts focused on the importance of first understanding the main consumers of the evaluation and the actions targeted by the results, as this will help you design an evaluation which answers your core questions. In addition, the targeted behavior is strongly evidence-based. We have known since Semmelweis that hand washing saves lives in healthcare settings (although many still do not do it). So the behavior change outcome is important in itself, even if infection rates do not fall. However, because there are so many causes of health care associated infections, tracking the broader impact (lower infections) is important to understand whether more interventions beyond increasing hand washing are needed to drop infections, while continuing to focus on hand washing as a high-value activity.

So in asking what are the key evaluation questions for this project and more generally, you need to reflect on the evidence base supporting the intervention, as well as the goals.
Is this a demonstration project to help a national infection control initiative? In that case, the evaluation design needs to answer questions both about the implementation (how was it done) and the impact of the intervention (did it increase hand washing), and, equally important, to take a qualitative look into how to replicate and scale across variable settings.

If the project is designed solely to improve hand washing and so reduce infections at the participating facilities, then the evaluation questions which need to be answered are similar but less focused on broader learning. Were you able to change behavior (hand washing)? What worked locally across the different settings and areas within the hospital? How do you tailor (through rapid cycle improvements) for the targeted units, based on the known variability in baseline rates and their change over time? Exploring change in infections is also still important, as noted above, to understand the relative contribution of hand washing gaps to the overall infection rates.

What do others think? What other questions would you prioritize?

A/Prof. Terry HANNAN Replied at 6:57 AM, 26 Jan 2016

Lisa, an excellent posting I will try and respond over the next few days.

Janet A DEWAN PhD CRNA Replied at 7:11 AM, 26 Jan 2016

Lisa,

In trying to tell whether hand washing is the crucial ingredient for the outcome of decreased hospital acquired infections, I might compare the change in infection rates between surgical and non-surgical patients. If it is all about hand washing, then the decreased rates ought to be the same; but if there are other factors that are even more crucial to decreasing surgical infections, then infections in surgical patients should decrease less with only a hand hygiene intervention than for non-surgical patients.

Very good webinar. Thanks.
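
A minimal sketch of this kind of two-group, pre-post comparison, assuming only aggregate infection counts per group are available (all numbers below are hypothetical):

# Compare the change in infection rates between two patient groups
# before and after a hand hygiene intervention. Counts are illustrative only.

def rate(infections, admissions):
    return infections / admissions

groups = {
    # group: (pre_infections, pre_admissions, post_infections, post_admissions)
    "surgical":     (40, 500, 30, 520),
    "non_surgical": (60, 900, 35, 880),
}

for name, (pre_i, pre_n, post_i, post_n) in groups.items():
    change = rate(post_i, post_n) - rate(pre_i, pre_n)
    print(f"{name}: change in infections per admission = {change:+.3f}")

# If hand hygiene is the dominant driver, the two changes should be similar;
# a smaller drop among surgical patients suggests other factors matter there.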

Anh Hien Ho Replied at 7:20 AM, 26 Jan 2016

Dear Lisa,
Thank you very much for your sharing,
Best,

Hien

Lisa Hirschhorn Panelist Replied at 12:11 PM, 26 Jan 2016

Thanks Janet
You bring up an important idea: how can we be more certain that the intervention is responsible for the change in the absence of a randomized controlled trial? This is a tension often encountered, and one approach, using counterfactuals (controls which exist and are "good enough"), is one of the solutions. One of the questions in the coming week focuses on just this question of optimal design, and I am looking forward to ongoing discussions.

Elizabeth Glaser Replied at 12:18 PM, 26 Jan 2016

It is important to set base case assumptions for points of comparison. The International Society For Pharmacoeconomics and Outcomes Research has a Guideline Index for Outcomes Research and Use in Health Care Decision Making. It is a good resource.

Attached resource:

Lisa Hirschhorn Panelist Replied at 1:21 PM, 26 Jan 2016

One important question raised by an attendee of the panel was "would this type of approach capture information such as ... is there clean water to be used by health providers for hand washing....? Often environmental factors are missed.... so how do you capture information in an evaluation that may not be obvious or intuitive to the sector leaders... ". Rachel raises a critical issue, which is how to capture the contextual factors which may serve as either facilitators or barriers to success. If someone does not have access to clean water (or alcohol rub), they do not have the opportunity to change (versus the motivation or skills/ability).

I often find that mapping some of the most important factors needed for success is important, as otherwise you can get overwhelmed measuring too many of these contextual factors. These include not just environmental/supply factors such as access to hand washing supplies, sinks or other water sources, but also organizational factors such as leadership, workload and staffing. Using simple frameworks such as a logic model can help identify the needed contextual factors, and then adapting (or, if needed, developing) a simple inventory list (for supplies, staffing etc.) is important, as some of your interventions may need to focus on these areas as much as on motivating behavior change. Feeding back the information to people able to make these changes (which may be at the level of the head of facility or higher) can also be an important intervention.

I am interested to hear experience from the field on effective tools people have used to measure these contextual factors, such as the 4 S's (stuff, systems, staff, space), as well as the tougher areas of leadership and culture, in their evaluations.
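
As one illustration only (the factor names and thresholds below are hypothetical), a simple per-ward inventory along the 4 S's might be kept like this and fed back to facility leadership:

# Hypothetical contextual-factor inventory for one ward, organised around
# the 4 S's (stuff, staff, space, systems). Items and values are examples.
ward_inventory = {
    "stuff":   {"soap_in_stock": True, "alcohol_rub_in_stock": False, "functional_sinks": 2},
    "staff":   {"nurses_on_shift": 4, "trained_in_hand_hygiene": 3},
    "space":   {"sink_within_5m_of_beds": False},
    "systems": {"running_water_today": True, "stockout_reporting_in_place": False},
}

def flag_gaps(inventory):
    """Return the items that may block the opportunity to change behaviour."""
    gaps = []
    for domain, items in inventory.items():
        for item, value in items.items():
            if value in (False, 0):
                gaps.append(f"{domain}: {item}")
    return gaps

print(flag_gaps(ward_inventory))
# ['stuff: alcohol_rub_in_stock', 'space: sink_within_5m_of_beds', 'systems: stockout_reporting_in_place']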

Lisa Hirschhorn Panelist Replied at 1:25 PM, 26 Jan 2016

Thanks for posting this great resource. It has guidelines not just for base case but for a number of other issues commonly encountered in evaluation design and reporting.

Dr. Osita Okonkwo Replied at 1:56 PM, 26 Jan 2016

Hi Lisa,

Thank you for posting these very useful resources. The panel discussions were enriching and have stimulated discussions on very critical evaluation questions, to which I may add two: 1) How do we evaluate the support structures that would sustain effective hand hygiene practice in resource-poor settings? 2) What are effective ways of evaluating clients' participation in order to stimulate health providers' compliance?
Evidently, participation of clients would help stimulate improved hand hygiene practices by the healthcare provider, and this needs to be further explored.

Best,

Madhuri Gandikota Replied at 8:12 AM, 27 Jan 2016

Hi Elizabeth,

Great resource! Thanks much for posting it.

Regards

Lisa Hirschhorn Panelist Replied at 9:18 AM, 27 Jan 2016

Thanks Dr Okonkwo for the great questions. There have been increasing efforts to engage patients to help increase hand hygiene, with variable success. From an evaluation perspective, I might start with some focus groups to understand attitudes and willingness to "speak up" from the patient perspective, and receptiveness (and what you might need to do to change attitudes) from the HCWs. Has anyone had experience doing this around hospital/healthcare interventions?

For the support structure, I would consider thinking about 2 areas which need sustaining: 1. the resources needed (avoiding soap stockouts, maintaining running water, etc.) and 2. the actual behavior. We are doing work at Ariadne Labs in a number of areas using coaches to help change and then sustain behavior change, including handwashing. More information can be found at www.ariadnelabs.org under the BetterBirth page. I am also attaching a link to a paper we published around the initial adaptation work which may be helpful.

Attached resource:

Rohit Ramaswamy Panelist Replied at 9:30 AM, 27 Jan 2016

Hello everyone! Thank you for participating in the webinar and for following up with the discussion. I wanted to start off by introducing the second question we discussed at the webinar: What are the appropriate designs to answer the evaluation questions? To begin this discussion, I want to introduce the idea of "utilization focused evaluation" (UFE), a guiding approach developed by Michael Quinn Patton, a well known evaluation expert. The idea behind UFE is that evaluation is useful when users take ownership of the results, and that therefore evaluation has to be designed in a way that keeps in mind the intended use by intended users. Here is the link to utilization focused evaluation: http://betterevaluation.org/plan/approach/utilization_focused_evaluation

How does that apply to this case? The idea is that there are multiple stakeholders who are interested in the results of the hand hygiene project. The nurses and midwives in the wards are interested in making sure that they are not spreading infection to mothers and babies. The ward managers want to ensure that the wards they are responsible for sustain the solutions that have been put in place. Hospital administrators are interested in ensuring that the QI change packages work at the level of the entire hospital, and in identifying areas where progress and outcomes are not advancing as fast as desired. The ministry of health is interested in scalable best practices that can be disseminated and implemented across the country. Researchers are interested in the scientific evidence that the QI interventions are resulting in sustainable improvements in infection prevention for mothers and newborns.

Given these stakeholders and their needs, there are clearly multiple evaluation approaches that need to be used, and these approaches need to balance the information required, the resources needed to collect this information, the risk of getting incorrect information, and objectivity. What has been your experience with methods and approaches you have used for each of the stakeholder groups mentioned above? How useful have these been in involving stakeholders and supporting sustainability of interventions? Looking forward to a good discussion!

Rohit Ramaswamy Panelist Replied at 11:50 AM, 27 Jan 2016

Dr. Okonkwo, I wanted to follow up on Lisa's comments. Another way of evaluating the support structure from the leadership/organizational perspective is to use one of the many organizational change readiness instruments that exist. One is ORIC (Organizational Readiness for Implementing Change). Here is an example of the instrument: http://www.implementationscience.com/content/9/1/7/table/T1. In Africa, we have not used a formal version of this instrument but have used it as a guide to identify potential barriers to implementation and to provide leadership training and coaching to address these barriers. Our evaluation design in this case is longitudinal administration of the instrument. We have discussions once every quarter with the leadership teams of the facilities to discuss progress on building the support system and to assess how the organizational climate has changed over time.
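
As a rough sketch of how such quarterly readiness data might be summarised (the facilities, quarters, and item scores below are invented, on an assumed 1-5 agreement scale):

# Hypothetical quarterly readiness-instrument scores, averaged per administration.
from statistics import mean

readiness = {
    "Facility A": {"2015-Q1": [3, 2, 3, 4], "2015-Q2": [3, 4, 4, 4], "2015-Q3": [4, 4, 5, 4]},
    "Facility B": {"2015-Q1": [2, 2, 3, 2], "2015-Q2": [2, 3, 3, 3], "2015-Q3": [3, 3, 3, 4]},
}

for facility, quarters in readiness.items():
    scores = {q: mean(items) for q, items in sorted(quarters.items())}
    first, last = list(scores.values())[0], list(scores.values())[-1]
    trend = ", ".join(f"{q}: {s:.2f}" for q, s in scores.items())
    print(f"{facility}: {trend} (change since baseline: {last - first:+.2f})")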

Elizabeth Glaser Replied at 12:00 PM, 27 Jan 2016

Milka Ogayo is a guest moderator in the Nursing and Midwifery group here. She and I presented an abstract a year ago about a nurse led hand hygiene initiative at her tertiary care facility in Kisumu, Kenya.
Here is the abstract -
Nurses’ role in reducing hospital acquired infections through hand hygiene
Milka Ogayo, Ministry of Health (MOH) Kenya, Jaramogi Oginga Odinga Referral hospital, Kisumu, Kenya and Elizabeth Glaser, Heller School for Social Policy and Management, Brandeis University, Waltham, Massachusetts, USA.

Introduction and background: The World Health Organization (WHO) estimates that 10-30% of all admissions result in hospital associated infections (HAI). The WHO and International Federation for Infection Control (IFIC) have emphasized surveillance to augment the extremely limited data on HAI in the developing country context, in conjunction with hand hygiene protocols to prevent infection passing from patients to health care workers. We describe the role of nurses in the prevention of HAIs in Jaramogi Oginga Odinga Teaching and Referral Hospital (JOOTRH).
Objective: To describe the nurses’ role in reducing transmission of HAI through implementing hand hygiene protocols in a regional referral hospital in Kisumu County, Kenya.
Description: To reduce the rate of disease transmission in the hospital, the nurses in JOOTRH have established evidence-based approaches to promote, implement, and maintain hand hygiene protocols:
1. Production of Alcohol Based Hand Rub (ABHR). A lead nurse, together with other nurses, has been trained to prepare hand rub solution on a weekly basis; this is distributed to all departments in the hospital, from the main hospital gate to the mortuary. The hand rub has been mounted on the walls for easy access and is refilled. The lead nurse together with staff monitor availability and use of hand rub within hospital departments.

2. Hand hygiene practice surveillance. Wards are sampled for HAI, and staff in sampled wards are surveyed to monitor hand hygiene practices and their relation to HAI. Monitoring and evaluation is done in collaboration with the Kenya Medical Research Institute and the U.S. Centers for Disease Control in Kisumu, Kenya, on a monthly basis, with results shared with the sampled/surveyed wards and the hospital Infection Prevention Committee.

From December 2011 to May 2014, there has been an overall rise in the percentage of staff practicing hand hygiene by ward, with 49% of all staff in 2014 reporting use of hand hygiene practices, an increase of 7% from 2012. Over the same period infections have dropped, with HAIs decreasing from 12.9 per 100 admissions in 2012 to only 5.1 cases per 100 admissions in 2014.

Implications for global health nursing: Programs to continually measure HAIs and prevent infections are new to hospitals in developing countries. Despite changes in government and challenges with supply chains, the nurses at JOOTRH, a large urban referral hospital in Kenya, have demonstrated that well trained and motivated nurses can implement a successful program to reduce HAIs by improving staff hand hygiene across disciplines and departments. The success of this program suggests that nursing has a vital role in leading infection control efforts in low resource settings.

Clemens Hong Replied at 12:06 PM, 27 Jan 2016

@Rohit Ramaswamy Thanks for sharing the UFE framework. I think a key stakeholder group is the community (e.g. the population you are accountable for, and/or the community at large, and/or community based partners). This group is often more heterogeneous than other stakeholders and meaningful engagement can be hard, so they are too often left out. PCORI has driven increased thinking in this space, but what are people doing to engage this important stakeholder group through the entire process?

Erlyn Rachelle Macarayan Replied at 12:18 PM, 27 Jan 2016

I like the approach mentioned above on utilisation focused evaluation. It fits well with our goal to make health care delivery people- or patient-centred. I have limited experience with this type of evaluation, but I assume it would follow the same principle in cases wherein, for example, as we do cost-effectiveness analyses or even cost-benefit analyses, we also have to take into consideration the "value" or perhaps "preference" that different stakeholders place on the intervention, as well as what the acceptable threshold is for each of these stakeholders. Another area I feel we haven't explored as much in the first question is the opportunity costs associated with not having an intervention like hand hygiene, and also, of course, the uncertainties involved when we want to evaluate the intervention. Linking this to utilisation focused evaluation, I can see that users will feel better ownership of the findings if they have been involved in determining these opportunity costs, as well as in identifying any uncertainties to be considered in the evaluation approach. This touches on one (or a few) of the steps in the 17-step UFE framework, to "make sure intended users understand potential methods controversies and their implications".

I also like the other points mentioned in the framework, such as to "organise and present the data for interpretation and use by primary intended users: analysis, interpretation, judgment, and recommendations", and "prepare an evaluation report to facilitate use and disseminate significant findings to expand influence". Utilisation focused evaluation, therefore, also re-emphasises that the findings are not meant to direct the users' decisions, but should rather inform them of how they can address current challenges and assist them as they make the best choice to achieve a certain goal. I think this is also an area where knowledge translation tools or decision support tools play a role. These tools can only be useful to decision makers if they provide options outlining the risks and benefits involved in decisions made after an evaluation - which then requires us to have a better understanding of the context, the relevant research questions, and the objectives of the evaluation as well as of the intervention, among others (which again links us back to the first point made in earlier discussions on the importance of context, and touches on a couple more points in the framework).

I, however, need a bit more clarity on this point in the 17-step UFE framework: "Simulate use of findings: evaluation's equivalent of a dress rehearsal". I think it means that we first have to foresee the expected results of our evaluation and then find out whether those expected results are the same results our users or decision makers would like to see. What is unclear to me is why this step comes before the next point, to "gather data with ongoing attention to use". I would assume that the 17 steps need not be taken in strictly chronological order, as in practice some of our expected results change as we gather data and better understand what data we can and cannot get, and our expectations also change as we move along the evaluation process. As such (and correct me if my understanding is different), the framework works more like a spectrum, and perhaps even a virtuous cycle reflecting a complex chain of evaluation steps, with feedback loops and mechanisms as we go about the process; hence constant communication between the evaluator and the users or decision makers and other key stakeholders is emphasised. I do agree that evaluation is not a simple or linear process and thus requires its own resources, making the evaluation itself a cost and an area that has to be made more available and accessible to resource-limited settings.

Some follow-up questions on my end are as follows: Considering that utilisation-focused research is how we approach our evaluations, how then should we communicate our messaging on the findings of our evaluations? How do we ensure that the way we interpret our findings manages the conflicting interests of the multiple stakeholders or potential users of the evaluation? As evaluators and users become more involved and interact more in the process, how do we get around our own biases as evaluators and ensure our own objectivity in our outputs and even in the interpretation of our findings?

Rohit Ramaswamy Panelist Replied at 12:54 PM, 27 Jan 2016

Elizabeth, thank you for sharing the abstract. Could you talk a little bit more about the evaluation design that you used? Were you tracking reports of use of hand hygiene practices regularly over time? How was this displayed, and was this data useful in continued improvement in hand hygiene practice?

Rohit Ramaswamy Panelist Replied at 1:08 PM, 27 Jan 2016

Dear Clemens, thank you for your post and for pointing out that the community is a key stakeholder. One aspect of the hand hygiene QI intervention in the post-natal ward and NICU was parents' education, and emphasizing the need for parents to disinfect their hands before they handled their babies. We have not done a formal evaluation of this, but from the UFE perspective it would be interesting to think about how best to conduct an evaluation for parents that would be useful and usable. I think this is where the use of hand swabs could be very helpful - if we did a pre-post evaluation of pathogens on the hands of community members before and after appropriate hand washing, and showed them visually how the plates had significantly fewer colonies after hand hygiene, this could be a good strategy for visually demonstrating the effects of good hand hygiene. This is an example of where the evaluation design may not have the internal validity for researchers, but is perfectly adequate for demonstrating the effect of hand hygiene for one group of stakeholders. This is what we need to think about as we determine how best to evaluate QI projects.
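
As a minimal sketch of that kind of simple pre-post summary (the paired colony counts below are invented):

# Hypothetical colony counts from paired hand swabs, before and after washing.
pre_counts  = [180, 220, 95, 300, 150]
post_counts = [ 30,  45, 10,  60,  25]

mean_pre  = sum(pre_counts) / len(pre_counts)
mean_post = sum(post_counts) / len(post_counts)
reduction = 100 * (1 - mean_post / mean_pre)

print(f"Mean colonies before: {mean_pre:.0f}, after: {mean_post:.0f} "
      f"({reduction:.0f}% reduction)")
# Showing the plates themselves may persuade parents more than the numbers,
# even though this design lacks the internal validity a researcher would want.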

Elizabeth Glaser Replied at 2:37 PM, 27 Jan 2016

We simply used the hospital infection control data, pre and post.
It would have been even better if we had had data on the cost of the program, both start-up and maintenance, and the cost of HAI, to obtain the cost per infection averted. Unfortunately we were unable to get funding to do that more extensive study.

If you can show that such a program not only reduces infections but saves money over a significant time period once it is up and running, then you may be able to get people on board to support the initiative.
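
A back-of-envelope sketch of that calculation, using the reported rate change from the abstract but made-up admission volumes and programme costs:

# Cost per infection averted; admissions and programme cost are hypothetical.
baseline_rate = 12.9 / 100   # HAIs per admission (2012, from the abstract)
post_rate     = 5.1 / 100    # HAIs per admission (2014, from the abstract)
admissions    = 10_000       # hypothetical annual admissions
program_cost  = 25_000       # hypothetical start-up + maintenance cost (USD)

infections_averted = (baseline_rate - post_rate) * admissions
cost_per_averted   = program_cost / infections_averted
print(f"Infections averted: {infections_averted:.0f}, "
      f"cost per infection averted: ${cost_per_averted:.0f}")
# A fuller analysis would net out the treatment cost saved per HAI avoided.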

I am going to try to pull Milka into this conversation because this was her facility so she can say more about how the initiative has been working out over time.

Elizabeth

Rohit Ramaswamy Panelist Replied at 3:28 PM, 27 Jan 2016

Thank you Elizabeth

Rohit Ramaswamy Panelist Replied at 3:48 PM, 27 Jan 2016

Thank you Erlyn for your detailed comments and I am glad you found the UFE framework useful. You are absolutely correct that this is not a linear process but that it will need to iterate between the steps, though the steps are generally followed in sequence. However, Step 12 (Simulate use of findings) does precede Step 13 (Collect more data). The idea of Step 12 is that we collect some limited data to show stakeholders what the results are likely to be and to ensure that they will be useful before investing in more substantial data collection. Since the use of evaluation results is a primary goal for UFE, it is useful to first check whether the results will be useful before investing more resources in data collection.

Rohit Ramaswamy Panelist Replied at 4:13 PM, 27 Jan 2016

I wanted to follow up on the idea of doing a "dress rehearsal" before engaging in more detailed data collection. This is a good evaluation practice for QI projects in general. Many improvement interventions require small tests of change to determine whether they work before a larger, more rigorous assessment is carried out. Collecting small amounts of data to test hypotheses and then validating the hypothesis with a larger sample size is a viable and sensible approach. We are working on a project that provides safe water storage containers to communities in Ghana and in Burkina Faso. We conducted small feasibility and acceptability tests with different container prototypes by handing out 6-1 containers in a community. Now we have handed the most acceptable prototype to 50 communities and are comparing contamination levels with 50 control communities. But it would not have made any sense to conduct the more rigorous trial without first determining whether the containers would be used in the communities. These small sample tests and dress rehearsals are a good way of testing hypotheses and building knowledge about what works.
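
As a minimal sketch of such a two-arm, cluster-level comparison (the contamination values below are simulated; a real analysis would also account for sampling within communities):

import random
from statistics import mean, stdev

random.seed(1)
# Hypothetical community-level contamination scores for 50 intervention and
# 50 control communities.
intervention = [random.gauss(20, 8) for _ in range(50)]
control      = [random.gauss(35, 10) for _ in range(50)]

diff = mean(control) - mean(intervention)
# Rough standard error of the difference between the two cluster-level means.
se = (stdev(intervention) ** 2 / 50 + stdev(control) ** 2 / 50) ** 0.5
print(f"Mean contamination: intervention {mean(intervention):.1f}, "
      f"control {mean(control):.1f}, difference {diff:.1f} (SE {se:.1f})")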

Rohit Ramaswamy Panelist Replied at 8:35 PM, 27 Jan 2016

Thank you all for your participation today, and we look forward to tomorrow's discussion. Tomorrow, we will dive deeper into what we began today: the question of data, and how much we should collect and when.

A/Prof. Terry HANNAN Replied at 8:59 PM, 27 Jan 2016

Rohit, should not the question be that we need to collect all data, particularly at the point of care delivery, in standardised formats, and then use existing and new analytical electronic techniques to analyse and confirm ideas and discover what we may not know yet? Terry
I have full text of the following if people would like to email me at

1. Bellazzi R. Big Data and Biomedical Informatics: A Challenging Opportunity. Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Italy. Yearb Med Inform 2014: http://dx.doi.org/10.15265/IY-2014-0024
2. Groves P, Kayyali B, Knott D, Van Kuiken S. The ‘big data’ revolution in healthcare: Accelerating value and innovation. Center for US Health System Reform, Business Technology Office. January 2013.

A/Prof. Terry HANNAN Replied at 9:02 PM, 27 Jan 2016

And one more:
Feldman B, Martin EM, Skotnes T. Big Data in Healthcare: Hype and Hope. DrBonnie360 (formerly Feldman Stakeholder Relations). October 2012.

Rohit Ramaswamy Panelist Replied at 10:48 PM, 27 Jan 2016

Terry, it depends on what we are trying to do. Clearly it is important to collect data that is critical to patient care in as systematic a way as possible at the point of care, though in low resource settings this will need to be manual where CPOE systems do not exist. But for quality improvement initiatives such as hand hygiene, where the drivers of poor hand hygiene may vary by facility or by ward, the need for data is to identify where the problems occur, and the need for evaluation is to assess whether solutions work in the local context and whether they can be generalized and scaled up. This might require the collection of project-specific data that may not be part of an ongoing monitoring or reporting system. So once again, the data and the evaluation approach need to be tailored to the particular situation we are trying to improve.

A/Prof. Terry HANNAN Replied at 11:11 PM, 27 Jan 2016

I am in total agreement, and if I created any ambiguity about that then that is my error. I think what I was "trying to say" is that we design the data capture to suit the clinical care environment, but all data captured must be available for analysis. Does this sound ok?

Rohit Ramaswamy Panelist Replied at 6:08 AM, 28 Jan 2016

Yes, that makes a lot of sense. Systematic data collection of key indicators is critical to assess performance over time, but these indicators must be carefully determined to ensure that the data is correct, useful and used. In addition, there must be the capacity for more focused, as-needed data collection to test improvement hypotheses. Thank you for the clarification!

Gareth Parry Panelist Replied at 9:46 AM, 28 Jan 2016

Hello everyone ! Thank you for your on-going participation in the webinar follow up discussion, which has started to focus on data issues. This provides a nice link to our third question we raised in the Webinar:

Data for evaluation: what types of data, how much data, how do you collect it, who collects it?

There are many issues around data we could discuss, and I want to get us going by thinking about the close connection between data and quantitative measurement. Oftentimes in implementation initiatives, the aim is to put in place an ‘evidence-based’ intervention. That the intervention is ‘evidence-based’ often leads to debate about whether ‘success’ is indicated by improved outcome measures, or whether success is indicated by improved process measures that indicate implementation has occurred. For example, in the hand-washing case Rohit presented on Monday, is success decreased infection, increased hand washing, or something else?

How do you think about these issues – where should we focus the measurement, and what data will we need?

In addition, what happens when an intervention is not deemed completely “Evidence-based” – and we are using improvement methods to test a model? Where do we focus measurement in that situation?

Erlyn Rachelle Macarayan Replied at 1:51 PM, 28 Jan 2016

Dear Gareth, this is a very interesting set of questions, but I think I'd like to focus for now on your question of "whether ‘success’ is indicated by improved outcome measures, or whether success is indicated by improved process measures that indicate implementation has occurred". I would like to quickly argue that it is both, or perhaps even more. In M&E, we can have input indicators to show us how we progress in enhancing the needed enabling factors to achieve successful implementation of our program, as well as the process measures or perhaps some intermediate results of the intervention, as you have mentioned above. I look at these more as measures that reflect the achievement of the short-term objectives of the intervention. However, this will not be enough, since we also need the outcome measures to examine whether it has really translated into achieving our long-term goals. As such, I would rather look for success at all stages than lean on just one of these.

Gareth Parry Panelist Replied at 3:06 PM, 28 Jan 2016

Thanks Erlyn, I think your suggestions make a lot of sense. Pushing us further on this, we often face legitimate resistance from people at the point of care, who tell us that they cannot measure everything. What strategies do you think there are for prioritizing the most important measures to keep track of? With outcome measures, when implementing an 'evidence-based' initiative, the effect size from the original studies may not be huge, and may have needed a relatively large sample to detect. When implementing the initiative locally, how do we deal with knowing when a change has occurred with much smaller sample sizes? What tools can we provide people at the point of care to help them?

A/Prof. Terry HANNAN Replied at 3:38 PM, 28 Jan 2016

Gareth,
Comment on: “we often face legitimate resistance from people at the point of care, who tell us that they cannot measure everything”. Modern technology allows us to capture almost ‘everything’; however, with the implementation of measurement tools and data we need to “sit with the end users in the dirt, physically and metaphorically” and have them help us decide what they want and how the data capture system (UI) can do this efficiently and accurately. At all times, provide timely feedback (sometimes weekly) to verify that the system of measurement is or is not working. In a parallel set of processes, the data ancillary to the process is measured, so that when ‘answers’ are obtained from the first measurement system, these will prompt new questions the users want answers on. I hope the above provides some answers to your “When implementing the initiative locally, how do we deal with knowing when a change has occurred with much smaller sample sizes?”
The tools must have most of the following capabilities.
• COLLABORATION:
• SCALABILITY / SUSTAINABILITY:
• FLEXIBILITY of the system for the end users:
• RAPID FORM DESIGN –preferably by the end users to facilitate data capture:
• USE OF STANDARDS:
• SUPPORT HIGH QUALITY RESEARCH:
• WEB-BASED AND SUPPORT INTERMITTENT CONNECTIVITY:
• LOW COST: preferably free/open source
• CLINICALLY USEFUL: feedback to providers and caregivers is critical. If the system is NOT CLINICALLY USEFUL it will not be used.
• Mamlin BW, Biondich PG. AMPATH Medical Record System (AMRS): collaborating toward an EMR for developing countries. AMIA Annu Symp Proc. 2005:490-4. Epub 2006/06/17.
I hope this adds value to these discussions.

Gareth Parry Panelist Replied at 3:59 PM, 28 Jan 2016

Thanks Terry - this sounds very interesting! Do you have any applied examples of places that have used the tools with the capabilities you outline, to understand the impact of an implementation initiative in low resourced settings? It would be great if you could share any links to reports or recent publications.

Erlyn Rachelle Macarayan Replied at 4:05 PM, 28 Jan 2016

Dear Gareth, this is a very challenging and tricky question. My answer might be wrong, or perhaps there are many other better alternatives - but assuming that the frequency or number of observations is very low, then perhaps "time" can be strengthened instead. What I mean is that perhaps, instead of working with large sample sizes, which may not always be feasible due to sampling constraints, I would check whether I have the resources to observe just a small sample that I can follow over a period of time, seeing how their behaviour/knowledge (or whichever parameter of output or outcome we are looking at) changes as they become more sensitised to the intervention across time. It will then be more of a longitudinal type of approach rather than a cross-sectional one. Another alternative might be to lean towards "quality" over "quantity", which means that we dig deeper into how our intervention has changed a person's different attributes (knowledge, perceptions, behaviour) instead of just the very narrowly focused areas by which we measure outcomes. In either case, we can still (I assume) get reliable findings. If worst comes to worst and both "time" and "quality" are constrained, then I guess my only option is to choose a very limited set of rigorous measures that are SMART (specific, measurable, available, reliable, time-bound), feasible, and perhaps measures which the users themselves feel are very much in alignment with the program goals and responsive to their specific needs and contexts. The challenge then is in identifying what that rigorous measure is, which I guess has no one specific answer - it is always something that has to be put into context and agreed with the key players of the M&E.

Befirdu Jima Replied at 5:42 PM, 28 Jan 2016

Dear Gareth, questions you posed for this discussion are so inviting. Based on that I would like to forward some of my reflections below.

Regarding the point of debate in talking about success in an implementation program, I am in complete agreement with Erlyn's idea. Based on her idea, let me push a bit further. As long as we are looking for some sort of improvement (of course, in a given complex system), I think it wouldn't be very plausible to think of and/or take 'success' independently as either an outcome or a process measure. The two are inseparable. An outcome cannot happen without having a consistently and objectively measurable process put in place. Similarly, a process can lead us nowhere unless we clearly set an objective outcome that we are going for in the first place. However, with some justification, we could carry out the evaluation activities by dividing them into process and outcome, just to have our work get done more easily. In this case we need to tightly define what we mean by success. This is helpful when we are trying to minimize the sub-divisions of the outcome we are to measure. Having too many sub-divisions when we measure an outcome can potentially lead different evaluation teams to understand and code the outcome differently. Hence, we need to tightly define our outcome to assure consistent and objective coding throughout our evaluation activities.

On your other point regarding the data requirement let me forward some of the points I think are helpful.

One is that our need for data should be strongly based on the premise of ensuring sustainable QI in service delivery. Once we properly frame that, we need to collect data on: what works; for whom it works; in what context it works; why it works the way it does; and how it works, as we see it happening in the real world.

Second, we need to focus on collecting data that will help us increase engagement between service users and providers, because they are seedbeds for sustainability.

Third, we need data on the concerns and choices of both service users and providers. In addition, we need data on the points and levels of accountability among the different stakeholders thought to have an effect on efforts targeting the service quality we are trying to improve.

Finally, we need to collect data on the factors driving both the success and failure of the issue at stake. In this case we collect data by focusing on some important intermediate outcomes, to look for diversity in the factors driving success or failure. This makes internal comparison possible, which indeed helps avoid positive bias.

Thanks

Befirdu

A/Prof. Terry HANNAN Replied at 6:13 PM, 28 Jan 2016

Gareth, I am inviting you to join OpenMRS Talk, where persons much more competent than I can respond to you, and you can also add your contributions, which can only add value to the process. https://talk.openmrs.org/
Also check http://www.ampathkenya.org/ as an example.
I am also adding some documents that may help. For my education, please let me know if these help.

Attached resources:

Sudesh Raj Sharma Replied at 7:00 PM, 28 Jan 2016

Hi all, I hope it's not too late to participate. I have quickly browsed through the discussions and feel that I could contribute by adding a methodological perspective on systemic evaluation. In particular, evaluation is indeed complex, and I am particularly influenced by Midgley's "Systemic Intervention" approach, which promotes the concepts of "methodological pluralism" and "boundary critique". (Link: http://preval.org/files/Kellogg%20enfoque%20sistematico%20en%20evaluacion.pdf)

In simple words, our evaluation approach to a complex issue (I am framing a hand washing intervention within a health care or community setting as a complex issue, as it is influenced by resource allocation, leadership, management, policy, community perspective, etc.) should be open to creatively mixing qualitative and quantitative methods from different disciplines (some key examples of methodological pluralism include Dynamic Synthesis Methodology, mixed-method research approaches, System Dynamics, etc.), which would help us to iteratively understand the dynamic complexity of a phenomenon or program within a context. Boundary critique, on the other hand, complements the pluralistic approach by emphasizing the involvement of stakeholders to identify the boundary of the phenomenon or program in the evaluation process (incorporating beneficiaries' perspectives without marginalization).

So, basically, I just wanted to emphasize that our evaluation approach should be guided by a "holistic" understanding, figuring out all the possible feedback mechanisms (linking input, process, output and environment in a closed loop; precisely, drawing a causal loop diagram or system map within the case context) and evaluating the impact of the intervention on the system's leverage points (for example, Meadows has identified 12 leverage points and emphasized intervening at high leverage points; Link: http://www.donellameadows.org/archives/leverage-points-places-to-intervene-in...).

Gareth Parry Panelist Replied at 8:09 PM, 28 Jan 2016

Thank you Befirdu; whilst recognizing the close connection between process and outcomes, your idea of dividing the evaluation activities into process and outcome makes a lot of sense to me. Moreover, the question about what works, why it works the way it does, and how it works is, as you say, so important in real-world settings. We have described something similar in a paper recently - see the open access link below.

In the above paper, we suggest that a well described logic model can help point to the key process, outcome, and intermediary measures. We also argue that, for improvement, a more useful evaluation question is to ask "Where does a model work, or can it be amended to work, and with what impact?". We have often found such ideas resonate with people, but they struggle to get such evaluation approaches funded or published, being trumped by evaluations that ask "Does it work in aggregate, and with what impact?"

If you have examples of the types of evaluations you describe Befirdu, it would be great to see them.

Attached resource:

Gareth Parry Panelist Replied at 8:14 PM, 28 Jan 2016

Terry - thank you - I will read through the links you kindly sent and get you my feedback.

Gareth Parry Panelist Replied at 8:30 PM, 28 Jan 2016

Sudesh, thank you for your post; I certainly welcome these thoughts coming now. I think your prompting towards systems thinking is very helpful. At the core of many improvement methods is, to quote Deming: an understanding of the theory of knowledge (why do we believe something works), an understanding of variation (statistics), an understanding of psychology (I would say social science more broadly), and an understanding of systems (systems thinking). There are many improvement approaches that encourage people to produce a map or diagram of the system in its current state and in a future "improved" state. In doing this, people can begin to map out change ideas and identify key measurement points, etc. This can all get very complex, very quickly. Indeed, a process map can often vary substantially across organizations, which implies that the changes, and how to measure their impact, will vary substantially across organizations.

Where this takes me, is that improvement or implementation plans often need to be highly contextualized, and so associated evaluations must take this into account too. The editorial below also touches on some of these issues.

Picking up on my earlier post, if you have any examples you can share where the approaches you describe have been published, that would be terrific!

Attached resource:

Pierre Barker Moderator Replied at 9:40 AM, 29 Jan 2016

Greetings all! This is Pierre Barker and I will be helping to facilitate discussions today. Today we turn our attention to the policy or health system issues that need to be accounted for in the evaluation design. This is where improvement designs meet the real world. Questions could include: How does the study design fit with existing policies and standard practice? How do we assess the capability and functionality of those who will undertake system improvement work (now and in the future)? How do we evaluate the sustainability of the intervention and associated change? And, very importantly, can your improvement work be scaled up within the existing resources? This last question raises issues of cost-effectiveness of the intervention. Have you had successes or challenges in any of these areas? The GHDonline community would love to hear from you!

Sudesh Raj Sharma Replied at 3:00 PM, 29 Jan 2016

Hi Gareth Parrey,
For most of my career in the field of health promotion, I was guided by the input-process-output type of open model. We conducted baseline surveys, implemented interventions based on a synthesis of best practices, and conducted endline surveys after a few years of intervention. We then compared the baseline-endline data to highlight the numerical success of the project. We carried out process evaluation as well. But there was always a sense of unease within me. This was primarily because most of the projects that I worked on ignored the complex dynamics of the social and local systems in which the intervention would be implemented. The interventions had limited avenues for meaningful community participation (and empowerment). Those technical interventions did not adequately focus on health system strengthening aspects, including governance and coordination with other sectors. Once the donor support for the project ended, the system reverted to its original state after a few years (I have personally visited some of the project sites after a few years and seen the reversal). The problem is systemic in the developing world.
Again, my aim is not to make things complicated. But the real world is complex, and we need complex ways to understand it and deal with it. Systems thinking could be one of the key approaches in evaluating (and re-planning) a complex intervention (or studying a complex problem) in public health.
The earlier resource that I shared has some examples from the development sector. Here are some examples from the health sector:
http://download.springer.com/static/pdf/761/art%253A10.1186%252F1478-4505-12-...

http://www.systemdynamics.org/conferences/2008/proceed/papers/RWASH302.pdf

Gareth Parry Panelist Replied at 3:16 PM, 29 Jan 2016

Thank you Sudesh; to clarify - I completely agree that taking a systems approach as you describe is very helpful. I guess my main point was that systems vary substantially from place to place. This will mean the actual implementation will vary, and so we must pay attention to that when we design evaluations.

What do you think Pierre, as today's moderator?

Befirdu Jima Replied at 3:23 PM, 29 Jan 2016

Dear Gareth, I came across "Qualitative Comparative Analysis (QCA)", which could help us in evaluating complex interventions. Using this method, particularly on baseline assessments, could help us set a focused design.
"In impact evaluation for example the QCA helps to explore why some interventions were successful in achieving a particular outcome while others were not."

Using this method, "outcome achievements and causal factors are translated into a numerical format to carry out a systematic analysis of patterns across the data." In this case, I think, the method might particularly be used to translate qualitative variables into quantitative ones.
Further, the QCA method enables us to identify factors based on their strength in influencing the outcome in different contexts. This would help us figure out the points we need to focus on to collect pertinent data.

Thus the QCA method could provide some useful concepts to contribute to building a practical evaluation design. But its usefulness in QI is still to be tested.

How valuable this method is for evaluation can be seen in this paper: CDI Practice Paper 13, January 2016.
Link: www.idc.ac.uk/cdi
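
As a toy illustration of the numerical translation described above (the conditions, cases and outcomes are entirely invented), a crisp-set QCA starts by coding each case's conditions and outcome as 0/1 and tabulating the configurations:

from collections import defaultdict

# Hypothetical crisp-set coding: 1 = condition present / outcome achieved.
# Conditions: L = leadership support, S = reliable supplies, F = regular feedback.
cases = [
    {"L": 1, "S": 1, "F": 1, "improved": 1},
    {"L": 1, "S": 1, "F": 0, "improved": 1},
    {"L": 0, "S": 1, "F": 1, "improved": 0},
    {"L": 1, "S": 0, "F": 1, "improved": 0},
    {"L": 0, "S": 0, "F": 0, "improved": 0},
]

truth_table = defaultdict(lambda: {"n": 0, "improved": 0})
for case in cases:
    config = (case["L"], case["S"], case["F"])
    truth_table[config]["n"] += 1
    truth_table[config]["improved"] += case["improved"]

for config, row in sorted(truth_table.items(), reverse=True):
    print(f"L,S,F = {config}: {row['improved']}/{row['n']} cases improved")
# Patterns such as "improvement only where L and S are both present" are then
# examined for consistency across contexts.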

Pierre Barker Moderator Replied at 3:30 PM, 29 Jan 2016

Dear Sudesh. You raise a very important issue. I think many of us use the Donabedian (structure - process-outcome) model as the bedrock of improvement, but you are so right that it does not, alone, address the complex issues that are at play when trying to improve performance of the system. As your paper on Uganda clearly shows, you need to tackle so many contextual issues if you want to see a change. As in your paper, we see the drivers of change almost always including some combination of leadership and management, resources, data systems, clinical knowledge, as well as teamwork. The task of a good evaluation design is to understand the role of these factors in improving performance. One approach is to take each of these drivers and apply a structure-process-outcome approach to assessing whether there is a cause and effect relationship: i.e. how did each driver contribute to improvement.

Pierre Barker Moderator Replied at 3:53 PM, 29 Jan 2016

Hi Befirdu - this is such an important area - I think we all struggle with evaluation of qualitative data - so this is potentially a very valuable resource. I was not able to open the link provided, so I am providing this link for those who are interested in the paper: http://www.ids.ac.uk/publication/qualitative-comparative-analysis-a-valuable-.... Some great case studies are included in this paper.

Befirdu Jima Replied at 4:06 PM, 29 Jan 2016

Dear Pierre, I am being challenged by a very slow connection, which is not allowing me to do things as easily as I need to. As you may have seen from scanning the paper, it has some important concepts.

Pierre, I am not sure if it is ok to ask, but could you please send me a copy of the last two resources linked in this discussion, as I am not able to access them here in Ethiopia? Thank you.

Befirdu Jima Replied at 4:11 PM, 29 Jan 2016

Sorry for the inconvenience.
Here is the other one that I could not get access to.

http://www.academicpedsjnl.net/article/S1876-2859(13)00099-5/fulltext

Kaleigh Spollen Replied at 4:20 PM, 29 Jan 2016

Hello all!

I hope I am posting in the correct discussion thread. I am a former Global Health Corps intern and was guided here by a colleague through that network. I am currently working on a QI project with the intent to robustly evaluate several different wellness groups (stress management, therapeutic writing, mindful eating, among others) at a community health center in rural California. I am interested in any and all resources that could help guide me in best practices for program evaluation -- especially any useful for researching participant outcomes for "rolling," drop-in groups in a rural setting.

Best practices, past experiences, and helpful tips are all very much welcome!

Pierre Barker Moderator Replied at 4:33 PM, 29 Jan 2016

Dear Befirdu - the resources you requested are now in the resources section

Attached resource:

Pierre Barker Moderator Replied at 4:35 PM, 29 Jan 2016

And here is the second paper.

Attached resource:

Pierre Barker Moderator Replied at 4:50 PM, 29 Jan 2016

Hi Kaleigh - I am attaching a great overview of QI evaluation authored by Gareth Parry. In addition to understanding the role of context, Gareth draws attention to the Kirkpatrick Framework, which describes 4 levels of learning opportunity: 1) experience: what was the participants' experience? 2) learning: what did the participants learn? 3) behavior: did they modify their behavior? and 4) results: did the organization improve its performance? Your question about drop-ins is a tough one, but this is a common challenge in real-world implementation projects. One approach is to use time-series analyses for each new group or facility joining. Another is to line up all facilities to a "Time 0" rather than the calendar month when doing the analysis, and then assess improvement using a before/after or a time-series analysis.
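
To illustrate the "Time 0" alignment idea, here is a minimal sketch in Python/pandas (my own illustration; the column names, join dates, and compliance values are made up), assuming each facility's monthly data and the month it joined the project are available:

    # Align each facility's monthly data to the month it joined the project ("Time 0"),
    # then summarize compliance by months-since-joining rather than by calendar month.
    import pandas as pd

    data = pd.DataFrame({
        "facility": ["A", "A", "A", "B", "B", "B"],
        "month": pd.to_datetime(["2015-01-01", "2015-02-01", "2015-03-01",
                                 "2015-04-01", "2015-05-01", "2015-06-01"]),
        "hand_hygiene_compliance": [0.42, 0.55, 0.63, 0.38, 0.51, 0.60],
    })
    join_dates = {"A": pd.Timestamp("2015-01-01"), "B": pd.Timestamp("2015-04-01")}

    data["time_0"] = pd.to_datetime(data["facility"].map(join_dates))
    data["months_since_joining"] = (
        (data["month"].dt.year - data["time_0"].dt.year) * 12
        + (data["month"].dt.month - data["time_0"].dt.month)
    )

    # Average compliance across facilities at each aligned time point; this can feed
    # a run chart, a before/after comparison, or a fuller time-series analysis.
    aligned = data.groupby("months_since_joining")["hand_hygiene_compliance"].mean()
    print(aligned)

With the data aligned this way, every facility is treated as if it started at the same time, which is what makes rolling enrollment analyzable.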

Attached resource:

Befirdu Jima Replied at 5:07 PM, 29 Jan 2016

Thank you, Pierre, for your thoughtful response. I have now received both papers.

I hope to come back with some reflections on important issues regarding health systems or policy perspectives we need to account for when designing evaluations.

Sudesh Raj Sharma Replied at 5:10 PM, 29 Jan 2016

Thanks Gareth and Pierre. That is exactly what I was trying to say. Cheers.

Befirdu Jima Replied at 6:24 PM, 29 Jan 2016

Dear Pierre,

I think the greatest concern related to health system or policy issues when designing an evaluation is ensuring the future sustainability of the intervention at stake. We need to scale up the intervention, integrate it into the wider functioning health system environment, and, of course, transfer the intervention to be owned and run by stakeholders at the local and national levels. Hence we need more resources as well as more political space. To accommodate these growing needs, we need to design the evaluation systematically, considering the complex system in which we intervene. Our design therefore needs to be a proactive one that focuses on leadership, financing, political feasibility/willingness/commitment (the political sensitivity of the issue), stakeholder diversity, and civil society engagement with system governance, among other factors.

I found the article below valuable in setting out some core issues we need to focus on, notwithstanding that the article is about monitoring and evaluation of large transactions in global health.
http://dx.doi.org/10.9745/GHSP-D-15-00221

Kaleigh Spollen Replied at 5:44 PM, 30 Jan 2016

Pierre -- Thank you so much for those resources and advice! Much appreciated.

Milan Gautam Replied at 6:45 AM, 31 Jan 2016

Thank you!

Paul Nelson Replied at 10:32 AM, 31 Jan 2016

Pat Riley (NBA coach) has said: "Excellence is the gradual result of always striving to do better." And Colin Powell (US Army general and Secretary of State) has said: "If you are going to achieve excellence in big things, you develop the habit in little matters. Excellence is not an exception, it is a prevailing attitude." As for hand washing, I think the "secret shopper" process works best. But at the big-picture level, excellence does not achieve its prominence for healthcare unless paired with "Altruism, Trust, Collaboration and Transparency." Too many institutions FAIL because there is no arrangement to pair these VALUES with the professional development of their professional assets. Peter Drucker wrote a book about it, published in 1993: "Post-Capitalist Society." Its premise: for an information-based corporation to prosper, it must attend to the capital appreciation (as in assets) of its professional employees. So, how many healthcare institutions within our nation's $3.3 trillion healthcare industry DO THIS? And why is it that there are now so many articles about physician burn-out, especially among primary care physicians?
.
Arguably, the best student of institutions in the last 50 years is Elinor Ostrom, Nobel Prize winner in 2009. She has said: "Without monitoring, there can be no credible commitment; without credible commitment, there is no reason to propose new rules." The word "credible," I think, comes first for excellence. Its contribution to long-term outcomes for the efficiency and effectiveness of an institution begins with its governance. Professor Ostrom, along with many colleagues, defined the Design Principles applicable to an institution's goal of successfully managing a common-pool resource, as in a nation's economy. Whatever the maturity of a nation's economic development, there is still only a limited commitment available for spending on the HEALTH of the nation by its healthcare institutions. See the link below for a succinct statement of the Design Principles for spending a limited resource by a nation's institutions in the service of the common good of the nation's HEALTH.
.
If you sense that this is not real, consider that the excess cost of our nation's healthcare likely represented 60% of US federal deficit spending during 2015 alone, or $300 billion. Please note that all the other developed nations spend 12% or less of their national economy on healthcare. We spend 18% on a healthcare industry with a maternal mortality ratio that has worsened annually, as compared to all the other developed nations of the world, for 20 years. Our nation's ability to solve this paradigm problem would likely represent the basis for the future of the world. We face an uphill battle given the growth of the world's population from 7 billion now to 10 billion in 2050 (United Nations estimate). Hint: healthcare reform must be formalized beginning at the community level, community by community, as promoted by a nationally supported, semi-autonomous institution that is NOT associated with the economic processes of healthcare reimbursement.

Attached resource:

Prisca Muange Replied at 5:58 AM, 1 Feb 2016

Dear Rebecca,

I would like to inquire if the webinar was recorded? If yes, is it possible
to share a link for those who missed the webinar?

Lisa Hirschhorn Panelist Replied at 6:25 AM, 1 Feb 2016

Hi Prisca
Many thanks for your inquiry. The webinar will be available and we are working on getting the information and link posted.
Regards

Pierre Barker Moderator Replied at 7:09 AM, 1 Feb 2016

Dear All. Many thanks for the vibrant discussion on QI evaluation designs last week. The recording of last week's webinar (with expert panelists Lisa Hirschhorn, Gareth Parry and Rohit Ramaswamy, moderated by Pierre Barker) that kicked off this discussion has now been posted. Here is the link: https://attendee.gotowebinar.com/recording/7551063601674866690. A follow-up recorded discussion synthesizing the week's learnings will be posted on Health Systems Global. The link to that recording will be posted on this discussion board.

Attached resource:

Yudha Saputra Replied at 7:32 AM, 1 Feb 2016

Dear Rebecca,

I would like to express my profound gratitude for the invitation to this rich and resourceful discussion. I am also grateful to the other professionals who have already shared their insights in this expert panel forum.

I would like to know: will the development of mobile health applications, software, or other technology enhance the probability that people practice hand hygiene?

Can a simple reminder on their mobile phone be considered a factor that impacts the health system?

I joined an online project course about a year ago, and (this may be somewhat off topic) learned that providing information about pregnancy to people in rural areas can help decrease maternal and infant mortality. A non-profit organization called Question Box uses IVR (Interactive Voice Response) to reach people with low literacy in rural areas, which has improved the knowledge of pregnant mothers there. If information about hand hygiene were included, would that count as a factor that impacts the health system? If yes, would it need a measurement tool (to measure the effectiveness of the implementation) so that we can track and improve its progress? In short, what kind of assessment or metric should be used to judge whether an invention meant to improve healthcare is a factor affecting the quality of the health system?

I apologize for asking so many questions. Generally, I am just eager to know whether technology, specifically mobile technology, has any correlation with quality improvement of the healthcare system.

Plus, I want to highlight what Paul Nelson said about monitoring. If a monitoring tool were developed to monitor hand hygiene, would the data we collect from people be protected, and is there any policy related to this? I definitely agree with monitoring, because, in my opinion, it gives us a chance to see progress and to quickly catch anything that falls outside the trend. Moreover, if we also include community health workers in the effort, it will have a bigger impact (e.g. mASHA, mSehat, MAMA). But how about their data protection? Any resources would be very valuable.

Again, thank you
Looking forward!

Regards,
Yudha
Indonesia

Attached resource:

Shubhesh Kayastha Replied at 7:50 AM, 7 Feb 2016

It gives me much pleasure to join this interactive communication with the expert panel community.
Thank you so much for providing such a wonderful opportunity.

Lisa Hirschhorn Panelist Replied at 10:47 AM, 11 Mar 2016

I am delighted to post the wrap-up discussion from the Practical Evaluation Designs for Improving the Quality of Health Care Implementation panel. In this discussion, the moderators have reviewed and summarized the valuable input and discussions which you contributed to the initial event and follow-on.
The recording can be found here: http://www.healthsystemsglobal.org/blog/85/Practical-Evaluation-Designs-for-I.... We welcome any feedback and a continuation of these productive interactions.

Marwa Oraby Replied at 4:27 PM, 11 Mar 2016

I missed the webinar. Would you please upload the conclusion of the topic?
