Implementation Strategies, Outcomes, Methods, Advances, and Challenges
Presentation Video   |   March 01, 2014
Author Affiliations & Notes
  • Enola Proctor
    Washington University in St. Louis
  • Presented at the Implementation Science Summit: Integrating Research Into Practice in Communication Sciences and Disorders (March 2014). Hosted by the American Speech-Language-Hearing Foundation.
CREd Library, March 2014, doi:10.1044/cred-pvid-implscid2p2

The following is a transcript of the presentation video, edited for clarity.

I was asked to help frame some of the most important questions in implementation research. And I approached this not so much as study questions, but as questions that I think a field should be addressing as it begins to move toward implementation science.
I will also talk about important concepts and measures, and then talk a little bit about some data sources for implementation research.
So I want to share with you some of what I think are six important questions that maybe your profession should think about tackling. I know my work in social work and my work in mental health services certainly focuses on some of these questions.
What is our repertoire of evidence-based practices?
The first question is: "What's our repertoire of evidence-based treatments or programs that are ready and suitable for implementation?" And yesterday my colleague, Matt, said, "Not everything that's evidence-based should be implemented." And I think we all know, unfortunately, that we have systems of care where programs that are not evidence-based are well entrenched, and continue to be delivered.
I've been interested to hear from your wonderful questions and the discussions we had yesterday about the progress in your field of developing evidence-based treatments, and evidence-based programs. So take stock and think about whether they are ready for dissemination.
Now, fields vary, and although I view myself as an implementation researcher at this point, I would not say that every field has the right balance between emphasizing implementation and continuing to develop new evidence for its programs and treatments. And those are not either/or -- every field needs to emphasize both. Certainly as researchers we know and hope that there will be new evidence in the future.
So I'm not in any way saying that we should stop the pipeline of evidence testing, evidence development, or effectiveness testing. But you might ask yourself, "Are there some areas where we don't have good evidence yet?" If so, then I would encourage you to emphasize continuing to strengthen what you know about those programs.
But when we have effective interventions, certainly it's time to deliver them.
We really should do something with all of this research. Several of us work in the area of mental health, and we are sobered by the fact that only about 10% of people with serious mental disorders receive evidence-based care. So certainly we're not done improving treatments, we're not done developing programs, but the balance needs to shift toward the rollout, the delivery, and the implementation of what we do know.
What is the implementation gap?
Then a very important question to ask is: how big is the implementation gap? What is the quality of service that we are delivering? To what extent are we providing, and to what extent are the individuals, families, and communities we work with receiving, evidence-based care?
I cited that dreadful 10% level in mental health. What is it in your field? Do you know? Do you know the proportion of the people receiving speech, language, or hearing treatment who are receiving the best care, who are receiving evidence-based care?
There's a way to calculate this. It gets us to that very challenging concept in our research, the denominator. So we need to know how many people are receiving services as the denominator; in the numerator, how many are getting evidence-based programs and treatments.
And then if we want to be even more ambitious and take on a public health perspective, our denominator becomes even more challenging. We don't look just at the people receiving care, but at the people who need services. So what percent of that group is receiving evidence-based care?
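To make the numerator-and-denominator idea concrete, here is a minimal sketch in Python. The counts are invented for illustration, and `implementation_gap` is just my shorthand for the proportion described above, computed against either the service-level or the public-health denominator.

```python
def implementation_gap(n_receiving_ebp, n_receiving_any, n_in_need=None):
    """Share of people receiving evidence-based care.

    Numerator: people receiving an evidence-based program or treatment.
    Denominator: people receiving any service, or -- for the more
    ambitious public health perspective -- everyone who needs services.
    """
    denominator = n_in_need if n_in_need is not None else n_receiving_any
    return n_receiving_ebp / denominator

# Hypothetical counts: of 2,000 clients in care, 200 get evidence-based care.
service_rate = implementation_gap(200, 2000)  # 0.10 -- the "dreadful 10%"

# Public health perspective: 5,000 people in the community need services.
population_rate = implementation_gap(200, 2000, n_in_need=5000)  # 0.04
```

Note how widening the denominator from "people in care" to "people in need" makes the same numerator look even smaller -- which is exactly why the public health framing is the more sobering one.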
Are there studies in your field that calculate this? And here we get at some of the infrastructure problems in research; this is particularly troubling in social work, and from some of my conversations with you yesterday I think it is also troubling in your field. We have to have procedure codes, and we have to have them readily available. Either we have to go out and ask people in kind of epi-studies, or we need to be able to look in some database and see some indication of the service that was provided. And those procedure codes need to be nuanced, and sensitive, and specific enough that we can tell one program from another, not just that clients got our services. So this sounds like an important question for your field, and I would encourage you to consider it a really important area in which to start an implementation research agenda.
What is the implementation context?
Now, another really important question is, "What is the implementation context?" And we got at this yesterday. Somebody asked a really good question, "Okay; I'm delivering services and somebody is improving, but insurance says services have to stop." So we talked about how the policy, the payment environment is sometimes out of whack with delivering quality or evidence-based services.
There are a lot of issues about context that I'm going to spend a little bit of time talking with you about: who are the key stakeholders, and who and what are the policy and practice drivers? Greg set the stage for helping us think about readiness for change, organizational climate, and, of particular importance, a setting's implementation history -- what are the prior and current barriers and facilitators?
I think one mistake implementation researchers often make is that when we're going to get on our horse and go out and implement something, we think we're the first game in town. But you know what, people have already implemented something; they have a history with implementation.
They may not have a history of participating in implementation research, but they have adopted something, they're holding onto certain things, and they've also changed, although we may think that the change we're bringing them is the newest sliced bread. But there is a history, and it's very shortsighted of implementers or implementation researchers to ignore or not capture that. I'm going to come back to the issue of context in a little bit.
What implementation strategies/processes are effective?
Another question, "What implementation strategies and processes are effective?" And this is really the area of implementation research that mobilizes me most at this particular moment in time. And there's a reason for that. I still remember the day that the strategies question was put to me. And it led to I think my very first conversation with David Chambers.
At our School of Social Work in the year 2000 -- much to my surprise, because I was not on the curriculum committee -- I had been working to strengthen the evidence base for social work practice. But I'll never forget the May faculty meeting when the curriculum committee brought to our faculty a proposal that our school deliberately offer an evidence-based curriculum.
Somebody said, "But I'll have to shut down my course because there's no evidence-based intervention in this area." We said, "No, no; that's not what it means. It means we're going to be transparent about the extent of evidence. We're going to say, 'There's a lot of practice wisdom behind this. There's a lot of support for this, but here's the extent of what we know about research supporting this'."
So thereafter we tried to engage the stakeholders in our school community, a very important component of whom are our field instructors, because social work students complete practica. We use agencies as educational training grounds, and we try to work really hard to be sure that what we do in the classroom is aligned with what's happening in the field. So we knew that we had to take the very next step and engage field educators, our practicum instructors, and our agency sites in our conversation and our commitment to evidence-based social work training.
A very astute field instructor said, "Okay. I think this is really important. I get it. But tell me, what are the evidence-based strategies for me, an agency director, to increase my agency's delivery of evidence-based services?"
So David Chambers had just joined the National Institute of Mental Health, and he and I began a conversation. I wanted to ask, "Here's where we are in our school; here's where I am in my quest for implementation research. I really want to know, what do we know about strategy?" His short answer was, "We don't have the answer to that question quite yet."
So we've embarked on a whole set of efforts with this whole front row of colleagues -- we really enjoy working together in trying to build capacity around this. And we're still asking the field instructor's question: "What implementation strategies and processes are most effective?"
How do we support settings' capacity to implement multiple evidence-based approaches?
Then, of course, how do we support settings' capacity? This is a question we're just now beginning to tackle in the field. And this is a big hairy complex issue.
It's also a function of researchers' myopic view that agencies are interested in one new treatment -- my treatment, you know, the program that I'm developing. So I want to take this treatment to this agency and say, "I have discovered something really great." Well, guess what: the people they serve, the people that your profession serves, have multiple problems. And even if we got to a really specialized model of service delivery in our fields, where somebody does only one intervention, we would have a problem of leaving our patients and clients behind, because they have multiple problems.
So most organizations face the very challenging task of delivering multiple interventions, most of which they want to be high-quality and evidence-based. But they also have to face the fact that something new is going to be evident tomorrow, or in two years, and that they have limited -- what we call -- "absorptive capacity." They can't just endlessly take on more and more evidence-based practices; something has to get voted off the island, something has to get de-adopted.
Then of course there's the challenge of fitting interventions to local contexts. And staff turnover means that a lot of that infrastructure or capacity building that they do walks. And so there's a continuous process.
So the reality of agencies in striving to deliver multiple evidence-based interventions is this notion of it's not new for them, it's not the first time they've thought about adopting something, nor are they single-issue voters. They've got a lot going on.
How do we scale up and sustain evidence-based service?
And then finally, a cutting-edge issue is how do we scale up and sustain evidence-based service? We have so much research showing that even in settings where guidelines are tested and evidence-based practices are introduced, usually through grant funding, two or three years later those interventions usually are no longer being delivered. So sustainment and then scale-up are really challenging problems.
Treatment evidence continues to grow.
So of those six questions: What's our repertoire of interventions; what's the quality of service for the denominators -- the people we're serving, the people we should be serving; what's the implementation context; what strategies are effective; how do we help implement multiple high-quality programs and services; and how do we sustain them. Those are the issues that I suspect the fields of speech, language and hearing are going to be tackling as our fields of mental health, child welfare, public health, and mental health services are also tackling.
You know, I am not a technology early adopter, but a lot of people are. A lot of you, as soon as the new iPhone is available, you're checking your plan to see how soon can I upgrade my phone, how soon can I go to the kiosk and get a new one?
Would that the field were that eager to adopt. But usually they are not. So I think most of the field may be delivering the first iPhone, at best, instead of the latest.
So what are some of the key constructs of implementation? There are many, many constructs from the frameworks that Greg provided you as a foundation. There are many constructs that need to be tackled. But I want to start with three big constructs and talk to you about where the field is, and provide you with some resources in the slides that I've provided for moving into measurement of those constructs.
So those three constructs are: the context of implementation, implementation outcomes, and implementation strategies.
Key Construct: Implementation Context
So around implementation context -- we all have our models, so here's my model. Actually, this model was developed by our faculty team in the Implementation Research Institute. I know that David and Larry are going to plug TIDIRH, but the faculty team, when we were preparing to launch our first NIH-supported training program, said, "We've got to figure out what's important." So together we developed this framework.
With treatment efficacy and effectiveness research we're very familiar with what's on the far left hand here. That is an evidence-based treatment, or that is a program. And it produces patient outcomes. So that's business as usual. We test an intervention for clinical outcomes.
But in implementation research, we have some new ingredients. We have, first of all, the how. Those are processes or strategies, the implementation strategies. It's in the same box here with the what because both of those are professional activities, they're programs, they're interventions, they require that we do something.
We also have a couple of other boxes of outcomes. The middle box contains service system outcomes, which the IOM has said all health professionals had better start paying attention to. These are things like efficiency and safety.
I think we've all had a family member who's gone in for surgery, and they write with a marker on the part of the body that's going to be operated on, "Operate here," so that we don't get the wrong knee replaced. So that concept of safety -- we grasp how important it is. I work with anesthesiologists who talk about how important it is to make sure that the patient not wake up during surgery, and I thought, "Oh my --" I didn't even think of that as a possibility. So there are a lot of safety issues that we're all over when it comes to physical medicine. But there are safety issues in our fields too.
There are issues of equity, disparities, patient centeredness, and timeliness. And I worry a lot about timeliness in social work and mental health in that people wait a long time before they come for help. I know that's true in your field too. Stigma, thinking why I ought to be able to manage this, this keeps people from coming quickly. And we are not very timely in the delivery of evidence-based or newly developed programs.
So in addition to business as usual -- testing an intervention for a patient outcome, the IOM says let's look at systems of care. And we had a great session yesterday on health services research.
There also are unique outcomes in implementation, and I'm going to talk about those in a few minutes, and why it's important that we have distinct implementation outcomes. So the heart then of implementation research is looking at this area, the implementation strategies, the service system outcomes, and the implementation outcomes.
Now, hybrid designs have meant it's not either/or. But in my talk, and a couple of other talks later, it may help you to know that we are talking about a different kind of research.
So then the context -- all of that was surrounded by context. So that's the first construct that I want to help unpack this morning.
There are many models.
Some of the things we need to figure out about service or treatment context are: who pays, who are the policymakers? There are administrators, there are researchers, clients, patients and families, front line providers, support staff. And we need to think where are each of these stakeholders when it comes to implementation; are they on board, are they happy with the status quo, who's chomping at the bit for change, and who's resisting?
I think the concept of stakeholders is an important way to begin to unpack what we mean by context.
There are sometimes demands to implement. Let me illustrate with a very discrete implementation strategy: I remember two or three years ago I was working with a hospital -- the director of a certain kind of services that I won't name -- and he was very excited to tell me that they were going to start implementing a new guideline on Monday morning in this unit. And I said, "Oh, that's great. Tell me about that. What have you been doing, what's your process?" And he was puzzled by that. And he said, "I'm issuing an email at 8:00. At eight o'clock Monday morning my email is going out. I am demanding that we deliver evidence-based care per this guideline at 8:00 on Monday morning."
So that's an example of a demand. Guess what, it really wasn't all that effective. It also highlights the limitations of a single discrete strategy -- a lot more is needed. But that's an example of a demand, and I know you probably work in settings too where there's a demand that we're going to start doing it this way.
Sometimes that comes from policy. I'm really intrigued in working with some colleagues nationally on the fact that federally qualified health centers are going to have to start integrating behavioral health services in primary care with the Affordable Care Act. So there is a policy demand.
Sometimes there's a push-out of a particular evidence-based practice. In mental health and social work we have several areas where people are developing treatments. Greg named some of them, KEEP, SafeCare, we have programs of treatment development in all of our fields. This is really, really important. And people who develop these interventions become people who work for very good reasons to try to push out and make available, and get adopted, and get scale up and spread of that intervention.
There's sometimes a pull from the ground up -- sometimes from patients, family members, advocacy groups, sometimes through lawsuits. In child welfare, we often have our feet held to the fire by the press, who uncover egregious events. So there's a demand.
So assess what's going on in the context to see what's happening with implementation.
I can skip over the CFIR framework itself and help you find some resources for measuring. This, again, is the Consolidated Framework for Implementation Research, the CFIR.
So when we're doing our contextual assessment, we need to figure out the extent to which the practice change that we are proposing, or that we are demanding, or that we're studying, is aligned with its context. These are some of the terms that you'll find in the literature to help you get into this.
A recent Flottorp article in the journal Implementation Science -- which, by the way, is an online open-access journal; you can go to the website, all the articles are there, it's a terrific resource and very searchable -- takes a checklist approach to what he calls "practice determinants."
My colleague, Ramesh Raghavan at the Brown School at Washington University, talks about the policy ecology, we're aware of what an ecology is. And he says, "Let's apply that concept to practice."
And Karen Emmons talks about system antecedents, and agency or setting infrastructure. So in other words, we know we can't just airdrop a new practice into the real world without thinking of how it fits; is it aligned with, is it counter to what's going on in the practice context?
So I hope those terms will help you do some searches to get at some measurement of this really important area.
Key Construct: Implementation Outcomes
The second construct I want to talk about is implementation outcomes.
It's very important that we distinguish those from clinical outcomes, because we can have a very effective treatment, and if it's poorly implemented, we are not going to see the patient or client changes that are hypothesized. And if we skip over those implementation outcomes, we might say, "Oops, it didn't work," when in fact it was never adopted, or it was never delivered -- distinctions that these implementation outcomes will help us capture with some granularity.
And then of course we have ineffective treatments that are very successfully implemented all the time. So these are different concepts.
So some of the major categories of implementation outcomes -- a beginning list -- are shown here. Colleagues and I put out a paper in 2011 that we hoped would jumpstart the field to begin to pay attention to this notion, develop some measurement approaches, and advance our beginning list of implementation outcomes.
Acceptability -- to what extent is evidence-based practice acceptable to providers? What are their attitudes toward that? And I've been in situations where there's been real pushback from providers, "I don't like this." There are parts of treatment protocols, for instance, an exposure treatment, or parent training, that go against the grain, and people don't like to do that. Reliving some trauma exposure is a very difficult thing, and so we see a lot of clinician pushback saying, "I don't like that. I don't like that." Well, if a particular program or treatment is not acceptable, you're not going to get very far with the implementation.
Adoption is really the extent to which people do it. In our paper we distinguish this a little bit further -- some of these outcomes are attitudes, some are behaviors, some refer to providers -- so I refer you to that paper for more detail. But do people really start delivering it?
Without procedure codes, many of us in our fields are left at this point with asking people, "Are you delivering evidence-based practice?" And guess what, they're going to say yes. Or if we say, "Are you delivering this," of course they are. But when you begin to look under the hood, you see that in fact it may not be quite the case.
We talked yesterday about feasibility; that many new treatments and programs are developed without having been really designed for dissemination, or designed for implementation. Brian Mittman talked about this a little bit yesterday. If they're clunky, if they're the Cadillac model, if they take thousands and thousands of dollars in training, and coaching, they may not be very feasible for certain practice settings. So this, again, is going to be a roadblock to implementation.
Fidelity: I think fidelity measurement is probably a big deal in your field, as it is in mental health.
Implementation cost is something that's really not looked at as much as some of these other constructs. My colleague Ramesh Raghavan is trying to develop procedures for costing what it takes to start a new program. First of all, you have to take people offline; they're out of clinical care, so there are lost billable hours. There may be training costs. Many programs and treatments have specific outcome-monitoring requirements, so there may be costs associated with those. And then there's generally the cost of retooling. Whenever we start doing something new, we have to allow some time, some energy, some turnaround -- we don't just keep perking along.
Penetration is a concept that really gets at the nuances of adoption. Penetration refers to the extent to which a new evidence-based practice is really filtering through and down into the system.
So let's take, for example, a setting where there may be 20 therapists. And say that's in one organization or one clinic. If you call them up and say, "Are you delivering this evidence-based treatment?" the answer may be, "Yes." So when we begin to unpack that, how many of those 20 therapists are delivering the evidence-based practice? That's another level of analysis.
Yet another one is within their own caseload, what proportion of clients who could benefit from this evidence-based practice are receiving it? So akin to the Shortell levels of change in a slide that Greg had up, penetration means that we have to take a more granular look, a more specific look, at the delivery of an evidence-based practice throughout the levels of a service organization.
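Those levels of analysis can be sketched as a small calculation. The clinic, the therapist labels, and the caseload numbers below are hypothetical; the three rates are just one way to operationalize penetration at the organization, therapist, and client levels.

```python
# Hypothetical clinic: for each therapist, (clients who could benefit
# from the evidence-based practice, clients actually receiving it).
caseloads = {
    "therapist_01": (12, 9),
    "therapist_02": (10, 0),
    "therapist_03": (15, 5),
}

# Level 1 -- organization: does anyone deliver the EBP at all?
# (This is the "yes" you get when you call the clinic and ask.)
org_adopted = any(receiving > 0 for _, receiving in caseloads.values())

# Level 2 -- therapists: what share of clinicians actually deliver it?
therapist_penetration = sum(
    1 for _, receiving in caseloads.values() if receiving > 0
) / len(caseloads)

# Level 3 -- caseloads: of clients who could benefit, what share receive it?
eligible = sum(could for could, _ in caseloads.values())
treated = sum(receiving for _, receiving in caseloads.values())
client_penetration = treated / eligible  # 14 of 37 eligible clients
```

The point of the sketch is that the same clinic answers "yes, we deliver it" at level 1, two-thirds at level 2, and well under half at level 3 -- penetration forces that more granular look.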
And then finally, sustainability is a challenging outcome, and one that David Chambers is going to talk about in greater depth.
So what do we know about measurement of implementation outcomes?
Well, first of all we know that fidelity is one of the most frequently measured implementation outcomes. It's something that we know how to capture. When we think of stakeholders, treatment developers and researchers are really invested in that. They want to know if something is being delivered with the optimal fidelity. So there's been a lot of attention focused on that.
We also know that provider attitudes toward evidence-based practice have been pretty frequently assessed.
We also know that implementation outcomes are interactive, and that what happens in one of those outcome domains is really related to other implementation outcomes.
For example, I was involved in implementing collaborative care for depression among depressed older adults who were on Medicaid, with functional impairment and high medical comorbidity. And the case managers in the Division of Aging said, "We know our clients are depressed, but who wouldn't be? And their depression is pretty intractable. Nothing is going to help." Well, when we took problem-solving therapy and did some behavioral activation, guess what, the clients really did respond. Yes, they were still poor and still sick, but they were not as unhappy every moment of their waking day. So when the case managers saw that this treatment could be effective, they were much more excited about it and much more accepting of bringing that program into the organization.
We also know that high cost reduces the perceived feasibility of implementing something new. In my conversations with agency executives, they say that when they think about adopting something new, they ask, "How much is it going to cost, and how much havoc is it going to wreak with my service delivery system?" So these implementation outcomes need to be measured. We need to take the temperature of where the setting and the stakeholders are by measuring these, and then that can inform where we direct those implementation strategies, and how we begin to think of phases and implementation processes.
There are some developments; help is on the way, as they say, around measuring implementation outcomes.
There is what started as a Seattle Implementation Research Conference, or SIRC, funded by an NIMH grant. A group has been working hard to catalog and put online a repository of measures around both our 2011 framework and the CFIR.
And the National Cancer Institute jumpstarted a repository for implementation outcomes called the Grid-Enabled Measures database, or GEM. I would encourage you to check either of these two websites for measures around this important construct of implementation outcomes.
Key Construct: Implementation Strategies
The last construct that I want to focus on is implementation strategies.
Implementation strategies are really the "how to" component of changing healthcare practice. I love the characterization of implementation research from Carolyn Clancy, the former director of AHRQ. She said, "Implementation research is how to make the right thing to do the easy thing to do." And we know that the right thing is often not easy. So that's our job as implementation researchers: to figure out how to make it easier, if not easy.
Greg already provided the definition.
And he talked about the fact that strategies vary in complexity. I think it's good to take you back to this distinction, however, because, for instance, my example of the email edict to implement the new guideline is a very discrete strategy. Greg's illustration of the ARC model, with 12 steps, 4 phases, and 3 major levels of change, is an example of a very complex strategy.
So, this citation provides a little bit more detail about some of the major types of strategies.
There are a lot of planning strategies, which involve gathering information, building buy-in, initiating leadership -- and we're all eagerly awaiting Greg's evidence-tested leadership development models -- and developing relationships. And I think woe to any implementer who skips over some of these planning strategies.
We're very familiar with educational strategies. We're here, right, in a conference. We do in-service training. We provide CEU's. We develop materials. We develop manuals, protocols. So these are really our traditional ways of implementing new practices.
We also see a lot of action around restructuring. Roles sometimes have to get revised; new teams get formed. Sometimes the service delivery site is changed. For instance, I referenced one already with the Affordable Care Act's mandate that federally qualified health centers begin to deliver integrated mental health and behavioral health services within primary care. So that's a prime example of restructuring federally qualified health centers, restructuring primary care. And guess what, the specialty mental health providers are not all that excited about it, so they're getting shook up too. So restructuring roles is sometimes a very powerful, often effective, and in many cases necessary way to implement something new. That usually means that some record systems have to be changed, and that communication protocols need to be changed.
There are financial strategies. Social workers wish there were more financial strategies in our field. We wish somebody would pay for services to vulnerable people who need case management, community organizing, and all of that. But particularly in healthcare we see a lot of great examples of strategies that involve pay for performance, incentive-based approaches. And in mental health care we are seeing some big change with parity where if mental health -- behavioral health services begin to get paid for, we're hoping that will facilitate the implementation of evidence-based care.
Quality management strategies: we have a lot to learn from the field of healthcare in quality management, where audit and feedback are central. We have a lot of evidence that simply counting what happens and telling you how you behave changes your behavior. So audit and feedback is very, very helpful. Clinician reminders can take the form of pop-ups in electronic records, or, where there are no electronic records, cards or checklists which a provider can use. There is also technical assistance, and cyclical tests of change.
And then there are policy strategies, which involve licensure, accreditation, certification, and liability. We've seen a lot of cases in mental health where certain kinds of care have been legislated or mandated, often as the outcome of court cases or class action suits.
So what do we know about the effectiveness of these strategies?
Well, first of all, we know that passive strategies are really not very effective. Reading an article -- who knew that people wouldn't just read our articles and go out and start behaving differently? The same is true of issuing memos.
Training is the most frequently used strategy, but we're learning that multi-component strategies are most important.
I don't know about you, but I don't really like having somebody tell me my behavior needs to change. And I think if we think about providers in that regard, who would like to have somebody come in and say, "Oh, please, start doing it this way"? Usually our behavior works, right? It's worked so far so why change it?
So to change, we've got to tackle those forces of inertia, those positive payoffs that we're getting, and a lot of things have to be shifted. That, I think, is an individual-level window into why multi-component and multi-level strategies are most important.
Now, I have a beef with the current literature on implementation strategies, and that is that it doesn't give us as much detail as we need. Things are named, things are tested, but they don't tell you who does what, when, where, for how long, and how much. And just as we know that treatments have to be specified, and need protocols, manuals, measurement, and dosage, we have to begin to have these same expectations for implementation strategies.
So our team just published a paper calling for better reporting in implementation strategy articles: specifying the actor, the action, the timing, the dosage, and the target. Why were you doing restructuring? What part of the context, or which level of service delivery, were you targeting? The paper calls for more specification and more reporting of implementation strategies, akin to what we are now used to expecting with treatments.
And Brian was with us yesterday, but he's given a couple talks at our Implementation Research Institute where he says that implementation strategies should be theory-based, presented with a logic model. We need to begin specifying the mechanisms of change for implementation strategies, multifaceted and multilevel if appropriate, feasible and acceptable, compelling, observable, sustainable, scalable. So these are some of the additional characteristics that we need to begin expecting of implementation strategies.
Data Sources
So I'm going to close with just a few observations about the kind of data that are used in implementation science, the kind of data you can begin to identify and try to use.
There are a number of participants in implementation. On the left-hand side here are some of the participants, and alongside them are some of the data sources.
For administrators, we often survey them or ask them through key informant interviews; similarly with supervisors and front line providers. With support staff and service users we see a lot of focus groups about what's going on. And there is an interesting approach called group model building, which is really a quantitative, structured approach to focus groups where we elicit input and suggestions and run out simulations: if we did that, what would happen? And yesterday we were reminded to think of the negative things, what can go wrong, because some of it probably is going wrong, adverse events. So group model building can help us lay all that out and get a picture of how we might begin to better understand the complexities of implementation.
Implementation processes are often studied through ethnographic observation. People can go in and hang out; they can use these qualitative approaches. Greg said that he became a convert to mixed methods and qualitative research when he realized that a lot of the action on the ground, a lot of what's happening in the context, is not readily captured through administrative records or readily available quantitative data.
If we're looking for the footprint, or the impact, of implementation -- in other words, did practice really change -- then we usually have to go into document review. Is there evidence that less of something old was used, and more of something new began being used? Charts, records, board notes (did an executive director take a decision to a board that the organization is going down this new path?), and budget line items. Many people say, "If it's not being paid for, it's probably not being implemented." So if we can begin to look at agency or provider budgets, we may get a sense of what's going on.
So these are just some of the potential data sources; again, with qualitative and quantitative data.
References
Chambers, D. A., Glasgow, R. E. & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8(1), 117 [Article] [PubMed]
Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A. & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(1), 50 [Article] [PubMed]
Emmons, K. M., Weiner, B., Fernandez, M. E. & Tu, S. (2012). Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education & Behavior, 39(1), 87–105 [Article]
Flottorp, S. A., Oxman, A. D., Krause, J., Musila, N. R., Wensing, M., Godycki-Cwirko, M., Baker, R. & Eccles, M. P. (2013). A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implementation Science, 8(1), 35 [Article] [PubMed]
Grimshaw, J. M., Shirran, L., Thomas, R., Mowatt, G., Fraser, C., Bero, L., Grilli, R., Harvey, E., Oxman, A. & O'Brien, M. A. (2001). Changing provider behavior: An overview of systematic reviews of interventions. Medical Care, 39(8 Suppl 2), II2–II45
Grol, R. & Grimshaw, J. (2003). From best evidence to best practice: Effective implementation of change in patients' care. The Lancet, 362(9391), 1225–1230 [Article]
Powell, B. J., McMillen, J. C., Proctor, E. K., Carpenter, C. R., Griffey, R. T., Bunger, A. C., Glass, J. E. & York, J. L. (2012). A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review, 69(2), 123–157 [Article] [PubMed]
Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C. & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36(1), 24–34 [Article] [PubMed]
Proctor, E. K., Powell, B. J. & McMillen, J. C. (2013). Implementation strategies: Recommendations for specifying and reporting. Implementation Science, 8, 139 [Article] [PubMed]
Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R. & Hensley, M. (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76 [Article] [PubMed]
Raghavan, R., Bright, C. L. & Shadoin, A. L. (2008). Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Science, 3(1), 26 [Article] [PubMed]
Tabak, R. G., Khoong, E. C., Chambers, D. A. & Brownson, R. C. (2012). Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine, 43(3), 337–350 [Article] [PubMed]
Further Reading: Websites of Interest
CFIR Research Team, Center for Clinical Management Research. (2015). CFIR Technical Assistance Website. cfirguide.org (Referenced as CFIR WIKI http://wiki.cfirwiki.net/ in the presenter's full PDF slides. Note the change in name and location of this resource.)
National Cancer Institute. (2012). Grid-Enabled Measures (GEM). Behavioral Research: Cancer Control and Population Sciences (Available from the NCI website at cancercontrol.cancer.gov).
Society for Implementation Research Collaborative (SIRC). (2015). The SIRC Instrument Review Project (IRP): A Systematic Review and Synthesis of Implementation Science Instruments. SIRC: Initiatives (Referenced as Seattle Implementation Research Conference Measures Project / www.seattleimplementation.org/sircprojects/sirc-measures-project in the presenter's full PDF slides. Note the name change of this organization).