Five Basic Questions of Knowledge Transfer
Perspectives of a basic science researcher on the implementation science pipeline.
Presentation Video  |   November 01, 2013
Author Affiliations & Notes
  • Ray Kent
    University of Wisconsin-Madison
  • Presented at the ASHA Convention (November 2013).
  • Originally presented as part of the session "Implementation Science: Evidence into Practice." Part 1 of this panel session is available in the CREd Library at Implementation Science: An Applied Researcher's Perspective.
CREd Library, November 2013, doi:10.1044/cred-pvd-c13017

The following is a transcript of the presentation video, edited for clarity.

I am glad to be here and to speak on this topic, which I think is extremely important. My disclosure slide, which like most disclosure slides is not terribly interesting, is adorned with a little quote at the bottom.

Sometimes it isn't obvious until it is obvious that it is obvious.

We've known for years, across a variety of clinical specialties, that many basic discoveries are not effectively translated or transferred to the clinical environment. Sometimes discoveries are not translated at all, and sometimes when they are, it can be a matter of years, even decades, before there is a successful transfer. Of course, this is a huge problem that we have to deal with.
This is a box diagram, which like many box diagrams is partly accurate. But what I've tried to do here is to capture some of the components that are involved in this process of implementation science.
We have basic research over there. And you'll see that all of the different boxes are connected by arrows of different colors. We have basic research, clinical research, patient-oriented research or practice-based research, and population-based research. They all represent certain phases or stages of this implementation process. The color coding of the arrows, as you can see, indicates that what we hope for is a successful translation in which information, knowledge from one box is transferred and implemented in another.
However, there is some information that really should not be transferred right now. That's not to say these are bad ideas, but rather, that they're not ready for prime time. It simply wouldn't be an effective use of resources to try to transfer them at this point.
We also have, of course, the missed opportunity: knowledge that seems to have a substantial benefit but never gets translated at all.
So we face this three-pronged dilemma between each one of these boxes. How do we transfer the information?
I've been asked to talk specifically about the perspectives of a basic scientist. So I'm living in that box called "basic research" there. But again, I don't think we should view basic research as being isolated from this entire process. First of all, basic research must be cognizant of parallel work going on in other fields. This knowledge transfer is part of the interdisciplinary burden that a scientist has. We don't need to reinvent the wheel because many of the things that we're puzzling over have been solved totally or partially in other fields. We have to have an effective transfer of information from one discipline to another. But also the basic scientist, I think, has to have that long-term horizon view, looking down across those other boxes to see what's happening there.
I'm going to address five basic questions of transfer, which were considered in two recent articles: one by Lavis and colleagues in 2003, and another by Grimshaw and colleagues in 2012. The questions are: what, to whom, by whom, how, and to what effect.
These are the questions that have to do with how we conceptualize implementation science.
What should be transferred?
Beginning with number one: What should be transferred? As I indicated earlier, in my view, basic research is not just an isolated upstream component in the process. It really has to be connected very thoroughly with the other components.
Basic research is concerned with knowledge utilization. That includes interdisciplinary knowledge, reaching out to other fields of study. If we do this, we'll have the most valid and robust science: one that is authenticated not just within our own discipline, but also by the work in other fields.
We also have to keep in mind the long-term product in all this. Eventual transfer could include things like decision aids for patients: what can we do to make their lives easier as they choose among different treatment options? Clinical practice guidelines for healthcare professionals: providing resources, such as those on the ASHA Practice Portal, to guide clinicians as they begin to think about what might be the most effective treatment. And finally, actionable messages and policy briefs for policy makers: we're interested in getting the ear of the people who make decisions about funding and financing, whether it's an insurance company, the federal government, or a social agency. These are all aspects of what should be transferred.
To whom should research knowledge be transferred?
Next is: To whom should research knowledge be transferred? As Leslie indicated, we have a number of potential stakeholders, and they're not homogeneous in their characteristics. As you can see on this slide, the effort is to say, in each one of these cases, each stakeholder, there is a potential facilitator and a potential barrier. What we need to do for each one of these -- for example, patients -- is determine what can be a facilitating agent to accomplish this knowledge transfer, and what can be a barrier. And how can we address those most effectively? This requires a pretty broad view of the problem of implementation.
By whom should research knowledge be transferred?
The third question is: By whom should research knowledge be transferred?
Many people argue, and I agree, that the basic unit of knowledge translation usually should be an up-to-date systematic review or other synthesis of research. This would bring together various studies on a problem and offer a balanced interpretation and assessment of their results. This review or synthesis would therefore address the nature and strength of the evidence, for example, levels of evidence. It would address the potential for implementation, anticipating some of the barriers and facilitators we talked about moments ago. And it would address the risk of bias. One of the problems, even when the synthesis is done by presumably neutral, objective individuals, is that bias can creep in.
That is one reason why I think we really need the cooperation of several different types of scientists, in the hope of balancing one another's biases on any particular issue. For that reason, I think basic scientists can and should be part of the teams that conduct these reviews and syntheses.
How should research knowledge be transferred?
The fourth is: How should knowledge be transferred? One interesting resource to look at here is the Promoting Action on Research Implementation in Health Services (PARiHS) framework from Kitson and colleagues. What they tried to do is imagine ways of enabling the implementation of evidence-based practice.
As you can see from the slide, this conceptual framework emphasizes three key elements thought to be important in the successful implementation of evidence-based practices. First, evidence bases (research and other sources): not just research evidence, but also information about patient/client values, for example. Second, context (the environment or setting): taking a careful look at the environment in which we hope to accomplish the knowledge transfer. And finally, facilitation: looking at the support factors we can capitalize on to make that transfer successful.
What effect should transfer of research knowledge have?
Finally, number five: What effect should transfer of research knowledge have?
Well, Helfrich et al., writing in Implementation Science a couple of years ago, identified three primary aspects of a successful implementation. So, once we've begun to put this process in place, how do we know whether it worked? These are the three major ways. First, realization of the implementation plan or strategy: making sure that it was, in fact, carried out in the environment where we wanted to see it carried out. Second, achievement and maintenance of the targeted evidence-based practice: being sure that it is, in fact, followed by clinicians, and followed in a coherent, continuing fashion. And finally, achievement and maintenance of endpoint patient or organizational outcomes. These become the key features of the implementation.
Now these five points that we have gone through are ways of accomplishing implementation science. But they also suggest the ways in which we can evaluate implementation science. Implementation science, like anything else, needs to have an evidence appraisal. We would like to know that implementation science works. So these five features that are important in the actual implementation become some of the features by which we can evaluate whether implementation science has worked.
So, a quick summary. My view as a basic researcher is that this requires a collaboration of all sorts of investigators, including basic researchers, even for those discoveries that don't have their genesis in the basic research laboratory. They might have arisen from clinical observation, but it's still important for the basic researcher to be part of the team involved in this process.
References
Grimshaw, J. M., Eccles, M. P., Lavis, J. N., Hill, S. J., & Squires, J. E. (2012). Knowledge translation of research findings. Implementation Science, 7(1), 50.
Helfrich, C. D., Damschroder, L. J., Hagedorn, H. J., Daggett, G. S., Sahay, A., Ritchie, M., Damush, T., Guihan, M., Ullrich, P. M., & Stetler, C. B. (2010). A critical synthesis of literature on the Promoting Action on Research Implementation in Health Services (PARiHS) framework. Implementation Science, 5(1), 82.
Kitson, A., Harvey, G., & McCormack, B. (1998). Enabling the implementation of evidence based practice: A conceptual framework. Quality in Health Care, 7(3), 149–158.
Lavis, J. N., Robertson, D., Woodside, J. M., McLeod, C. B., & Abelson, J. (2003). How can research organizations more effectively transfer research knowledge to decision makers? Milbank Quarterly, 81(2), 221–248.