Thursday 19 January 2017

Nursing Terminology

At some point a number of years ago, I decided that what I want to do when I grow up is to work to try to increase the quantity, quality, and value of implementation science. This was based on some experience within the Department of Veterans Affairs and the VA QUERI initiative and its work to increase the impact and the uptake of research to improve veteran healthcare. I recognized, through a series of conversations with colleagues, that the clinical research is not enough. These findings and innovations are not self-implementing; there is a science of implementation. And that science, as I indicated, needs a fair amount of effort to increase both its quality and its quantity. As a consequence, each time I'm given an opportunity to speak and participate and contribute to a meeting like this, it's a chance that I jump at. Obviously the close proximity to my home base in Los Angeles, as well as the beauty of the setting, were factors as well. But I do appreciate the opportunity.

My task is a little bit different, and somewhat more mundane I suppose, than that which Dennis was given. It is to provide a set of frameworks and ways of thinking about what implementation science is, why it is important, and how we go about thinking about the design and conduct of implementation studies. And perhaps more importantly, what is it about the field, and the way that we are conducting our implementation studies, that has left us in a situation of somewhat limited impact?

We have a very large body of work, but I think it's fair to say that if we take a half-full versus half-empty sort of perspective on the field, we're very much on the half-empty side. We have much to do. The increase in the quantity of implementation science activity hasn't yet realized or yielded the benefits that we need. Much of my talk will focus on presenting a set of frameworks and a set of answers to the question: why is it that implementation science work has not yet led us to, some might say, a cure for cancer -- as a way of highlighting the difficulty and the challenges associated with the problem. But I think it's fair to say -- and I know many of the speakers over the next couple of days will reinforce this idea -- that we could be doing things somewhat differently in order to enhance the value of what we do.

As I said, though, I began at the VA, the US Department of Veterans Affairs, affiliated with the QUERI program -- the Quality Enhancement Research Initiative. That was a program designed to bridge the research-practice gap, to contribute to the VA's transformation during the 90s that I'm sure most of you are familiar with, and to try to focus the efforts and energies and attention of the health services research community within VA on the problems and issues of transforming the healthcare delivery system. Primarily due to health reform, all of a sudden within the health sector we see a great deal of comparable interest outside VA. So I've been in the position of essentially representing or carrying some of the insights and trial-and-error learning experiences from VA to systems like Kaiser Permanente and the UCLA healthcare system, which are interested in trying to understand and replicate what VA accomplished and to do so in perhaps two to three years rather than 20 to 30. I've been given opportunities to help share some of those experiences. In some ways I often feel somewhat like a management consultant. Back when I was in business school getting a PhD, the MBA students used to talk about the management consulting occupation and the job opportunities -- and the shorthand description of that occupation was that a management consultant essentially steals ideas from one client and sells them to another. In some ways, that's what I do lately between VA, Kaiser, and UCLA. But it's all with full transparency; they all recognize the contributions as well as the benefits, and I think it's all above board.

So let me move on and provide you with what I feel is a summary of the key topics or categories of knowledge that I hope will be covered during the conference, and that to me represent basically what the field of implementation science is all about. My hope is that at the end of this conference, you have a very good sense as to what implementation science is, what its primary aims are, and what the scope of the field is. Why is it important? What are some of the key policy, practice, and science goals for the field? How does it relate to other types of health research? As a researcher with a PhD, a social scientist working in a healthcare delivery system and health setting, the bulk of my work and what I know about is health. I know the work that you do is at the intersection between health and some of the social services and other sectors. So if you will excuse my focus on health and perform the translation in your own minds from clinical and medical issues to the broader set of issues, that would save me from the burden of trying to do that on my own, and from speaking about issues that I don't know much about.

In addition to the question of how implementation science relates to other types of health research, one of the key areas of focus of the QUERI program and my presentation will be item number four: what are the components of an integrated, comprehensive program of implementation research? What are the kinds of studies that we need to think about in order to move from the evidence, the kernels, the programs to implementation and impact? That is a bit of a challenge, in part because the components of the integrated, comprehensive portfolio are somewhat different from what we see in other fields. And finally, the broad set of questions: how does one go about planning, designing, conducting, and reporting the different types of implementation studies? I will not touch much on that issue at all, but that is the focus of many of the other talks.

Dennis covered the first two issues: what is implementation science, and why is it important? What I'd like to do is focus primarily on questions 3 and 4. I will spend some time talking about number 1 and some time about 5 as well. My focus, again, will be primarily: how does implementation science relate to other categories of health-related research and social science/social service research? And what are the components of an integrated program?

Let me go through a set of slides that provide different ways of thinking about what implementation research is all about, and why it's important. These are all somewhat different, but largely complementary.

This is the usual story that we see, and the usual way that we think about implementation research. And it's a story that for the most part does not have a happy ending. I'll say more about that in a minute. But generally speaking, what we see is the introduction or development, the publication, of a new treatment, a new innovation, new evidence. Often, at the time that innovation is released, we do see some modest efforts toward implementation. Most of those efforts focus on dissemination and increasing awareness. We see press releases that follow the publication of the journal articles. In the health field, we often see greater impacts on clinician awareness when studies are reported in the New York Times or Time magazine than in the original journals. There are often editorials published to accompany the release of the new findings that point out the importance of ensuring they are adopted and used by the practicing clinicians. But by and large these are relatively modest efforts toward implementation, and they focus primarily on dissemination and increasing awareness, with the assumption that awareness will lead to adoption and implementation -- an assumption that we know, by virtue of all of us being in this room, is not valid.

A number of years later, we will often see articles published that measure rates of adoption and for the most part show that significant implementation gaps or quality gaps exist. So, yes, there was the publication of this large, definitive, landmark study. Yes, there were press releases and efforts on the part of specialty societies and other professional associations to publicize and increase awareness. There were, at best, small increases in adoption.

And oftentimes there are no increases in adoption. That finding -- the documentation of those quality and implementation gaps -- is the usual trigger for implementation studies. And those implementation studies tend to be large trials, often without the kind of pilot studies and single-site studies that Dennis talked about, and that I will discuss briefly as well. Those are large trials that evaluate specific implementation strategies or programs, practice change programs, quality improvement strategies, that attempt to increase adoption. And the reason this story typically has an unhappy ending is that those large trials of implementation strategies tend to show no results. Not only do we see a lack of naturally occurring uptake and adoption, but even when we go about explicitly, intensively, proactively trying to achieve implementation, more often than not -- or more often than we would like -- those efforts fail. It's that unhappy ending that I will talk about with some of the frameworks, and I will try to explain what it is about the way that we are going about designing, conducting, following up on, and reporting on implementation studies that may contribute to those disappointing results.

I'll also point out that implementation programs or strategies, quality improvement or practice change programs -- there are a number of different terms, and I tend to use them interchangeably. I'll actually talk a little about the so-called "tower of Babel" problem as well.

Let me go through a set of slides, again, that provide a number of slightly different ways of thinking about what implementation research is all about. This is a set of simplified diagrams that really derive from some work of the Institute of Medicine Clinical Research Roundtable about 15 years ago, as well as the NIH Roadmap initiative, again in the medical/health field. They basically depict the distinction between what we would like to see in a well-functioning, efficient, effective health research program -- that is, basic science and lab research that very quickly leads to clinical studies that attempt to translate those basic science findings and insights into effective clinical treatments. Some subset of those clinical studies do, in fact, find effectiveness of innovative clinical treatments. They're published, they're publicized and disseminated, and we see the improvements in health outcomes.

As opposed to that idealized or preferred sequence of events, what we tend to see is depicted by the bottom set of arrows. The yellow arrows, if you can see those, depict the idea that some proportion of basic science findings, as well as clinical findings, in fact should not lead to subsequent follow-up because they don't have any value. There is no reason to follow up. A treatment or therapeutic approach that we evaluate for which the harms exceed the benefits is one that should be published and should be well known, but of course will not lead to improved health. So the yellow arrows are to be expected; they are appropriate.

It's the red arrows we would like to eliminate: the findings that are published, that have some value, that sit on a shelf for some number of years and then eventually are taken up and moved into the next phases.

This is just another way of depicting these different activities. Think of the NIH Roadmap initiative, and those of you who are familiar with medical school and university campuses that have funding for a CTSA or CTSI, the NIH Clinical and Translational Science Award program. The bulk of that funding, and the bulk of the NIH investment, as is usually the case, focuses on type 1 translation: translating basic science findings into effective clinical treatments. Some label this "feeding the pipelines of the drug companies." I won't spend any time down that path. But the point is, much of the NIH investment is on type 1 translation. Our interest, of course, is in type 2 translation: understanding the roadblocks that prevent a significant proportion of the innovative treatments and findings from clinical research from moving into clinical practice or achieving their intended or hypothetical or potential benefits and impacts. So, for the rest of my talk, and for the rest of the conference for the most part, the focus will be on this so-called type 2 translation -- the second translational roadblock.

One of the important distinctions, and a distinction that is often not clearly made in research, is between clinical research, or clinical trials, versus implementation research.

This is one table that attempts to capture some of the key features of both categories of research, and it helps us understand when we are conducting clinical research, when we are conducting implementation research, and when, in some cases, we're conducting hybrid studies that combine elements of both.

For the most part, the study aims of a clinical study, a clinical trial, are to evaluate a specific clinical intervention -- a treatment, a therapeutic approach, a prevention approach, management of a condition. The focus of those studies is on the effectiveness or the efficacy of that clinical intervention. These are drugs, procedures, different forms of therapy. The primary outcomes that we hope to affect through clinical studies, through the use of this clinical treatment, are symptoms and health outcomes. The unit of analysis is the patient. So we are evaluating a new therapeutic approach, a new school-based program, violence prevention -- any number of kinds of programs or interventions. The term "clinical research" is used somewhat broadly to capture that type of research.

This is in very distinct contrast to implementation studies, where the focus is not on the clinical intervention but instead on an implementation intervention -- a practice change or quality improvement intervention. Our focus in this case is on the adoption and the uptake, the use of that clinical intervention by the clinicians, by the teachers, by the counselors, or by the organizations -- the institutions, the entities that are delivering that program. In this case, our interest is ultimately in achieving improvements in health symptoms and outcomes, but that's a distal outcome and goal. The proximal outcome and goal of implementation studies is to improve rates of adoption: improve adherence to an evidence-based clinical practice guideline, improve the fidelity of delivery of that program. Of course, in this case the unit of analysis is not the patient but instead the target of the intervention, and that is a clinician, a therapist, a teacher, a counselor, a team, a facility.
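To make the unit-of-analysis distinction concrete, here is a minimal, hypothetical sketch in Python; the data, column names, and trial arms are invented for illustration and are not from any actual study. The same visit-level records support an implementation analysis, where the proximal outcome is a clinician-level adoption rate, and a clinical analysis, where the outcome is a patient-level symptom score.

```python
import pandas as pd

# Hypothetical visit-level records from an imagined cluster-randomized
# implementation trial; data and names are invented for illustration.
visits = pd.DataFrame({
    "clinician_id":       [1, 1, 2, 2, 3, 3, 4, 4],
    "arm":                ["facilitation"] * 4 + ["control"] * 4,
    "guideline_followed": [1, 1, 1, 0, 0, 1, 0, 0],  # proximal outcome: adoption
    "symptom_score":      [3, 4, 2, 5, 6, 4, 7, 5],  # distal outcome: patient health
})

# Implementation analysis: the unit of analysis is the clinician (or team,
# or facility). Compute each clinician's adoption rate, then compare arms.
adoption_by_clinician = (
    visits.groupby(["arm", "clinician_id"])["guideline_followed"].mean()
)
print(adoption_by_clinician.groupby(level="arm").mean())

# A clinical effectiveness analysis of the same records would instead model
# symptom_score at the patient/visit level, with adoption at most a mediator.
```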

This set of distinctions is important in clarifying what we are doing and what we are trying to achieve, and part of the reason to be very clear about this distinction is that different theories, different kinds of research designs, different evaluation approaches, and different outcome measures are applicable to clinical research versus implementation research. These are distinct categories of research, though again, it is possible in some cases to combine elements of both. But if that is done -- and in a moment I'll talk about hybrid designs, hybrid effectiveness-implementation studies -- it's still important to distinguish the aims, the goals, the outcomes, randomization, and so on.

Yet another way of thinking about implementation science is the definition that Martin Eccles and I published in the opening editorial of the journal Implementation Science in 2006, which was adapted from previous definitions.

It's a two-part definition. The first part states that implementation science, or implementation research, is the study of specific methods or practice change strategies to promote the uptake of research findings and practices into routine care settings. The goal, of course, is to improve the quality, the effectiveness, and the outcomes of health services. Implementation science also includes the study of influences on healthcare professional and organizational behavior. I've highlighted that in blue because, in fact, when we were debating the title and scope of Implementation Science we had a great deal of discussion -- and actually not as much consensus as would have been ideal -- over the definition, over the term, and over the scope. We decided to use the title Implementation Science, but in fact it would have been more appropriate for us to label the journal "Implementation Science in Health," because the focus of that journal is on health, and it in some ways ignores the fact that there is implementation science in a number of other social sectors, in other fields and domains. Dean Fixsen was among those who were most vocal in advocating a broader scope. For a number of reasons, including the fact that the publisher we decided to partner with -- BioMed Central -- was a biomedical publisher, we decided to limit the scope to health. But there has been continuing debate, and the Global Implementation Conference that Leslie and Nancy mentioned, where we first met, did have a broader scope. We have continued to debate within the implementation science and health field how much effort we should make to reach out to our colleagues in other sectors. One argument states that there are too many differences in those sectors to allow for productive interactions, and that it would be more confusing than beneficial to collaborate and try to achieve cross-fertilization. My response and my view is: look, if we were doing fine on our own in the health field, it would be okay for us to say, we've got this covered, let's not confuse ourselves. The fact is, we don't have it covered. There is a wealth of insights and approaches that we could begin to import and benefit from in other fields, and we actually should be doing more of the sort of thing that you are doing right now -- which is to bring together a group of individuals who come from different fields and begin to exchange ideas.

I saw an announcement just recently for the 2015 Global Implementation Conference, which I hope many of you will have an opportunity to attend and contribute to this cross-fertilization process that we sorely need.

So, this is a restatement of the definition of implementation science, which again is a way of trying to convey what implementation science is all about. I believe that if we were to go back and perform some sort of content analysis on the last two to three or more years of publications in the journal Implementation Science, the vast majority of them would fit very nicely into one or more of these categories. Implementation science aims to develop strategies for improving health-related processes and ultimately outcomes, and to achieve the adoption of these strategies. This aim maps to the first definition on my previous slide; it focuses on the interventions, the practice change strategies. The field also, though, has more of an observational or basic-science orientation and set of aims: to produce insights and knowledge regarding implementation processes. How does implementation occur in natural settings, under natural circumstances? What are the kinds of barriers and facilitators to implementation, to greater adoption of evidence-based or effective practices? And what are the specific strategies that we can employ? Finally, as with any field of science, implementation science aims to develop, test, and refine theories and hypotheses, and to develop methods and measures. Again, this is another way of thinking about the question, "What is implementation science all about? What is its scope?" This is one answer to that question.

I mentioned earlier the tower of Babel problem. It continues to plague the field. It has to do with the historical origins of the field, but it basically does represent an impediment to greater progress. We see far too much work published under different labels. There are competing kinds of arguments or views or philosophies here. One of them says: look, the field came from a number of different origins; all of us in the field just need to take the time to become familiar with these terms so we can find the research, understand what our colleagues are doing, and integrate that research ourselves. The counter-argument, which is the one I subscribe to, says: we have a problem that is not only internal to us but also external. We need to clearly state what it is that we do, to highlight and explain its importance, and to achieve better recognition and legitimacy and support. We can't do that if we are speaking different languages. We need to somehow forge some sort of consensus and limit the scope of terms that we use.

I have another issue with what I often refer to as the "t-word": translation. That comes from a number of concerns, one of which is that type 1 translation and type 2 translation are very different kinds of research. Using the same term to describe both leads to confusion.

The other problem is somewhat different, and that is that there is an implication when we say "knowledge translation" -- and this is not necessarily an implication that is perceived by all -- but to me it has the feeling of: we as researchers and academics possess knowledge that we need to translate to our, perhaps, less capable policy and practice colleagues. Now, "knowledge transfer and exchange" actually gets around that by talking about, and implying, that there is knowledge on both sides and that we need to transfer and exchange it. But "knowledge translation" tends to imply a one-way transfer of knowledge, and a translation that amounts to a dumbing-down of that knowledge. For all of these reasons, I've been on what seems at times to be a bit of a one-man campaign to stamp out use of the t-word. I think that for the sake of clarity and consensus and consistency, to present a unified front to our policy and practice colleagues and other stakeholders, it would be very beneficial for us to try to reduce the number of terms and to reach some consensus on what this field is all about. The result of this debate within the planning committee at the launch of Implementation Science, the journal, was of course "implementation science" rather than other terms.

So, let me briefly talk about one other area of confusion -- one other area that requires more work and more thinking in efforts to forge consensus. That is the distinction between implementation research, or implementation science, and quality improvement. There are a number of differences, and again there are as many different opinions on these issues as there are researchers active in the field, or nearly so. For the most part, quality improvement tends to focus on specific quality problems that need to be addressed and solved right now. A common approach to quality improvement is to continue in an iterative manner to try different solutions until we solve the problem. It's a valid and appropriate approach because, as will become clear later in my talk and especially in some of the others, the breadth of barriers that we face and the number of different solutions and factors that influence successful practice change are very large. We can rarely guess correctly the first time out what it is that we need in order to close a quality gap, to change a practice. So we do need to continually try new things, evaluate their effect, and in this rapid-cycle, iterative manner continue to work until we solve the problem. That's, in some sense, what quality improvement is all about: it's motivated by a quality problem, and you need to solve that problem.

Implementation science tends to be motivated by, or to begin with, not a problem but instead a solution -- the evidence-based innovative practice or innovation -- and the idea that that innovation, in order to achieve its benefit, requires proactive efforts to achieve implementation. So it starts from the solution and forges ahead in the practice-change direction. As a scientific field, it attempts to develop and rigorously evaluate an implementation strategy across multiple sites, not to solve a single quality problem, and to do so in a way that is generalizable -- to develop generalizable knowledge at the same time. In the interest of time, I'd like to go on. There will perhaps be time later on during questions to debate this more thoroughly, but again it's another area of somewhat limited consensus and confusion within the field that relates back to the tower of Babel problem.

Let me return to this diagram, then go through a set of slides that attempt to illustrate some of the gaps in the pipeline -- some of the reasons why this very simple linear sequence is often not effective in closing implementation gaps and leading to increased adoption and uptake of the evidence-based practices that we attempt to sell and purvey.

The first gap in the pipeline is our thinking about what we mean by "clinical research." In some ways this is not a gap, but simply an elaboration or an explanation of what I mean when I say "clinical research" and what the center of that main, large pipeline is all about. It's not just clinical research on drugs and devices, but health behavior research and health services research. Much of the work that you do would fall under the health behavior category. But again, there are many other categories of clinical efficacy and effectiveness research, in different bodies and different domains in social service sectors, that are generating new innovations -- innovative treatments and strategies. That is all the focus of what I label "clinical research."

The next gap I'd like to talk about is one that I know Larry Green will address, and that he and Russ Glasgow have written quite convincingly and eloquently about -- and that is the distinction between clinical efficacy and clinical effectiveness studies. As implementation researchers, we will often lament the observation that the newest, latest, greatest finding to be published is not greeted with extreme enthusiasm by the target clinicians whose behavior we hope to change, and that they're not quick to jump and change their practices immediately. Our conclusion is: what is wrong with these clinicians, that they don't recognize or subscribe to evidence-based practice? Why aren't they adopting this finding? Well, they've been around long enough to know they just need to wait another three months or six months and yet another definitive, groundbreaking study will come out that shows the opposite finding. Again, Larry will talk about this later on, but the bottom line is that efficacy studies -- the results of those studies -- are not ready for prime time. We need to wait until we see the effectiveness studies that tell us something about whether and how this practice operates under real-world circumstances. We also, importantly, need to wait for the evidence syntheses or the evidence-based clinical practice guidelines that build on an entire body of research, in an attempt to synthesize the research and develop conclusions that tell us something with a level of confidence that we don't typically have from individual studies.

There are many other arguments and many other reasons to focus on the effectiveness studies. In some fields within the medical research community, for example, there are studies that attempt to estimate the proportion of all patients that meet the inclusion criteria in some of the so-called "large, definitive clinical studies" -- numbers of 15% and 20% are not uncommon. Again, the practicing clinicians in some ways and in many circumstances know a bit better than we do as academics that the so-called definitive findings are not necessarily definitive, and perhaps more importantly, they're not necessarily relevant to their settings, their populations, and the kinds of real-world constraints and circumstances that they face. As implementation researchers, we could argue that the problem is not with us, with our ability to develop effective implementation strategies -- the problem is with the goods that we're given; the research results, the evidence-based practices, are not ready for implementation. As with many statements in this field, there is, of course, some truth to that. So we can point the blame at those who precede us, but it's not all their fault.

Let me move to the next gap in the pipeline. This begins to move into the realm of the implementation researchers and what it is that we do, or don't do, that might contribute to the limited effectiveness and success of our implementation strategies. That is depicted by the middle segment of the pipe: document and diagnose quality gaps. The patient safety folks have figured this out, and basically don't proceed with a patient safety improvement strategy without conducting a root cause analysis. Most clinicians in medicine and other fields don't proceed with a treatment before they've completed a diagnosis. Back in the 70s and 80s, and still far too often today in the quality improvement and implementation science fields, we jump immediately to the treatment phase without taking the time to conduct the proper documentation and diagnosis phase -- to conduct the root cause analysis, to truly document, or rather diagnose, the causes of the quality gaps or the implementation gaps that we're trying to close.

There is a Cochrane Collaboration review group in the implementation science field that attempts to synthesize the results of studies that evaluate specific implementation strategies. These include opinion leader strategies, which involve a respected clinician trying to convince targeted clinicians to use a new practice; they include audit and feedback strategies, which involve documenting practices relative to the evidence-based guidelines and showing clinicians how often they follow or do not follow the guideline; the implementation strategies synthesized by the Cochrane review group also include computerized reminders, which in settings like VA and Kaiser involve a point-of-care reminder of the appropriate evidence-based practice. Again, the Cochrane review group attempts to synthesize the relatively small but growing number of rigorous trials that evaluate those strategies. The typical finding of those syntheses, those meta-analyses, is very weak effects and high levels of heterogeneity. These strategies seem to work in some circumstances and for some problems, but not others.

The problem with that sort of approach, and that way of thinking about implementation strategies as interventions, is that it's comparable in some sense to the idea that one would conduct a meta-analysis of the effects of aspirin for the treatment of headache and fever and HIV/AIDS and diabetes and language disorders and a number of others. Those clinical treatments -- those interventions -- are not meant to be cure-alls that are relevant to all problems. If we take the time to diagnose and identify the root causes of the quality gaps, of the implementation gaps, then we can sit down and think about an appropriate strategy, rather than going to some list of effective implementation strategies and pulling off the strategies that seem to have the highest pooled effect size. So that example, or that dimension of heterogeneity in the settings, in the problems, in the target clinicians -- and the appropriate matching of a practice-change strategy to the characteristics of the problem -- is one we tend to ignore too often in the field of implementation science.
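As a purely numerical illustration of the "weak pooled effects and high heterogeneity" pattern just described, here is a minimal random-effects pooling sketch in the DerSimonian-Laird style. The effect sizes are invented for illustration; they are not taken from any actual Cochrane review.

```python
import numpy as np

# Hypothetical risk-difference estimates (improvements in guideline adherence,
# in proportion points) and standard errors from six imagined trials of a
# single implementation strategy. The numbers are invented for illustration.
effects = np.array([0.12, 0.02, -0.01, 0.20, 0.04, 0.00])
se      = np.array([0.04, 0.03,  0.05, 0.06, 0.03, 0.04])

# Fixed-effect (inverse-variance) weights, pooled estimate, and Cochran's Q.
w = 1.0 / se**2
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and I^2, the share of total variability
# attributable to heterogeneity rather than sampling error.
w_re = 1.0 / (se**2 + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
i2 = max(0.0, (q - df) / q) * 100

print(f"pooled effect = {pooled:.3f}, tau^2 = {tau2:.4f}, I^2 = {i2:.0f}%")
# Typical pattern: a pooled effect of a few percentage points with I^2 well
# above 50% -- "works in some circumstances, for some problems, but not others."
```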

In this middle segment, the need to fully document and diagnose the quality gaps before we move into the stage of proposing and then evaluating an implementation strategy is one of the gaps in the pipeline that offers at least a partial explanation as to why our implementation studies don't lead to significant, sustainable practice change. Again, the problem is not just with those who feed us evidence-based practices and try to sell us efficacy findings rather than effectiveness findings. The problem also lies in the way that we take those findings or innovations and attempt to implement them.

This is part of a broader framework that is covered in one of the articles I will list, which describes the specific steps we follow in the VA's Quality Enhancement Research Initiative -- the QUERI program. It focuses specifically on that middle segment in the previous slide.

These are the individual steps in documenting and diagnosing quality gaps, implementation gaps, or performance gaps. Again, in the interest of time I won't go through this in detail, but as with many of the other frameworks it serves as a roadmap, in a sense, of the kinds of steps that are needed to fully understand and address quality problems and to evaluate solutions to those quality and implementation problems.

This diagram -- and the looping arrow over the top is what's new here -- is meant to point out the fact that even though in many cases we do need to intensively and explicitly try to implement new practices, at the same time that we in VA or Kaiser, for example, are working hard to design our implementation studies, submit grant applications, wait for funding, try to convince the sites that had agreed a year ago to participate that they should still participate, and do everything we can to try to achieve successful practice change -- at the same time that we're going through that process, practice leaders and policy leaders back in Washington are implementing new programs within VA all the time. The number of insights that can be derived from those naturally occurring, policy- and practice-led implementation efforts is often much greater than the kinds of insights that we can -- and often are not able to -- derive from our experimental studies.

Dennis mentioned earlier, in response to one of the questions, the problem of grant reviewers not liking to fund the single case studies, the observational studies, and so on. One of the reasons that we don't see more observational studies is not only the fact that reviewers don't like to fund them, but that researchers don't like to conduct them. Everyone is interested in developing the latest, greatest effective strategy and evaluating that strategy using the so-called gold-standard RCT approach, with an emphasis on internal validity, and showing a significant change. The problem with these trials -- the experimental as opposed to the observational studies -- is that they tend to be very artificial when they are researcher-led. As researchers, we identify a set of priorities, and we attempt to convince our practice and policy colleagues to pursue those priorities, which may not be theirs.

The observational studies maximize external validity, because they rely on the study of what naturally occurs rather than what is occurring in a researcher-led manner. They allow us to use much larger sample sizes and maximize our power to detect contextual influences. I talked earlier about heterogeneity. There is considerable heterogeneity across different clinics and hospitals in VA, across different schools, across different clinics within Kaiser. If we have 10 or 12 key contextual factors -- which may include organizational culture, and size, and staffing composition, and leadership turnover, and budget sufficiency and stability, and staff stability -- by the time we get to a dozen of those, our ability to understand the influence of those contextual factors in a typical RCT, where we're often limited because of cost considerations to 20 or 30 sites, is quite limited. In the Department of Veterans Affairs, with about 150 hospitals and close to 1,000 outpatient clinics, and with a good electronic data system, we have the ability to understand the effects of those contextual factors. Oftentimes in the implementation world, the main effect of the practice change intervention is much weaker than the effect of the contextual factors. There's an extreme version of this argument that basically states that if we're trying to change practice or improve quality in a medical setting, it actually doesn't matter much which implementation strategy we use; what matters is the kind of leadership in place in the settings, what kind of staff expertise and commitment, what kind of culture, and so on -- those contextual factors. That has some significant implications for power and study design, but again it points out the importance of contextual influences.
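The point about a dozen contextual factors and 20 or 30 sites can be made concrete with a back-of-the-envelope simulation. This is a hypothetical sketch with invented numbers, not VA data: it regresses a site-level implementation outcome on 12 site characteristics and shows how noisy those estimates are with 25 sites compared with 150 hospitals or roughly 1,000 clinics.

```python
import numpy as np

rng = np.random.default_rng(0)

def contextual_effect_error(n_sites, n_factors=12, n_reps=500, noise_sd=2.0):
    """Mean absolute error in estimated contextual-factor effects when a
    site-level outcome is regressed on n_factors site characteristics."""
    true_beta = rng.normal(0.0, 0.5, n_factors)                  # assumed "true" effects
    errors = []
    for _ in range(n_reps):
        x = rng.normal(size=(n_sites, n_factors))                # site characteristics
        y = x @ true_beta + rng.normal(0.0, noise_sd, n_sites)   # site-level outcome
        beta_hat, *_ = np.linalg.lstsq(x, y, rcond=None)         # ordinary least squares
        errors.append(np.mean(np.abs(beta_hat - true_beta)))
    return float(np.mean(errors))

for n in (25, 150, 1000):
    print(f"{n:5d} sites: mean |error| in contextual effects = "
          f"{contextual_effect_error(n):.2f}")

# With ~25 sites the estimation errors are roughly the size of the effects
# themselves; with 150 hospitals or ~1,000 clinics they become estimable.
```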

world, those other factors are important,but the main effect of the intervention, of the treatment itself, is important as well. in the implementation world, often the effectof that treatment is relatively weak. and the observational approach allows us to understandthose contextual factors. they also allow us to understand local adaptation processesand the effects of those. pills can't be modified. they come in a bottle,we can perhaps prescribe them in the morning or night, with or without orange juice. wecan prescribe some supportive therapies. but we don't have the ability to modify the compositionof that pill. implementation strategies, practice changestrategies, can be modified. they are modified

and adapted. they should be modified and adapted.and we should be studying how to guide that adaptation rather than ignoring it or attemptingto achieve artificial fidelity to a program that doesn't fit many of the settings in whichwe study it. again, the observational approaches allowus to study and understand those adaptation processes. let me go through just one final framework,then we should have about ten minutes or so for questions. although i've argued in many cases that weshould prefer to study naturally occurring implementation processes using observationalstudies, there are instances where we do need

to use an experimental or interventional approach.the lower right hand corner, for those of you who can see the wording, is an attemptto specify a sequence of experimental studies or trials that will allow us to make betterprogress in the implementation world. there are basically four phases. this is a frameworkthat is based in part on the fda four-phase framework for a drug trials, as well as theuk medical research council framework for evaluating complex interventions. it drawsin elements from both but is not precisely the same as both. this, which is my last slide, is an attemptin words to describe the different phases. it points out, again, under phase 1 the needfor us to conduct the single site case studies.

In the VA's QUERI program, the Quality Enhancement Research Initiative, which we launched in 1998, there was a great deal of pressure to show impact and show value very quickly. As good medical researchers, we as QUERI researchers very quickly designed large, rigorous randomized trials of implementation strategies, and we very quickly learned large lessons as to why those implementation strategies were not likely to be effective. We discovered some barriers and some flaws in the implementation and in those practice change strategies very quickly. But these were trials. And in a trial you maintain fidelity, and you maintain fixed features of the study, so you can show with high levels of internal validity the intervention-control differences. There was reluctance to allow for any of the modifications that, in many cases, were quite clearly what was needed. The single-site pilots -- or two- or three-site pilots -- allow us to learn those lessons cheaply and quickly, in two to three months. On the issue of funding: within VA we have the advantage of core funds provided to the QUERI centers that allow them to fund these pilots internally without going through a six- to twelve-month grant process. It may be, as was suggested, that that's a potential role for the foundation. But again, we need to continue to work on NIH and NSF and our other funding agencies to point out to them that moving immediately to a three- or five-year, $500,000 or $5 million RCT without first doing the pilot funding is inappropriate and a waste of the research funds, as well as the time and effort of the sites that are participating. We need to begin with those phase 1 pilots.

We then need to move into the efficacy-oriented, small-scale trials that will tell us something about the likely or theoretical effectiveness of a given practice change strategy under best-case circumstances. In the case of implementation studies that are funded with grant support, that often means high levels of study team involvement at the local sites: providing technical assistance and support, exhorting the staff to keep with the program and follow the protocol; oftentimes the funds are used to support new staff and to provide for training and supervision. Often we have Hawthorne effects due to the presence of the research team and the measurement itself. As with any efficacy study, this is a method for evaluating whether, under best-case circumstances, a practice change strategy can be effective. We often, in the implementation field, complete those studies, write them up, and essentially brag a bit about our success in improving quality. And then we walk away and go on to the next innovation, the next innovative practice change strategy.

As is the case with the clinical studies that we use to develop new evidence and innovations, the efficacy-oriented studies are not enough. We need to follow those with an effectiveness study, where the grant support does not provide additional resources, where the study team is not onsite on a weekly or monthly or daily basis, and where we have the conditions of effectiveness research. We need to demonstrate, in a much larger, more heterogeneous, more representative sample of settings and circumstances, that this innovative practice change strategy can be effective. Then and only then can we turn over that practice change strategy, in the case of the VA, to VA headquarters, and encourage the national program office responsible for quality improvement -- for example in HIV/AIDS care -- to deploy that program nationally. And at that point, as with any phase 4 study, our role as researchers is to provide arm's-length monitoring and to help observe how the program is proceeding, whether it warrants refinement, what are some of the areas where the program seems to be working better than others, and why that is.

Again, this is another framework that serves to guide the design and conduct of an integrated portfolio of implementation studies. Before concluding, what I should say is that these are idealized frameworks; many of them in fact are not completely feasible, because the number of years it would take us to go through each of these is beyond what we should be spending. I mentioned but didn't talk about hybrid studies that combine elements of clinical effectiveness and implementation research. There are also hybrid studies that combine elements of pilots in phase 1, or phase 1 and phase 2. These are also very linear, rational kinds of frameworks that don't necessarily describe the way the world works as much as the way that, in some sense, we might like the world to work. I won't spend any time trying to talk about how it actually does work; I know that will be covered to some extent in the subsequent presentations. It also is an area that needs more activity and research and contributions from all of you.

Again, this is to provide an answer to some of the key questions: what is implementation science, how does it differ from other forms of research, and how do we go about thinking about the design and conduct of a portfolio of implementation studies. With that, let me stop and open for any questions.

[applause]

>> I'll start off with the questions. My name is Kathy Yorkston; I'm at the medical school at the University of Washington, up the road a ways. I work with people -- adults with motor speech disorders: Parkinson's, ALS, MS. We know a lot about the basic physiology of speech production, and we are developing a lot of strategies to help people sound better. It's that implementation part: we get them sounding good in treatment and say, "go out and communicate well." You used the term "root cause analysis" -- we know so little about how people with communication disorders function in the real world. Would you talk a little bit more about what you mean by "root cause analysis," and how we might implement that in our situation?

>> A couple of thoughts. One is that, if I'm understanding you correctly, I would characterize the issues and the problems you're describing as within the domain of the clinical research, rather than the implementation research. If we talk about the effectiveness or lack of effectiveness of the clinical strategies that are used to support the clients, the patients, and to achieve better performance on their part, and those don't have lasting effects, I would see that as a flaw in what I've labeled the clinical intervention. There probably is a role for root cause analysis in trying to understand what it is about those clinical treatments that doesn't allow them to have sustained effects. But the implementation research would focus on the goal of trying to ensure that the therapists are delivering those effective clinical strategies. Root cause analysis in the implementation world is all about what it is about the therapists' training, or their attitudes toward evidence-based practice, or the kinds of constraints that they deal with in their daily work lives -- lack of time, lack of support, lack of skill -- that prevents them from delivering those therapies with fidelity. It does sound to me like the problem here is in the limited effectiveness of the clinical treatments.

>> Let me just give you an example. I really do think it's in the implementation. A lot of the people that we talk to -- and we talk to a lot of people about "what did you like and not like about our treatment" -- say, "I was discharged at x amount of time because my funding ran out." When you go to clinicians: "I stopped treatment not because it was not needed, but because I didn't have the funding." That's not a part of our treatment; that's a part of the implementation policy.

>> Sure, you're right. And in some ways it's a combination, because a treatment that is efficacious under best-case circumstances but is unfeasible is not likely to be effective. And the effectiveness, or lack thereof, the limited effectiveness, is a combination of some features of the treatment that are not scalable, as well as some features of the implementation process and the broader context that don't allow the clinicians to provide the proper training and use the treatment in the way it was designed and intended to be used. Again, I think the root cause analysis, to get back to your original question, is appropriate for both of those. It is a matter of understanding: is it a matter of comfort on the part of the therapist? Is it a matter of the regulatory or fiscal policies not being supportive? Do we lack the kind of equipment that we need? And so on and so forth. Those are the kinds of potential causes of poor fidelity and poor implementation that we need to understand. Much of the early work in the implementation field in medicine focused on better strategies for doing medical education. That assumes that the problem is education. Oftentimes clinicians know exactly what to do. They don't have the time, they don't have the staff support, patients are resistant -- there are a number of other barriers. No amount of continuing education will overcome those barriers, so we're solving the wrong problem. That's what the root cause analysis, or the diagnostic work, is all about: identifying the causes so we can appropriately target the solutions.

>> And the methods are interviews? Or going out to stakeholders?

>> Keep going.

>> Or looking at databases?

>> Keep going. Yes. All of the above.

>> I'm [inaudible] from the University of Oregon. I work with folks -- I study cognitive rehabilitation with people who have acquired brain injuries. I'm wondering if you could say a little bit more about the blurriness, or how to handle the blurriness, around establishing efficacy -- whether you really need to have established efficacy before you're going to study implementation, and maybe ways we can contend with that not being unidirectional, in that the implementation affects the efficacy. As you mentioned, it's sometimes difficult: whether efficacy is established depends on candidacy issues; it's not as clear. That may be answered in the notion of the hybrids -- I may be pushing you to talk a little bit about the hybrid models.

>> It is. I think the hybrids relate to both of the previous two questions. One of the papers I will circulate lays out the hybrid concepts. The idea is that this linear process would take far too long. We often have enough evidence that something is likely to be effective, and we also have enough concern about the effectiveness, depending on the implementation strategy, that we really need to do both at once. So there are instances, first of all, where we are conducting an effectiveness study and we begin to collect implementation-related data. If we are evaluating an innovative treatment in a large, diverse, representative sample of sites, without providing the kinds of extra fidelity support that we do in an efficacy study, that's our best opportunity to begin to understand something about the acceptance and likely use of, and the barriers to appropriate use of, that clinical treatment. So we begin to gather implementation-related data.

But when we are focusing on the implementation strategy, we need to continue to evaluate the clinical effectiveness and the clinical outcomes. If we have a truly evidence-based practice, and we know from a large body of literature that delivering that practice will lead to better outcomes, we can focus only on implementation; we know that if we achieve increased utilization of that practice, those beneficial health outcomes will follow. But oftentimes the evidence base for the clinical questions is not sufficiently well established.

There are interaction effects between clinical efficacy and effectiveness and implementation and fidelity, and we need to be doing both simultaneously. That's what the hybrid frameworks are all about: studying the success and effectiveness of a practice change strategy to increase adoption, to increase fidelity, while continuing to measure clinical effectiveness, so that we can determine whether we see the benefits as this practice is deployed in real circumstances, with different types of implementation support or practice change strategies. So our work on the clinical effectiveness side never ends. And again, distinguishing between these different aims and understanding their implications for sampling and measurement and analysis and so on is one of the key challenges, and one of the goals of the paper I was involved in drafting with some VA colleagues.

>> I'm Barry Guitar from the University of Vermont, and the area that I work in is pre-school stuttering. We've been using a program for about 14 years, adapted from Australia, called the Lidcombe Program. There are many studies by the group in Australia, and by ourselves as well, showing that it's efficacious. Now it's important for us to move into the effectiveness domain. But after reading Everett Rogers, I'm highly aware that the context is really important, that the culture is really important. I wondered if you had any tips about how to assess the resistance, because there is a lot of resistance, particularly in the United States, to this program. I'm thinking, okay, this is a cultural issue. Is it the culture of the speech-language pathologists who are working with pre-schoolers, which has been influenced by the belief that you shouldn't talk about stuttering to a pre-schooler? Or is it a problem of the setting -- in other words, is it not easy for the SLPs to make it work in their setting? So, any tips.

>> Again, I think the answer is probably all of the above, but if you recall the QUERI step 3 slide, where I showed the distinct steps in the diagnostic process, one of those steps is a pre-implementation assessment of barriers and facilitators, and the idea that we spend some time using the methods that were mentioned -- interviews and observation and so on -- to try to understand and make some educated guesses as to the response to an effort to implement this therapeutic approach, and to identify some of the key barriers and see what we can do to overcome them. But ultimately, we won't identify those barriers until we actually get out and begin to implement the program. Was it Kurt Lewin who talked about the need to attempt to change behavior as a way of understanding barriers, rather than assuming, in an a priori manner, that we can correctly project them? There are some other frameworks in the field that focus on the multi-level nature of influences, barriers, and constraints to practice change, and that talk about the different kinds of strategies that would be needed to address the individual clinician factors, the setting factors, and the broader social-cultural, regulatory, and economic factors. Typically the answer to the question "what is impeding implementation?" is "all of the above." It's all of these factors, and our implementation studies only tend to focus on one or two. If we only focus on education and knowledge, again, we're doing nothing about that broader spectrum of factors. The answer to the question is that the barriers and the influences occur pretty much everywhere we look. It's a much more complicated set of problems, and we need to be thinking about and using another set of frameworks within the implementation science field in health that identify contextual factors, beginning from the regulatory and broadly social-cultural, all the way down to the front-line delivery, point-of-care factors -- and to think about all of those as potential barriers to practice change, as well as potential targets for our implementation efforts and our practice change efforts.

>> Thank you. I'm Edie Strand; I'm from the Mayo Clinic. I'm just learning so much; this is so interesting. To go back to Kathy's point, for me the rapid and frequent changes in healthcare reimbursement have caused havoc in my treatment efficacy research, because in the middle of it I'm realizing this person isn't going to be able to continue. For example, right now I have a person with severe apraxia of speech who is turning 65 in a month. As soon as she turns 65 there are serious effects of the therapy cap and that sort of thing. They are freaking out and asking me what we are going to do. Well, we're changing our treatment to bring in the whole family, to do more functional communication. What I really want to do is continue my study of whether this treatment I've devised is going to help her with effective communication. So in the middle of the single-subject design work that we've learned is important -- which I'm happy to hear, since I'm in the middle of doing that -- I'm making a change right in the middle because of what the government is telling me. So, I know you can't solve that problem, but I want to bring it up as an issue for discussion. I'm anxious to hear more about the hybrid studies, and any advice people can give us when we have to interrupt our nicely designed single-subject designs in the middle to meet the patient's real needs, related to reimbursement. It's really a comment that I hope we will discuss more, because we have to do the treatment efficacy work first, before we can go on. It's a big problem for me right now, and if anybody can give me some good advice, I'd be grateful.

>> I can offer one quick response; then I know Dennis, I'm sure, has more to add. My response is that we need to be thinking about doing both. By both I mean trying to develop and evaluate therapeutic approaches that we believe will be effective and feasible given the current socio-economic and regulatory environment, but at the same time developing therapeutic approaches and studying and evaluating them even though we know that right now they're not likely to be feasible, sustainable, scalable. The reason is that when we demonstrate significantly greater effectiveness, that's the evidence that we need to lobby for the changes. I think the important thing is to recognize this from day one -- there's an important concept in the field, and someone may talk about it later, of designing for dissemination. It probably should be "designing for implementation," but that doesn't sound quite as good. It means acknowledging from the very beginning, thinking from the very beginning, about designing for feasibility -- but we shouldn't limit ourselves to the kinds of approaches that are likely to be feasible. We need to be more innovative at the same time. Dennis?

>> I just really enjoyed your talk. One of the things that came to mind is, one, I think it's really important to do the eco-anthropological type of investigation before we go do one of these things, because one of the things you find out is all these little stupid barriers. I loved your idea about looking at the regulations. I think concurrently going to the end point and observing a whole bunch of people just having that therapeutic interaction tells you a whole lot about what the contingencies are. I'm reminded that early on in behavior analysis, people talked about doing an eco-behavioral assessment before you actually designed an intervention, to pull out some of these pieces. The other thing I think is often forgotten is Trevor Stokes and Don Baer's paper on the technology of generalization. So many people don't think about the generalization features in the design of their study. Your presentation convinced me that the thing I decided to start doing a long time ago -- whenever I'm in a phase 1, just run it as a mini-effectiveness trial rather than an efficacy trial -- makes sense, because if I do the efficacy thing and it works, but it won't work in the real world, it's kind of dead. It might be a good thing for a publication, but -- I'm learning we have to think about the effectiveness right off the bat.

>> And I think that is the key point: to think about these issues. There aren't necessarily right or wrong answers, but to be aware of whether you are studying clinical effectiveness or implementation or both, and understanding whether it is an efficacy or effectiveness study.

And thinking down the line: what is the next step? What follows this study? It's not enough for us to complete our work, publish it, and say, "my job is done, someone else will come along." That's what leads to these roadblocks and these long delays in progress. That's what leads to the criticism that too much research is beneficial for academics' careers but doesn't have much value or benefit for society -- and that's not why we're here.

>> Hi, I'm Catriona Steele from Toronto. I really love the sort of diagnostic paradigm you're putting on this problem, and it makes me feel like this is our business. We are supposed to be doing diagnostic root cause analyses before we do our clinical interventions, and we need to do that here as well, in research. I work in the area of swallowing disorders, and one of the interventions that's attracting a lot of attention is the idea that people need to clean patients' mouths to deal with bacteria formation. If you go to the literature there are, first of all, arguments about who is supposed to be cleaning people's mouths -- but it's really the most depressing literature I've ever discovered about how hopeless in-service education is. I'm just wondering whether you have a provocative paradigm shift to offer about in-service education, because I think we're sort of perpetuating the same problem.

>> To me the phrase that captures this concept is "necessary but not sufficient." The education is almost always necessary, but it is not sufficient. So think about necessary but not sufficient conditions for practice change, and think about multi-level, multi-component kinds of practice change interventions and programs. Actually, in addition to my dislike of the t-word, I dislike the i-word for practice change. These are not interventions; these are implementation programs or campaigns that are multi-faceted, that have multiple components, and the clinical intervention is what we try to implement using an implementation strategy or program or campaign. These need to include multiple elements that we sometimes mix and match -- sometimes within a single study, at different sites -- because in some cases this is a leadership problem or a culture problem, and in other settings it is not. It makes for a very complicated but very interesting set of challenges. We need to acknowledge that, rather than ignore it and hope that through the magic of randomization all these factors will disappear. The main effect of any given component of an intervention -- education and others -- is very, very weak. And without combining a set of intervention components within a campaign or program, we're not likely to see any practice change, let alone sustainable, widespread practice change.
