
i would now like to introduce your moderator for today, dr. john white, director of the ahrq health it portfolio. dr. white, you now have the floor. [dr. white:] thank you so much, i appreciate the introduction. welcome everybody, delighted to have you join us on a thursday afternoon, at least on the east coast. very pleased to be able to bring you a great panel today. i think that as you go through the presentations you're going to hear, you're going to find a lot of detail. those of you who are asking "how do i do that?" and "where do we begin?" will, i think, hear things today that satisfy and intrigue you. we have some real experts here with us on the line. i'll introduce them one by one.

i do need to offer you our disclosures, for moderator and presenters. there are no financial, personal, or professional conflicts of interest to disclose for the speakers or for myself. so with that, let us move to our first presenter, who is lynne nemeth. lynne nemeth, ph.d. and r.n., is an associate professor of nursing at the medical university of south carolina in charleston, sc. dr. nemeth divides her time between teaching and research. her specific research interests focus on quality improvement and patient safety in primary care, promoting preventive services, and a patient-centered approach to practice development using electronic health records in process evaluation. lynne has been a coinvestigator, project director, and principal investigator on studies over the past decade, funded primarily by ahrq and nih,

within pprnet, which is a practice-based research network with 179 active practices in 39 us states. lynne has done a lot of fantastic work for us, and now i am going to let her tell you about our stuff. take it away, lynne... [silence for several seconds.] [dr. nemeth:] [--] which is [signal breaking up] coming from the synthesizing lessons learned project using health it. this particular project was designed to identify the strategies, barriers, and facilitators of successful quality improvement efforts by practices that are using the same electronic health record, which is the mckesson practice partner ehr. in this study i synthesized lessons learned in seven distinct studies that translated research into practice. the studies all were quite different, so it provided an opportunity for us to really explore how our network practices made change happen. i'll tell you a little bit about pprnet.

pprnet is a practice-based research network consisting primarily of small, community-based primary care practices. some of them are larger; they are urban and rural, all over the united states. our aim is to answer community-based health care questions and engage in quality improvement activities with our practices, and to transcend individual projects so that we have an ongoing network that people in these practices can be affiliated with, for their development and continual learning and improvement. so we are indeed a learning organization. as i said, we consist of small to medium-size primary care practices in 39 states. we've had as many as 224 practices as members, but there is variation in those numbers due to different arrangements, purchasing of systems, and changes in ehrs. we see some change in the numbers, so they do vary.

but overall, it is great that we have a wide national presence, and you can see on our map that we are located in many of the states - a majority of the states. our aim is to turn clinical data into actionable information. we receive practice-based extracts from practices' ehr data, and then we work with the practices to test theoretically sound interventions to improve health care quality. we are disseminating successful interventions, and for the practice we are doing quality improvement, and for the funder we are doing research. practices really like the idea that they are improving, and that's the hook for them, and we know that we are contributing to the research data on how this improvement occurs. so again, as i mentioned, the background to the study is that we have been doing a diverse set of primary care studies, and all the findings and focus of those studies were very project specific.

after a while we wanted to know what the learning was across these studies about how, overall, practices make improvement while using health information technology, and, more importantly, as we develop towards high-performing primary care teams that are rewarded for their quality, we wanted to know what is needed to develop these teams. here is the background on the seven studies that were in this particular study, this r03 that i led. our first study was trip-ii. it focused on heart and stroke secondary prevention. it was funded by ahrq in 2001 and ran for two years. we then expanded, in the next study, to accelerating translation of research into practice, where we got 100 practices in our network to participate, and we expanded the set of indicators to those generally relevant to primary care practice.

around the same time, we had a collaborator who wanted to study alcohol screening and brief intervention, and he found pprnet to be a good place to take that message into primary care. so, we collaborated with dr. peter miller to do our first alcohol study, which was an r25 funded by niaaa. we then got involved in colorectal cancer screening through nci, and a medication safety study through ahrq, which was very interested in looking at medication safety and patient safety issues in its portfolio. we became involved in a task order from the practice-based research network master contract to do a standing order project from 2008 to 2010, and that focused on screening, immunizations, and diabetes. and lastly, we just concluded our second alcohol study, which focused on adding medication to the expert intervention, through niaaa. we started with a model for how to activate practices, and this model grew out of grounded theory studies in our first trip-ii study.

the model had five elements, which involved prioritizing performance as a practice - making sure that practices, kind of, set a goal to do better. involving all staff, which is self-evident, making sure that all of the staff were on the same page about what they were prioritizing performance on, and then taking steps to redesign the delivery system. of course, activating the patient and using emr tools were the key things that were going to help them focus their efforts. additionally, we also developed a practice development model. this took a little deeper look - it actually was my dissertation study, which was concluded in 2005 but published in implementation science in 2008. it involved having a vision with clear goals, involving the team, enhancing communication systems, developing staff knowledge, really so that they could understand the rationale behind what they were doing,

and taking small steps, all with the core vision of assimilating the electronic record to maximize their clinical effectiveness, and then using feedback from performance reports that were provided by pprnet from their practice extracts, to be able to reset the vision and priorities for improvement. we found that this was very helpful for practices, to set practice goals for improvement and actually get improvement happening in the practices in pprnet. the method for this synthesis study was to do a secondary analysis of mixed methods data from the seven studies. we took field notes and observations at practice site visits during all of those studies. we also used memos taken during our network meetings, which were annual ce learning events where practices came in, learned from each other, learned best practices, and got ideas about what they might be able to do in their own practice; correspondence that we had with practices; and then, specific interviews.
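
to make that cross-case analysis step concrete, here is a minimal sketch, in python with pandas, of how coded qualitative records from several studies might be collated and pivoted into a study-by-theme matrix. the study names, themes, and field names are hypothetical stand-ins, not pprnet's actual coding scheme:

```python
import pandas as pd

# hypothetical coded records: one row per qualitative data item
# (field note, interview, memo), tagged with its study and theme
records = pd.DataFrame([
    {"study": "trip-ii", "source": "field note", "theme": "involve all staff"},
    {"study": "trip-ii", "source": "interview", "theme": "use emr tools"},
    {"study": "a-trip", "source": "memo", "theme": "involve all staff"},
    {"study": "med safety", "source": "interview", "theme": "activate patients"},
])

# cross-case comparative matrix: how often each theme appears in each study
matrix = pd.crosstab(records["theme"], records["study"])
print(matrix)
```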

all of this data was merged within an nvivo database, and i, as the pi on this project, used immersion and crystallization with the data, and created a cross-case comparative analysis matrix. it was basically taking the key features that were found across all of the studies, because that is what we were looking for: what are the lessons that cross all of the studies, regardless of specific focus? we then conducted member checking by the practice members, to make sure that we were right with what we were coming up with, and to revise it as needed. the findings were that we had 134 practices engaged in research with pprnet during those seven studies, and we formed a collaborative learning community in those practices. we also, most notably, found that practices used their health information technology and staff in new ways during the course of those seven studies in that decade,

and that these were really complex interventions that were being implemented, and that they relied on four main concepts, which are very similar to the initial two models that i showed you, but maybe more parsimonious. these were: developing a team care practice, adapting and using hit tools, transforming practice culture and quality, and activating patients. let's take a look at the model. i tend to be a very visual person, and actually i found that going to practices and presenting them with a vision helps them generate some action on this. so, taking a look at the model: activating patients is in the center of the circle here, and it is all about transformation of practice culture and quality, developing a team care practice, and of course, adapting and using the hit tools that this whole business is all about.

in order to do this you need investments in hit resources. practices need to set aside the money, the financial resources, to buy the system that works best for them. they need to set aside the resources to educate and develop their practice team and make sure that everybody knows how to use that system, and they also need to establish leadership in the practice, so that there is a go-to person who can help resolve issues regarding all of these concepts of team, tools, transformation, and patient activation. improved outcomes is the focus of this, and our practices were actually making improvements on their pprnet report data - there are numerous measures in our report: ncqa measures, patient-centered medical home criteria, and cms meaningful use attestation was starting to happen - so improved outcomes that were relevant to the practices were being seen. rewards: well, when practices started to actually see financial alignment with their quality measures, they also were able to retain staff and providers.

there was a better fit with everyone in their role, and how this worked together. of course, that is not to say that there is not variation in this, but overall, these were some of the key lessons learned. to give you an example of the specific strategies that were seen across these different studies, this matrix here shows how practices actually evolved along the way. involve all staff, for example, started in the early 2000s as new roles and responsibilities, with clinicians agreeing to decrease practice variation. that moved on to the next phase, where structured screening tools were used by medical assistants and nurses, and there were teams closing the loop - after this process of better defining the roles, providers could close the loop on what was needed. and, from 2007 onward, we started to see things like medication reconciliation and outreach as needed,

that teams were actually starting to do things that they did not do before. in terms of adapting and using hit tools, there was general agreement that staff increased their use of the ehr, from using a very small bit of it to using much more of it in the early stages, and then moving forward to starting to use specific templates for decision support, revising and editing their utilities within the record, adding macros, applying age-, gender-, diagnosis-, and medication-related templates, and, of course, lab interfaces, scanning, and e-prescribing. in the medication safety study we saw much more use of these templates - medication and diagnosis templates - that would stimulate appropriate consideration of medications prescribed and dosing alerts. transforming practice culture and quality focused on things like emphasizing goals, celebrating successes, having quality coordinators,

to having liaisons actually lead projects within the group, staff education and standing orders, and an explicit practice culture that is rewarded by pay-for-performance rewards, and again, seeing that performance reports were starting to be used for things like outreach, refill protocols, printed medication lists, etc. for activating patients, they started out using handouts and posters, having events, and press releases about what is going on in the practice, and moved to very targeted messages that focused on the kinds of studies we were working on at the time - the alcohol messages, the colorectal cancer screening, the follow-up for completion of tests for diabetics, etc. we started to see the use of patient update forms, where patients were asked to bring in all their meds, have a medication reconciliation, go through the standing orders, and find the health-maintenance procedures they might be due for; and we saw things like long appointments for medication reviews.

so, what were some of the barriers we found along the way? well, some included lack of practice leadership - not having a leader, or someone who was willing to take leadership on setting goals and vision in the practice. not having agreement by providers that this is the approach we should be going towards as a practice - you know, the idea of, we're providers and we each do it our own way. some other barriers included not having the technical support and expertise to use their tools as they should. of course, turnover, from staff and provider perspectives. organizational change and practice ownership were issues that set practices up for trouble with having continuity and making progress. and, the differences in how individuals use hit to facilitate the workflow in their practice.

however, equally, we found there were very clear facilitators. practices had policies and protocols. they started to do more with their staff education, and had leaders in the practice follow up. communication systems started to improve. the tools and templates that staff were now using were improving workflow and efficiency, and this kind of practice-wide approach reinforced the staff's adoption of extra responsibility and expanded roles. and, having providers close the loop on what staff initiated actually helped patients to adopt what was being recommended, because they were hearing it early, and hearing it often - a repeated message. in conclusion, practices expanded their use of the ehr, they added many enhanced features to support quality improvements,

they actually recognized the value and asset of their staff in being able to support quality improvement goals, and they often shared the rewards from pay-for-performance bonuses with their staff. these kinds of recognition and rewards have been very powerful motivators to the practices that have done well, and, in fact, patients are very receptive to the expanded roles of the practice team. so - with that, i will acknowledge my funder, ahrq, for supporting this r03; the pprnet practice providers and staff; of course, the research team at pprnet, which has a long history of collaboration; and the consultation that i had from brian mittman, lisa rubenstein, and elizabeth yano from the va system

on implementation science, coaching, and understanding some of the mechanisms behind all of this. with that i close. thank you. [dr. white:] thank you so much, lynne. wonderful presentation. a good description of what it is like to be boots on the ground with this stuff, with actual evidence behind it, so thank you so much. i do want to briefly mention two aspects of the presentation. the first is that if you look at the bottom of your screen, if you are following along online, there is an area for questions and answers, so if you do have questions, feel free to submit them, down there at the bottom. if there is time between presentations, and there are clarifying questions for the specific presenter, i will try to ask them between presentations.

but otherwise, for some of the more general questions, we will ask them of folks at the end, once we are through with all three. the other question that has been raised so far, that i want to address, is: will these slides be made available? thank you - i think that means you find value in the slides you are being presented. all of the materials for this, i think including the actual recording of the teleconference, will be available within two weeks on the ahrq health it events site, which is http://healthit.ahrq.gov/events. that is where you will find the materials. it takes a little while to make sure everything is cleaned up and compliant with all those standards we have to follow; that is why it takes two weeks. thank you so much. let us move on to the excellent dr. parsons' presentation on improving quality of care in ehr-enabled practices.

dr. parsons, m.b.a., m.d., is the deputy commissioner for health care access and improvement at the new york city department of health and mental hygiene. here at ahrq we like to call her farzad [mostashari] 2.0, because that is who preceded her in her job. she oversees the primary care information project, correctional health services, and primary care access and planning bureaus for the agency. prior to that, dr. parsons served as pcip's assistant commissioner, after serving as the director of medical quality, where she was responsible for creating and leading the quality improvement, billing consulting, and emr consulting teams deployed to pcip's small physician practices. prior to joining the department of health, dr. parsons worked for mckinsey and company as an engagement manager, serving clients in the pharmaceutical and medical products and global public health sectors. she earned her m.d. and m.b.a. from columbia university, and completed medical postgraduate training at beth israel medical center in internal medicine.

dr. parsons is a frequent presenter and panelist, focusing on quality improvement, electronic medical record adoption, patient-centered medical home, and meaningful use. with that, dr. parsons, please take it away. [dr. parsons:] thank you so much, dr. white. it's a pleasure to be invited to speak with you today. i hope that my presentation will cover some of the learnings that we have elucidated from the many years of working with small practices, community health centers, and other settings in new york city. i will first give you a brief overview of our program, to lay the context, and then i will go right into the lessons learned. for those of you who aren't familiar with the primary care information project, this was a mayoral initiative funded by mayor bloomberg back in 2005. it was the mission of the health department, obviously, to promote health and to prevent disease,

and our specific bureau's mission was to do that leveraging health information technology. which, if you can imagine, back in 2005 was a relatively new approach for a health department to think about how to improve the population's health. so, we are often viewed as the clinical action arm of the new york city department of health, and what we really do is, we work with practices, we help them get data, turn that into information, turn that into action, and then disseminate the findings. we have also been selected as the new york city regional extension center, so, in addition to working with the 3,000 providers for whom we procured and helped implement electronic medical records, we are working with an additional 6,000 providers who have procured their own ehrs, and whom we have assisted through our regional extension center grant. that gives us a pretty large footprint; for context, there are about 30,000 providers in new york city. we've been working with a variety of settings, a lot of small practices - we have about a thousand small practices,

and when we say small, we are not joking - practices of one to two physicians make up about 75% of those small practices. we have 31 large practices, 63 community health centers, and we're working with 54 hospital outpatient clinics. in terms of the kinds of services that we provided for the practices that we helped implement - again, that's the 3,000 providers - we put out an rfp, we selected eclinicalworks, and we worked with those practices to help them go live. we did a huge amount of work with those practices, everything from doing outreach and education, in getting them to consider implementing an ehr, back in 2005, and 2006, 2007, before meaningful use and any of the large movements were underway. we helped them with selecting the product, and we helped them get group discounts on things like it vendors and hardware - other barriers that proved to be insurmountable obstacles until we helped the practices tackle them. we also learned that we had to get practices ready to implement, and so we did a lot of readiness assessments,

and working with them to even help them find financing for the hardware, because we were not at the time subsidizing the hardware. during the time of the implementation, we really worked alongside the practices, to help make sure that the vendor was doing what they were supposed to be doing. we helped overlay some project management. many of the practices would forget to return something that the vendor needed, or they weren't really good at sticking to deadlines and timelines, and so we overlaid a huge amount of work, or a huge amount of process, around that. for the large practices, we helped them do some pre-implementation workflow redesign. for the small practices, we just did that after the implementation, but for the large practices it was not possible to go live without thinking through that ahead of time, so we did that with the large practices. we connected doctors to each other to convince them that they were not the only ones going through this pain and angst of implementation.

and then we also got them cme credits for undergoing ehr training. then they go live, and post go-live we worked with them a lot. the first thing we did was help stabilize the practices, offering them revenue cycle optimization, so that they could make sure that their billing had not been disrupted - or, now that they were getting data on their billing, a lot were shocked at what they found, and were really unable to engage in qi until we actually helped them understand how to bill properly. so that became a pay-to-play to work with the practices: they really wanted to make sure their financial base was stable, and then they would engage with us on quality improvement. i will talk about what that quality improvement and ehr consulting looked like, but we did a lot of work with them around patient-centered medical home, and pretty much anything they would engage with us on,

we were really interested in working with them, all with an eye towards improving quality. and then, over time, as the practices got to be more sophisticated - i say sophisticated not in a pejorative sense, but in terms of the use of their electronic health records - we were able to layer on additional programs, like our pay-for-performance program, which i will talk about today, and some other patient engagement programs. and, working with them on quality measurement, and, as of late, we are working with them on interoperability. so - i am going to talk about a lot of disease areas that we focus on, but the reason we focus on those is because, for new york city, heart disease, lung cancer, pneumonia, and diabetes are really the leading causes of death, and can be largely attributed to, and prevented by, you know, proper use of aspirin in people who need it,

proper blood pressure control, proper cholesterol control, proper blood sugar control, smoking reduction, and addressing obesity. so, as you see in the slides that follow, that is really why we are focusing on these disease areas, as opposed to others that might also be pertinent but are not, from our public health standpoint, as important as these leading causes of death. so, in terms of what we have seen so far: do we see improvement in the quality of care with providers who use ehrs? absolutely. this slide shows the improvements over time in practices that had gotten the electronic medical record and that we had been working with. this data shows trends from october 2009 through october 2011. we included here only the practices that were able to successfully transmit data to us. i should note that the practices have their own installations of the electronic medical record, but we built in a process whereby their electronic medical record spits out to us (that's a very technical term, by the way), [laughs]

they basically send us daily syndromic surveillance data, and monthly quality improvement data. i have to be honest, this doesn't always work for every single practice, every single month, and therefore, when we do these kinds of analyses, we have to limit them to the practices who have been steadily sending us data. so this is a subset of practices - i think it covers about 600 providers over this two-year time horizon - who were sending us monthly quality measure scores. the dark black line is antithrombotic therapy - again, the use of aspirin in those patients who have suffered ischemic vascular disease. the gray line below that is blood pressure control in people with hypertension, and you can see that has gone from 55% to 64% in two years. hemoglobin a1c testing has gone from 46% to 57%, and smoking cessation intervention has gone from almost 30% to 46%. so, we are really excited to see these trends.
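
as a rough illustration of how a measure like blood pressure control could be computed from those monthly extracts, here is a minimal sketch in python; the table layout and column names are assumptions, not the actual pcip data format:

```python
import pandas as pd

# hypothetical extract: latest blood pressure per hypertensive patient per month
visits = pd.DataFrame({
    "month": ["2009-10", "2009-10", "2009-11", "2009-11"],
    "patient": ["p1", "p2", "p1", "p2"],
    "systolic": [152, 128, 138, 124],
    "diastolic": [96, 82, 88, 78],
})

# a patient counts as "controlled" if bp is below 140/90
visits["controlled"] = (visits["systolic"] < 140) & (visits["diastolic"] < 90)

# monthly measure score: percent of hypertensive patients controlled
trend = visits.groupby("month")["controlled"].mean().mul(100).round(1)
print(trend)  # e.g. 2009-10: 50.0, 2009-11: 100.0
```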

for those of you who have been following quality improvement trends, overall the nation as a whole is doing a little bit better, but certainly the rates of improvement are nowhere near what we are seeing here in the pcip program. so do ehrs alone improve the quality of care? no. what we are finding is that we have to do the quality improvement work. it is not sufficient - and i'm sure this comes as a shock to no one - to simply enable practices, even with electronic medical records that have a lot of prevention-oriented functionality; we really do have to send folks out. our boots-on-the-ground team has been the key to making these improvements. so, since 2009 we've been maintaining a team of quality improvement specialists. their job is to educate and facilitate the quality improvement efforts in areas that we choose to focus on. but often we find ourselves aligning what we would like to do with what the practice wants to do, and finding some happy middle ground.

we work with them on, like i said, these disease areas. we use the disease areas as a way to demonstrate to them the functionality that's available in their electronic medical records. so, for instance, working them through even a blood pressure protocol, you would say, first of all, "are you comfortable with the target that blood pressure should be less than 140/90 for people with hypertension, and less than 130/80 for people with diabetes?" and we get them aligned on the goals, and then we work with them on, "ok, how do you screen?" "how do you document?" "how do i identify people who have elevated blood pressures but don't have a commensurate diagnosis of hypertension?" "how do you call those people back in for care?" "how do you run a registry of patients who had elevated blood pressure, who also had a diagnosis of hypertension,

and who don't have an appointment in the next three months, and haven't had one in the last six months?" so those are the kinds of things that we walk them through, as a way to demonstrate how their ehr can actually help address the particular disease area that we are looking at (a rough sketch of such a registry query appears below). and then we also spend a lot of time talking with them about the quality measures. importantly, the ehrs can calculate the quality measures. they get some quality measure reports from us, and we work with them on understanding what those mean, how to think about them, and how to help improve those scores. and so, we have been able to show, on the next slide - and this is a study published in health affairs this year - basically, that for practices who got zero technical assistance visits,

we used an aggregate quality measure score that was designed by larry casalino. basically, people who had no technical assistance visits from us improved very, very slowly, and actually, they only improved after a long time of being on the ehr. people who got three technical assistance visits saw some moderate improvement, again, but not at a particularly high rate. and then, those people who got eight technical assistance visits really got a lot more bang for the buck in terms of quality improvement. so - i think we understand the difficulty in sending out quality improvement specialists to each and every practice in the us. it's definitely not a very scalable intervention, but it's definitely one that worked well. in terms of our pay-for-performance, did that work to improve the quality of care? what we found is, yes, for many providers, actually.
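
(a quick aside before the pay-for-performance details: the registry query described above - hypertensive patients with elevated pressures, no appointment in the next three months, and none in the last six - can be sketched in a few lines of python. the patient and appointment tables here are hypothetical, not a real ehr schema.)

```python
import pandas as pd

today = pd.Timestamp("2012-06-01")

patients = pd.DataFrame({
    "patient": ["p1", "p2", "p3"],
    "has_htn_dx": [True, True, False],
    "last_systolic": [158, 150, 145],
    "last_diastolic": [94, 92, 88],
})
appts = pd.DataFrame({
    "patient": ["p1", "p2"],
    "appt_date": pd.to_datetime(["2012-07-15", "2011-09-01"]),
})

# elevated bp plus a documented hypertension diagnosis
elevated = patients[(patients["has_htn_dx"]) &
                    ((patients["last_systolic"] >= 140) |
                     (patients["last_diastolic"] >= 90))]

# exclude anyone with an appointment in the past 6 months or the next 3 months
window = appts[(appts["appt_date"] > today - pd.DateOffset(months=6)) &
               (appts["appt_date"] < today + pd.DateOffset(months=3))]
outreach_list = elevated[~elevated["patient"].isin(window["patient"])]
print(outreach_list["patient"].tolist())  # -> ['p2']
```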

just briefly, by way of background, we were able to get private funding from the robin hood foundation to pay doctors to do the right thing for the abcs - so, again, aspirin prophylaxis, blood pressure control, cholesterol screening, and smoking cessation intervention. and what we did was, we paid: doctors were randomized to either get no incentives or incentives, and then, within the incentives, the first year they got $20 for doing the right thing for the patient - so if they got the blood pressure under control, they would get $20. we would double it to $40 if it was a patient with medicaid or who wasn't insured, and we would double it again if it was a patient who had comorbid diseases. the idea was to pay more to treat the harder-to-treat patients. what we found was that in the first year, all the practices that participated, in control and incentives,

had some improvement in the abcs measures. for some reason, cholesterol did not show a difference in the first year. but those who received monetary incentives had higher increases and better performance. however, what was interesting is, where we tried to double down for patients who had medicaid or patients who had comorbid diseases, that doubling down did not actually work. we are not quite sure why, yet. we did not exacerbate any disparities, but we were not able to directly address disparities by paying more for harder-to-treat patients. in terms of looking at the data for this - and we are actually working on publishing this data now - where you see the as, the bs, the cs, and the ss, i noted below what those stand for. but you can see, for the control arm, for instance, in aspirin prophylaxis, their score was 54% at the beginning of the program,

and they improved to 60%, but those who were in the incentive group went from 52% all the way up to 65%. and you can see that in all of the abcs, essentially, the increase in the incentive group is much higher than in the control group. so we do believe incentives can work to improve the quality of care. we also ran a similar program here called healthy quits. we did not randomize the practices; everybody was eligible to receive an incentive. the idea, though, was that we did not pay for each time the provider did the right thing. what we said is, we looked at their baseline, and for every patient that you get above your baseline score, we would pay - so, if you were at 50% smoking cessation intervention to begin with, we would not pay you on anything from 0% to 50%, but if you got to 51%, 52%, etc., anything above your baseline we would pay for. here we involved just community health centers, and 19 of these practices participated.
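
to make the payment rules concrete, here is a minimal sketch of both schemes as described in the talk: the per-event incentive with doubling, and the healthy quits-style above-baseline bonus. only the $20/$40 doubling amounts come from the talk; the function names and the per-patient bonus amount are illustrative assumptions:

```python
def per_event_incentive(medicaid_or_uninsured: bool, comorbid: bool) -> int:
    """$20 for doing the right thing for a patient; doubled for
    medicaid/uninsured patients, doubled again for comorbid disease."""
    amount = 20
    if medicaid_or_uninsured:
        amount *= 2
    if comorbid:
        amount *= 2
    return amount

def above_baseline_bonus(baseline_rate: float, new_rate: float,
                         eligible_patients: int, per_patient: float) -> float:
    """healthy quits-style: pay only for performance above the practice's
    own baseline rate (the per-patient amount here is hypothetical)."""
    gain = max(0.0, new_rate - baseline_rate)
    return gain * eligible_patients * per_patient

print(per_event_incentive(True, True))            # 80
print(above_baseline_bonus(0.50, 0.52, 200, 25))  # 100.0
```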

so, the first thing that got better, for sure, was even just the documentation of the smoking status itself, and then real cessation intervention rates increased from 28% to about 52%. so, looking at the average increase for practices, as well as at the practices who were doing well, in the 90th percentile, you could see that increased as well. and then, when you look at the practices - i apologize for the quirky nature of this slide - the numbers at the bottom: 15, 3, 4, 18, etc., are the blinded numbers of the practices. so you could think of this graph as, each bar is a particular large practice, de-identified. and you will note that the blue was their initial rate of smoking cessation intervention. so, for instance, practice 15 was doing abysmally, because they did not have any smoking cessation intervention that was detectable.

some of that, for this practice, was indeed documentation, but they just weren't making good use of the fields, nor was their leadership focused on this. by the end of the program they had improved, somewhat, to 16%, which is still low, but we were excited to see some improvement. and then, you can see, some practices, like practice 16 and practice 13, were overall doing relatively well to begin with - about 50% of their patients were getting smoking cessation intervention - and they increased up to 58% and 65%, respectively. you can see from this slide that everybody did better, but some people did much better - practices 19 and 12 really went from doing it 6% of the time up to 69% of the time. what we found in this, really, was that it was important for the practice leadership to be bought in to the process, and to really drive it home. and, i think as the previous speaker mentioned, practices that implemented policies and procedures, and reinforced them through messaging, really did a lot better.

other practices just, sort of, didn't really want to engage at the leadership level, did not really talk about it within the community health center, and so you can see, for those practices, some improvement happened, but not much. we have also looked at whether or not patient-centered medical home certification means a higher quality of care, and we are excited to report that yes, patient-centered medical home practices do seem to provide better quality of care, in terms of clinical preventive services, which is what we studied - but, of note, they actually started out higher. what we did to look at this was, over two years, we looked at practices who were patient-centered medical home and those who were not, and we looked at seven measures. initially, practices that were patient-centered medical home had higher performance,

and they continued to have higher performance on the quality measures. interestingly, we make our quality improvement visits available to everybody - we try to reach all of our practices, but certainly if a practice requests us to come, we are more likely to go than to a practice who does not want to engage with us. of note, the patient-centered medical home practices also requested more quality improvement visits. when you look at the scores - and i apologize, we are still in the process of submitting this for publication and so the graphs are not very beautiful - the blue line here is practices that reached patient-centered medical home certification, and the line in red represents practices that did not get patient-centered medical home certification.

and what you can see - on antithrombotic therapy on the top left, and bmi, blood pressure control in patients with hypertension and in patients with hypertension and diabetes, smoking status, and smoking status intervention - is that at baseline, for the most part, on almost every single measure, the patient-centered medical home practices start off higher. and they improved. the non-patient-centered medical home practices also saw some improvement, and that is because all these practices, whether they were in the pcmh arm or not, were still doing quality improvement. we have still been doing our other interventions, so it was hard for us to strip out pcmh as the factor alone, but we do see better care in patient-centered medical home practices relative to those who are not.

we understand there is a self-selection bias and other biases that are at play here, but that's what we've learned so far. we have also been looking at, how do providers respond to performance feedback? in having the quality improvement specialists go out to the practices, they have been able to give verbal feedback, but we wanted a more scalable model to interact with providers, so we created what are called dashboards, where we take the data that they give us - i mentioned before, these practices submit to us monthly quality measures - and, as long as we get data that is complete for the practice, we turn it into a report, and we give it back to them on a monthly basis. we have gotten a great response from that, and i think we have learned a lot. practices have told us, like, "we really like to hear from you." "we feel like you are not scolding us." "we feel like you are working with us." that has been really important, so i put in here:

it is important not to be judgmental, and to make a practice feel like you are really on their team - you are their coach. this is an example of one of the dashboards. it is very busy, and i won't spend a lot of time explaining it unless somebody has a question. essentially these are sparklines - six-month trendlines. we note the practice performance, and the straight line shows the benchmark of the average of the pcip practices. for instance, if we look at a1c testing for this particular practice, their six-month trend is variable but has recently plunged to 46%, and that puts them below the average pcip practice, where 50% do a1c testing. and actually, when we saw this for this particular provider, we called them and said, "what the heck happened?" "you guys were doing so well, you were on an uptick - what happened?"
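
a rough sketch of the dashboard logic just described - a practice's six-month trend compared against the network-average benchmark, flagging measures that fall below it. the numbers mirror the a1c example above; the data structure is an assumption:

```python
import pandas as pd

# six months of a1c-testing rates for one practice (hypothetical values
# ending at the 46% figure mentioned above)
practice_trend = pd.Series(
    [58, 60, 62, 61, 55, 46],
    index=pd.period_range("2010-05", periods=6, freq="M"))
network_benchmark = 50  # average across pcip practices

latest = practice_trend.iloc[-1]
flag = "BELOW benchmark - follow up" if latest < network_benchmark else "ok"
print(f"a1c testing: {latest}% vs network {network_benchmark}% -> {flag}")
```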

it turned out they had installed and implemented a1c testing at the point of care, but they had not link-coded their a1c testing. so, they actually were not getting credit for any of the testing that they were in fact doing. you can see for blood pressure control in hypertension, for instance, this practice was at 70% blood pressure control, compared to the average at the time, which was 58% for these practices. so, we've been giving them that data. we also give them some other interesting data, like the syndromic surveillance. they don't find that very interesting, but we do! [laughs] and then, we also give them a sense of how they are billing. we give them their cpt codes for all new and established patient visits. it is a way for us to engage with them on best practices in billing and coding, because a lot of the doctors, we have found, don't understand coding at all. so, this is our initial attempt at giving them back at least some of the measures that they were giving to us.

we tried to highlight in green the measures that ended up being consistent with meaningful use measures, so that while we were handing out these dashboards we could talk to them about the impact, and the overlap between these and their meaningful use measures, on things like, for instance, e-prescribing, which the practices were historically not doing very well at. so, we have found that the providers really like this. we send these out electronically, and the quality improvement specialists also bring them to the practice and remind them they are getting these in their e-mail. some of the feedback that we have gotten from practices is, for instance, "when we saw current medication reviews in red, i went to the doctor to make sure we were reviewing all meds." so the practice administrators got this, and went to the doctor to explain that they needed to really focus on current meds review, which, as many of you know, is one of the meaningful use measures.

another comment that we got back was, "i see that dr. y isn't doing e-prescribing. if you look at what you need to improve on, you will be more cognizant when patients come in, and improve on that." that was actually the lead physician of the practice, who gave us that feedback about one of his doctors. he went and spoke with his colleague, and it reinforced the need to be e-prescribing, and so that provider went from 24% e-prescribing in 2010 to 57% several years later. so we have this anecdotal information, which is what the practices have been telling us, and then we have also done a little bit more of a rigorous look at the dashboard. if you look at the average performance of the practices before they got the dashboard, comparing the top quartile and the bottom quartile, you can really see that they varied,

like a1c testing - the top quartile was 65%; the bottom quartile was 27%. so we have a fair amount of variability in terms of quality of care that's being provided and documented - or rather, documented and provided. when we started to give them their dashboard, we took a look - i believe this was nine to twelve months after we started giving them their dashboard - and you can really see that both the top quartile and the bottom quartile improved, and, importantly, the bottom quartile improved much faster than the top quartile. so, for us this was an indication that we had really stumbled on something: practices who were getting information back, and with whom we could engage in a helpful dialogue around their quality improvement, were really willing to increase their performance. which, i think we have always said - doctors and practices are really well-meaning, and they are trying to do their best, but sometimes they feel like they don't have the tools,

and so by bringing some attention to important areas of focus, and working with them on helping to improve, we really can eke out pretty significant quality gains. so, overall - this is just another depiction of the same data - we started sending out the dashboards in october 2010, which you can probably tell just from looking at these graphs alone. practices that were above the benchmark before they got the dashboard improved, but for those in the orange, which were below the benchmark at the time of the first dashboards, you can really see the uptick graphically there. in terms of what we are working on next: this is obviously public health integrating into primary care, in a way that hasn't really been done by a lot of health departments, so we are constantly trying to explore the bounds of what we can do - how much will be too much before the practices say, you know,

"this is the health department, you've got to get out of our business." so we are trying to understand a lot about how we can really partner with the primary care setting without annoying them to pieces. we are also trying to understand how we can have access to additional data that we can then report back to the practices, because we firmly believe that, if we give them the data, and we give them recommendations, that they are willing to work on them. so we are looking at medicaid claims data, to be able to look at things like preventable hospitalizations and other, more comprehensive quality measures beyond what is in their ehrs alone. we are leveraging our diabetes a1c registry in new york city. a1c is a reportable event to the health department, so we are trying to think about how we can use that data to help providers, and also looking at how we can eventually use health information exchange data to give information back to providers.

we want to keep a continued focus on cardiovascular diseases, and we are going to work more closely now with just blood pressure and cholesterol, and see if we can make a dent in those beyond what we've already done. we're also playing around in the mental health space, trying to understand: are there similar evidence-based prompts and guidelines and data that we could get back to the provider community there, that would help them improve the quality of care they are providing and improve their care coordination? we're in lots of discussions with acos and payers, to understand how what we are doing can help them, as long as they're willing to use it in a way that helps their providers. so we are working on that. and we're also, overall, just thinking about how to use this data in a scalable way to improve quality, and also starting to explore whether or not we want to look at how we can have an impact on cost,

and thinking about things like reducing redundant testing, or making sure that providers have all the information they need to make the right decision - for instance, not admitting a patient who does not need to be admitted just because the provider did not have access to the ekg showing that the finding seen today in the er is actually an old finding, not a new finding - and so we are trying to understand whether there are ways that we can impact that. lastly, we're also trying to posit some thinking around: what is the validity of quality measurement out of electronic medical records? we published a paper last year, in jamia, that looks at the validity of quality measurement. what we found is, on things like body mass index, and aspirin therapy, and vaccinations, essentially, what you get out of the ehr is really what is happening.

but if you compare, for instance, mammography screening - you pull that out of the ehr from a quality measure, and then you compare that to an e-chart review - what you find is, there are many areas where doctors don't actually do a good job of documenting, or they don't get the information back. mammography screening is a great example of that, where it looks like the screening rates are something like 15%. that is because, in the quality measure, we say: of all the people eligible for a mammogram, how many of them had the test ordered, done, and resulted? a lot of doctors tell us they don't actually get the mammography results, or when they do, they get them by fax and stick them in patient documents, and they don't actually structure the data in a way that makes it detectable by the quality measure.
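
in code terms, the point is that the numerator requires structured evidence at every step - ordered, done, and resulted - so a faxed result filed as an unstructured document never counts. a minimal sketch with hypothetical fields:

```python
# hypothetical per-patient records for women eligible for mammography
patients = [
    {"id": "p1", "ordered": True, "result_structured": True},   # counts
    {"id": "p2", "ordered": True, "result_structured": False},  # faxed result
    {"id": "p3", "ordered": False, "result_structured": False}, # no order
]

denominator = len(patients)  # all eligible patients
# numerator: order placed AND a result captured as structured data
numerator = sum(p["ordered"] and p["result_structured"] for p in patients)

rate = 100 * numerator / denominator
print(f"mammography screening measure: {rate:.0f}%")  # 33% - understates p2
```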

so, we've been doing a lot of work to try to understand: how is it that people should digest these quality measures that come out of the ehrs? and importantly, how do you feed the data back to providers so they can understand, for instance, where they are recording smoking status? many of them recorded smoking status in a location that was detectable by the quality measure only 50% of the time. so, they have a 50% drop in their performance just from documentation problems alone. we are looking at additional quality measures and trying to understand what it will take to get them to the point where what you get out of the quality measure is a true reflection of what is actually happening at the practice. so that, in a brief nutshell, is what we have learned so far from our project, and i am going to turn it over now to our next presenter, but i will be happy to take any questions that you have. thank you. [dr. white:] thank you very much, amanda.

as i presaged for our audience, very detailed, and satisfying, and a lot of great stuff happening there. there are a few questions that came in from the audience that i do want to ask you directly. just a couple. the first one was, do you have a control population for the areas that did not implement an emr? i'm assuming by "areas" that probably means "practices," although it might mean location. so, certainly for some of the incentives, right, you presented control populations, but for the emr implementations, did you have control groups that you were able to compare against, where they did not have emrs implemented? [dr. parsons:] unfortunately, no, we did not - we didn't roll out our ehrs in a randomized [laughs] clinical trial methodology. it was basically a first-come, first-served coalition of the willing at the time.

what we are trying to do is look at - well, we do know, obviously, who has an electronic medical record that we gave them, so we can do comparisons in some of the data sets, like the [--] medicaid claims data sets, and we've been working with the state to do comparisons. unfortunately, we can't tell if they have an ehr at all. for instance, if it's a practice that is not known to us, we know we did not implement their electronic medical record, but we have no good way of knowing whether or not they actually have an electronic medical record at all, so we really would be comparing practices who are in our program versus not in our program, and not so much the more interesting question, which is, those practices that got an ehr versus those practices that didn't. we are constantly looking at different feasible opportunities to do that, but have not yet come up with a good, robust mechanism for doing so. if anybody has any ideas, please feel free to e-mail me - we are really open to that.

[dr. white:] you know, it's funny. for a couple years now here at ahrq, i have said, we really need to have somebody - i hope somebody will submit to us, a two-by-two study, you know, two-by-two box where one side of the box has health it, the other side doesn't have health it, and then, the other axis of the box is high-quality care and low-quality care, so you could compare and contrast between them, and try and figure out why each one sets up the way it does. so, i thought that might be the question. a few questions came in about the results on slide 46. elizabeth, it looks like you have the ball, could you go backwards one slide just a second here? ok, there we go. so a couple questions, karen said, can you share the citation for the study on ehrs and quality measures from slide 46?

it looks like a few folks have asked that. i think you mentioned jamia - do you have anything more specific? [dr. parsons:] yeah, let me pull - what i will do is, i will go in and find the specific citation, and i'll type that into the text box. [dr. white:] ok, perfect. i want to ask you two more quick questions. i appreciate you doing that off-line, because we do want to get to elizabeth here in a second. venita(sp?) asked: i noticed, on the slide for providers who were incentivized, that the groups labeled mk, which is probably medicaid, had results that were in some instances less than the control group. she was wondering why. just didn't know if you had any insight you might offer on that?

[dr. parsons:] let me turn to that slide. so the question is whether or not - the increase is less, was that the question? [dr. white:] it says, the results were in some instances less, so the control group - [multiple speakers]. [dr. parsons:] it is a great question. a lot of research has been done to show that patients who either have medicaid or are underinsured, are from a lower socioeconomic status, and those who have more comorbidities, are harder to treat, so that is why you see them starting off at a baseline that is lower than patients who are not medicaid patients, or who do not have other comorbidities. they have been shown not only to have a higher burden of disease, which would explain the baseline, but also to be harder to treat, for everything - there are more barriers to them actually being able to access care,

and to actually fill their prescriptions, and they have lower rates of medication adherence, etc. i think that also explains, for us, the lower improvement in performance - they are just harder-to-treat patients. essentially what you are looking at, there, is health disparities measured - [dr. white:] made manifest, right? [dr. parsons:] [laughs] made manifest, agreed. [dr. white:] "risk adjustment" is the phrase that comes to mind, but i think that is really important for folks who are digging into this stuff. we look, and sometimes we'll see that things are kind of counterintuitive. so there is one more question that i am going to actually take a quick stab at myself. it came up for your talk, but i guess it might be applicable to the others.

the question, from diane, was: is the definition of "quality improvement" the actual improvement in patient outcomes, or that the measure was appropriately documented? and i think, based on what you said, and what others are going to talk about, i am pretty sure that when you talk about quality improvement, you're actually talking about improving quality - you're talking about the right things happening more often, etc. i think what you find, when you start measuring these things, is that you've got to measure it based on something, and unless you're going to sit there and have somebody check off a box every time the right thing happens, you've got to go to the records, and, sometimes, what you are looking for is not there in the records. so - even though, when you get them to document better, quote unquote, the quality improves, it is not actually the quality improving; it's just that you're doing a more accurate measurement of it.

does that sound correct to you? [dr. parsons:] yeah, i think some of the measures are more susceptible to documentation [multiple speakers], mammograms, smoking cessation, but things like blood pressure, you know, there are not really that many places within the electronic medical record that you can put a blood pressure. usually it's in the blood pressure field, and there are not usually multiple versions of that. so the quality measure tends to be pretty accurate, and so, for a blood pressure control measure, people have said, isn't that just documentation? no, actually, i didn't go into detail, but i did just send over the citation. if you look at the paper, we show that, really, blood pressure is put in the right place 99.9% of the time, and so what you're looking at is actually a true patient outcome - unless you believe the documented blood pressure is not correct, in which case, it would not be a reflection of the patient's health outcome.

[dr. white:] sure. okay. if you wouldn't mind, while you are listening to dr. alpern talk, if you wouldn't mind taking a look through the questions that came in, and see if there's some that you might want to answer, i would love it. great talk. i do want to make sure that elizabeth has enough time, so - let me introduce to you, dr. elizabeth alpern, m.d., m.s.c.e. dr. alpern is an associate professor in the department of pediatrics, and director of research in the division of emergency medicine, at the children's hospital of philadelphia, the university of pennsylvania, fondly known as chop. when i was up in lancaster, it was the big house where you found the really sick kids. dr. alpern is both a pediatric emergency physician and clinical epidemiologist, and has particular interest and experience in the use of large databases within a research network,

and improving the quality of care delivered to children in the ed. she has been a member of the pecarn steering committee since the network's inception in 2001. she is principal investigator for an ahrq-funded project to develop the pecarn emergency care registry, an electronic medical record registry with the goal of improving quality of care using data derived from this registry. dr. alpern is also the pi of the pecarn core data project, pcdp, a database of over 6 million visits to all sites within the network over the past 12 years. dr. alpern, you already have a question kind of queued up for you. lynne has asked, are there any pediatric-based studies? ehrs are notoriously not kid- or prevention-friendly, and many of these conditions have foundations in childhood. so, i am looking forward to you trying to talk about that a little bit in your talk. take it away, dr. alpern.

[dr. alpern:] thank you, dr. white, and thank you for asking me to participate in this webinar. lynne, i hope i will answer somewhat of your question. i would like to acknowledge the support of ahrq for this project, and also the pecarn network infrastructure, as it is supported through hrsa, mchb and emsc. i would like to take a moment to point out the rationale and impetus behind our project. as noted in the iom report, emergency care for children: growing pains, the emergency care provided to children nationally is highly variable. at times it is really outstanding, and at other times significantly lacking. this variability presents an opportunity for improvement. in attempting to enact this improvement, it has become evident that basic administrative data is really severely limited in providing adequate reporting of performance measures needed to assess and improve the quality of care provided to our children.

we identified the possibilities introduced by recent and ongoing advances, particularly the wealth of patient-centric data available in electronic health records, used within emergency care settings, as well as technical advances, such as natural language processing, to help explore that wealth of data. we believe this combination may provide an opportunity to harness and strengthen the ehr data, measure the quality of care, and impact change. there were three overall aims of our project. the first is, develop an emergency care visit registry for pediatric patients. the registry is merged from electronic health record clinical data from different hospital emergency departments, with differing electronic health record data sources. this new registry will serve as the foundation for current and future studies, obviating the need for resource-intensive chart review.

the second aim will use this registry to collect stakeholder-prioritized emergency care performance measures, for important pediatric medical and trauma conditions, at the level of both the emergency department and the individual clinician. using performance measure data, we will develop achievable benchmarks of care to promote performance improvement. the third aim will use this registry to report performance to individual department clinicians and sites. we will test the hypothesis that providing regular performance measure feedback will improve performance and decrease variation among emergency department clinicians and sites. i will now give a brief overview of the entire project, and then will focus on particular successes and barriers that we have encountered to date. as you can see in the schematic of an overview of our project, starting on the upper left-hand side, we are taking electronic health record data from each and every patient emergency department visit from four sites within the pecarn network.

each of these sites has a main and a satellite emergency department, which will contribute data for a total of eight study sites. the visit data is transmitted monthly to the data coordinating center of pecarn, which is at the university of utah, and i will review the transmission and the de-identification process of the data in detail in a few moments. this data then forms the pecarn registry. performance measures are derived from the data within the registry, both through natural language processing, or nlp, and directly from discrete data variables. these derived performance measures will provide the information to build automated site- and clinician-specific report cards, and our hope is that, by providing regular performance measure feedback, we can improve performance and decrease variation among ed clinicians and sites, using a time-series study. we are currently approximately two-fifths of the way to completion of our project, so i will go over what we have already completed, and also our future plans.

the sites involved in the study are listed here, and as you can see, we are involving four geographically distinct sites, with three different ehr vendors. we do, however, have some overlap between the vendors, and i will speak more to the pros and cons of particular aspects of this in a few moments. i was asked to review successes and barriers within our project to date, and i think that it's important to point out the components of the process that enabled our successes, even prior to the start of the registry project. the pecarn network has a very strong and successful infrastructure that has benefitted this project greatly. in particular, the network has been running the pecarn core data project, as dr. white mentioned, or the pcdp, for the last 10 years. the pcdp is an extant administrative database derived from all sites within the network, including all ed visits to those sites collected on an annual basis. this process has been successfully led by the investigators involved in the registry project,

and has allowed for prior collaborative projects between these investigators, the sites and the data center. it has given us the scaffolding to start our extensive work with the ehr that has led to the registry project. another previous project of the pecarn network and the investigators of the registry project was to develop stakeholder-endorsed quality performance measures that comprehensively reflected pediatric emergency care. a selection of these performance measures is utilized in the pecarn registry project. the website provided on the slide has all 60 measures available and defined for open use. members of the pecarn registry have also utilized natural-language processing in prior research. nlp and free-text parsing allow data from the discrete data elements, and the narrative component of the ehr, to be more fully utilized. however, medical narratives have helpful components, such as the recurring use of stereotypical phrases like "alert and oriented,"

that can be exploited by natural language processing. there are, though, also difficulties introduced by typical medical writing, such as the unpredictable use of negation terms and punctuation. for example, "no vomiting comma fever" is very different from "no vomiting period fever." prior to our project commencing, we did an nlp pilot as a test of concept, using one of the projected performance measures: evaluating the presence of seizure activity on arrival to the emergency department, which will ultimately be used to determine the quality measure of timeliness of treatment with antiepileptic drugs for patients with status epilepticus. we extracted a sample of records enriched in cases with seizure-related diagnoses. a natural language processing algorithm to classify the presence or absence of seizure activity on arrival to the emergency department was compared to administrative data determination, which used only triage acuity level and discharge diagnosis.
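
to make the comma-versus-period point concrete before the pilot results, here is a minimal sketch of rule-based negation scoping; the trigger list and scope rules are illustrative assumptions, not the project's actual nlp pipeline:

```python
# a rule-based toy: a period closes a negation scope, a comma does not.
# trigger words and scope rules here are illustrative assumptions only.
import re

NEGATION_TRIGGERS = {"no", "denies", "without"}  # hypothetical trigger list
SCOPE_BREAKERS = {"."}                           # a period ends the scope

def negated_terms(text):
    """return the set of tokens that fall inside a negation scope."""
    tokens = re.findall(r"\w+|[.,]", text.lower())
    negated, in_scope = set(), False
    for tok in tokens:
        if tok in NEGATION_TRIGGERS:
            in_scope = True
        elif tok in SCOPE_BREAKERS:
            in_scope = False           # "no vomiting. fever" -> fever kept
        elif in_scope and tok != ",":  # "no vomiting, fever" -> fever negated
            negated.add(tok)
    return negated

print(negated_terms("no vomiting, fever"))  # {'vomiting', 'fever'}
print(negated_terms("no vomiting. fever"))  # {'vomiting'}
```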

the gold standard was manual chart review performed by two investigators. as you can see here, the nlp was better in identifying patients who were in status epilepticus on arrival to the emergency department. we were encouraged by these results, which suggested that even modest natural language processing efforts can markedly improve the precision and usefulness of the ehr data. in addition to the prior two preliminary processes, once we started the pecarn registry project we were able to further enhance our productivity by relying on previous work. as i will illustrate in a few moments, the pecarn registry data variables are quite inclusive, and contain all of the more limited previous pcdp variables. this allowed us a natural experiment of sorts, to assess the proposed expanded data transfer process of the registry against the previously validated pcdp elements.
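
the transfer comparison described next relied on an xml transfer with an xsd verification schema; as a hedged sketch of what the receiving side of such a check could look like (the file names and the schema itself are hypothetical stand-ins, and lxml is simply one open source option):

```python
# a sketch of validating one site's xml extract against an agreed xsd
# before loading it into the registry. file names and schema names are
# hypothetical; lxml is one open source library that can do this.
from lxml import etree

def validate_extract(xml_path, xsd_path):
    """return (is_valid, error_messages) for one submitted extract."""
    schema = etree.XMLSchema(etree.parse(xsd_path))
    doc = etree.parse(xml_path)
    if schema.validate(doc):
        return True, []
    # collect line-numbered messages for the site's data quality report
    return False, [f"line {e.line}: {e.message}" for e in schema.error_log]

ok, errors = validate_extract("site_a_2012_01.xml", "pecarn_registry.xsd")
for msg in errors:
    print(msg)
```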

we compared the 2012 pcdp data to select pecarn registry variables extracted and transmitted to the data coordinating center, utilizing the new xml transfer and xsd verification schema. from this we learned to allow the same vendor sites to share the programming burden, and used an open source tool to extract, transmit, and load the data. this also clued us into the added burden for the sole vendor sites within the project. we are currently building the registry with the 2012 data delivered from each site, and planning ongoing monthly submissions. the process that we have developed for the registry build is to extract one day of data from each site, which is transmitted without de-identification to the data coordinating center. this has allowed for a robust and site-specific de-identification procedure to be derived in a joint venture between the data coordinating center, investigators with de-identification expertise at the center for biomedical informatics at the children's hospital of philadelphia,

and technical support from investigators at each individual site. we then extract and de-identify one month of calendar year 2012 at the site, to allow for quality assessment of both the variability of the variables and the de-identification process. we are currently completing this quality assessment, and will then move forward with transfer of the entire de-identified calendar year 2012. to allow for the data extraction, the registry investigators developed an explicit data dictionary for the definition of all variables, as well as the location and identification of those variables within each site's ehr. as you can see from the next slide, the pecarn registry variables are quite extensive. they include full demographic and patient-centered ed visit information, each with a date and time stamp, such as vital signs, orders, and therapeutics, including medication, clinical assessments, narratives,

radiology and lab orders and results, procedures, diagnoses, and disposition. the performance measures, which will be populated with data directly from the registry, are listed here. they include both general and cross-cutting measures. those on the slide are more within the site purview, such as time to diagnostic image availability, and as you can see from the next slide, there are several that are more within the locus of control of the provider. nlp, in concert with discrete variables, will be used to derive the measures as indicated here. i would now like to describe in detail several of the successes and barriers that we encountered to date. due to the extensive nature of the data needed for the registry, and the combined human subjects concerns of both patients and clinicians involved in the study,

we have been very excited by our consistent successes in gaining irb approval at all sites. this process was certainly helped by early and ongoing discussion of the project with irb chairs and hipaa officers at each site. we developed and rigorously held fast to a comprehensive, single multi-center irb protocol, which was reviewed prior to submission by select irb chairs. in addition, this process was facilitated by existing relationships, including standing business associate agreements between the sites and the data coordinating center. the barriers we encountered included the sheer amount of phi involved, and the desire to keep the risk of disclosure to a minimum. this meant significant, time-consuming quality assessment of the de-identification process. we learned that there is an ongoing interplay between flexibility and security, and that amendments may be necessary to keep up with the required technical changes in the process.
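
as a toy sketch of the kind of de-identification pass being described here - redacting clinician names unless they fall inside an allowlisted clinical phrase (a pitfall dr. alpern returns to later with the "white matter" example), and shifting dates by a consistent per-patient offset - with all name lists, phrases, window sizes, and offset logic being assumptions for illustration:

```python
# illustrative de-identification pass: redact clinician names unless they
# sit inside an allowlisted clinical phrase, and shift dates by a stable
# per-patient offset. all lists, window sizes, and offsets are assumptions.
import hashlib
import re
from datetime import datetime, timedelta

CLINICIAN_NAMES = {"white"}  # hypothetical name dictionary
CLINICAL_ALLOWLIST = {"white matter", "white blood cell"}

def redact_names(text):
    def repl(match):
        # keep the token if nearby context matches an allowlisted phrase
        window = text[max(0, match.start() - 20):match.end() + 20].lower()
        if any(phrase in window for phrase in CLINICAL_ALLOWLIST):
            return match.group(0)
        return "[NAME]"
    pattern = r"\b(" + "|".join(CLINICIAN_NAMES) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def shift_date(patient_id, dt):
    # hash the patient id into a stable 1-364 day offset, so every date
    # for the same patient shifts by the same amount
    offset = int(hashlib.sha256(patient_id.encode()).hexdigest(), 16) % 364 + 1
    return dt - timedelta(days=offset)

print(redact_names("seen by dr. white; mri shows white matter changes"))
print(shift_date("patient-123", datetime(2012, 3, 15)))
```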

we also noted that within the full ehr, including all narratives, reports and results, phi can live in some very odd places, such as the lab result documentation indicating a callback to a particular clinician. as i pointed out previously, this project depends on evolving technology and on investigators who understand these methodologies. i truly believe our successes in this component are due to leveraging a strong, previously existing collaborative group of individuals within a dedicated and supported national network. this allowed us to ramp up quickly, build upon prior pecarn work together, including harnessing expertise and past successes in database design, development, data transmission, and ehr utilization. we were able, across the sites that shared an ehr vendor, to work as a single team with shared experiences and expertise, to allow for data extraction and submission together.

however, even within those sites utilizing a shared ehr, we learned that the exception is the rule: each ehr is customized to the workflow of that individual site, and, perhaps even worse, is subject to change at any moment, truly proving the point that the only thing constant is change. the sites without an ehr buddy within our project had a concentrated workflow, and even more difficulty was encountered at the satellite site without active academic technology support, as the workflow of this site is significantly different from others in both availability of personnel and institutional support. our extensive data transmission was enabled by utilizing a scale-up process that started with a single day of data. this small bite was perfect for allowing a quick workflow that was responsive to the changes indicated by the rigorous quality assessment. we developed extensive, automated data quality reports that are produced each time data is submitted, and then resubmitted,

which allows for quick review by all investigators. needed modifications were identified by the report, and the limited day of data allowed for facile changes in the extraction script. this iterative process was planned for within the grant timeline and fully utilized. however, as there are two sides to each coin, this iterative process was highly successful, but also costly in terms of personnel and computing time and energies. we are also cognizant that ehr changes occur at all times, and that a quote unquote perfect single day of data may not generalize to a perfect month or year of data. so we need to be sure to continue to move forward within the iterative process, as it is possible to fall into the perfection time sink. here is an example of one of our quality reports.
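
a toy version of the disposition mapping on the quality-report slide that dr. alpern walks through next might look like the following; the field names and unit codes are invented for the sketch:

```python
# toy normalization of a raw ehr disposition value; the field names
# ("raw_disposition", "units_visited") and unit codes are invented.
def registry_disposition(raw_disposition, units_visited):
    """map a site's raw disposition value to a registry category."""
    raw = raw_disposition.strip().lower()
    if raw == "discharged":
        # "discharged" straight from the ed and "discharged" after a
        # 23-hour observation stay look identical in the raw field, so
        # disambiguate on whether the visit passed through observation
        return "observation" if "obs" in units_visited else "discharged"
    if raw in {"admit", "admitted", "inpatient admit"}:
        return "admitted"
    return "other"

print(registry_disposition("Discharged", ["ed"]))         # discharged
print(registry_disposition("Discharged", ["ed", "obs"]))  # observation
```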

as you can see here, to map even a seemingly simple term like discharge disposition from the emergency department, we had to map variable expressions within and between sites, and we found that even within one site's ehr, the term "discharge" isn't so simple. if a patient is discharged directly from the ed, their disposition is "discharged." however, as you can see here, if they are admitted to a 23-hour observation unit, and ultimately discharged from the observation unit, their ehr disposition is also "discharged." we needed to determine how to properly identify this subgroup's ed disposition as observation. as i pointed out, de-identification of significant amounts of phi is necessary in the construction of the registry. our process for this has been enhanced by exploiting both centralized and decentralized at-the-site expertise,

utilizing a computer program to allow for extensive de-identification. this process leveraged our past experiences and knowledge of each type of ehr, and resulted in our ability to deal with all phi and date shifts for de-identification. we have noted, though, the tension between rigorous de-identification needed and the possibility of over de-identification, making narratives uninterpretable. for example, to use a ready example of dr. white's name, if the de-identification program identifies the name white, as in dr. white, there is the chance that white matter in the brain will also be de-identified if the program is not appropriately modified. in order to balance this tension, we have instituted a de-identification quality assessment that is both labor-intensive, and will need periodic checks due to possible ehr changes. [dr. white:] i think it turned out just the way we planned it.

[dr. alpern:] [laughs] at this point, we'll be heading into the remaining component of our project, and i am sure we will find new successes and barriers as we move along. i will quickly go through this component, as i see we are running a little short on time. what we are planning to do is, from the calendar year 2012 data, derive achievable benchmarks of care for each of the performance measures, and we will hold an expert panel discussion to identify ideal benchmarks, as they may differ from the mathematically calculated benchmarks. we will be producing automated clinician- and site-specific report cards, and the clinician reports will be anonymous to all but that particular clinician. we have used input and experience from one of our investigators from another ahrq-funded grant to help design our report card. we will have monthly ongoing transmission, with distribution of report cards, and assess the effects of providing this feedback on a site and clinician level.
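
for context, one published way to compute an achievable benchmark of care is the pared-mean approach of kiefe and colleagues: rank providers by performance and pool the best performers until they cover a minimum share of eligible patients. whether the pecarn calculation follows this exactly is an assumption, so treat the sketch as illustrative:

```python
# pared-mean sketch: rank providers by performance and pool the top
# performers until they cover at least 10% of eligible patients; the
# pooled rate is the benchmark. thresholds and counts are hypothetical.
def achievable_benchmark(providers, min_fraction=0.10):
    """providers: list of (numerator, denominator) per clinician or site."""
    total = sum(d for _, d in providers)
    ranked = sorted(providers, key=lambda p: p[0] / p[1], reverse=True)
    num = den = 0
    for n, d in ranked:
        num, den = num + n, den + d
        if den >= min_fraction * total:
            break
    return num / den

# hypothetical per-site (measure-met, eligible-visits) counts
sites = [(90, 100), (40, 50), (300, 500), (100, 250)]
print(f"benchmark: {achievable_benchmark(sites):.1%}")  # 90.0%
```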

our overall goals hearken back to the rationale of conducting a project to evaluate our care of children in the emergency department, decrease variability, and also improve the quality of care they receive. however, the registry we are building to conduct this project has a wealth of data, and will be a powerful tool in the future for ehr-based comparative effectiveness work. as i previously said, most, if not all, of our successes are due to an amazing team of researchers within the sites and the data center, and i truly want to wish - to thank each of them for their contributions. thank you for the opportunity to speak. [dr. white:] superb. well, i promised you three really tight presentations packed with information, and that is exactly what you got, that was pretty amazing. so, listen, a lot of really -

elizabeth, i don't have any specific questions for you, i think, at this point. here's what i'm going to say - it's 2:55, we have just got a few minutes left, and a lot of really good questions have come through on the transom. we're not going to have time for all of them, so here's what i'd like to do. i'd like to ask one of them that's kind of relevant to all of the rest of you, and i would encourage the participants who've been listening to - the presenters have been really gracious in sharing their e-mails with you on these slides - that if your question does not get asked, take the opportunity to reach out to these folks who, really, are on the cutting edge of doing this. they have been working hard at this for several years, so you should take the opportunity to tell them thank you for the presentations, as well as ask them questions.

if you want an answer to that question that doesn't get asked, please reach out to them, and ask them the question, and they have graciously agreed to be available for you. the one question that i do want to ask you all, you've described a lot of great work, and as elizabeth's slide just showed, it takes a village. there's a lot of folks that are involved with doing this. so the question that came to us from cynthia is, have you thought about teaching quality improvement specialists at each site? for instance, super users, who can be on-site employees. lynne offered a quick response that it would be great for practices to appoint super-users, but i want to give you all a chance to describe in a little more detail any efforts you may have made along those lines, or other efforts, like the extension program run by the office of the national coordinator, that you may have experience with.

any of you that wants to answer would be wonderful. [dr. nemeth:] well, i'll just take a stab at it. this is lynne nemeth, and we have not gone through any of those other funding mechanisms to appoint super users at each practice, but we find that there is a natural selection process that starts to happen in some of the practices we've worked in, where unit- or practice-based liaisons start to rise to the top and start assuming that role of being a super user who teaches others. it would be great to have that as a designated role, but that is a costly piece of a practice's budget, so not all practices can afford to do that. [dr. white:] amanda or elizabeth, any experience with this? [dr. parsons:] for the small practices, it's been really hard to identify who that person would be, because in general we find it's one or two doctors and then a more entry-level receptionist person,

and the doctors often don't have the time to be the quality improvement specialist, so i'm not sure we have a good solution for embedding this at small practices. certainly, at the community health centers, we work very closely with their teams. they often have one or more qi folks, and so we do work with them. we do collaboratives with them, we engage them on the data, so they are very much a part of our train-the-trainer model. we are not formally responsible for training these people in qi, but we do train them on the methodologies that we use to improve the quality of care, and so that is in addition to whatever training that they get. i think it is a big opportunity nationwide to think about how to really scale up the workforce that's needed

to use health it as a methodology for improving the quality of care, but so far that has really been done, i would say, rather haphazardly on our side, [laughs] really the - working with the coalition of the people who are willing to work with us. [dr. alpern:] i just briefly will say we were very lucky that we were working with qi experts already within the project, but did reach out to each site, and, especially the satellite sites' administration and qi experts, they are most interested in this data and can foresee that, despite the significant work it involves, the payback that they'll get exceeds that, and so were very supportive of the project. so, i do think tapping into what is already there has helped us greatly. we haven't done a grassroots project in growing.

[dr. white:] good answers from all. on that note, we are at 3:00, so i want to thank everybody for participating in these three excellent presentations. thank you to our presenters. for those of you who, like me, like finding that rat pellet at the end of the maze, we do have cme and cne credits available. this last slide tells you that you can get up to 1.5 contact credit hours, and a link to the online evaluation system will be sent within 48 hours after the event. so, thank you very much everybody, thank you, doctors, for being online with us, and we hope you all will look forward to the next ahrq health it teleconference coming up in the near future. thank you, everybody.
