Saturday 31 December 2016

Nanda Nursing Diagnosis For Depression

what i'm going to talk about for just a few minutes this morning is sort of why we made the changes we made and how we went about identifying what was important. i'm a geriatrician. i take care of patients, and i'm also a researcher. so what we tried to do was bring together clinical knowledge and understanding with the science that we know about assessment. when we started this project, we said, you know, change is really hard, and we don't want to just make people change just to change. so what we said was, "what are our goals going to be?" and these are the goals that we think we've accomplished with mds 3.0, and the evidence that i'm going to show you hopefully will support that.

the first was that we wanted to be sure this instrument gave resident voice. and you heard from mary this morning about resident voice. we wanted it to be more clinically relevant, so that it wasn't just a form that everyone was having to fill out. we wanted it to be accurate, and we defined accurate as being more valid -- really measuring what we want to measure -- and more reliable -- that two people looking at the same resident would come up with the same assessment values. we wanted it to be clearer. and finally, we wanted to reduce the burden on the folks completing this form. and in our study, we found that we reduced the time to complete it by 45%.

so why resident voice? well, you heard mary talk about what a high priority this is for the centers for medicare & medicaid services. it really conveys that we respect the individual's voice, and it's fundamental to improving the quality of care that we're giving to individuals and to changing the culture of our facilities, which many of you are already doing. residents and family want their care to be individualized. some of your people may have asked, "well, do they really want us to go in and ask all these questions?" and the answer really is yes -- they want you to know what's going on with them, and they want you to be coming in and getting specific and concrete information. we found in our testing that it actually increases the accuracy, the feasibility, and the efficiency of doing these assessments. because we know if you just walk in and say to somebody "how are you doing today?", you usually get a "fine". these general, unfocused questions fail to elicit the important clinical information that we need to manage folks' care. and finally, if you look at the current 2.0 manual, what it's really asking you to do to complete a lot of these items is daily detailed observations across all shifts for all residents. and we know that's incredibly difficult to do in the busy clinical environments where we're all just trying to keep up.

what we're hoping with the interview focus is that you go directly to the source and find out what's going on. that then allows you to do those daily detailed observations for the patients who cannot self-report, so you can focus your observation resources on those folks. so how did we identify and test these advances? we were supported by a huge team. i led the team. the rand corporation was the contractor for the centers for medicare & medicaid services, and harvard also joined us in the project. we also had a national va nursing home research collaborative with clinicians from across the country.

we were supported by the quality improvement network and community nursing homes. i know one of our gold standard nurses is here. would all the nurses who participated in the study -- are any of them here today? one, stand up. yeah. thank you, guys. they were wonderful to work with. [applause] the centers for medicare & medicaid services has been a wonderful team. mary talked about managing people. she was tasked with the job of managing me and getting this project going to begin with. and then we were supported by a wonderful group of workgroups, consultants, and content experts. whenever i called someone and said, "we're working on the mds, can we get your help?", the answer was immediately "yes". people did not hesitate. they really knew how much work you guys were putting into it and wanted to try to make it better. i saw rena was here. she may have stepped out. are any of our other content experts or consultants or expert panel members in the room today? they're around and about. okay.

so, really, we went about this in four phases, and the reason i'm going through all this is because you're going to be the spokespersons going back to your facilities and saying, "look, they didn't just make this up. they really tried to think about and work out what the best changes were." and i want you to understand that, not just be given the changes to take home.

we started with a town hall meeting. we had over 1200 separate, individual written comments come in, which we put through a comment analysis. we assembled groups of experts with experience in nursing home care. we then took that information and went into a pilot phase, where we tested out different types of items and responses to see what was going to work with actual nursing home patients. we integrated it all together: we brought back content experts to take a look at the instrument and tell us whether these changes made sense together. we developed forms and instructions, and we went to national testing.

based on that national pilot testing, we made another round of revisions to the form and went out for our larger national study. we then analyzed those results and made recommendations to cms. the va pilot phase focused on areas that our content experts told us were particularly important. stakeholder groups and content experts said these are the areas that need the most work: mood, behavior disorders, mental status, delirium, pain, falls and balance, quality of life or preferences, and diagnostic coding. we also worked on some other sections with content experts: ethnicity, language, bladder and bowel, pressure ulcers, and swallowing disorders.

for our big national test, we went out to 91 nursing homes in 12 states, and we worked with over 4500 nursing home residents, 3800 of whom were in community nursing homes. the way we set the study up was that we had different types of data collectors for the mds. each state had two gold standard nurses who served as the trainers for the train-the-trainer model. and then each nursing home had its mds coordinator in charge of being trained and collecting the data. that mds coordinator could go back and deputize other folks in the facility, such as the dietician or social worker, to collect different segments of the instrument.

what we tested was something called reliability, which is just: if two people are looking at the same event, do they code it the same way? and we looked at it in two ways. we looked at gold standard to gold standard. we figured if two nurses that we trained, who had nothing else to do but collect this form, couldn't agree on an item, it wasn't a very good item -- or we really messed up the instructions. the other way, though, that we thought was really critically important before we put this form out there for you to use was to look at how nursing home staff in their day-to-day practice completed it. so we compared actual nursing home staff members' assessments to those of the gold standard nurses, figuring that this told us how things worked under real-life conditions with actual facility staff.

we also looked at the validity of some of the items: are they really measuring what we're trying to measure? and because, again, we thought burden was really important, we looked at the time to complete the assessment. we had the data collectors -- the facility staff -- write down all the time they were spending on the form, with start and stop times. we also wanted to know if it worked for people, because this is a big investment of your time and facility resources. so before we started, we surveyed the participating facilities to get their attitudes about the 2.0, and you'll see some of that in my subsequent slides. and at the end, we did an anonymous survey, again of those facilities, to get everyone's written feedback about the changes in the form. we wanted them to tell us whether it was useful, whether it was clear and easy to use, and whether they were satisfied overall with the changes being made. we also collected the 2.0 at the same time so that we could do payment crosswalks and quality measure crosswalks.

so i'm just going to highlight six of the sections that had major changes in them -- the ones you're seeing in your forms today and will see in your trainings as we go forward.

the first is the cognitive assessment. what you're seeing now is the new brief interview for mental status, which we're calling the bims. it's a new structured interview that's going to replace the staff assessment for all residents who can make themselves understood. the staff assessment for mental status is the old 2.0 item, but it's only going to be completed on those folks who cannot be interviewed -- who cannot complete the brief interview for mental status. you'll also see the validated confusion assessment method, or cam -- the delirium assessment -- in your new form, and it replaces the old delirium items.

so, why are we asking you to make this change? well, with the old cognitive items, providers were uncomfortable with the observation-based scoring -- "long-term memory okay" and "short-term memory okay". some staff said to us, "we don't really know how to code okay." and the other part was that when we go to convey that to providers in other settings, they ask, "what does this mean?" only 29% of the staff that we surveyed going into this thought that this section was easy to complete. you are instructed in the 2.0 manual to do a formal assessment, but no one gives you that formal assessment, nor do they tell you how to crosswalk it into the form. and finally, there are validated scales based on the 2.0, called the cps and the caas, but most nursing home staff did not find them very easy to use to come up with a score.

the new cognitive item directly tests domains that are common to most cognitive tests in other settings: registration, temporal orientation, and recall. we'll go over those more when we get into detail on this section. and it gives partial credit. we tailored it to the nursing home population to give partial credit for close answers and for responses to prompts, to make it more relevant to your care planning and to the patients that you're taking care of.
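a minimal sketch of how a bims-style total could be tallied is below. the point allocations and severity bands are assumptions drawn from the published mds 3.0 item set, not figures stated in this talk -- check the current rai manual before relying on them:

```python
def bims_total(repetition, year, month, day, recall):
    """sum a bims-style score (0-15) from its three domains.

    repetition: 0-3 points for repeating three words.
    year/month/day: temporal orientation, worth up to 3/2/1 points,
                    with partial credit given for close answers.
    recall: 0-6 points (2 per word recalled spontaneously, 1 if a
            category cue was needed) -- the partial-credit idea.
    """
    assert 0 <= repetition <= 3 and 0 <= year <= 3
    assert 0 <= month <= 2 and 0 <= day <= 1 and 0 <= recall <= 6
    return repetition + year + month + day + recall

def bims_category(total):
    """map a 0-15 total to the commonly cited severity bands
    (an assumption here, not something stated in this talk)."""
    if total >= 13:
        return "cognitively intact"
    if total >= 8:
        return "moderately impaired"
    return "severely impaired"

print(bims_category(bims_total(3, 3, 2, 1, 4)))  # -> cognitively intact
```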

it's also important to have this structured cognitive test because it will support your delirium assessment. doing a structured cognitive task with someone really improves your ability to pick up changes in cognitive state. why did we change the delirium items? delirium, as you know and you've heard, is a serious condition associated with increased mortality, morbidity, cost, and institutionalization. with the old delirium items, the agreement between two different assessors looking at the same patient was worse than chance. you could have tossed a coin and gotten better agreement than you got when two people were using those items. we also found that other researchers going into nursing homes and looking at delirium found that we were doing a really inadequate job of picking up folks who had actual clinical delirium. the new items -- the cam, the confusion assessment method -- are based on a validated instrument that's currently being used in hospitals. it's been cited as the appropriate tool for screening for delirium by numerous national and international organizations, including, for example, the royal college of physicians and the ncqa, and it's in a lot of guidelines. what it does is improve our sensitivity -- how good we are at picking delirium up -- and our specificity -- how well we avoid flagging folks who don't have it.
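for reference, the published cam diagnostic algorithm reduces to a simple boolean rule, sketched below. the mds 3.0 operationalizes the four features with its own coded items, so treat this as an illustration of the logic rather than the form's exact coding:

```python
def cam_positive(acute_onset_or_fluctuation: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """published cam rule: the delirium screen is positive when
    features 1 and 2 are present, plus either feature 3 or 4."""
    return (acute_onset_or_fluctuation and inattention
            and (disorganized_thinking or altered_consciousness))
```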

so what did we find when we tested this in the large national study? we found excellent agreement between the facility nurse and the gold standard nurse. now, this slide always reminds me to say to folks: we were really surprised. we thought going into this study that we would have higher agreement between our two gold standard nurses than between our gold standard and our facility nurses. what we actually found was that in some cases, we had better agreement between the actual nurse working in the facility and the gold standard nurse than between the two gold standard nurses. but they were very close, much to our delight that this was working. the score i give you here, just for informational purposes, is called a kappa. the higher the kappa, the better: a kappa of 1.0 is perfect agreement -- every time two people look at the same patient, they get the same answer -- a kappa of 0 is what chance alone would produce, and values below about .5 are generally considered poor agreement.
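for the statistically curious, cohen's kappa compares the agreement two assessors actually achieved against the agreement their individual coding rates would produce by chance. a minimal sketch, assuming simple categorical codes:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_observed - p_chance) / (1 - p_chance)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # proportion of items the two raters coded identically
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's marginal rates
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# e.g., a facility nurse and a gold standard nurse coding 6 residents:
print(cohens_kappa([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]))  # ~0.67
```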

the completion rates were high. one of the concerns folks have raised is, "our nursing home residents can't do this." we approached every resident who was scheduled for an mds assessment; the only exclusion was folks who were comatose. of the residents eligible for an mds assessment, 90% were able to complete the brief interview for mental status, and the scores range from 0 to 15. but is it measuring what we want to measure? sure, you can get people to answer questions, but is it really measuring what we want to measure? when we compared the bims to a much more complicated gold standard measure -- one that took longer to complete and was pages long -- the correlation between the two instruments was extremely high. the mds 3.0 agreement with this gold standard measure was 0.91, with 1.0 being perfect. the mds 2.0 cps wasn't bad -- it was 0.74 -- but it was significantly lower than what we got with the new brief interview for mental status. we also found that the delirium assessment improved. we got better reliability with the new confusion assessment method; our agreements were very good.

we also found that when we went in and used this new approach -- doing a structured assessment and then doing the cam -- we were getting rates of delirium much closer to what we would expect in our nursing home populations. on the same patients, the 2.0 gave about a 3% detection rate; with the cam and the structured assessment, we had about 7% with delirium and 7% with subsyndromal delirium, a total of about 14%, against national rates somewhere around 12 to 16%.

the second section with major changes is the mood assessment. what we've done is put in the phq-9, a new resident interview that replaces staff observations for all residents who can report their mood symptoms. there's also a staff assessment in the instrument, called the phq-9-ov, for "observational version". these new observational items are going to replace the old staff assessment, but again, they're only completed for those residents who cannot participate in the phq-9, and the observational version includes an irritability item. why did we get rid of the old mood items? they've been repeatedly shown to have very poor correspondence with independent mood assessments. they don't comport with the accepted standard of self-report for identifying mood disorders. and, again, to do them right required time-consuming, systematic observations of all residents across all shifts, which is very hard to achieve.

in our survey of the participating facilities and gold standard nurses, only 22% felt that the mds 2.0 section on mood was easy to complete accurately. the other problem with the old section was that it didn't really help us track change over time, so that we could see how folks were doing as we implemented our various therapies -- because what we do when we look at improvement is look at the dsm-iv criteria for mood disorders and whether those are getting better. so the new items, the phq-9, are based on dsm-iv criteria. their validity has been very well established in other settings. they've been used in outpatient older adults, in hospitals and rehabilitation -- including post-stroke patients -- and in home health, in addition to younger patients. they had not been tested in nursing homes until we did this, as part of the pilot and then again as part of the national study. it's increasingly recognized by clinicians, so it's a standardized score that you can actually give to your docs or your nurse practitioners, and if they're not familiar with it, there are websites out there that provide full education on this instrument. it also gives you threshold definitions -- in other words, probable major depression and probable minor depression -- and it allows you to rapidly sum the scores to get a severity score to measure change over time.

so how did this work? among our feedback results from the folks that actually used it with real nursing home patients, 87% rated this section as improved over the 2.0, 88% felt that the interview was much better than observation for capturing the resident's mood, and 86% reported that the items provided new insights into mood. did folks start off this positive about the changes? no. we had a lot of hesitance, a lot of reluctance. but after being taught how to do interviews and actually going out and using them, this is what folks felt about the interviews.

for the staff mood assessment, the phq-9-ov, 90% felt that detection and communication about mood would improve if staff could learn to watch for signs and symptoms, and 72% found that the observational version of the phq-9 was easier to use than the mds 2.0. again, there was excellent reliability between the gold standard nurse and facility nurse for both the mood interview and the staff observations. 86% of residents scheduled for mds assessments were able to complete the interview -- so, again, it was completed readily by the overwhelming majority of residents in real nursing homes. but, again, so they gave you an answer -- how valid is that? we compared it to an independent gold standard measurement, a much more detailed mood assessment. and we found much higher agreement between the gold standard measure and the phq-9 than there was with -- i don't know how many of you have ever used the geriatric depression scale; it's something that's been out there for a while in geriatrics, and it's what i was trained to use -- much higher agreement than the gds, and much, much higher agreement than the mds 2.0. i don't know if you can see the slide, but the correlation coefficient for the 2.0 was 0.23, compared to 0.83 for the phq-9.
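as a concrete illustration of the "rapidly sum the scores" idea: each phq-9 item is coded 0 to 3, so the total runs 0 to 27, and published cut-points band the severity. the bands below are the standard published phq-9 cut-points, not thresholds quoted in this talk, so verify them against the mds instructions before use:

```python
def phq9_severity(item_scores):
    """sum nine 0-3 item scores and band the 0-27 total using the
    commonly published phq-9 cut-points (an assumption here)."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    bands = [(20, "severe"), (15, "moderately severe"),
             (10, "moderate"), (5, "mild"), (0, "minimal")]
    label = next(name for floor, name in bands if total >= floor)
    return total, label

print(phq9_severity([1, 2, 0, 3, 1, 0, 2, 1, 0]))  # -> (10, 'moderate')
```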

the behavior items. hallucinations and psychosis have been moved out of that section j list, and the definitions are now right on the form to make those items easier to complete. in the behavior sections, we revised the language to make it clearer: we replaced alterability with specific impact questions, replaced the terminology "resisting care" with the "rejection of care" concept, and tried to refocus that item on what the resident's goals of care actually are. and we have wandering rated separately from the other behaviors. why did we make these changes? well, the old behavior item groupings were not consistent with recognized factors for behavior disorders.

only 41% of nursing facility staff rated the 2.0 items as easy to complete accurately. in addition, the old items were viewed as pejorative by consumers and families, and they didn't really get across the concept of unmet need. finally, it was hard to get at the idea of alterability, because what was alterable for one assessor was impossible for another, so we were trying to move away from that concept. the new labels were agreed to by providers and consumers, and we tried to make the groupings match underlying constructs. the new impact items, we hope, will give us insight into severity and the potential need for treatments and interventions.

what did folks think about these changes -- the folks that actually used them? 90% rated the section as easy to complete accurately, 91% preferred the 3.0 behavior item section, and 88% thought that the impact items provided important severity information. again, agreement was excellent between the assessors. now, here you see again a table that looks at the agreement with the gold standard measure. in the second column you see the agreement between the 3.0 and the gold standard measure, and in the final column you see the 2.0. as you see, the 2.0 agreements are worse than chance; the 3.0 agreements were significantly higher. same thing for the hallucination and delusion items. the type of impact on residents varies, which is what we had hoped to see in doing this -- that we're going to get meaningful differentiation and information from these sections. of those residents who had behavioral symptoms, 24% had symptoms that put the resident at risk, 33% had symptoms that interfered with care, and 36% had symptoms that interfered with their ability to participate in activities. and this is the sort of continuum that we would have expected in behavioral items.

another major change in the mds is the customary routine section. a new interview replaces the 20 customary routine staff assessment items and the 12 activity assessment items.

"how important is this to the resident?" replaces the check-all-that-apply approach that's currently used in the 2.0. and again, for those residents who cannot communicate or self-report, there's a staff assessment of activity and daily preferences, and you're instructed to complete that section by looking at how the resident responds when exposed to those activities. why did we make these changes? well, the old items were not really seen as helping folks with care planning; they were just a checklist that a lot of people were filling out. prior practice -- what residents were doing right before they came to the nursing home -- may or may not reflect what they really want. it might be more reflective of their abilities, the illnesses they were dealing with, or their access to supports, not their preferences. only 30% thought that this section was helping them with care planning. expert panels recommended that we replace this section with an importance rating instead. so we developed a new preference assessment tool, which is grounded in concepts of residential care quality and focuses on the resident as the one who is central to determining what they want. when you look at this section in the training later on -- i think rena is doing this training -- you'll see that there's a fairly long response scale. we had to add some items to it in order to help make it easier for folks to complete.

so, were folks able to complete it? well, we found that 85% of residents scheduled for their mds assessments were able to complete these items. for another 4%, their families completed it for them, and for 11%, staff observations were required. how did folks feel about these items? 81% rated the interview as more useful for their care planning activities, and 80% found that it changed their impressions of what residents really wanted. and this was with only about 30 patients per facility -- so for them to be finding these changes in a relatively small number of patients tells us that there are a lot of unrecognized issues going on out there. they were also more likely to report that post-acute care residents appreciated being asked. there were some concerns about why we would be asking post-acute care residents these things, and what we found was that they're actually more likely to feel comfortable expressing wants and needs than some of our longer-stay residents. only 1% felt that some residents who responded didn't understand the items. and we got similar results for the activity items. again, agreement was excellent between the assessors.

then there's one of the questions we obviously have when we're dealing with a cognitively impaired patient population: yes, they can answer, but how do we know those answers have meaning? now, obviously there's no gold standard measure for preferences, so we couldn't compare this to a gold standard measure. so what we looked at was whether the responses differ significantly between folks who are cognitively intact and folks who are cognitively impaired but able to complete the interview. and this slide just says, basically, "no". the median and mean answers were similar for all of the population groups that we looked at.

finally, a big section with a lot of changes is the pain assessment. as tom talked about, we added treatment items. the resident interview replaces staff observations for residents who can report pain, and the section tries to capture the effect of the pain on the resident's actual function -- so we're not just looking at severity scales, but at what effect the pain is having on that resident. and again, we know not everyone is going to be able to complete this section, so there's a staff assessment, which now includes an observational checklist of the behaviors you're looking for. why did we change the old items? well, it's been repeatedly shown, by investigators other than us, that the old pain items did not really comport with what patients in nursing homes were self-reporting. and, again, doing it right required time-consuming, systematic observations across all shifts, which wasn't possible. with pain in particular, there was a detection bias that penalized more vigilant facilities. what do i mean by that? i mean that those facilities that were out there doing systematic observations and administering pain scales regularly might report higher levels of pain than the folks who weren't systematically looking for it and asking about pain levels repeatedly. and then finally, providers and consumers were all really frustrated that this section addressed only very limited parts of folks' pain, and a three-point severity scale wasn't sufficient to match the most commonly used pain scales. providers also wanted an item to capture therapy.

self-report, again, is the gold standard for pain assessment. we tested whether or not folks could recall their pain over five days -- a question that comes up a lot in looking at these items. we went in every morning for five days and asked about their pain and their pain symptoms. then, at the end of the third day, a different assessor went in, and at the end of the fifth day, another different assessor went in. and we looked at their ability to recall pain compared to those daily pain reports. what we found was that folks who had moderate-to-severe pain or frequent pain were able to recall their pain at both the three-day and the five-day look-back. we did miss a few folks at the five-day look-back who had had only mild pain on one occasion; they might or might not report that pain on the fifth day. but all other folks actually remembered -- they didn't forget that pain, and they did report it. one of the other challenges that we faced with pain was whether to use a 0-to-10 scale or a verbal descriptor scale. as many of you know, in hospitals a lot of people are using the 0-to-10 scale, and it's something providers are familiar with. and then there's also the verbal descriptor scale -- mild, moderate, severe. we were trying to decide how to pick, and we ended up being able to put both of them on the instrument.

we found that nursing home residents could complete both, and that we could do a crosswalk for cms so that they could compare the two different types of pain assessments.
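conceptually, a crosswalk between the two scales is just a binned mapping. the bin edges below are the commonly cited mild/moderate/severe bands for a 0-to-10 numeric scale -- an illustrative assumption, not the actual crosswalk the team derived for cms:

```python
def numeric_to_verbal(pain_0_to_10: int) -> str:
    """bin a 0-10 numeric pain rating into verbal descriptors,
    using commonly cited (assumed) bands: 0 none, 1-3 mild,
    4-6 moderate, 7-10 severe."""
    assert 0 <= pain_0_to_10 <= 10
    if pain_0_to_10 == 0:
        return "none"
    if pain_0_to_10 <= 3:
        return "mild"
    if pain_0_to_10 <= 6:
        return "moderate"
    return "severe"

print(numeric_to_verbal(5))  # -> moderate
```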

how did folks feel about it? well, people liked it. 88% rated the pain items as improved, and 94% felt they could inform their care plans. even with the small sample per facility, 85% found that the new 3.0 pain assessment gave them new insights into what was going on with a resident's pain. we were a little surprised by this, because so many facilities had already started to implement a 0-to-10 scale, a verbal descriptor scale, or a thermometer or faces scale, so we were surprised that this much new information was gleaned from this assessment approach. 90% felt that all residents who responded understood the items, and only 3% felt that was not the case. again, there was excellent reliability on the pain items, and 87% of non-comatose residents were able to complete the pain interviews. we found higher rates of pain with the 3.0 assessment -- going directly to the resident -- than we did with the 2.0 assessment. we also looked at pain recall 24 hours later, with a different assessor going in to see if they got similar responses from the patients, and there was very high agreement between those assessments. unfortunately, there wasn't much agreement with the 2.0. on the staff assessment of pain, basically 43% of the residents were observed to have one or more of the symptoms, and what you see here is the distribution of the symptoms that were observed.

so, other sections with important changes -- tom really highlighted some of these. in the balance section, we've refocused on movement and transitions: not just balance sitting still, but balance during the times when folks are really at risk of falling. in bowel and bladder, we are no longer going to rate a catheter as continent, and we've tried to improve the toileting program items so that you can say, "hey, look, i tried toileting, it didn't help my resident, and that's why they're not on a toileting program right now."

in the falls section, we introduced types of injuries; dr. joe ouslander led the research on that section. for the swallowing item, you've now got a checklist of observable signs and symptoms that will clue you in that a resident might have a swallowing disorder -- so it's not just "do they or don't they", but rather whether you are observing these signs and symptoms. in the weight loss section, you can now code that the physician prescribed a weight loss diet; we're having increasing numbers of bariatric patients in our facilities who may be on one. in section m, the pressure ulcers, as tom alluded to, there are a lot of changes. you're going to have a great presentation about this later on, but these now follow the npuap guidelines. we've eliminated the concept of reverse staging -- that is, changing the staging on the wound as it heals. you're now going to report length and width, and we've added the ability to indicate that a pressure ulcer was present on admission, when the resident came in. on section q, return to community, you've got a lot coming up on that.

so overall, i've told you about changes in the different sections -- but overall, how did staff feel about the changes?

85% felt that overall, the new 3.0 was going to help them do a better job of recognizing problems. 84% thought that the interview items helped improve their knowledge of the residents. and 89% rated the 3.0 as providing a more accurate report of resident characteristics than the 2.0. now, looking at this, you'll see that we're sort of hanging out in the 80s. one of the things that happened was that one of our gold standard teams was not convinced that the interview items made sense. after the training, they still weren't convinced -- they were very skeptical. and they went back to their state, and they trained in their state. the 15% disagreement that you're seeing almost always came from that same state. so your attitudes about whether or not these interview items are going to work are contagious -- for your facility's staff, and for what happens when you go back into the field and start actually using them. so it's really important that you, as the spokespersons for this in your institutions, try to convey the potential that these items have.

it also took less time to do the assessment. the average time to complete the 2.0 was 112 minutes. the average time for a full 3.0 assessment was 62 minutes -- 45% less time than it took to fill out the 2.0 in exactly the same residents (112 minus 62 is 50 minutes saved, and 50/112 is about 45%).

so, in summary: the changes are based on feedback, on input from experts, on the advancements in assessment science that we've gone over today, and on actual testing of how the items performed in nursing home residents. the national testing showed that with increased resident voice, we can actually improve the accuracy and reliability, as well as the clinical utility, of the assessment. and finally, the facility staff and nurses that used it liked it. it was more clinically relevant for them, and it increased their knowledge about their residents.

now, some of you sort of grumbled when you saw the 45%. you said, "gee, that form's so much longer -- how in the world can it be 45% shorter?" we actually looked at the forms, and we asked why it is that, a lot of times in our research studies, our staff do a much better job of collecting data than the people we give a form to in real life. it's because somehow, in real life, we've gotten into the idea that if you can squish it into a smaller form, then having to read 8-point font is easier than reading a larger font, and that having it on one page makes it easier to complete than having it on two. but human factors research tells us that's not the case. people need to be able to read the form, and the form needs to support them in going through it logically.

so one of the things we did was put important definitions on the form. if we knew an item had been catching people up for the last 15 years, we tried to put its definition on the form to make it easier to complete, because it's hard to go find that instruction -- we should go find the instruction manual, but it's hard to. that takes space. we moved some items out of checklists -- i'll show you an example of that in a minute -- again, to make them clearer; these were ones that folks were miscoding a lot. we used a much larger font. maybe it's because my eyes are aging, i don't know, but we wanted it to be readable and legible as people were using it. and then we put in logical breaks, with fewer items on a page, and we tried to get a consistent format for the item types -- we'll go over that in just a second.

if we actually went in and reduced the font size, used double columns, and made it look like the old 2.0 form, it would be exactly the same number of pages -- even though for many sections we now have two alternative items, which should take up more space. so, yes, it's more pages, but that was an intentional choice to save you time, not to make it take more time. so here's a comparison. remember earlier i talked about the hallucination and delusion items having better agreement with the gold standard measure. how did we accomplish that? in this section, all we did was take hallucinations and delusions out of the checklist, put them in their own separate sections, and put the definitions on the form. you can see that the hallucination and delusion items are tiny little boxes on the old 2.0 -- you can barely find them. on the 3.0, they take up a lot more room, but it improved the ability of the assessors to accurately complete the items.

so there are some patterns you're going to see in the form, and i'm just going to review those real quickly. our gold standard nurses said in their training that it would help a lot to have a symbol showing which pages had interviews on them. so there's a little ear symbol that indicates you're looking at a page with an interview item.

we also put in something to help you with skip patterns, because we know skip patterns can be a little tricky when folks are trying to speed through a form. you'll see something that says, "should the interview be conducted?" there's a "no" answer, for when the resident is rarely or never understood, and it tells you which item to go to for the staff assessment; and there's a "yes" answer that tells you what to go on to assess. our check-all-that-apply items have a standardized approach now, so when you see one, it's clearer and more legible that you're doing a check-all-that-apply, and there's now a "z" answer -- actually recommended by stepwyse for inclusion in the form -- for "none of the above". that is standard across the checklist sections. for a single item or a short list, you're going to see a little box with "enter code" right above it. for multiple items grouped together with an enter code, we try to have the coding right next to it and an "enter code" with an arrow right down below it. as i said, "z" is none of the above. if it's an interview item, "9" is going to be "unable to answer". the dash means "item not assessed"; it most often occurs when the resident has been discharged before their assessment was completed. some items won't allow it, and there will be a lot more discussion about that in detail.
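the gating and special codes described above amount to simple branching logic. a sketch, with hypothetical names standing in for the real form fields:

```python
# special codes described in the talk: "z" = none of the above on
# checklists, "9" = unable to answer on interview items, and a dash
# ("-") for an item not assessed (e.g., resident discharged first).
SPECIAL_CODES = {"none_of_the_above": "z",
                 "unable_to_answer": "9",
                 "not_assessed": "-"}

def mood_section_route(resident_rarely_understood: bool) -> str:
    """skip-pattern sketch: 'should the interview be conducted?'
    routes to the resident interview or the staff assessment.
    the route names here are hypothetical, not real form fields."""
    if resident_rarely_understood:
        return "staff_assessment"    # skip ahead to the observational items
    return "resident_interview"      # proceed with the interview items

print(mood_section_route(False))  # -> resident_interview
```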

you'll see the term "physician" used on the form and in the instruction manual, but i want to be sure i emphasize that it includes nurse practitioners, physician assistants, and clinical nurse specialists who are allowed under state law to do that type of diagnostic work. i think the questions will actually be at the end of the day for folks, but i really appreciate your attention. we're really excited about the possibility of these items helping you.
