Measurement of Veteran's Participation
Highlighting key issues in defining participation and the process for developing and testing a new participation measure.
Presentation Video  |   November 01, 2012
Author Affiliations & Notes
  • Linda Resnik
    Brown University, Providence VA Medical Center
  • Presented at the ASHA Convention (November 2012).
  • Funding for the research discussed in this presentation was supported by VA HSR&D VA TRP-04-1, VA RI Foundation 2005-2665, VA HSR&D VA SDR-07-327, VA HSR&D DHI-144-07, Boston University R-24 (Jette).
Article Information
Research Issues, Methods & Evidence-Based Practice / Attention, Memory & Executive Functions / Traumatic Brain Injury / Clinical Practice Research / Outcomes Research
CREd Library, November 2012, doi:10.1044/cred-pvd-c12001

The following is a transcript of the presentation video, edited for clarity.

I hope today to talk about the construct of participation, discuss some of the key measurement issues in addressing participation, provide some background information on why we needed a measure of participation for veterans, and then describe the process of development and testing of the CRIS measure and the CRIS-CAT, the computer adaptive test version of the measure. I've been working in this area since 2003, so we have a lot of research. And, hopefully, I'll walk you through it rather quickly.
This line of work came out of the news coverage and the recognized needs of veterans returning from Iraq and Afghanistan, from Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF). Among the returning veterans, it was very soon apparent that there was a high prevalence of traumatic brain injury, posttraumatic stress disorder, depression, and polytraumatic injuries.
And we knew from the Vietnam veteran experience that demobilization from combat and returning home can be very challenging, particularly so if you have had a co-occurring physical injury.
My background is as a physical therapist, and I have always believed that the ultimate goal of rehabilitation is to return people to their life role functions. Community reintegration is the return of individuals to their age, gender and culturally appropriate roles at as near as possible to their pre-injury level of participation. So this is the outcome that is most valued by our patients. By their families. And by society.
ICF Model of Functioning, Disability and Health
How do we assess community reintegration? This was our challenge. We looked to the ICF Model of Functioning, Disability and Health to conceptualize and operationalize the idea of participation. And I'm sure most of you are familiar with this model, so I'm not going to walk through the model except to say that participation, that aspect of the model, is what we're interested in in defining community reintegration.
Participation according to the ICF is involvement in a life situation. And participation restrictions, using the ICF taxonomy, are problems an individual may experience in involvement in life situations.
If you're familiar with the ICF, you know that, even though the conceptual model shows nice, distinct domains of activities and participation, in reality the ICF domains of activities and participation share one common taxonomy. There are nine chapters of activities and participation. And these include everything from learning and applying knowledge; general tasks and demands; communication; mobility; domestic life; self-care; interpersonal relationships; major life areas, which includes employment and being a student; and community, social and civic life. And within each of the chapters, the taxonomy describes specific activities and participation.
Measuring Participation
So our goal was to develop a new participation measure for veterans. We developed it, and I'll tell you about it. It was called the Community Reintegration of Service Members measure, the CRIS. But in developing this measure, we had to grapple with many of the issues in conceptualizing and measuring participation that have since been well-described in the literature. And as I said, we started this work in 2003. And we had to learn some lessons and come to our conclusions, and there's since been quite a bit of discussion about some of these issues.
So the ICF, although activities and participation share one taxonomy, proposes several different methods for distinguishing between them. There's an annex of the ICF, Annex 3, that gives options for how we can tell the difference between items that are activities and items that are participation. The first option is to exclusively designate some of the chapters, some of the domains, as activities and others as participation. And Whiteneck and Dijkers (2009) advocate this approach. They say the last three chapters, those are participation. And all the other chapters, those are activities. Another option is to designate some domains as activities and others as participation, but with some partial overlap. The third option is to designate all the broad categories as participation and then all the detailed taxonomy as activities. And the fourth option is to consider all the codes as both activities and participation, depending on their context. In this last option, complex functional tasks would be considered participation, and simple tasks are considered activities.
Who Defines Participation?
Another issue that we grappled with is who gets to define participation? Each chapter has multiple sub-levels, and if you're familiar with the ICF taxonomy, it's quite extensive: there are hundreds of categories that describe activities and participation. So it's important for the population to identify which specific elements are relevant to measure and would be most pertinent clinically, the elements we would target our treatment towards and would expect to change with treatment.
We recognized that the important elements could very well vary by condition and population. And there's been a lot of work done in the development of ICF core sets for common conditions. Some of you may be familiar with that work, often done through large consensus panels, where clinicians and stakeholders identify the relevant aspects of health and function that should be measured for a condition, and then come to some agreement: these are the things that should be in the core set.
As I mentioned earlier, there's strong advocacy that just three areas of activities and participation be considered participation in the taxonomy: social participation and relationships, the interpersonal relationships chapter; productivity and economic participation, which is the major life areas; and leisure and recreational participation, which is community, social and civic life.
We took a different approach and recognized that participation is involvement in a life situation, and that participation can and does occur at the person level, not always in relationship to other people. So we looked at the idea of adult role functioning, and some of the things that adults might do in their normal roles that could be done alone and not in the context of other people, such as engaging in hobbies or planning and cooking a meal, some complicated activities that are, obviously in my mind, more than simple tasks. Managing daily schedules. Taking care of health. Managing stress. Maintaining hygiene and appearance. Planning a trip. Following complex directions. Even driving and obeying the rules of the road. These are more complex items that we considered participation. So we held the view that participation was involvement in a life situation, and it did not need to happen in relationship with other people. It could sometimes happen in adult role functions that were done individually.
This really is a view that contrasts with the idea that role performance happens at the social level and that social roles are, by definition, performed with other people. This is also in contrast to the approach that the PROMIS measure used in their measure of social health. They weren't developing a participation measure; they were developing a social health measure, social function. And by definition that was involvement in and satisfaction with usual social roles in life situations and activities.
So that was one issue that we grappled with: how were we going to distinguish between activities and participation, and which approach would we take? We took that fourth approach from the Annex, considering all items in the activities and participation taxonomy as potentially either, depending on the context.
What Aspects of Participation Would We Measure?
So the other issue we had to grapple with is what aspects of participation would we measure? There are different ways of looking at participation, different kinds of things to assess. For example, one could assess the performance of roles: the degree to which you take part in a role, say the frequency. These are common survey questions. How often do you go to the movies? How often do you go out to dinner with others? How often do you get together with friends?
Another possibility is to ask people to report the limitation they have, or their difficulty, or their restriction in performing their role functions. So these are questions like, how much difficulty do you have in getting around in the community? Or how restricted are you, or how much limitation do you have? So these questions measure limitation, difficulty, or restriction.
And a very different approach to measurement is asking people how satisfied they are with their participation in their roles. So how satisfied are you with the amount that you get together with friends? And that could really be quite different for different people. Some people might say, I get together with friends once a week. And that's very satisfactory. Other people might say, I get together with friends once a week. And that's really not enough. I'm not satisfied.
And then there are some measures of participation that look at the importance of an item to an individual. So how important is this aspect of role functioning? There are other measures that also look at autonomy. How much independence do you have? And there's also a measure that looks at your sense of enfranchisement or how much you belong in certain situations.
So there are very different approaches to the measurement of participation. Any kind of participation measure needs to take a stand on which dimensions to measure.
Other Issues in Measurement
Another issue in measuring a broad construct like participation is whether it's a unidimensional construct or a multidimensional construct. And this was somewhat controversial. Because if it's a unidimensional construct, then you can score all these participation elements on a single scale. If it's multidimensional, then we might need separate scales, taking a more clinimetric approach. And so that needs to be shown in the data so you can understand how to score a measure.
Steps in Developing the CRIS Measure
So now let me tell you a little bit about the development of the CRIS measure. I'll just quickly walk you through these, and then I'll tell you a little bit more about each of the steps.
We conducted formative research with injured veterans, their family members, and their caregivers. And with clinicians. We did a large review of existing measures. And we developed the initial item set.
We did cognitive based testing of the item set and revised it.
We developed a fixed form measure. It was always our intention to develop a computer adaptive test, and we had many, many items. So those were tested in a large field study with a one-year follow-up.
We also then tested the fixed form with a severely injured population.
We looked at how the test could be administered. Could it be interview administered? Could it be telephone administered? And were there differences in responses with the mode of administration?
We also developed an audio-assisted version of the CAT software so it could be self-administered by people who have difficulty with reading and attention.
We've also tested the measure in a mild TBI population. So I won't tell you about all of those things. But generally this is the scope of work that's been done over the last eight years.
Formative Research
So we used the ICF framework to understand the challenges in community integration of injured service members. We designed and tested the new measure, and we called it the CRIS.
When we did the formative research, we used the ICF taxonomy to identify the participation items or the content that should be in the scale. We took, as I mentioned, the fourth approach where we considered all items either as possible activities or participation, depending on their complexity. And we decided that we would include objective, subjective, and satisfaction aspects of participation. So we have three separate scales that measure those three different dimensions.
We considered this a population-specific measure for veterans. We have a lot of returning service members; there are over 2 million now who've returned from combat deployment, sometimes multiple deployments. There was no measure that addressed the key issues of returning veterans.
In the formative research we conducted in-depth interviews. And interviews with healthcare providers.
We took all the transcripts from the interviews, and two coders independently went through them. We used the ICF taxonomy to identify the challenges that people were talking about and to code them according to the ICF. And then we further classified them as deep into the ICF taxonomy as we could. The concerns identified were then cataloged.
So when trying to find out what the content of this measure should include, we had a lot of options.
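The talk doesn't say which agreement statistic was used for the two independent coders, but a chance-corrected index such as Cohen's kappa is a standard choice for this kind of dual-coding step. Here is a minimal sketch, with hypothetical ICF d-codes standing in for the real coding categories:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two coders' category labels."""
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    counts1, counts2 = Counter(coder1), Counter(coder2)
    # Expected agreement if each coder assigned labels independently
    expected = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders assign ICF d-codes to six transcript excerpts
coder_a = ["d350", "d350", "d475", "d760", "d350", "d475"]
coder_b = ["d350", "d350", "d475", "d350", "d350", "d475"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # 0.7
```

Perfect agreement yields kappa = 1.0, while agreement no better than chance yields 0; disagreements on the fourth excerpt above pull the value down to 0.7.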
We also reviewed over 20 other measures and every item from those measures to look at what currently existed. So we coded all these other measures.
And we found that existing measures didn't really have the content that we needed for this population. For example, we found that most of them lacked questions about attention, concentration, coping, stress management, driving, alcohol and drug use, and social isolation. I'm just going to give you a couple of quick examples from the data to give you a flavor for some of these issues, like maintaining a job.
So these are some rich text examples from some of our qualitative interviews.
And here's the girlfriend of an injured service member talking about the challenge her partner had in learning and applying knowledge. She said, "He's very unfocused, very unfocused. For him to read that document you just gave me on Monday," which was the informed consent form, "that would stress him out. That is going to extremely stress him out to read those four pages. Yeah, I can prepare him and say 'They're going to give you a four-page document that you need to sign,' explain to him what it is. He will lose half of what I'm saying to him by the time I'm done." So we heard a lot of people talking about difficulty reading complex materials. And problems with attention and focus.
We had a lot of our interviewees talking about difficulty they had with driving. And driving is an aspect of mobility. I hadn't necessarily been thinking about transportation and transportation restrictions. But a lot of these folks had been gunners or tank drivers, and they had some serious adjustment issues returning home to driving.
So here's one of our injured service members saying, "I seen items. It was just regular garbage. It seemed like something that was going to possibly cause harm to somebody. And I felt the need to just get away from it because, when you get anxiety, you get, like, pressure in your chest. And your throat gets all choked up. You have a hard time breathing. I seen it coming. It's like you hold onto the steering wheel real hard, like I'm waiting for another bomb to go off or something. And then I'd just, I didn't even look to see if anybody was near me. I just rammed off to the side and came around it just to get away from it. And you step on the gas and just speed right by it." It's not a surprise then that several years later the reports started to come out about the increased motor vehicle accidents for service members who had returned. But there are many restrictions in mobility.
There are a lot of discussions about tolerance and interpersonal relationships and the impact of combat experiences on personal relationships. Here's one of our interviewees saying, "I just have low tolerance for stupid stuff. God, about a month ago I was in McDonald's with a friend of mine. And the lady in front of us was just taking forever. And I just, I'm like, 'Christ, lady, it's the same menu in every McDonalds all over the country. Like, order something or get out of the way.' And everybody, it was like, you know, everybody in the restaurant just kind of looked at me. And she moved out of the way, and we ordered and that was that."
And I have other examples, but for the sake of time, I'm not going to go through them. There's an example of someone punching out someone because he was annoying him in the mall, and people talking about their desire to spend time alone and socially isolate themselves. So I wanted just to give you a flavor for the data and the kinds of items that we felt were necessary to add.
The formative research, if you're interested, was written up in two papers. One was on using the ICF taxonomy to understand the challenges of service members. In the other, we did a large review of participation measures, measures that covered at least two chapters related to activities and participation.
Development of the CRIS Item Set
So we decided that the CRIS would assess three concepts. Perceived limitation in participation. Frequency and amount of participation. And satisfaction with participation. So, basically, we can say we have objective, subjective, and satisfaction elements.
We decided, in wording the questions, that all questions would address a current life situation. We would not ask people to compare themselves to life before they were injured or before deployment, or to people who hadn't been deployed. There are different strategies in writing questions, and we just made these decision rules about our questions. We would ask people not to attribute their state to any injury or illness, just to report on how they functioned or perceived their function within the last two weeks.
We developed a fixed form with 150 questions, which in total takes 30 to 35 minutes to administer. So we did some initial testing of the validity of the scale.
We did a small pilot of 50 veterans. We did preliminary IRT analyses to examine the dimensionality. We looked at the differences in scores between people who were employed and those who were not, those with and without PTSD, and those with and without depression. And we showed that the scores of the CRIS were different for people in those categories. That gave us good hope.
We did find that some items didn't seem to fit and needed to have revision. But there was good internal consistency.
As I mentioned, veterans who were working had better scores as compared to those retired or not working. Veterans with PTSD had worse scores as compared to veterans without. Veterans with depression had lower scores on the satisfaction with participation scale. So we thought these results showed us that the CRIS had good construct validity.
So we revised the items that we felt misfit. And we cognitively tested new items.
And then we conducted a second pilot study of 75 veterans. And we gave the CRIS measure twice within one week because we wanted to understand what its test-retest reliability would be.
And, again, we repeated similar analyses. We looked at the dimensionality and the internal consistency. This time we added test-retest reliability. Then we looked at differences between groups.
And we found, again, the scales appeared to be unidimensional. They had excellent internal consistency. And they had excellent reliability.
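For readers unfamiliar with these statistics: internal consistency is commonly quantified with Cronbach's alpha, and test-retest reliability with the correlation of total scores across the two administrations. This is not the study's analysis code, just a sketch on simulated data:

```python
import random
import statistics

def cronbach_alpha(items):
    """Internal consistency: rows are respondents, columns are items."""
    k = len(items[0])
    item_vars = sum(statistics.variance(col) for col in zip(*items))
    total_var = statistics.variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - item_vars / total_var)

def pearson_r(x, y):
    """Pearson correlation, used here for test-retest reliability of totals."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Simulated example: 75 respondents answer 10 items (1-5 scale) twice.
# Each response is a stable latent trait plus noise, so the scale should
# show high alpha and a high test-retest correlation.
random.seed(0)
traits = [random.gauss(0, 1) for _ in range(75)]

def administer(traits, n_items=10, noise=0.7):
    return [[min(5, max(1, round(3 + t + random.gauss(0, noise))))
             for _ in range(n_items)] for t in traits]

time1, time2 = administer(traits), administer(traits)
print(f"alpha = {cronbach_alpha(time1):.2f}")
print(f"test-retest r = {pearson_r([sum(row) for row in time1], [sum(row) for row in time2]):.2f}")
```

If the noise term is increased relative to the trait, both statistics drop, which is the pattern that would have flagged a misbehaving scale in the pilots.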
Veterans who were working had better scores compared to those who weren't. Veterans with PTSD had worse scores as compared to those without PTSD. This time we looked at substance abuse history, and veterans with a substance abuse history had worse scores as compared to veterans without. And veterans with any mental illness had worse scores as compared to veterans without.
So overall, the results of those two pilot studies demonstrated good structural validity, content and construct validity, and excellent test-retest reliability. And we wrote up the development of the CRIS in JRRD. So this was our pilot work, really, that allowed us to get the merit review funding to develop the CAT. So that's where we are.
Examples of CRIS Questions
I just wanted to give you an example of how some of the CRIS items were framed. This is how we coded the questions: every one of our interviews had content areas coded by ICF categories. In the activities and participation taxonomy, these are D codes. So under communication, this is the code d350, conversation, and the definition to the right comes from the taxonomy for that D code. We did not write these definitions; they are from the taxonomy.
We realized we needed questions regarding conversation from our formative research. And so these are examples of questions for each of the three scales that touch upon this area of conversation.
Because we knew that a fair number of our veterans would have mild TBI, and we were not certain whether that would always be diagnosed, we were concerned that people may not necessarily have the self-awareness to answer questions about their behavior. We had some advice to word some questions in terms of how others perceived you; this is done in some other tests. So the first question, perceived limitation: “Others felt that I interrupted inappropriately when we were talking.” Extent of participation: “When speaking with others, how often did you interrupt them inappropriately?” And satisfaction with participation: “How satisfied were you with the way that you participated in conversations?” So these are examples of questions that touch on that category.
The Computer Adaptive Test
Our fixed form measure took between 30 and 35 minutes to administer, which we felt was lengthy. We wanted to develop a computer adaptive test version that would be much briefer to administer. And then assess its psychometric properties. And then use the measure to compare and contrast community reintegration of three different groups of veterans.
This study had working veterans with no mental health problems. Homeless veterans. And OEF/OIF veterans. And we expected they would span a wide range of community reintegration.
We had two years of data collection, 2008 to 2010. And then for the new combat veterans, we had a one-year cohort follow-up study where we administered the measure. And we also pulled medical record data so that we could look at the predictive validity of the CRIS. So we looked at emergency room use and so on.
We also administered some concurrent measures, such as the SF-36 quality of life scales, a couple of measures from the CHART, and, as I mentioned, we extracted diagnostic and health care utilization data from the VA databases and then linked them by social security number to our study data.
So our full item set that we tested had over 300 items. And the questions were organized into three scales.
This was our field sample: 69 veterans with “good community integration”; 99 veterans with “poor community integration”; and 332 OIF/OEF veterans. Actually, group A, the 69 veterans with good community reintegration, were the hardest to recruit because they generally had to be recruited from the community and not from within the VA. Although we tried very hard to get people who totally fit these categories, when we looked at the data later, there were 17 people who did not fit these categories.
We conducted exploratory factor analysis and confirmatory factor analysis on the item sets, then refined the item sets and fit Rasch models, looking at the fit of the item-person map and so on.
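For readers unfamiliar with Rasch models: the dichotomous Rasch model expresses the probability of endorsing an item as a function of the gap between person ability and item difficulty, which is what makes the item-person map interpretable. A minimal illustration (not the study's code):

```python
import math

def rasch_probability(ability, difficulty):
    """Dichotomous Rasch model: P(endorse) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability matches an item's difficulty endorses it 50% of
# the time; items well below their ability are endorsed almost always.
print(rasch_probability(0.0, 0.0))             # 0.5
print(round(rasch_probability(0.0, -2.0), 2))  # 0.88
```

Fitting the model means estimating the ability and difficulty parameters from the response data; misfitting items are those whose observed response patterns deviate from these model-implied probabilities.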
The resulting scales, out of those 300 items, were 77 items in the extent of participation scale. 144 in the perceived limitation scale. And 86 in the satisfaction scale.
We did data simulations on these 517 subjects to see how many items would need to be administered to get precise scores. And we found that we could get good precision with 20 items on the extent scale, 16 items on the perceived limitation scale, and 14 items on the satisfaction scale. We later did an administration study and found that the CAT took on average 10 minutes to administer.
This just shows you how we ordered the item hierarchies by difficulty level. The items at the top were the most challenging; the items at the bottom were the easiest to endorse.
In summary, the CRIS-CAT is a population-specific measure that we developed for veterans. The scales had good construct, concurrent, and predictive validity. I did not present the ER data, but, as I mentioned earlier, we found that CRIS scores at baseline could predict the odds of using the emergency room. And we had three unidimensional scales.
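The general logic of such CAT simulations: at each step, administer the unasked item that is most informative at the current ability estimate, and stop once the score's standard error falls below a target. Here is a simplified sketch using Rasch item information; the bank, the precision target, and the stopping rule are all illustrative assumptions, not the CRIS-CAT's actual parameters:

```python
import math

def item_information(theta, b):
    """Fisher information of a dichotomous Rasch item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def items_needed(difficulties, theta=0.0, target_se=0.5):
    """Greedy CAT simulation: keep administering the most informative
    unasked item until SE = 1/sqrt(total information) <= target_se."""
    asked, total_info = set(), 0.0
    while total_info == 0.0 or 1.0 / math.sqrt(total_info) > target_se:
        remaining = [i for i in range(len(difficulties)) if i not in asked]
        if not remaining:
            break
        best = max(remaining, key=lambda i: item_information(theta, difficulties[i]))
        asked.add(best)
        total_info += item_information(theta, difficulties[best])
    return len(asked)

# Hypothetical bank of 151 items with difficulties spread over -3..+3 logits:
# only a small fraction of the bank is needed to hit the precision target.
bank = [-3.0 + 0.04 * k for k in range(151)]
print(items_needed(bank))
```

Tightening the precision target (a smaller `target_se`) increases the number of items administered, which is the trade-off such simulations are designed to quantify.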
Conclusion and Future Research Needs
So what does this all mean? And what does this mean for the development of other participation measures?
As I mentioned to you, it's somewhat controversial in the field whether participation is a unidimensional construct or a multidimensional construct. But our data suggested that participation for veterans, with the item content that we used, was unidimensional, and that a psychometric approach to measurement was appropriate.
It also suggested that the Annex 3 approach we chose was appropriate for differentiating between activities and participation.
This was very population-specific. And I think further research would really need to be done to confirm whether this approach can be used with other populations.
One thing that can be done is that we could confirm the conceptualization of participation by reproducing this approach to look at other population-specific measures.
We kept the objective, subjective, and satisfaction dimensions separate on three separate scales. That's also something that could be looked at, whether they should be kept separate or blended.
Although we developed this for veterans, I strongly believe that it actually is probably appropriate for people who've sustained traumatic injuries and been in other traumatic situations. And for people with psychiatric illness. Because a lot of our sample had psychiatric illness. We have not yet validated this measure in those populations, but I'm very interested in doing so.
So for future research, we really need to do large cross-sectional studies to get normative values for the CRIS-CAT, and then longitudinal studies to look at the stability of the CRIS-CAT scales over time. And then what I'm really interested in is looking at the responsiveness of the measure to change after an intervention that's designed to improve participation. That really is necessary for us to know if this measure is responsive, and to do some head-to-head comparisons with other measures to see which measures respond to change and which interventions are effective at improving participation.
Also, there's great interest in looking at the factors that are related to participation: certainly personal factors and the environment, social support, things that we strongly think are related. And other studies to understand how determinants of participation vary by age, sex, and other factors.
Questions and Discussion

Question: Have you looked at the relationship between the extent, perceived limitation, and satisfaction sub-scales?

Our work shows that the extent and the perceived limitation scales are fairly highly correlated. Satisfaction is less correlated. That's not always true for participation measures or measures that cover the area of participation. But I think it would probably need to be looked at in a population-specific way. Personally, I think that there might be very similar scores between extent and perceived.
And one doesn't have to administer all three CRIS scales; you can choose. The CRIS-CAT software we developed is freeware, and I'm happy to share it with anyone. You get to choose in the beginning: do you want to administer all three, or just a single one, or two of the three?

Question: You had a couple initial pilot samples, and I felt like it also gave you a window into the feel of the data as you're building the measure. Did you do that methodologically by design? Or did that mirror the way that the funding unfolded for the project? Or what drove that?

Well, I think both of those things. First, we wouldn't want to go forward with a large item set unless we had done some preliminary pilot testing and revision of the item set, because there's a lot of tweaking that needs to be done to really refine the item set and each of the questions, and to make sure they're well understood and being answered as we expect people to answer them.
The initial funding came through a TRP VA health services mechanism, and then we had some pilot money. So it came in several stages. But it also, I think, corresponded to good measure design, because I don't think it would make sense to move forward with testing a large item set in a large sample like that unless we had data to suggest that it was going to be robust and measure what we intended it to measure.

Question: Is there anything in the instrument that really would make it just unique to veterans and to service? It seems like it would be applicable, as you kind of alluded to in ideas for future research, to a broader population.

There are very few questions that are veteran-specific questions. I think there is a question about, “How often do you get together with friends who are veterans?” And there's a question of “How often do you get together with friends who are non-veterans?” And the difficulty level of those was quite surprising. It's much easier for veterans to get together with friends who are veterans than to get together with friends who are non-veterans. That's just how it worked out. But there were very few questions like that.
On the CAT there are some screens set up so that people without children are not asked the questions pertinent to being a parent. People who are not working are not asked questions about that. So it could easily be modified so those very few questions that are veteran-specific, that use the word veterans are not asked to people if they're not a veteran.
There was no item that's particularly critical on a CAT. Because you get a whole array of item difficulties. But I did notice that some of the questions for veterans about socializing were different. More difficult to socialize with people outside of the veteran experience. Those are the ones that I think of right now. There were not very many questions like that. But we, in our interviews with people, we knew that this was a phenomena that was true, is that people seek out others. And they don't want to connect with friends from before who did not share the similar experience. And may isolate themselves from people and so on. So there were very few items like that, but they could easily be filtered out.
But before we use a measure like that in a different population, we would need to establish its content validity. I had hoped to do that, but we haven't yet had a chance to do that work.
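As an aside, the kind of screening described above, where parenting items are skipped for respondents without children and veteran-specific items could be skipped for non-veterans, can be sketched in a few lines. This is only an illustrative sketch, not the CRIS-CAT implementation; the respondent field names and the parenting item text are invented for the example.

```python
# Hypothetical sketch of item skip logic: each item carries a predicate,
# and a respondent is only shown items whose predicate matches their profile.
# Two item texts echo examples from the talk; the field names are invented.

ITEMS = [
    {"text": "How often do you get together with friends who are veterans?",
     "applies_if": lambda r: r["is_veteran"]},
    {"text": "How much limitation do you have in parenting activities?",
     "applies_if": lambda r: r["has_children"]},
    {"text": "How satisfied are you with your ability to make yourself "
             "clearly understood?",
     "applies_if": lambda r: True},  # asked of everyone
]

def applicable_items(respondent):
    """Return the item texts this respondent should actually be asked."""
    return [item["text"] for item in ITEMS if item["applies_if"](respondent)]
```

Filtering the item bank this way leaves the remaining items, and whatever difficulties were calibrated for them, untouched, which is why a handful of veteran-specific questions could "easily be filtered out" for other populations.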

Question: How common is the ICF within the VA system and using that model? Do other measurement instruments use the ICF model within the VA system?

No. When we began this, it was in 2003 and 2004 that I began the proposals for this formative work. And the ICF, remember, was published in 2001, and it is still gaining acceptance, particularly in this country. In other countries I think it's much more widely accepted.
It isn't really being used here. But, as we can see with the CMS rules for functional limitation reporting, the ICF taxonomy is starting to get used and integrated. Still, I think there are very few measures that were developed with that taxonomy. The PROMIS measures, obviously, are not using it; there are other constructs that we generally have been interested in measuring for health and rehab.

Question: I study children with language disorders and the effects of language treatment. And yet the concept of reintegration, I think, could be very relevant to my area. Especially if it's translated into decisions such as: What is the full slate of factors we should consider when we're thinking of releasing a child from therapy? Or when we're thinking of recommending that a child move from resource help to full-time placement in a classroom? I would imagine there are things over and beyond an improvement in language skills that we should consider, such as improvement in self-confidence, or the child's willingness to socially integrate with others, and so on.

My question is: Are there any general tips you might offer for people who might be considering developing or refining a measure of reintegration applied to an area such as the one I just described?

I think the formative research is really crucial: figuring out who needs to be in the formative research to help you understand the challenges, and having a broad enough group of stakeholders to get a very clear picture of the important aspects of integration for children that you'd want to measure. You really can't have good content validity without really good formative research to inform what should be in the measure.
And then take a look around at what other measures exist, and write items that address that content area. I got very familiar with the ICF taxonomy and its pros and cons. There was a lot of international effort put into defining each of those subchapters and what they mean, so it's a good starting point.
For children, I'm not sure if people feel that it is all encompassing. I don't work in the area of pediatrics. But there might be some thought that it might need to be expanded for children.
Audience Comment: There is an ICF for children and youth, published in 2007, and it has a lot more developmental codes. For language, for example, there's a code for developing syntax, and one for responding to voice: things that are not in the ICF but could be of use to you. And the next step is that they're going to combine the two, because it turns out those ICF-CY developmental steps are very important for both dementia and head trauma; they can capture an intermediary step that the current codes do not have. So the plan is that they're going to be combined in a few years.
Very interesting. Thank you. I knew that there was some controversy in that area. For anything that isn't strictly coded in the taxonomy, there's an out called "other," and you can put it under that category. So some of the issues we encountered were simply coded as "other," because there was no taxonomy and no real description of those kinds of functions.

Question: I'm wondering if there were some items whose validity you were worried about?

I was surprised to see in your ranking of item difficulty that thinking logically and critically fell in the middle range. It raises issues of self-awareness, certainly. But these are also very subjective judgments: how much you enjoy something is different from how logically and critically you think now. I was surprised by its placement.

This is always a challenge with patient-reported outcomes. Even though I'm saying it's extended participation, what we have are self-reported patient perspectives on how often they encounter a situation, or how much limitation they have.
I showed you the ordering of items, and with so many different chapters, where things fell out was sort of interesting. The easiest thing to endorse was personal cleanliness, and the hardest things to endorse were recreation, participation, and socializing. A lot of things were in the middle.
The thing to remember about a CAT, or any measure like this, is that the score is not based on one single item. With single-item measures you can have reliability issues. But with multiple items you can narrow down the score, so you can have more confidence that you're measuring the latent construct, which is participation.
But we did not do any concurrent validation of particular items against, say, a neuropsych evaluation, to see how what people endorse for limitation in thinking clearly and logically corresponds to real-life objective function. Generally with physical function, which is what I know much more about, the correlation between self-reported physical function and performance-based physical function is moderate at best. People's perceptions are not the same as an outside objective measurement; they're usually moderately correlated.
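The point that a CAT score rests on many items, each with its own calibrated difficulty, rather than on any single item, can be illustrated with a toy one-parameter (Rasch) item-response model. This is only a sketch of the general idea, not the CRIS-CAT's actual estimation procedure; the item difficulties and the grid-search estimator are invented for the example.

```python
import math

def rasch_p(theta, difficulty):
    """Probability of endorsing an item under a one-parameter (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def next_item(theta, difficulties, asked):
    """Adaptive step: pick the unasked item whose difficulty is closest to
    the current ability estimate (the most informative under this model)."""
    candidates = [i for i in range(len(difficulties)) if i not in asked]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

def estimate_theta(responses, difficulties):
    """Crude grid-search maximum-likelihood estimate of theta from all
    responses so far: the score is never based on one single item."""
    grid = [g / 10.0 for g in range(-40, 41)]  # theta from -4.0 to 4.0
    def loglik(theta):
        ll = 0.0
        for item, endorsed in responses:
            p = rasch_p(theta, difficulties[item])
            ll += math.log(p if endorsed else 1.0 - p)
        return ll
    return max(grid, key=loglik)
```

For example, with item difficulties `[-2.0, -0.5, 0.3, 1.5]` and a current estimate of 0.0, `next_item` would administer the item at difficulty 0.3; after each response, `estimate_theta` pools all responses into one updated estimate of the latent trait. Pooling across items of varying difficulty is what narrows the score and gives confidence in the latent construct.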
Audience Comment: I would just add that that's an interesting challenge in communication. Because there's been some work to suggest that, after someone goes through aphasia treatment, for example, both they and their spouse or loved one may have developed a greater awareness of their communication challenges. So, in fact, it may look like there has been a decline in progress. But, in fact, it's just that they're more aware of the problem.
We didn't look at that. But certainly this idea of response bias because you've either changed somehow through treatment or your values have changed, that's certainly an issue in all kinds of patient-reported outcomes. That's a whole other area. A response shift, I should call it. Response shift rather than response bias.

Question: I was really interested that you had a category of responses about how satisfied people were. When our team was working on our communicative participation items, we really wanted to ask people, "How satisfied are you?" But through our cognitive interviews, our participants told us not to use that, because it really didn't capture their experiences. They wanted more negative wording that reflected the problems they were having, not how satisfied they were. So we wanted to use "satisfied" but had to give it up.

I was curious whether the satisfaction wording seemed to work well for you? Did you get any kind of feedback about that?

Well, we did cognitively test all of our scales: the anchors and the wording of each response category.
We originally had been using the terrible/delighted scale for satisfaction. And the veterans did not like the anchor words on that scale, which were terrible and delighted. They didn't think they would ever use the word delighted. And they didn't want to endorse a response that was delighted. So we changed the response categories so they would be more acceptable.
A lot of the questions were phrased that way. For communication, say, we might have asked, "How satisfied are you with your ability to make yourself clearly understood?" We did not have any complaints about that. But, again, when your measure is population-specific and you're getting that kind of feedback from the people you are testing, it's worth taking a look at. We did not encounter that problem.
Resnik, L. J. & Allen, S. M. (2007). Using international classification of functioning, disability and health to understand challenges in community reintegration of injured veterans. Journal of Rehabilitation Research and Development, 44(7), 991–1006 [Article]
Resnik, L., Borgia, M., Ni, P., Pirraglia, P. A. & Jette, A. (2012). Reliability, validity and administrative burden of the community reintegration of injured service members computer adaptive test (CRIS-CAT). BMC Medical Research Methodology, 12(1), 145 [Article] [PubMed]
Resnik, L., Tian, F., Ni, P. & Jette, A. (2012). A computer adaptive test to measure community reintegration of veterans. Journal of Rehabilitation Research and Development, 49(4), 557–566 [Article] [PubMed]
Resnik, L. & Plow, M. A. (2009). Measuring participation as defined by the international classification of functioning, disability and health: An evaluation of existing measures. Archives of Physical Medicine and Rehabilitation, 90(5), 856–866 [Article] [PubMed]
Resnik, L., Plow, M. & Jette, A. (2009). Development of CRIS: measure of community reintegration of injured service members. Journal of Rehabilitation Research and Development, 46(4), 469 [Article] [PubMed]
Whiteneck, G. & Dijkers, M. P. (2009). Difficult to measure constructs: conceptual and methodological issues concerning participation and environmental factors. Archives of Physical Medicine and Rehabilitation, 90(11), S22–S35 [Article] [PubMed]
World Health Organization. (2001). International Classification of Functioning, Disability and Health (ICF). Geneva, Switzerland: World Health Organization.
World Health Organization. (2007). International Classification of Functioning, Disability and Health Children & Youth Version (ICF-CY). Geneva, Switzerland: World Health Organization.