The research details a promising approach for identifying athletes who intentionally underperform on the Immediate Post-Concussion Assessment and Cognitive Testing, or ImPACT, a computerized tool comprising eight subtests that gauge neurocognitive performance. Administering ImPACT in the preseason establishes a cognitive baseline that can be compared against the results of a post-concussion test, informing decisions about whether and when an athlete returns to action.
Concussions result from the brain slamming against the skull, usually causing short-term problems that some research suggests may evolve into long-term problems, such as memory loss and depression, when the brain is subjected to repeated trauma. To mitigate the risk of re-injury, athletes diagnosed with concussions take the ImPACT or a similar test to help determine when they have fully recovered.
But some athletes have adopted the practice of sandbagging: giving lackadaisical effort on the baseline test to record a lower score in the hope of returning to play sooner after a concussion. Sandbagging can undermine the validity of the test and, because a recovering brain is more susceptible to further trauma, ultimately increase the likelihood of another concussion.
“At this point, people administering ImPACT may not have much training in neuropsychological testing or standardized test administration or data interpretation,” says lead author Kathryn Higgins, a postdoctoral researcher with the Center for Brain, Biology and Behavior at Nebraska. “If the baseline is the standard for when an athlete is recovered, there are all sorts of issues with returning someone to play based on poor baseline data.”
So Higgins ran an experiment to determine whether a statistical approach could identify more of the athletes who sandbag on the baseline test. The experiment asked 54 athletes from rural Midwestern high schools to take the test twice: once while giving their best effort and once while subtly sandbagging. After analyzing the results, Higgins identified the four subtests that showed the largest disparity in scores between the two conditions. She then devised an equation that yielded a composite score from those subtests.
Setting a threshold on the composite score enabled her to correctly identify 100 percent of the sandbagging cases while recognizing best-effort cases more than 90 percent of the time. Prior research suggests that ImPACT’s existing validity checks, which flag suspicious scores on five individual subtests, detect just 65 to 70 percent of sandbaggers.
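The general approach described above, combining a few effort-sensitive subtest scores into a composite and flagging baselines that fall below a cutoff, can be sketched in a few lines. Note that the study’s actual subtests, weighting, and cutoff value are not given in this article; the subtest names, the simple averaging, and the cutoff below are all invented for illustration.

```python
# Hypothetical sketch of a composite-score validity flag, in the spirit of
# the approach described. All subtest names, weights, and the cutoff are
# illustrative assumptions, not the study's published equation.

# The four (hypothetical) subtests assumed to be most sensitive to effort.
EFFORT_SENSITIVE_SUBTESTS = ("word_memory", "design_memory",
                             "xo_total", "three_letters")

def composite_score(subtests: dict) -> float:
    """Combine the effort-sensitive subtests into one composite.

    Here a plain average stands in for whatever weighted equation
    the study actually derived.
    """
    return sum(subtests[k] for k in EFFORT_SENSITIVE_SUBTESTS) \
        / len(EFFORT_SENSITIVE_SUBTESTS)

def flags_sandbagging(subtests: dict, cutoff: float = 60.0) -> bool:
    """Flag a baseline as suspect if its composite falls below the cutoff."""
    return composite_score(subtests) < cutoff

# Two made-up baselines: one genuine effort, one deliberately depressed.
best_effort = {"word_memory": 92, "design_memory": 88,
               "xo_total": 85, "three_letters": 90}
sandbagged = {"word_memory": 48, "design_memory": 55,
              "xo_total": 50, "three_letters": 47}

print(flags_sandbagging(best_effort))  # False: composite 88.75 clears cutoff
print(flags_sandbagging(sandbagged))   # True: composite 50.0 falls below
```

In practice the cutoff would be chosen from data, trading off sensitivity (catching sandbaggers) against specificity (not flagging honest low scorers), which is exactly the balance the reported 100 percent and 90-plus percent figures describe.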
“Obviously, my flags are going to be better because I built them and tested them on the same sample,” says Higgins, who conducted the research as part of her dissertation. “But I thought it was worth pointing out that this equation shows promise as another way to identify poor effort on baseline testing.”
“There is so much room for work to be done,” says Higgins. “We have come so far in the last 10 years, and we know so much more than we did, but there are still so many things that we don’t know.”