The stakes are even higher for real-time diagnosis, where doctors work under constant time pressure. That is why, since the 1960s, researchers have tried to supplement physicians' memory and decision-making skills with computer-aided diagnostic tools. In 2012, for instance, IBM pitted a version of its Jeopardy!-winning artificial intelligence, Watson, against questions from the Doctor's Dilemma. But Big Blue's brainiac could not replicate the overwhelming success it had enjoyed against human Jeopardy! players.
The trouble is that computer-aided diagnostic tools still do not measure up to the performance of individual doctors, according to multiple recent studies. Nor can the makers of such software agree on a single benchmark by which to measure performance. Drawing on reports about this software in the peer-reviewed literature, one group of researchers identified wide variations in performance across different ailments, as well as varying usage patterns among doctors.
For instance, younger doctors are more likely to spend extra time entering patient information into a tool, and more likely to benefit from its assistance. Two presentations at the Diagnostic Error in Medicine Conference, held 6–8 November in Hollywood, Calif., tackled the problem of how to realistically incorporate such technological assistance into physician training and into difficult diagnostic routines.
Another challenge is figuring out how to compare different software aids. "If you look at, for example, the huge progress that has occurred in speech recognition or in image classification, it has really been driven by having truly good benchmark data sets and by having actual competitions," says computer scientist Ole Winther of the Technical University of Denmark in Lyngby. "We don't have the same thing in the medical domain."
Despite the lack of a shared benchmark for computer-aided diagnostics, individual doctors, family members of misdiagnosed patients, and clinical and academic groups have built such aids and are marketing them. Clients include private health insurers and research hospitals around the world, among them a pair of medical facilities in Japan and North Carolina that have reported limited success diagnosing patients with Watson. Still, says one of IBM's clients, Jens-Peter Neumann of the Rhön-Klinikum hospital network in Germany, it is too early to measure the potential cost savings from his group's Watson collaboration.
So far, the group has opted to use a combination of medical taxonomies, such as ICD-10 and MedDRA, to describe diagnoses and symptoms. He also notes that the knowledge sources fed into Watson sometimes contradict one another. "But that reflects the diversity of Watson's knowledge base and is no different from having a room full of doctors with different backgrounds and differing opinions," he says.
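As a rough illustration of what combining such taxonomies might look like in practice, here is a minimal sketch of a patient record that carries ICD-10 codes for diagnoses and MedDRA preferred terms for symptoms. The structure, field names, and the specific codes and terms are illustrative assumptions, not details from the article or from IBM's system.

```python
# Hypothetical sketch: a patient case annotated with two medical
# taxonomies, as described in the article. ICD-10 codes label
# diagnoses; MedDRA preferred terms label reported symptoms.
# All specific codes/terms here are illustrative examples.
from dataclasses import dataclass, field


@dataclass
class PatientCase:
    # ICD-10 diagnosis codes, e.g. "E11" (type 2 diabetes mellitus)
    icd10_diagnoses: list = field(default_factory=list)
    # MedDRA-style preferred terms for symptoms (illustrative)
    meddra_symptoms: list = field(default_factory=list)


case = PatientCase(
    icd10_diagnoses=["E11"],
    meddra_symptoms=["Polydipsia", "Fatigue"],
)
print(case.icd10_diagnoses, case.meddra_symptoms)
```

The point of using standardized vocabularies like these is that knowledge sources from different institutions can be merged and compared against the same case description, even when the sources disagree.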