Article obtained from Photonics RSS Feed.
A team of investigators from Massachusetts General Hospital (MGH) has developed a system that uses artificial intelligence (AI) to diagnose and classify brain hemorrhages, and that can form its decisions from relatively small image data sets.
To train the system, the researchers began with 904 computed tomography (CT) scans of the head, each consisting of around 40 individual images that a team of MGH neuroradiologists labeled as showing one of five hemorrhage subtypes or no hemorrhage. To improve the system's accuracy, the team built in steps that mimic the way radiologists analyze images.
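The data setup the article describes (per-slice labels rolled up into a scan-level decision) can be sketched as follows. This is a minimal illustration, not the paper's confirmed method: the five subtype names are the standard intracranial hemorrhage categories, and the max-over-slices aggregation rule and threshold are assumptions for demonstration.

```python
# Hypothetical sketch: aggregate per-slice hemorrhage probabilities into a
# scan-level diagnosis, mirroring how a radiologist reviews every slice.
# The subtype names, max-over-slices rule, and threshold are illustrative
# assumptions, not details confirmed by the paper.

SUBTYPES = ["epidural", "subdural", "subarachnoid",
            "intraparenchymal", "intraventricular"]

def classify_scan(slice_probs, threshold=0.5):
    """slice_probs: one dict per slice, mapping subtype -> probability.
    Returns (hemorrhage_detected, subtypes_present)."""
    # Flag a subtype if ANY slice shows it above threshold, since a
    # hemorrhage may be visible on only a few of a scan's ~40 slices.
    scan_probs = {s: max(p[s] for p in slice_probs) for s in SUBTYPES}
    present = [s for s in SUBTYPES if scan_probs[s] >= threshold]
    return (len(present) > 0, present)

# Example: a 3-slice scan where one slice strongly suggests a subdural bleed.
slices = [
    {s: 0.02 for s in SUBTYPES},
    {**{s: 0.03 for s in SUBTYPES}, "subdural": 0.91},
    {s: 0.01 for s in SUBTYPES},
]
print(classify_scan(slices))  # (True, ['subdural'])
```

In practice, the per-slice probabilities would come from a trained image classifier; the aggregation step is what turns ~40 slice-level outputs into a single diagnosis for the scan.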
Once the model was created, the investigators tested it on two separate sets of CT scans: a retrospective set taken before the system was developed, consisting of 100 scans with and 100 without intracranial hemorrhage, and a prospective set of 79 scans with and 117 without hemorrhage, taken after the model was created. On the retrospective set, the system detected and classified intracranial hemorrhages as accurately as the radiologists who had originally reviewed the scans. On the prospective set, it proved even better than nonexpert human readers.
These images show the system’s ability to explain its diagnosis of subarachnoid (left above) and intraventricular (left below) hemorrhage by displaying images with similar appearances (right) from an atlas of images used to train the system. Courtesy of Hyunkwang Lee, Harvard School of Engineering and Applied Sciences, and Sehyo Yune, M.D., Massachusetts General Hospital Department of Radiology.
One obstacle to integrating machine learning systems into clinical decision-making is the “black box” problem, that is, the inability of AI systems to explain how they arrived at a decision. The U.S. Food and Drug Administration requires any decision support system used in a clinical setting to provide data allowing someone to review the reasons behind its findings. Another obstacle, the researchers said, is the need for large, well-annotated data sets.
To solve the “black box” problem, the team had the system review and save the images from the training data set that most clearly represented the classic features of each of the five hemorrhage subtypes. Using this atlas of distinguishing features, the system is able to display a group of images similar to those of the CT scan being analyzed in order to explain the basis of its decisions.
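One common way to implement this kind of atlas lookup is nearest-neighbor retrieval: find the stored atlas images whose feature representations are closest to that of the scan under review. The sketch below uses cosine similarity over feature vectors as an illustrative assumption; the paper's exact retrieval method and feature space are not specified here.

```python
import numpy as np

# Illustrative sketch of atlas-based explanation: given the feature vector of
# the scan under review, retrieve the most similar atlas entries by cosine
# similarity. The feature dimension and similarity measure are assumptions,
# not the paper's confirmed implementation.

def top_k_similar(query, atlas_features, atlas_labels, k=3):
    """Return labels of the k atlas entries most similar to `query`.
    query: (d,) feature vector; atlas_features: (n, d) matrix."""
    q = query / np.linalg.norm(query)
    a = atlas_features / np.linalg.norm(atlas_features, axis=1, keepdims=True)
    sims = a @ q                       # cosine similarity per atlas entry
    best = np.argsort(sims)[::-1][:k]  # indices of the top-k matches
    return [atlas_labels[i] for i in best]

# Toy atlas: three stored exemplars with known subtype labels.
rng = np.random.default_rng(0)
atlas = rng.normal(size=(3, 8))
labels = ["subarachnoid", "intraventricular", "subdural"]
query = atlas[1] + 0.01 * rng.normal(size=8)  # nearly matches entry 1
print(top_k_similar(query, atlas, labels, k=1))  # ['intraventricular']
```

Displaying the retrieved exemplars alongside the new scan lets a clinician verify that the system's "reasoning" rests on genuinely similar prior cases, which is the kind of reviewable evidence regulators ask for.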
Sehyo Yune, M.D., said, “Some critics suggest that machine learning algorithms cannot be used in clinical practice, because the algorithms do not provide justification for their decisions.” Yune said that it was necessary for the team to overcome this challenge, as well as the need for a large data set — which would have been expensive and time-consuming to develop — “to facilitate the use in health care of machine learning, which has an immense potential to improve the quality of and access to care.”
Such a system could become a tool for hospital emergency departments evaluating patients with symptoms of a stroke, allowing rapid application of the correct treatment. “Many facilities do not have access to specially trained neuroradiologists — especially at night or over weekends — which can require nonexpert providers to determine whether or not a hemorrhage is the cause of a patient’s symptoms. The availability of a reliable, ‘virtual second opinion’ — trained by neuroradiologists — could make those providers more efficient and confident and help ensure that patients get the right treatment,” Michael Lev, M.D., said.
Shahein Tajmir, M.D., said that the next step would be to deploy the system into clinical areas and further validate its performance with many more cases. “We are currently building a platform to allow for the widespread application of such tools throughout the department. Once we have this running in the clinical setting, we can evaluate its impact on turnaround time, clinical accuracy, and the time to diagnosis.”
The research was published in Nature Biomedical Engineering (https://doi.org/10.1038/s41551-018-0324-9).