Exploring the Horizon — Episode 4

Platforms to build medical AI datasets and algorithms

The fourth episode of my podcast “Exploring the Horizon” is about platforms to build medical AI datasets and algorithms. I interviewed George Shih, a radiologist at Weill Cornell Medical Center in New York City who also trained as an electrical engineer. He is the founder of the MD.ai platform, a company whose team has competed in Kaggle contests. Besides the features and value of the platform, we discussed topics such as the value of making medical image data available, sharing those data, and involving radiologists in this process. We closed with a discussion about the value of blockchain for healthcare purposes.

Besides his radiology training, George Shih has a degree in computer science. He went on to study electrical engineering in Paris at the Ecole Nationale Supérieure des Télécommunications. Télécom Paris is one of the institutes in France focused on computer science and telecommunications. Being able to study there was a great opportunity for him, opening his eyes to the wider world of engineering outside of the U.S. and allowing him to learn more about neural networks, one of the most popular topics in computer science at this moment. His supervisor’s research was focused on random neural networks (RNN). While working on his master’s thesis George stumbled across a medical dataset, so by coincidence his thesis involved medical imaging. To finish that work he had to collaborate with a radiologist from Duke University, and that is where his interest in radiology started.

George Shih started with the idea of building a platform like MD.ai only a few years ago, because of the increasing popularity of deep learning and neural networks. At that time a lot of other people in the medical imaging space became interested in using these types of algorithms, and he was trying to figure out how he could participate in this trend.

One of the first things they did was participate in the Kaggle Data Science Bowl. Kaggle is a data science website that holds competitions on machine learning. The Second Annual Data Science Bowl was on cardiac MRI, a domain George is familiar with. Essentially the competition was about how to automatically calculate the ejection fraction. From his fellowship in MRI he remembered how much manual effort was involved in segmenting out the ventricle. George and his teammate Leon Chen finished 14th out of approximately a thousand teams.
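For readers unfamiliar with the metric: the ejection fraction is simply the fraction of blood pumped out of the ventricle per heartbeat, computed from the segmented end-diastolic and end-systolic volumes. A minimal sketch of that final calculation (not the team’s actual solution, which first had to derive those volumes from the MRI segmentations) could look like this:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left-ventricular ejection fraction (%) from the end-diastolic and
    end-systolic volumes, e.g. obtained by segmenting the ventricle on each
    cardiac phase and summing slice area x slice thickness."""
    if edv_ml <= 0:
        raise ValueError("End-diastolic volume must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Example: EDV = 120 ml, ESV = 50 ml  ->  EF of roughly 58%
print(round(ejection_fraction(120, 50), 1))
```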

Lessons learned

What they essentially learned from that competition was that, number one, deep learning is for real, because in a span of three months they were able to build something using deep learning that was as good as or better than some of the tools being used in clinical practice. They did this without dedicated resources, just for fun during nights and weekends, and realized that being able to come up with such good results was something to be very excited about, and that this was going to be the future of radiology.

Leaderboard of the 2017 Kaggle Data Science Bowl, with George Shih and Leon Chen ranked 6th

The second thing they learned was that, in order to build algorithms, the hardest part of the process — assuming you and your team have the skill sets — was coming up with a high-quality annotated dataset. The insight that they gained was that the roadblock for a lot of the future algorithms was going to be around annotation.

“The insight that we gained from competing in Kaggle was that the roadblock for a lot of the future algorithms was going to be around annotation.”

They weren’t happy with the tools available in the market at that time, so they wanted to build something for themselves to create more algorithms. As they kept on building, they realized that everyone was going to have the same problem, so they decided to pivot to a cloud-based platform for annotation and see how that would go. They participated in the third Data Science Bowl on Kaggle in 2017, the lung cancer screening challenge, and were able to use their own annotation software. Finishing in the top 10 (6th place) in that competition validated the usefulness of the platform.

Focus on tools for handling data

MD.ai is therefore focusing on the tools. They host a number of public datasets that are available to anyone, but they’re currently not a provider of data per se. Most customers come to them with data they have acquired themselves or obtained access to, and they use the platform for the tools. Most of the users tend to be involved in medical imaging somehow, whether they’re academics or part of larger companies or startups. The platform supports any kind of pixel data, and because of that they are also branching out beyond radiology in the medical market.

“We’re really trying to stay focused on the medical side and maybe something slightly adjacent to medical.”

The platform can also accommodate other specialties such as pathology, ophthalmology and dermatology, and, for example, pictures captured during clinical visits, such as in women’s care. The platform not only excels as a cloud-based service, but also through its support for the medical imaging format DICOM. The platform is essentially made to make the tools available to everyday users, even those who don’t know much about DICOM.

“We’re completely cloud-based so it’s very easy to have projects that involve lots of different experts, whether they’re physicians or engineers or other experts participating in the same project”.

The platform can accommodate the data in lots of different ways, integrating with most of the major cloud storage providers, whether it’s Amazon or Google. If users already have their data stored in the cloud, they can connect directly to that storage, which means that no uploading is needed and that the users maintain control of their data. That turns out to be a very popular method, because institutions want to maintain control of their data and this makes it a lot easier for them to do so.
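The “connect to the data where it already lives” pattern is easy to illustrate. The sketch below lists DICOM objects in a Google Cloud Storage bucket using Google’s public Python client; it is a generic illustration of the approach, not MD.ai’s actual integration, and the bucket and prefix names are hypothetical.

```python
from google.cloud import storage  # pip install google-cloud-storage

# Hypothetical bucket and prefix; in practice the institution keeps the data
# in its own bucket and only grants the annotation platform read access.
BUCKET = "hospital-imaging-archive"
PREFIX = "chest-ct/2019/"

client = storage.Client()  # uses the credentials configured in the environment
bucket = client.bucket(BUCKET)

# List the studies without downloading them: nothing is copied or uploaded,
# so the institution retains control of the underlying data.
for blob in bucket.list_blobs(prefix=PREFIX):
    if blob.name.endswith(".dcm"):
        print(blob.name, blob.size, "bytes")
```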

Natural Language Processing

Speaking about Natural Language Processing (NLP) and how it is integrated into the MD.ai platform, George explained that NLP is used as a simplified way to create a cohort. The first part of machine learning is to figure out what your training set will look like, because you typically want to balance training sets so that they cover the different possible outcomes of your algorithm. The NLP tool was created as a way for customers to see what kind of data is available in their system, by performing an NLP-based analysis of the existing data. By running a query on the existing database of radiology reports, the NLP tool indexes those reports: when searching for neural gliomas, for example, you automatically get an index of the cases with neural gliomas. That way you can build a dataset and then import it into the platform. Each user obtains a token that gives access to an API for making annotations. Through a Jupyter environment they can access all the data, including annotations, and start either exploring the data or building an algorithm.
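As a rough illustration of that token-plus-API workflow, the sketch below pulls a project’s annotations into a notebook and checks the class balance. The endpoint URL, project identifier and JSON layout are hypothetical placeholders rather than MD.ai’s documented API; the point is only that a personal access token and an HTTP API are enough to get annotations into a pandas DataFrame for exploration.

```python
import os
import requests
import pandas as pd

# Hypothetical endpoint and project ID, used only to illustrate the pattern.
API_URL = "https://example-annotation-platform/api/projects/{project_id}/annotations"
PROJECT_ID = "demo-project"
TOKEN = os.environ["ANNOTATION_API_TOKEN"]  # the user's personal access token

resp = requests.get(
    API_URL.format(project_id=PROJECT_ID),
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Assume each annotation record carries a study ID, a label and the annotator.
annotations = pd.DataFrame(resp.json()["annotations"])
print(annotations.groupby("label").size())  # quick look at the class balance
```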

Data curation and annotation are a burden for some people, because they require financial investment and time. If, for example, you have a big dataset of 20,000 chest X-rays with pneumothorax that you don’t want to annotate entirely with human experts, for time reasons or because of insufficient financial means, you could annotate only a portion of those, maybe half, and train an algorithm to try to annotate the rest. The MD.ai API is very useful for that, since the results of machine learning algorithms can be imported back into MD.ai. This way the remainder of the pneumothorax dataset can be auto-annotated. It is really semi-automated annotation, because you still have to make sure that the output is what you want, but in theory it can significantly reduce the time and effort.
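A minimal sketch of that semi-automated loop, with random placeholder features standing in for whatever image representation the model would actually use, and a hypothetical upload_annotations call standing in for the import-back step:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder feature vectors standing in for image-derived features
# (in practice these would come from a CNN, not random numbers).
X_labeled = rng.normal(size=(10_000, 64))
y_labeled = rng.integers(0, 2, size=10_000)   # expert labels: pneumothorax yes/no
X_unlabeled = rng.normal(size=(10_000, 64))   # the half that was never read

# 1. Train on the expert-annotated half.
model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# 2. Predict labels and confidences for the unannotated half.
proba = model.predict_proba(X_unlabeled)[:, 1]
pred = (proba >= 0.5).astype(int)

# 3. Push the predictions back as provisional annotations; low-confidence cases
#    are flagged so a human still reviews them (semi-automated, not automated).
needs_review = np.abs(proba - 0.5) < 0.15
print(f"{needs_review.sum()} of {len(pred)} cases flagged for expert review")
# upload_annotations(pred, needs_review)  # hypothetical import-back call
```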

Project management

MD.ai is also useful for project management and coordination. If you’re working with teams of 20 or 30 people distributed across multiple time zones, you might have to divide the annotation effort. For example, you can have multiple experts read the same study, with each person blinded to the other person’s annotations. There might even be a third person, the adjudicator, who can see all the annotations that were created and decide which one is the truth to feed to the algorithm. The platform has also been used to crowdsource a few datasets, as was the case for the annual RSNA machine learning challenge, where the NIH dataset of roughly 100,000 chest X-rays could be annotated for pneumonia by volunteers willing to participate.

Screenshot of pneumothorax dataset that was annotated as a part of a SIIM effort (subset of the NIH dataset)
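A toy sketch of that blinded double-read plus adjudication workflow, assuming each study simply receives one categorical label per reader (real projects deal with bounding boxes or masks, and the function and study names here are made up for illustration):

```python
from typing import Dict, Optional

def adjudicate(reader_a: Dict[str, str],
               reader_b: Dict[str, str],
               adjudicator: Dict[str, str]) -> Dict[str, Optional[str]]:
    """Combine two blinded reads; where they agree, take the shared label,
    otherwise fall back to the adjudicator's decision (or None if missing)."""
    ground_truth = {}
    for study_id in reader_a:
        a, b = reader_a[study_id], reader_b.get(study_id)
        ground_truth[study_id] = a if a == b else adjudicator.get(study_id)
    return ground_truth

# Example with three studies: the readers agree on s1, disagree on s2 and s3.
reader_a = {"s1": "pneumonia", "s2": "normal",    "s3": "pneumonia"}
reader_b = {"s1": "pneumonia", "s2": "pneumonia", "s3": "normal"}
adjudicator = {"s2": "pneumonia"}  # s3 has not been adjudicated yet

print(adjudicate(reader_a, reader_b, adjudicator))
# {'s1': 'pneumonia', 's2': 'pneumonia', 's3': None}
```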

Business model

The business model behind this platform is based on selling or licensing software; it is more of a B2B model. A contract is made with the customer, whether an academic institution or a company. A price offer can be made per project, but most customers have a year-long agreement providing unlimited access, meaning that they can do as many projects as they want and use as much data as they want.

Strategies to fill the data gap

In reply to the question whether there is a gap between those who can afford the data and those who can’t, George confirmed that such a gap exists: some institutions and companies have a lot more access to data and a lot more funding to acquire that data, while others are short of both data and funds. Professional organizations such as RSNA and SIIM are trying to find ways to level the playing field a little bit — coming up with specific projects to release certain datasets publicly and, beyond that, to provide annotations for these datasets. These publicly available datasets are immediately useful for this kind of algorithm development. Being involved with a lot of the societies’ efforts, George thinks this is very important, because AI is going to be such a big change in the way we practice medicine. The more participation we can get from different people and different countries, the better chance we’ll have of succeeding in building the kind of AI that’s both important and relevant for taking care of patients. These are all very important efforts.

“The more participation we can get from different people, different countries, the better chance we’ll have in succeeding to build the kind of AI that’s both important and relevant to take care of patients.”

France has a non-profit plan for developing artificial intelligence, in which a joint partnership of hospitals and private centers will make data available to develop algorithms. For this purpose a new association was created, called DRIM France IA. The agreement is that they give this data for free but in return they will get the algorithms for free as well, so that access to the algorithms is shared equally. That’s the “French model”, as George refers to it, which he thinks is an amazing concept. According to George, other countries are now thinking about similar models, which makes a lot of sense. “If we only have people who have the resources to it — and it’s a lot of cost right now to acquire data and develop algorithms — then the future of AI in medicine at least will result in a very uneven playing field. I think the easier we make it for people to at least experiment with this technology and data, the better it will be,” George says.

“If we only have people who have the resources to it — and it’s a lot of cost right now to acquire data and develop algorithms — then the future of AI in medicine at least will result in a very uneven playing field. The easier we make it for people to at least experiment with this technology and data, the better it will be”.

Opinion of French radiologists on Artificial Intelligence

Availability of data in China

A somewhat more sensitive topic is the availability of huge amounts of data in countries like China, and the efforts of Chinese companies to launch algorithms developed with these data on the European and American markets, which could possibly result in a flood of algorithms coming from China.

“Countries that have more data and more AI expertise will definitely have an advantage.”

China is very well positioned in both aspects right now: they have lots and lots of data and lots and lots of talent in AI. The talent in AI can be acquired by most countries, however, because the nice thing about the trend in the development of these algorithms is that the academic community has done a pretty good job of making everything freely available. From that point of view, other countries only need the desire to train their engineers, and there should be no financial barrier to doing that. The main friction, however, is still around the tools, and it’s not just the annotation tools; many tools along the spectrum are missing. It’s still very hard to get data in and out of systems. Even if China has more data than anyone could possibly imagine, there are still challenges for them to even get access to that data, because the systems to archive and distribute the data may be lacking. Another challenge lies in doing it safely, respecting patient privacy.

“Even if China has more data than anyone could possibly imagine, there are still challenges for them to even get access to that data because there may be systems lacking to archive and distribute the data.”

Training of radiologists in AI

In reply to a question about the value of increasing awareness among radiologists of the ongoing evolution, George answered that this is essential for increasing the development of AI in other countries and continents. Radiologists should know what’s going on and how they could participate in this development. He thinks we need change, we need to train radiologists, and we have to include this automatically in their training. There’s already a significant effort, at least in the U.S., to start incorporating not only machine learning but also imaging informatics into the training of radiology residents.

“We need change, we need to train radiologists and we have to include this automatically into the training.”

In terms of machine learning, we’re going to see more and more data science in the curriculum for radiologists, because 20, 30 or 40 years from now — when a few dozen or a few hundred algorithms are running on every CT scan — it will be very important to know which algorithms to trust. He says: “We don’t have to understand everything. Just like today we don’t understand everything our smartphone does and why it does it, but having some basic understanding of data science will be incredibly useful.”

“We don’t have to understand everything. Just like today we don’t understand everything our smartphone does and why it does it, but having some basic understanding of data science will be incredibly useful.”

George underlines that “there will be positions like chief AI officers”. It might be interesting if each radiology department had at least one radiologist focusing on this topic, so that the right investment decisions can be made and the right strategies chosen. In other companies people are already talking about a chief AI officer, even more specific than a chief technology officer, someone who’s really focused on AI.

“There will be positions like chief AI officers.”

The barrier to building these kinds of applications will be so low that most doctors will be able to do it themselves. “If you think about it, it makes perfect sense because the doctors are really in the position to know what problems they want to solve.” They have at least some access to the data already.

They may not always create a commercial algorithm that is used around the world, but they’re really in the best position to find the right use cases and start building prototypes to see if something would actually be useful. Eventually we’ll see that happening. That’s another reason why radiologists should acquire enough machine learning expertise during their training.

George also sees an analogy between the way radiologists are taught to use and interpret MRI sequences and the way they should be trained in using AI. When a radiologist publishes an article on a better way to do renal MRA, other radiologists can read that article and essentially change their protocols to accommodate the newly published findings. We will see that in AI as well: someone will create a new algorithm or architecture that ends up being better, and radiologists should be able to accommodate that, obtaining better accuracy by using the new architecture.

Blockchain for radiology


Blockchain is also a topic that’s currently being discussed when we talk about sharing data and medical images. It’s a technology that’s considered a way to give patients the possibility to share their data themselves. There are even blockchain-based platforms such as Embleema.com, where people are invited to share their medical data in return for a fee, so that the data become available for research and for developing AI algorithms. George agrees that there’s a lot of excitement about it, and that the technology is really cool (he even went through a few tutorials on how to create his own blockchain). In terms of it actually being utilized in healthcare, George sees a role for some kind of technology such as blockchain, i.e. something that’s distributed, something that will always keep records on who did what with the data. He’s not 100% convinced, however, that it has to be blockchain or that it has to be applied to the current data, because there’s much inertia in changing anything in the healthcare space.

“There’s much inertia in changing anything in the healthcare space.”
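The property George highlights, a durable record of who did what with the data, is essentially a tamper-evident audit trail. The toy sketch below chains records together with SHA-256 hashes, the core mechanism behind a blockchain, without any of the distribution or consensus machinery a real system would need:

```python
import hashlib
import json
import time

def add_record(chain, actor, action, data_ref):
    """Append a record whose hash covers the previous record's hash,
    so any later modification of earlier entries becomes detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "actor": actor,          # who touched the data
        "action": action,        # what they did with it
        "data_ref": data_ref,    # e.g. a study identifier
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

ledger = []
add_record(ledger, "hospital-A", "shared", "study-123")
add_record(ledger, "algorithm-vendor-B", "analysed", "study-123")
for rec in ledger:
    print(rec["actor"], rec["action"], rec["hash"][:12])
```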

He wonders how realistic it is to convert past data to blockchain, since the systems that are in use — and some of them are decades old — would have to be completely changed. There’s quite a bit of built-in inertia, and there’s also a high cost to such a change.

“I see maybe adoption of this kind of technology in a new setting instead of trying to convert our current data retrospectively.”

If you think about a system where you can upload your study to have it automatically analysed using certain algorithms, which then sends you back a result, this would be totally different from the workflow used today in clinical practice. In that scenario, however, you could imagine that some kind of blockchain technology is needed, because then you wouldn’t depend on all the current systems, which would otherwise have to change their workflow to accommodate that; you would be using it outside the normal workflow. So George thinks that the adoption of this kind of technology is more reasonable in a new setting than trying to convert the current data retrospectively. If the data are collected in a blockchain model, it may be easier for those willing and able to use the data to access them in an integrated way, or to find the data they need. In the end, it’s a very interesting technology and we should continue to experiment with it. Large-scale adoption, at least in more mature healthcare systems, will be very difficult. Maybe a developing country that really doesn’t have any existing infrastructure would be a reasonable place to start.

“In the end, it’s a very interesting technology and we should continue to experiment with it. Large-scale adoption at least in more mature healthcare systems is very difficult.”

Conclusion

From this episode we can conclude that:

  1. MD.ai was initiated to accelerate the application of AI in medicine, with a specific focus on medical imaging. It is a comprehensive medical AI platform spanning the entire pipeline, from data processing and labeling to machine learning model training, deployment, and clinical validation. The platform is committed to enabling people and organizations around the world to build meaningful medical AI applications.
  2. Countries, academic institutions, hospitals and radiological societies should engage more actively in developing AI solutions for healthcare. The more participation we can get from different people and different countries, the better chance we’ll have of succeeding in building the kind of AI that’s both important and relevant for taking care of patients.
  3. China might have more data than anyone could possibly imagine, but there are still challenges for them even to get access to that data, because not all systems are able to archive and distribute it.
  4. The education of radiologists should be adapted to the changing environment, so that they can be more actively involved in defining helpful clinical use cases for algorithms and in developing the right AI applications.
  5. Blockchain is an interesting technology for healthcare applications, but large-scale adoption will be rather difficult and slow.

read original article at https://medium.com/@erik.ranschaert/exploring-the-horizon-episode-4-837d43b729a2?source=rss——artificial_intelligence-5