qScout – Strengthening Global Vaccine Programs

Why an agile monitoring and management system is the need of the hour

The world is living through unprecedented times. Two relentless years of a pandemic are enough to strain even the strongest healthcare systems; combined with the current efforts to ramp up vaccinations and clinical care, this has left most countries' public health systems stretched thin or barely functional. Such accelerated development and roll-out of multiple vaccines is a first for any disease. It is important that each of these vaccines and its effects are monitored for substantial periods of time to understand short-term and long-term effects on the varying demographic and risk-factor profiles of vaccine recipients.

What is Active Vaccine Safety Surveillance (AVSS)?

Traditional surveillance systems in most countries rely heavily on healthcare providers to notify adverse events. This is a passive surveillance system that helps in detecting unsolicited adverse events. The vaccine survey is another conventional method, but its disadvantage is that it is usually a cross-sectional survey with only a one-time follow-up. One way to augment these traditional surveillance systems is to empower the vaccine recipient using smartphone-based digital tools. AVSS, or Active Vaccine Safety Surveillance, helps by proactively enrolling many vaccine recipients, who are then followed up for all minor and major adverse events. This can significantly ease the burden on frontline workers while capturing large amounts of data, frequently and in a timely fashion. Besides allowing healthcare systems to address immediate or delayed adverse events, it has the potential to monitor the health of the community in the long term as well.

Policy formulation has also been extremely difficult for governments and world organisations, given the novelty of the disease. This solution could allow for faster, data-driven decision making, empowering governments and policy makers in a way that only technology can.

Need for AVSS in Covid-19

Post-vaccine monitoring: Before COVID-19, vaccines used to be licensed after 4-15 years of rigorous clinical trials. With the fast-tracked development of COVID-19 vaccines, there is a likelihood that some rare and long-term adverse events went undetected in the clinical trials. Through AVSS via phone, vaccine recipients can be monitored for a period ranging from 7 days up to 12 months, with real-time alerts for any serious adverse events following immunization (AEFI) or Adverse Events of Special Interest (AESI). By automating this process, we have successfully tracked symptoms fast enough to be of actionable value, with the healthcare worker getting involved only if necessary.

Large-scale data collection and analysis: We need interoperable systems that can harmonise data from multiple sites, with a validated AI algorithm to measure the risk of AEFIs and their early indicators. The system will need to be agile and scalable to work in varying resource settings.

Country-level surveillance: There must be a centralised dashboard for policy makers and regulatory authorities to visualize community vaccine uptake statistics, AEFI patterns and efficacies.

qScout for AVSS monitoring

qScout is Qure.ai's Artificial Intelligence and NLP-powered solution that improves the vaccine recipient's experience while augmenting traditional surveillance systems for individual health monitoring. It has a smartphone-based component for easy interaction between the recipient and public health professionals.

How can qScout be used for active surveillance and monitoring of vaccinees?

Step 1: Walk-in/registered individuals at COVID-19 vaccination sites will be enrolled using qScout EMR by recording the following details (a sketch of such a record follows the list):

  • Personal Identifiers
  • Risk groups
  • Medication history
  • Name and details of the vaccine administered.
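For illustration, such an enrollment record might be represented as below. This is a minimal sketch; the field names are hypothetical, since qScout's actual EMR schema is not described in this post.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class EnrollmentRecord:
    """Hypothetical qScout EMR enrollment record (illustrative only)."""
    patient_id: str                                            # personal identifier
    risk_groups: List[str] = field(default_factory=list)       # e.g. ["elderly", "diabetic"]
    medication_history: List[str] = field(default_factory=list)
    vaccine_name: str = ""                                     # vaccine administered
    vaccine_batch: str = ""
    dose_number: int = 1
    vaccination_date: date = field(default_factory=date.today)
```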

qScout monitor demo

Step 2: Once enrollment is completed, the vaccinated person receives a message on their mobile asking for consent. Follow-up messages will be sent for a set period to check for any adverse/unexpected symptoms (AEFIs or AESIs). The person will also be reminded about the second dose. Every enrolled individual will be monitored for a predefined period, as per the guidelines of the proposed project.
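As a rough sketch of how such automated follow-up could work (the intervals and symptom list below are illustrative assumptions, not qScout's actual protocol):

```python
from datetime import date, timedelta

# Hypothetical follow-up offsets in days after vaccination; the real schedule
# is defined by the guidelines of the proposed project.
FOLLOW_UP_DAYS = [1, 3, 7, 14, 30, 90, 180, 365]
SERIOUS_EVENTS = {"anaphylaxis", "seizure", "myocarditis"}  # illustrative AESI list

def follow_up_dates(vaccination_date: date) -> list:
    """Dates on which the recipient receives an automated symptom check."""
    return [vaccination_date + timedelta(days=d) for d in FOLLOW_UP_DAYS]

def triage(reported_symptoms: set) -> str:
    """Escalate to a health worker only when a serious AEFI/AESI is reported."""
    if reported_symptoms & SERIOUS_EVENTS:
        return "alert_health_worker"
    return "continue_automated_monitoring"
```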

Step 3: Public health officials who have access to the data can see the analysis of the AEFIs or AESIs on a real-time dashboard. The information will be segregated by demographics, type of vaccine administered, count of individuals administered dose 1 and/or dose 2, as well as the percent drop-out between the two doses.
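The dose-count and drop-out figures are simple aggregations over the enrollment data. For example, with pandas (column names assumed for illustration):

```python
import pandas as pd

def dose_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Per-vaccine dose 1 / dose 2 counts and percent drop-out between doses.
    Assumes columns 'vaccine', 'dose' (1 or 2) and 'patient_id'."""
    counts = df.pivot_table(index="vaccine", columns="dose",
                            values="patient_id", aggfunc="nunique").fillna(0)
    counts["dropout_pct"] = 100 * (counts[1] - counts[2]) / counts[1]
    return counts
```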

Benefits of Real-time remote patient monitoring after vaccination

  1. Early and timely detection and notification of serious AEFIs or AESIs
  2. Detection of rare and unknown adverse events that may not have been detected during the clinical trials
  3. Recipient risk profiling and predictive adverse-event scoring or modelling
  4. Long-term adverse effects of vaccination
  5. Identifying re-infection probabilities and severity
  6. Monitoring of vaccine administration SOP adherence and pharmacovigilance/post-market surveillance for vaccine manufacturers

Prior Experience:

Country wide contact tracing and remote patient management: A Case Study

During the first wave of the COVID-19 pandemic, the qScout platform was adopted as the national contact tracing and management mechanism by the Ministry of Health in Oman. Within a span of a few weeks, qScout was integrated with Tarassud Plus, the country's ICT platform for surveillance and monitoring. qScout used an AI chatbot, customised to the local languages, that engaged with confirmed cases and captured their primary and secondary symptoms. The AI engine analysed the information and provided insights, enabling virtual triaging and timely escalation of medical requirements. Over a span of 8 months, approximately 400,000 COVID-19 patients under quarantine in Oman regularly interacted with the software over thousands of sessions, taking a significant share of the burden off healthcare workers. All the while, the health authorities and government kept an active central watch to monitor hotspot regions, areas needing additional resources, and so on. Having qScout enabled with multi-lingual support in English as well as Arabic increased the ease of interaction for various users.

The software was deployed with a gadget that relayed instant reports to the competent authorities about the movements and locations a quarantined or infected person visited. It could also send alerts if the person left their location or tried to take the gadget off. This level of data collection allowed sharing relevant insights with the Ministry of Health about population-level statistics vital for resource planning. This, coupled with Qure.ai's qScout, was a true exemplar of using technology to tackle the pandemic.

Way Forward:

Multiple studies are ongoing with regional and state governments as well as non-governmental organizations. qScout is designed as a platform for monitoring the safety and efficacy of all adult and pediatric vaccines and medications.

Engineering Radiology AI for National Scale in the US

vRad, a large US teleradiology practice, and Qure.ai have been collaborating for more than a year on a large-scale radiology AI deployment. In this blog post, we describe the engineering that goes into scaling radiology AI. We discuss adapting AI for extreme data diversity, the DICOM protocol, and software engineering.

vRad and Qure.ai have been collaborating on a large-scale prospective validation of qER, Qure.ai's model for detecting intracranial hemorrhage (ICH), for more than a year. vRad is a large teleradiology practice – 500+ radiologists serving over 2,000 facilities in the United States – representing patients from nearly all states. vRad uses an in-house built RIS and PACS that processes over 1 million studies a month, with the majority of those studies being XR or CT. Of these, about 70,000 CT studies a month get processed by Qure.ai's algorithms. This collaboration has produced interesting insights into the challenges of implementing AI on such a large scale. Our earlier work together is published at Imaging Wire and on vRad's blog.

Models that are accurate on extremely diverse data

Before we discuss the accuracy of models, we have to start with how we actually measure it at scale. In this respect, we have leveraged our experience from prior AI endeavors. vRad runs the imaging models during validation in parallel with production flows. As an imaging study is ingested into the PACS, it is sent directly to validation models for processing. In turn, as soon as the radiologist on the platform completes their report for the scan, we use it to establish the ground truth. We use our Natural Language Processing (NLP) algorithms to automatically read these reports and assign whether the current scan is positive or negative for ICH. The sensitivity and specificity of a model can thus be measured in real time on real-world data.
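Since every completed report yields an NLP-derived label, the running sensitivity and specificity reduce to simple counting. A sketch of that bookkeeping:

```python
def running_metrics(outcomes):
    """outcomes: iterable of (model_flagged_ich, report_mentions_ich) booleans,
    one pair per scan as its radiology report is completed."""
    tp = fp = fn = tn = 0
    for predicted, actual in outcomes:
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif actual:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```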

AI models often perform well in the lab, but when tried in a real-world clinical workflow, they do not always live up to expectations. This is due to a combination of problems. The idea of a diverse, heterogeneous cohort of patients is well discussed in the space of medical imaging. In this case, Qure.ai's model was measured with a cohort of patients representative of the entire US population – with studies from all 50 states flowing through the model and being reported against.

Less commonly discussed are the challenges with data that is hospital- or even imaging-device-specific. vRad receives images from over 150,000 unique imaging devices in over 2,000 facilities. At a study level, different facilities can have many different study protocols – varying amounts of contrast, varying radiation dosages, varying slice thicknesses, and other considerations can change how well a human radiologist can evaluate a study, let alone the AI model.

Just like human radiologists, AI models do best when they see consistent images at the pixel level despite the data diversity. Nobody would want to recalibrate their decision process just because different manufacturers chose different post-processing techniques. For example, the image characteristics of a thin-slice CT scan are quite different from a 5mm-thick scan, the former being considerably noisier. Both AI and doctors are sure to be confused if asked to decide whether the subtle hyperdense dots they see on a thin-slice scan are just noise or signs of diffuse axonal injury. Therefore, we invested considerably in making sure the diverse data is pre-processed into highly consistent raw pixel data. We discuss this more in the following section.
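A typical normalisation of this kind (a generic sketch, not necessarily Qure.ai's exact pipeline) converts stored values to Hounsfield units using the DICOM rescale tags and resamples thin-slice volumes to a common thickness:

```python
import numpy as np
from scipy import ndimage

def to_hounsfield(pixel_array: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    """Convert stored pixel values to Hounsfield units (RescaleSlope/Intercept)."""
    return pixel_array * slope + intercept

def resample_slices(volume: np.ndarray, thickness_mm: float,
                    target_mm: float = 5.0) -> np.ndarray:
    """Resample along z so thin- and thick-slice CTs reach the model with
    comparable noise characteristics. volume is (slices, height, width)."""
    zoom = thickness_mm / target_mm
    return ndimage.zoom(volume, (zoom, 1.0, 1.0), order=1)
```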

A thin slice CT (left) vs a thick slice one (right)

DICOM, AI, and interoperability

Dealing with patient and data diversity is a major component of building AI models. The AI model not only has to be generalizable at the pixel level, it also must make sure the right pixels are fed into it. The first problem is highly documented in the AI literature; the second one, not so much. As traditional AI imaging models are trained to work on natural images (think cat photos), they deal with simplistic data formats like PNG or JPEG. However, medical imaging is highly structured and complex and contains orders of magnitude more data than natural images. DICOM is the file format and standard used for storing and transferring medical images.

While DICOM is a robust and well-adopted standard, implementation details vary. DICOM tags often differ greatly from facility to facility, private tags vary from manufacturer to manufacturer, and encodings and other imaging-device-specific differences require that any piece of software, including an AI model, be robust and good at error handling. After a decade of receiving DICOM from all over the U.S., the vRad PACS still runs into new unique configurations and implementations a few times a year, so we are uniquely sensitive to the challenges.
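In code, robustness means never assuming a tag exists or parses cleanly. A defensive read with pydicom might look like this (a sketch, not vRad's implementation):

```python
import pydicom

def safe_slice_thickness(path: str, default=None):
    """Read SliceThickness without trusting the file to be well-formed."""
    try:
        ds = pydicom.dcmread(path, stop_before_pixels=True)
    except Exception:
        return default                 # corrupt or non-DICOM input
    value = getattr(ds, "SliceThickness", None)
    try:
        return float(value) if value is not None else default
    except (TypeError, ValueError):
        return default                 # tag present but malformed
```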

A taste of DICOM diversity: shown are random study descriptions used to represent CT brain

We realized that we needed another machine learning model to solve this interoperability problem itself. How do we recognize that a particular CT image is not a brain image even if the image description says so? How do we make sure the complete brain is present in the image before we decide there is a bleed in it? The variability of DICOM metadata doesn't allow us to write simple rules that work at scale. So, we have trained another AI model, based on metadata and pixels, which makes these decisions for us.
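To see why simple rules fail, consider the naive metadata check one might write first (illustrative only). It is exactly this kind of rule that breaks at scale, which motivates a model trained on both metadata and pixels instead:

```python
def looks_like_head_ct(meta: dict) -> bool:
    """Naive keyword screen over DICOM metadata. With 2,000+ facilities,
    study descriptions vary wildly (see the figure above), so rules like
    this misfire constantly -- hence the learned gating model."""
    description = (meta.get("StudyDescription") or "").lower()
    return meta.get("Modality") == "CT" and any(
        keyword in description for keyword in ("head", "brain", "skull"))
```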

These challenges harken back to classic healthcare interoperability problems. In a survey by Philips, the majority of younger healthcare professionals indicated that improved interoperability between software platforms and healthcare practices is important for their workplace satisfaction. Interestingly, these are the exact challenges medical imaging AI has to solve for it to work well. So, AI generalizability is just another name for healthcare interoperability. Given how we used machine learning and computer vision to solve the interoperability problems for our AI model, it may well be that solving wider interoperability problems will involve AI itself.

AI Software Engineering

But even after those generalizability/interoperability challenges are overcome, a model must be hosted in some manner, often in a Docker-based solution, frequently written in Python. And like the model, this wrapper must scale. It must handle calls to the model and return results, as well as log information about the health of the system, just like any other piece of software. As a model goes live on a platform like vRad's, common problems we see are memory overflows, underperforming throughput, and other "typical" software problems.
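A bare-bones wrapper of this kind might look as follows. FastAPI is chosen purely for illustration (the post does not name the serving stack), and run_model is a placeholder for the actual inference call:

```python
import logging
from fastapi import FastAPI, HTTPException

app = FastAPI()
log = logging.getLogger("model-server")

def run_model(payload: dict) -> dict:
    """Placeholder for the actual model inference call."""
    return {"study_id": payload.get("study_id"), "ich_probability": 0.0}

@app.post("/predict")
def predict(payload: dict) -> dict:
    """Score a study, log the outcome, and surface failures to the caller
    instead of silently dropping studies."""
    try:
        result = run_model(payload)
    except MemoryError:
        log.exception("model ran out of memory on study %s", payload.get("study_id"))
        raise HTTPException(status_code=503, detail="temporarily overloaded")
    log.info("scored study %s", payload.get("study_id"))
    return result
```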

Although these problems look quite similar to traditional "software problems", the root cause is quite different. For the scalability and reliability of traditional software, the bottleneck usually boils down to database transactions. Take Slack, an enterprise messaging platform, for example. What's the most compute-intensive thing the Slack app does? It looks up the chat typed previously by your colleague in a database and shows it to you – basically, a database transaction. The scalability of Slack largely means the scalability and reliability of these database transactions. Given how long databases have been around, this problem is fairly well solved with off-the-shelf solutions.

For AI-enabled software, the most compute-intensive task is not a database transaction but running an AI model, which is arguably far more intensive than a database lookup. Given how new deep learning is, the ecosystem around it is not yet well developed. This makes AI model deployment and engineering hard, and it is being tackled by big names like Google (Tensorflow), Facebook (Torch), and Microsoft (ONNX). Because these are open source, we actively contribute to them and make them better as we come across problems.

Though the root causes of the engineering challenges differ, the process to tackle them is surprisingly similar. After all, engineers' approaches to building bridges and rockets are not all that different; they just require different tools. To make our AI scale at vRad, we followed traditional software engineering best practices, including highly tested code and frequent updates. As soon as we identify an issue, we patch it and write a regression test to make sure we never come across it again. Docker has made deployment and updates easy and consistent.

We get automated Slack alerts of errors and fix them proactively

Integration to clinical workflow

Another significant engineering challenge we solved was bending clinical software to our will. DICOM is a messy communication standard and lacks some important features. For example, DICOM has no acknowledgement signal that the complete study has been sent over the network. Another great example is the lack of standardization in how a given study is described – which fields are used and which phrases describe what the study represents. The work Qure.ai and vRad collaborated on required intelligent mapping of study descriptions and modality information throughout the platform – from the vRad PACS, through the Inference Engine running the models, to the actual logic in the model containers themselves.

Many AI image models and solutions on the market today integrate with PACS and worklists, but one unique aspect of Qure.ai and vRad's work is the sheer scale of the undertaking. vRad's PACS ingests millions of studies a year – around 1 billion individual images annually. The vRad platform, including the PACS, RIS, and AI Inference Engine, routes those studies to the right AI models and the right radiologists; radiologists perform thousands of reads each night, and NLP helps them report and analyzes those reports for continual feedback to radiologists as well as to the AI models and monitoring. Qure.ai's ICH model plugged into the platform and demonstrated robustness as well as impressive sensitivity and specificity.

During vRad and Qure.ai's validation, we were able to run hundreds of thousands of studies in parallel with our production workloads, validating that the model and the solution hosting it were able not only to generalize for sensitivity and specificity but also to overcome all of the other technical challenges that are often issues in large-scale deployments of AI solutions.

Re-purposing qXR for COVID-19

In March 2020, we re-purposed our chest X-ray AI tool, qXR, to detect signs of COVID-19. We validated it on a test set of 11,479 CXRs with 515 PCR-confirmed COVID-19 positives. The algorithm performs at an AUC of 0.9 (95% CI: 0.88 - 0.92) on this test set. At our most common operating threshold for this version, sensitivity is 0.912 (95% CI: 0.88 - 0.93) and specificity is 0.775 (95% CI: 0.77 - 0.78). qXR for COVID-19 is used at over 28 sites across the world to triage suspected COVID-19 patients and to monitor the progress of infection in patients admitted to hospital.

The emergence of the COVID-19 pandemic has already caused a great deal of disruption around the world. Healthcare systems are overwhelmed as we speak, in the face of WHO guidance to ‘test, test, test’ [1]. Many countries are facing a severe shortage of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests. There has been a lot of debate around the role of radiology — both chest X-rays (CXRs) and chest CT scans — as an alternative or supplement to RT-PCR in triage and diagnosis. Opinions on the subject range from ‘Radiology is fundamental in this process’ [2] to ‘framing CT as pivotal for COVID-19 diagnosis is a distraction during a pandemic, and possibly dangerous’ [3].

Role of Radiography

The humble chest X-ray has emerged as the frontline screening and diagnostic tool for COVID-19 infection in a few countries and is used in conjunction with clinical history and key blood markers such as the C-Reactive Protein (CRP) test and lymphopenia [4]. Ground-glass opacities and consolidations that are peripheral and bilateral in nature are reported to be the most common COVID-related findings on CXRs and chest CTs. CXRs can help in identifying COVID-19 related infection and can be used as a triage tool in most cases. In fact, Italian and British hospitals are employing CXR as a first-line triage tool due to high RT-PCR turnaround times. A recent study [5] that examined the CXRs of 64 patients found that in 9% of cases the initial RT-PCR was negative while the CXR showed abnormalities; all of these cases subsequently tested positive on RT-PCR within 48 hours. The American College of Radiology recommends considering portable chest X-rays [6] to avoid bringing patients into radiography rooms. The Canadian Association of Radiologists suggests the use of mobile chest X-ray units for the preliminary diagnosis of suspected cases [7] and for monitoring critically ill patients, but has reported that no abnormalities are seen on CXRs in the initial stages of the infection.

Radiology decision tool for suspected COVID-19 – The British Society of Thoracic Imaging [8]

As of today, despite calls for opening up imaging data on COVID-19 and outstanding efforts from physicians on the front-lines, there are limited X-ray or CT datasets in the public domain pertaining specifically to COVID. These datasets remain insufficient to train an AI model for COVID-19 triage or diagnosis but are potentially useful in evaluating the model – provided the model hasn’t been trained on the same data sources.

Building and evaluating qXR for COVID-19

Over the last month, customers, collaborators, healthcare providers, NGOs, state and national governments have reached out to us for help with COVID detection on chest X-rays and CTs.

In response, we have adapted our tried-and-tested chest X-ray AI tool, qXR to identify findings related to COVID-19 infections. qXR is trained using a dataset of 2.5 million chest X-rays (that included bacterial and viral pneumonia and many other chest X-ray findings) and is currently deployed in over 28 countries. qXR detects the following findings that are indicative of COVID-19: Opacities and Consolidation with bilateral and peripheral distribution and the following findings that are contra-indicative of COVID-19: hilar enlargement, discrete pulmonary nodule, calcification, cavity and pleural effusion.

These CE-marked capabilities have been leveraged for a COVID-19 triage product that is highly sensitive to COVID-19 related findings. This version of qXR gives out the likelihood of a CXR being positive for COVID-19, called Covid-19 Risk. Covid-19 Risk is computed using a post-processing algorithm that combines the model outputs for the above-mentioned findings. The algorithm is tuned on a set of 300 COVID-19 positives and 300 COVID-19 negatives collected from India and Europe.
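The combination rule itself is not published. As a purely illustrative sketch of the idea, with weights invented for this example:

```python
def covid19_risk(findings: dict) -> float:
    """Combine per-finding probabilities into one Covid-19 Risk value.
    'findings' maps finding name -> model output in [0, 1]. The weights
    here are invented for illustration; the real post-processing algorithm
    was tuned on 600 labelled CXRs from India and Europe."""
    indicative = (0.6 * findings.get("bilateral_peripheral_opacity", 0.0)
                  + 0.4 * findings.get("consolidation", 0.0))
    contra = max(findings.get(name, 0.0) for name in (
        "hilar_enlargement", "discrete_pulmonary_nodule",
        "calcification", "cavity", "pleural_effusion"))
    return max(0.0, min(1.0, indicative - 0.5 * contra))
```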

Most new qXR users for COVID-19 are using it as a triage tool, often in settings with limited diagnostic resources. This version of qXR also localizes and quantifies the affected region. This capability is being used to monitor the progression of infection and to evaluate response to treatment in new clinical studies.

Sample Output of qXR [9]

Evaluation of the algorithm

We have created an independent test set of 11,479 CXRs to evaluate our algorithm. The WHO [10] recommends confirmatory diagnosis of COVID-19 using Reverse-Transcriptase Polymerase Chain Reaction (RT-PCR) – a specialised Nucleic Acid Amplification Test (NAAT) that looks for unique signatures using primers designed for the COVID-19 RNA sequence. Positives in this test set are defined as any CXR acquired while the patient had tested positive on an RT-PCR test based on sputum, lower respiratory, and/or upper respiratory aspirate or throat swab samples for COVID-19. Negatives in this test set are defined as any CXR acquired before the first case of COVID-19 was discovered.

The size of the negative set relative to the positive set was chosen to match the prevalence available in the literature [11]. The test set has 515 positives and 10,964 negatives. Negatives are sampled from an independent set of 250,000 CXRs. The negative set has 1,609 cases of bilateral opacity and 547 cases of pulmonary consolidation (findings which are indicative of COVID-19 on a CXR) where the final diagnosis was not COVID-19. The negative set also has 355 non-opacity-related abnormalities. This allowed us to evaluate the algorithm's ability to detect non-COVID-19 opacities and findings, which is used to suggest alternative possible etiologies and rule out COVID-19. We used the Area under the Receiver Operating Characteristic curve (AUC) along with sensitivity and specificity at the operating point to evaluate the performance of our algorithm.
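These metrics can be computed from per-scan scores and RT-PCR-derived labels with standard tooling, for example scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(scores, labels, threshold):
    """AUC plus sensitivity/specificity at a fixed operating threshold.
    scores: model outputs in [0, 1]; labels: 1 if the CXR was acquired
    while the patient was RT-PCR positive, else 0."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    auc = roc_auc_score(labels, scores)
    predictions = scores >= threshold
    sensitivity = predictions[labels == 1].mean()
    specificity = (~predictions)[labels == 0].mean()
    return auc, sensitivity, specificity
```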

Characteristic            Value

Number of scans           11,479
Positives                 515
Negatives                 10,964
Normals                   9,000
Consolidation             547
Opacities                 1,609
Other abnormalities       355

Test set demographics

A subset (1,000 cases) of this test set was independently reviewed by radiologists to create pixel-level annotations localizing opacity and consolidation. The localization and progression-monitoring capability of qXR is validated by computing the Jaccard index between the algorithm's output and the radiologists' annotations.
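For binary masks, the Jaccard index is simply intersection over union:

```python
import numpy as np

def jaccard_index(predicted: np.ndarray, annotated: np.ndarray) -> float:
    """Jaccard index (IoU) between the algorithm's mask and the
    radiologist's pixel-level annotation."""
    p, a = predicted.astype(bool), annotated.astype(bool)
    union = np.logical_or(p, a).sum()
    if union == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return float(np.logical_and(p, a).sum()) / float(union)
```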

Metrics

To detect signs of COVID-19, we have observed an AUC of 0.9 (95% CI: 0.88 - 0.92) on this test set. At the operating threshold, we have observed a sensitivity of 0.912 (95% CI: 0.88 - 0.93) and a specificity of 0.775 (95% CI: 0.77 - 0.78). While there are no WHO guidelines yet for an imaging-based triage tool for COVID-19, the WHO recommends a minimum sensitivity of 0.9 and specificity of 0.7 for community screening tests for Tuberculosis [12], a deadly infectious disease in its own right. We have observed a Jaccard index of 0.88 between qXR's output and the experts' annotations.

Receiver Operating Characteristic Curve

Deploying qXR for COVID-19

qXR is available as a web API and can be deployed within minutes. Built on our experience of deploying globally and remotely, it can interface with a variety of PACS and RIS systems and is very intuitive to interpret. qXR can be used to triage suspected patients in resource-constrained countries to make effective use of RT-PCR test kits. qXR is being used for screening and triage at multiple hospitals in India and Mexico.

San Raffaele Hospital in Milan, Italy has deployed qXR to monitor patients and to evaluate their response to treatment. In Karachi, qXR-powered mobile vans are being used at multiple sites to identify potential suspects early, thus reducing the burden on the healthcare system.

Timeline of qXR for COVID

In the UK, all suspected COVID-19 patients presenting to the emergency department undergo blood tests and a CXR [4]. This puts a tremendous workload on already burdened radiologists, as it becomes critical to report these CXRs urgently. qXR, with its ability to handle huge workloads, provides significant value in such a scenario and thus reduces the burden on radiologists.

qXR can also be scaled for rapid and extensive population screening. Frontline clinicians are increasingly relying on chest X-rays to triage the sickest patients while they await RT-PCR results. When there is high clinical suspicion of COVID-19 infection, a patient with a positive chest X-ray may well need hospital admission. qXR can help solve this problem at scale.

Impact of qXR for COVID-19

Work with us

With new evidence published every day, and guidance and protocols for COVID-19 evolving to match, national responses globally remain fluid. Singapore, Taiwan and South Korea have shown that aggressive and proactive testing plays a crucial role in containing the spread of the disease. We believe qXR can play an important role in expanding screening in the community to help reduce the burden on healthcare systems. If you want to use qXR, please reach out to us.

References

  1. WHO Director-General’s opening remarks at the media briefing on COVID-19 – WHO, Accessed Apr 9, 2020.
  2. Imaging the coronavirus disease COVID-19 – Healthcare in Europe Website, Accessed Apr 9, 2020.
  3. Hope et al. A role for CT in COVID-19? What data really tell us so far – The Lancet, Mar 27, 2020
  4. Lessons from the frontline of the covid-19 outbreak – BMJ Blog, Accessed Apr 9, 2020.
  5. Wong et al. Frequency and Distribution of Chest Radiographic Findings in COVID-19 Positive Patients – Radiology, Mar 27, 2020.
  6. ACR Recommendations for the use of Chest Radiography and Computed Tomography (CT) for Suspected COVID-19 Infection – ACR, Accessed Apr 9, 2020.
  7. Lei et al. COVID-19 Infection: Early Lessons – Canadian Association of Radiologists Journal, Mar 12, 2020.
  8. Radiology decision tool for suspected COVID-19 – The British Society of Thoracic Imaging, Accessed Apr 9, 2020.
  9. Cohen et al. COVID-19 image data collection – arXiv:2003.11597, 2020
  10. Laboratory testing for 2019 novel coronavirus (2019-nCoV) in suspected human cases – WHO, Accessed Apr 9, 2020.
  11. Verity et al. Estimates of the severity of coronavirus disease 2019: a model-based analysis – The Lancet Infectious Diseases, Mar, 2020.
  12. High priority target product profiles for new tuberculosis diagnostics: report of a consensus meeting, tech. rep., World Health Organization, Apr 28-29, 2014.

Scaling up TB screening with AI: Deploying automated X-ray screening in remote regions

We have been deploying our deep learning based solutions across the globe. qXR, our product for automated chest X-ray reads, is being widely used for Tuberculosis screening. In this blog, we will look at the scale of the threat that TB presents. Thereafter, taking one of our deployments as a case study, we will explain how artificial intelligence can help us fight TB.

Qure.ai's deep learning solutions are actively reading radiology images at over 82 sites spread across 12 countries. We have processed more than 50,000 scans to date. One of the major use cases of our solutions is fast-tracking Tuberculosis (TB) screening.

Understanding TB

TB is caused by the bacterium Mycobacterium tuberculosis and mostly affects the lungs. About one-fourth of the world's population is inactively infected by the bacteria – a condition called latent TB. TB infection occurs when a person breathes in droplets produced when a person with active TB coughs, sneezes or spits.

TB is a curable and preventable disease. Despite that, the WHO reports that it is one of the top 10 causes of death worldwide. In 2017, 10 million people fell ill with TB, of whom 1.6 million lost their lives. One million children were affected by it, with 230,000 fatalities. It is also the leading cause of death among HIV patients.

Diagnosis of TB

There are many tests to detect TB. Some of them are as follows:

  • Chest X-ray: Typically used to screen for signs of TB in the lungs. They are a sensitive and inexpensive screening test, but may pick up other lung diseases too. So chest X-rays are not used for a final TB diagnosis. The presence of TB bacteria is confirmed using a bacteriological or molecular test of sputum or other biological sample.
  • Sputum tests: The older AFB sputum tests (samples manually viewed through a microscope looking for signs of bacteria) are still used in low-income countries to confirm TB. A more sensitive sputum test that uses DNA amplification technology to detect traces of the bacteria is now in wide use to confirm TB – it is not only more sensitive but can also detect drug resistance. Tests like GeneXpert and TrueNat fall under this category. These are fairly expensive tests.

Molecular tests have shown excellent results in South Africa and are generally considered as the go-to test for TB. However, their high costs make it impossible to conduct them for every TB suspect.

Failure in early detection

Due to the high costs of molecular tests, chest X-rays are generally preferred as a pre-test for TB suspects. After that, sputum or molecular tests are performed for confirmation. In regions where these confirmatory tests are not available, chest X-rays are used for the final diagnosis.

Given the X-ray's key role in TB diagnosis, it is important to note that there is a huge dearth of radiologists to read these X-rays. In India alone, 80 million chest X-rays are captured every year. There aren't enough radiologists to read them within acceptable timelines. Depending on the extent of the shortage of radiology expertise, it can take anywhere between 2 and 15 days for the report to arrive. As a result, critical time is lost for a TB patient, preventing early detection. A failure to detect TB early is not only hazardous for the patient but also increases the risk of transmission to others.

Moreover, error rates in reading these X-rays are around 25-30%. Such errors can prove fatal for the patient.

TB diagnosis

Where Qure.ai comes into the picture

This large gap between the number of TB incidences and the number of timely and accurately reported cases is a major reason why many lives are lost to this curable disease. It can be bridged with a solution that requires little manual intervention. This is precisely how Qure's qXR solution, trained on more than a million chest X-rays, attacks the heart of the problem. The AI (Artificial Intelligence) inside qXR automates reading chest X-rays and generates reports within seconds, reducing the wait for TB confirmatory tests from weeks to a couple of hours and enrolling confirmed cases in treatment the same day!

qXR features

While bacteriological confirmatory tests on presumptive cases are preferred in a screening setting, they increase the cost burden. Sputum culture testing takes weeks to report, which can result in drop-outs in report collection and treatment enrolment. Additionally, shortages of the Cartridge Based Nucleic Acid Amplification Test (CB-NAAT) become a limitation, delaying the testing process. Qure.ai's qXR helps cut down the time and costs incurred by reducing the number of individuals required to go through these tests. The whole program workflow is depicted in the following picture.

Patient flow

Case Study: AccessTB, Philippines

While scaling up our solutions over the last 2 years, it has become evident that Qure.ai can play a vital role in humanity's war against TB. We deployed qXR with the ACCESS TB Project in the Philippines in their TB screening program. During the deployment, we learned the operational dynamics of deploying Artificial Intelligence (AI) at health centers.

TB screening process before incorporating qXR

The ACCESS TB program has mobile vans equipped with X-ray machines and staffed with trained radiographers and health workers. The program is intended to screen presumptive cases and individuals with high-risk factors for TB by running the vans across different cities in the Philippines. Screening camps are either announced in conjunction with a nearby nursing home, or health workers identify and invite at-risk individuals on the days of the program.

The vans leave the office on Monday morning for remote villages on a predefined schedule. These villages are situated around 100 km from Manila. Two radiology technicians accompany each van. Once they reach the designated health center in a village, they start capturing X-rays for each individual. The X-ray machines are connected to a computer which stores these X-rays locally. The DICOM (radiology image) information inside an X-ray can also be edited from this computer.

Individuals enter the van on a first-come, first-served basis. They are given a receipt containing their patient ID, name, etc., and their X-ray is marked with the same ID using the computer. This approach to mass screening for TB is similar to the approach adopted in the USA from the 1930s to the 1960s, as depicted in the following picture.

TB screening van

Mass radiography screening campaigns in the USA during the 1930s to 1960s (Source)

Once all the X-rays have been captured, the vans return to their lodgings in the same village. They visit a new village or health center on each subsequent weekday. On Friday evening, all the vans return to Manila. Thereafter, all the X-rays captured by the 4 vans over the week are sent to a radiologist for review. The lead time for the radiologist's report is 3 working days and can extend to 2 weeks. This delay in reporting leads to delays in diagnosis and treatment, which can prove fatal for the patient and hazardous for the neighborhood.

Access TB van

Front & side view of AccessTB van with individuals queuing inside the van

Challenges for Qure.ai

Our team arrived in Manila during the second week of September 2018 with the deep learning solution sitting nice and cozy on the cloud. The major challenges in front of us were two-fold:

  1. To ensure smooth upload of images to our cloud server: This was a challenge because some of the villages and towns being visited were really remote, with no guarantee of internet connectivity sufficient for the upload to work properly. We had to make sure that everything worked fine even with no internet connectivity. To deal with this, we built an application, installed on their computer, to upload images to our cloud. When there was no connectivity, it would store all the information and wait; as soon as connectivity became available, the app would start processing the deferred uploads (see the sketch after this list).
  2. To enable end-to-end patient management on a single platform: This was the biggest concern, and we designed the software to minimize manual intervention at various stages.
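The store-and-forward behaviour described in point 1 can be sketched as follows (a simplified illustration with a placeholder endpoint, not the deployed application):

```python
import json
import os
import time

import requests  # HTTP client, used here for illustration

QUEUE_DIR = "pending_uploads"                   # local spool directory
UPLOAD_URL = "https://example.invalid/upload"   # placeholder cloud endpoint

def enqueue(study_path: str) -> None:
    """Record a study for upload; the queue lives on disk, so it survives restarts."""
    os.makedirs(QUEUE_DIR, exist_ok=True)
    entry = os.path.join(QUEUE_DIR, os.path.basename(study_path) + ".json")
    with open(entry, "w") as f:
        json.dump({"path": study_path, "queued_at": time.time()}, f)

def flush_queue() -> None:
    """Attempt every deferred upload; failures stay queued for the next pass."""
    if not os.path.isdir(QUEUE_DIR):
        return
    for name in os.listdir(QUEUE_DIR):
        entry = os.path.join(QUEUE_DIR, name)
        with open(entry) as f:
            item = json.load(f)
        try:
            with open(item["path"], "rb") as f:
                response = requests.post(UPLOAD_URL, files={"dicom": f}, timeout=30)
            response.raise_for_status()
        except requests.RequestException:
            continue                             # still offline: retry later
        os.remove(entry)
```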

We built a portal where radiology assistants could register patients, radiologists could report on them and patient history could be maintained. The diagnosis from the radiologist, qXR and CB-NAAT tests are all accumulated at a single place.

Snapshot of complete patient management system

Features that ease the workflow were added to the software, enabling field staff to filter patients by name, date, site or health center. Such features and provisions helped the staff track the progress of a patient's screening with simple sorting and searches.

Implementation process

At Qure, we deliver our products and solutions by understanding the customer's needs and designing workflows that fit into their existing processes. Especially when it comes to mass screening programs, we understand that each one is uniquely designed by program managers and strategists, and requires specific customizations to deliver a seamless experience.

After understanding the existing workflow at AccessTB, we designed our software to include elements that automate some of the existing processes. Thereafter, the software was built, tested, packaged and stored in a secure cloud. We worked out the best way to integrate with the existing X-ray console and completed the integration on all the vans in 2 working days.

A field visit was arranged after the deployment to assess the software’s performance in areas with limited network connectivity and its ease of usage for the radiology staff. Based on our on-field learnings, we further customized the software’s workflow for the staff.

The implementation process ended with a classroom training program for the field staff, technicians and program managers. With deployment, software adaptability assessment and training complete, we handed the software over to the program in 5 days, before leaving Manila.

Training program for radiology assistants post qXR deployment

Quoting Preetham Srinivas, AI scientist at Qure, on qXR: "With qXR at the heart of it, we've developed a solution that is end to end. As in, with individual registrations, and then qXR doing the automated analysis and flagging individuals who need microbiological confirmation. Radiologists can verify in the same portal and then, an organization doing the microbiological tests can plug in their data in the same portal. And once you have a dashboard which collates all this information in one place, it becomes very powerful. The loss itself can be minimized. It becomes that much easier to track the person and make sure he is receiving the treatment."

Conclusion

The WHO has given TB the status of an epidemic. It adopted the End TB Strategy in 2014, aimed at reducing TB deaths by 90% and cutting new cases by 80% between 2015 and 2030. Ending TB by 2030 is one of the health targets of the Sustainable Development Goals.

The scale of this epidemic cries out for technology to intervene. Technologies like AI, if incorporated into the existing TB care ecosystem, can not only assist healthcare practitioners massively but also enrich the ecosystem with the data and feedback they supply. And this is not mere speculation. With qXR, we are having a first-hand experience of how AI can accelerate our efforts to eradicate TB. Jerome Trinona, account coordinator for the AccessTB project, says, "Qure.ai's complete TB software is very helpful in maximizing our time – now we can keep track of the entire patient workflow in one place."

Successful deployment of qXR with AccessTB Program staff

Successful deployments like AccessTB show that Qure.ai is leading the battle against TB on the technology and innovation fronts. Post World TB Day, let us all embrace AI as our newest ammunition against TB.

Let's join hands to end TB by 2030.[1]

  1. Reach out to us at partner@qure.ai 

Interview with Dr Bhavin Jankharia – Radiologist Perspective on AI

Dr Bhavin Jankharia is one of India’s leading radiologists, former president of the IRIA as well as a renowned educator, speaker and writer. Dr Jankharia’s radiology and imaging practice, “Picture This by Jankharia”, is known for being an early adopter of innovation and for pioneering new technologies in radiology.

Q&A with Dr Jankharia on artificial intelligence in radiology.

How do you see Artificial Intelligence in radiology evolving in the future?

AI is here to stay and will be a major factor to shape radiology over the next 10 years. It will be incorporated in some form or the other in protocols and workflows across the spectrum of radiology work and across the globe.

Photo of Dr Bhavin Jankharia with quote

You have been an early adopter of AI in your practice. What would your advice be to other institutions globally who are considering incorporating AI into their workflow?

It is about questions that need to be answered. At present, AI is good at solving specific questions or extracting numerical data from CT scans of the abdomen and pelvis with respect to bone density and aortic size, etc. Wherever there is a need for such issues to be addressed, AI should be incorporated into those specific workflows. We still haven't gotten to the stage where AI can report every detail in every scan, and that may actually never happen.

It may never happen that AI can do what a radiologist does, but looking at the near term (say the next 3-5 years), what do you think AI can achieve? (For example, what tasks can it automate? Can it improve reporting accuracy?) Where will the biggest value addition be?

Its basic value addition will be to take away drudge work. Automated measurements, automated checking of densities, enhancement patterns, perhaps even automated follow-ups for measurements of abnormal areas already marked out on the first scans and the like.

Now that you have experienced AI in practice, how would you differentiate this technology from traditional CAD solutions that have been around for a while?

AI learns much faster and the basic approach is different. To the end user though, it matters not, does it, how we get the answer we want…

You have seen several AI companies in Radiology. What should they be doing differently to reach this goal?

At present, all of AI is problem-solving based. And since each company deals with different problems based on the doctors they work with, this approach is fine. The company that figures out a way to handle a non-problem based approach to basic interpretation of scans, the way radiologists do, will have a head-start.

How do you think the Qure.ai solutions can help or are helping radiologists in their practice?

They are slowly saving time and helping radiologists work smarter and better.

What is your advice to young radiologists who are just getting started on their career? How should they think about adopting AI in their practice and should they be doing anything differently to succeed as a radiologist 10-20 years from now?

I don’t think radiologists per se have to do anything about AI, unless they want to change track and work in the field of AI from a technical perspective. AI incorporation into workflow will happen anyway and like all changes to radiology workflow over the decades, it will become routine and a way of life. They don’t really need to do anything different, except be willing to accept change.

Machine Learning and Perfusion in Neuroimaging

Knowing the state of the cells and the amount of blood flow in the brain can reveal a huge amount of information about it. Tumor cells, for instance, consume more blood metabolites (like glucose) than normal cells. This information can contribute significantly towards diagnosis. Perfusion imaging allows us to measure all of these quantities. Combine this with machine learning, and we can build a system that takes perfusion images and gives out a complete report about the brain. This is the future.

Brain tumors are the leading cause of cancer-related deaths in children (males and females) aged 1-19 [1]. It is estimated that as many as 5.1 million Americans may have Alzheimer's disease [2]. And there are many other such disorders, such as Parkinson's, autism and schizophrenia, that affect millions of people among us. Neuroimaging, or brain imaging, is the use of various techniques to directly or indirectly image the structure and function/pharmacology of the nervous system [3]. One such neuroimaging technique is perfusion imaging.

What is Perfusion Imaging ?

Perfusion imaging captures qualitative and quantitative information about blood flow and various blood-flow-related kinetics in the brain. Technically, perfusion is defined as the passage of fluid through the lymphatic system or blood vessels to an organ or a tissue [4]. Knowing the blood flow of an affected brain versus a normal one can be helpful in finding abnormalities.
Perfusion imaging helps in the measurement of various parameters, such as:

  • Cerebral blood volume: the volume of blood flowing in the brain
  • Cerebral blood flow: the rate of blood flow in the brain
  • Volume transfer coefficient: the permeability of small blood vessels to molecules like glucose, measured as Ktrans

Perfusion Imaging for Brain Tumors

For brain tumors, the World Health Organization (WHO) has developed a histological classification system that focuses on the tumor's biological behavior. The system classifies tumors into grades I to IV, with grade IV being the most malignant primary brain tumors. Histopathological analysis – analysing a biopsy of the brain – serves as the final test to decide the grade. This is an invasive procedure and requires the availability of an expert for the analysis.

Glioblastoma, a form of brain tumor, exhibits increased rCBV values. Source: Neurooncology – Newer Developments

Recent papers [5-7] have found strong correlations between perfusion parameters, such as relative cerebral blood volume (rCBV) and volume transfer coefficient (Ktrans), and the grade of the tumor. Higher perfusion values in marked RoIs (regions of interest) suggested higher grades. Taking a step further, another paper [8] suggested using perfusion to measure prognosis; it can thus be a great indicator to quickly measure the effects of the treatment or medication the subject is undergoing.

Perfusion Imaging for Alzheimer’s

Alzheimer's is the cause of 60%-70% of dementia cases [9], and it has no cure. Globally, dementia affects 47.5 million people [9]. About 3% of people between the ages of 65 and 74 have dementia, 19% between 75 and 84, and nearly half of those over 85 years of age [10].

The normal brain (on the left) shows normal blood perfusion, denoted by an abundance of yellow. The scan on the right, of a person suffering from Alzheimer's, shows pervasive low perfusion all around, denoted by blues and greens. Source: The Physiological and Neurological basis of Cerebra TurboBrain

A 2001 paper [11] in the American Journal of Neuroradiology showed that perfusion, rCBV values in particular, can be used to replace nuclear medicine imaging techniques in the evaluation of patients with Alzheimer's disease. Another paper [12], published in 2014, suggests closely linked mechanisms of neurodegeneration mediating the evolution of dementia in both Alzheimer's and Parkinson's. Many other papers [13,14] suggest a strong link between early Alzheimer's and cerebral blood flow, which can thus help in detection at an earlier stage.

Perfusion Imaging and Machine Learning

A lot of work in the last decade has gone into developing autonomous or semi-autonomous decision-making processes for the problems mentioned above. Some papers [15-17] have shown promise in developing semi-autonomous systems using Support Vector Machines (SVMs) and other ML techniques for brain tumor grade classification, with accuracies as high as 80%. In the domain of neurodegenerative diseases, accuracies as high as 85% have been achieved in classifying MRIs [18,19] using perfusion and ML, and a recent article [20] suggested that early detection of Alzheimer's might be possible using AI, predicting onset with an accuracy of 85% to 90%.
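As an illustration of the kind of pipeline those papers describe (not a reproduction of their models), an SVM grade classifier over perfusion features could be set up with scikit-learn like this:

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def grading_accuracy(X, y) -> float:
    """Cross-validated accuracy of an RBF-SVM tumor grade classifier.
    X: one row per patient of perfusion features (e.g. max rCBV and mean
    Ktrans from marked RoIs); y: 1 for high grade (III-IV), 0 for low grade.
    The feature choice here is an assumption for illustration."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(model, X, y, cv=5).mean()
```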

Problems with Perfusion Imaging

Even though perfusion imaging looks promising, there are some major hurdles that have kept it from spreading into hospitals as the go-to method for analysis. Standardisation is the biggest problem to be tackled. One paper [21] highlights the various methods used in brain perfusion imaging: not one or two different methods, but seven. Another paper [22], published in the Journal of Magnetic Resonance Imaging (JMRI), gives deeper insight into two successful approaches in use. Measurements from different methods have different accuracies and demand different expertise from the doctors performing them.

Before perfusion imaging moves from a research technique to a mainstream one, the question of standardisation has to be answered. The adoption of any major change in an industry as big as healthcare also takes time. At a small scale, however, perfusion imaging has been showing many signs of being a forefront technology, one that can be used alongside current advances in ML for automated diagnosis and prognosis of various brain-related diseases and disorders.

With inputs from Dr. Vasantha Kumar

References

  1. American Brain Tumor Association
  2. Alzheimer’s foundation of America
  3. Neuroimaging – Wikipedia
  4. American Psychological Association
  5. Law, Meng, et al. “Glioma Grading: Sensitivity, Specificity, and Predictive Values of Perfusion MR Imaging and Proton MR Spectroscopic Imaging Compared with Conventional MR Imaging” American Journal of Neuroradiology 24.10 (2003): 1989-1998.
  6. Shin, Ji Hoon, et al. “Using Relative Cerebral Blood Flow and Volume to Evaluate the Histopathologic Grade of Cerebral Gliomas: Preliminary Results” American Journal of Roentgenology 179.3 (2002): 783-789.
  7. Patankar, Tufail F., et al. “Is Volume Transfer Coefficient (Ktrans) Related to Histologic Grade in Human Gliomas?” American journal of neuroradiology 26.10 (2005): 2455-2465.
  8. Mills, Samantha J., et al. “Do Cerebral Blood Volume and Contrast Transfer Coefficient Predict Prognosis in Human Glioma?” American Journal of Neuroradiology 27.4 (2006): 853-858.
  9. World Health Organization. “Dementia Fact sheet N°362” (2012).
  10. Umphred, Darcy Ann, et al., eds. Neurological rehabilitation. Elsevier Health Sciences, 2013.
  11. Bozzao, Alessandro, et al. “Diffusion and Perfusion MR Imaging in Cases of Alzheimer’s Disease: Correlations with Cortical Atrophy and Lesion Load” American Journal of Neuroradiology 22.6 (2001): 1030-1036.
  12. Le Heron, Campbell J., et al. “Comparing cerebral perfusion in Alzheimer’s disease and Parkinson’s disease dementia: an ASL-MRI study” Journal of Cerebral Blood Flow & Metabolism 34.6 (2014): 964-970.
  13. Roher, Alex E., et al. “Cerebral blood flow in Alzheimer’s disease” Vasc Health Risk Manag 8 (2012): 599-611.
  14. MRI technique detects evidence of cognitive decline before symptoms appear
  15. Zacharaki, Evangelia I., et al. “Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme” Magnetic Resonance in Medicine 62.6 (2009): 1609-1618.
  16. Zacharaki, Evangelia I., Vasileios G. Kanas, and Christos Davatzikos. “Investigating machine learning techniques for MRI-based classification of brain neoplasms” International journal of computer assisted radiology and surgery 6.6 (2011): 821-828.
  17. Emblem, Kyrre E., et al. “Predictive modeling in glioma grading from MR perfusion images using support vector machines.” Magnetic resonance in medicine 60.4 (2008): 945-952.
  18. Fung, Glenn, and Jonathan Stoeckel. “SVM feature selection for classification of SPECT images of Alzheimer’s disease using spatial information” Knowledge and Information Systems 11.2 (2007): 243-258.
  19. López, M. M., et al. “SVM-based CAD system for early detection of the Alzheimer’s disease using kernel PCA and LDA.” Neuroscience Letters 464.3 (2009): 233-238.
  20. Artificial Intelligence Could Aid Earlier Diagnosis Of Alzheimer’s
  21. Wintermark, Max, et al. “Comparative Overview of Brain Perfusion Imaging Techniques” Stroke 36.9 (2005): e83-e99.
  22. Barbier, Emmanuel L., Laurent Lamalle, and Michel Décorps. “Methodology of brain perfusion imaging” Journal of Magnetic Resonance Imaging 13.4 (2001): 496-520.