From Surviving to Thriving: The Journey of a Childhood TB Survivor Turned Doctor

Introduction

On the eve of World Tuberculosis (TB) Day, I, Dr. Shibu Vijayan, want to share my story with you. A story of hope, resilience, and determination that has shaped my life and inspired me to dedicate my career to the fight against TB. As a childhood TB survivor, public health specialist, and now Medical Director at Qure.ai, my journey has taken me from the depths of illness to the forefront of digital and AI advances in TB control.

The Battle Begins

When I was 12 years old, my world was turned upside down. What started as hip pain, mild fever, and fatigue quickly escalated into a battle for my life that left me with a disabled leg. I was diagnosed with tuberculosis, a disease that would keep me bedridden for six long months.

My world revolved around a bed, and the only view of the external world was through the window, with occasional parrots coming to eat the fruits from the creepers.

During this time, I endured 18 months of TB treatment, which included three months of painful injections and the constant struggle of nausea and vomiting. My adolescence was very restricted: my requests to play outside and participate in sports were repeatedly turned down, and I was reminded that I was a sick boy. But through it all, I never lost hope.

Finding Purpose in the Fight

My experience with TB became the catalyst for my future career. I resolved that once I recovered, I would become a doctor and dedicate myself to TB control. I wanted to ensure that no other child would have to go through what I did. So, true to my word, I went on to become the district TB officer and served as a doctor in the very same facility where I received my treatment.

District TB Office, Kollam

Embracing Digital and AI Advances

Now, as the Medical Director at Qure.ai, I have the opportunity to integrate cutting-edge digital and AI technology into the battle against TB. AI-powered tools for advanced diagnostic imaging and data analysis are revolutionizing how we detect and treat TB. By implementing these innovative solutions, we can reduce the time it takes to diagnose TB and improve the accuracy of detection. Not only does this save lives, it also minimizes the suffering that comes with delayed diagnosis and treatment.

The Urgency of Early Diagnosis

In my 25 years of working in TB elimination, I have come to understand the critical importance of early diagnosis. Catching TB at its earliest stages increases the likelihood of successful treatment and helps prevent the spread of the disease to others. Early diagnosis and intervention are essential in our mission to end the global TB epidemic.

District TB Office, Kollam

A Call to Action

As we observe World TB Day on March 24th, let us remember the millions of lives impacted by this devastating disease. My story is just one of many, and it is a testament to the resilience of the human spirit in the face of adversity. Together, we can harness the power of digital and AI advances to bring us closer to a world free of TB. Let us all commit to joining the fight against TB and ensuring that everyone, everywhere, has access to early diagnosis and life-saving treatment.

Conclusion

From surviving to thriving, my journey as a childhood TB survivor turned doctor is a testament to the power of hope and determination. As we continue to make strides in TB control and elimination, I remain dedicated to this cause, ensuring that future generations will not have to endure the pain and suffering I experienced.

So, this World TB Day, let us stand together and reaffirm our commitment to end TB once and for all.

Can We Upskill Radiographers through Artificial Intelligence?

Shamie Kumar describes how AI fits into a radiology clinical workflow and her perspective on how clinical radiographers could use this to learn from and enhance their skills.

AI in radiology and workflow 

We all know that AI is already here, actively being implemented and used in many trusts, where it is showing real-world value in supporting radiology departments to solve current challenges. Often the focus is on the benefits to radiologists, clinicians, reporting radiographers, patients, and cost savings. But what about the clinical non-reporting radiographers undertaking the X-rays or scans – can AI benefit them too? Let’s think about how AI is implemented and where the AI outputs are displayed.

If the AI findings are seen in PACS, how many radiographers actually log into PACS after taking a scan or X-ray? It is considered good practice to have PACS open to cross-check the images that have been sent from the modality. Often this doesn’t happen, for various reasons, but maybe it should be a part of radiographers’ routine practice, just like post-documentation is.

Can Radiographers Up-Skill?

Assuming this does happen, radiographers will have the opportunity to look at the AI outputs and potentially take away learnings on whether the AI found something that they didn’t see initially, or whether there was a very subtle finding. We all know people learn through experience, exposure, and repetition, so if the AI is consistently picking up true findings, then the radiographer can learn from it too.

But what about when AI is incorrect – could it fool a radiographer, or will it empower them to research and understand the error in more detail?

As with many things in life, nothing is 100%, and this includes AI in terms of false positives and false negatives. Radiographers have the opportunity to research erroneous findings in more detail to enhance their learning, but do they actually have the time to undertake the additional learning and steps needed to interpret AI?

CPD, self-reflection, and learning through clinical practice are all key aspects of maintaining your registration, and self-motivation is often key to furthering yourself and your career. The question remains: are radiographers engaged and self-motivated enough to be part of the AI revolution and use it to their professional benefit, with potential learnings at their fingertips?

A few recent publications share insight into how AI is perceived by radiographers, what their understanding of it is, and what their training and educational needs are.

Universities like City University London and AI companies like Qure.ai are taking the initial steps towards understanding this better and making active efforts to fill the gaps in knowledge, training, and understanding of AI in radiology.

Radiographers, who are a key part of any radiology pathway, are yet to see real-world evidence on whether AI can upskill them, but there is no doubt this will unfold with time.

About Shamie Kumar

Shamie Kumar is a practicing HCPC Diagnostic Radiographer; she graduated from City University London with a BSc Honours in Diagnostic Radiography in 2009 and is a member of the Society of Radiographers, with over 12 years of clinical knowledge and skills within all aspects of radiography. She has studied further in leadership, management, and counselling, with a keen interest in artificial intelligence in radiology.

References

Akudjedu, T. N. et al., 2022. Knowledge, perceptions, and expectations of Artificial intelligence in radiography practice: A global radiography workforce survey. Journal of Medical Imaging and Radiation Sciences.

Coakley, S. et al., 2022. Radiographers’ knowledge, attitudes and expectations of artificial intelligence in medical imaging. Radiography: International Journal of Diagnostic Imaging and Radiation Therapy, 28(4), pp. 943-948.

Malamateniou, C. et al., 2021. Artificial intelligence in radiography: Where are we now and what does the future hold? Radiography: International Journal of Diagnostic Imaging and Radiation Therapy, 27(1), pp. 58-62.

Kumar, D., 2022. CoR endorsed CPD Super User Training by Qure.ai. [Online]
Available at: https://www.qure.ai/gain-cor-endorsed-super-user-training/

Is Artificial Intelligence a glorified red dot system?

Shamie Kumar describes her perspective on how radiography has evolved over time and the impact radiographers can have in detecting abnormal X-rays, and reflects on how she views fast-approaching AI in advancing current skills.

 The red dot system 

Often one of the first courses a newly qualified radiographer attends is the red dot course. This course demonstrates pathologies and abnormalities often seen in X-rays, some obvious, others not, giving radiographers the confidence to alert the referring clinician and/or radiologist that they have seen something abnormal.

The red dot system is a human alert system; often two pairs of eyes are better than one and help prevent near misses. How this is done in practice can vary between hospitals. In the era of films, the radiographer would place a red dot sticker on the film itself before returning it to the clinician or radiologist. In the world of digital imaging, this is often done during ‘post-documentation’, the step after the X-ray is finished in which the radiographer completes the rest of the patient documentation to indicate that the X-ray is complete and ready to be viewed and reported. As part of this process, the radiographer can change the status of the patient to urgent, along with a note about what has been observed. From this, the radiologist knows the radiographer has seen something urgent on the image, the patient appears at the top of their worklist for reporting, and the radiologist can view the radiographer’s notes.

 The Role of AI in Radiology 

Artificial Intelligence (AI) is moving at pace within healthcare and fast approaching radiology departments, with algorithms showing significant image-recognition performance in the detection, characterisation, and monitoring of various diseases within radiology. AI excels at automatically recognising complex patterns in imaging data and providing quantitative assessments of radiological characteristics. With the number of diagnostic imaging requests forever increasing, many AI companies are focusing on how to ease this burden and support healthcare professionals.

AI triage is performed by the algorithm based on abnormal and normal findings, which are used to create an alert for the referring clinician/radiologist. It can be customised for the radiologist, for example with colour-coded flags: red for abnormal, green for normal, with red-flagged patients appearing at the top of the radiologist’s worklist. For referring clinicians who don’t have access to the reporting worklist, the triage result would be viewed on the image itself, with an additional text note indicating abnormal or normal.
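
As an illustration, the worklist-prioritisation step might look like the minimal sketch below. This is purely illustrative: the field names and the red/green flag convention follow the description above and are assumptions, not any vendor’s actual API.

from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    ai_abnormal: bool  # AI triage output: True -> red flag, False -> green flag

def prioritised_worklist(studies: list[Study]) -> list[Study]:
    """Red-flagged (abnormal) studies float to the top of the reporting worklist."""
    return sorted(studies, key=lambda s: not s.ai_abnormal)

worklist = prioritised_worklist([
    Study("A001", ai_abnormal=False),
    Study("A002", ai_abnormal=True),  # appears first for the radiologist
])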

What does AI do that a radiographer doesn’t already? AI is structured in the way it gives its findings, for example a pre-populated report or an impression summary, and it is consistent, without reader variability. So the question becomes: what can AI do beyond the red dot system? Here the explanation is straightforward: often a radiographer wouldn’t go to the extent of trying to name what they have seen, especially in more complex X-rays like the chest, where there are multiple structures and pathologies. For example, a radiographer might note ‘right lower lobe’ and not go beyond this, often due to confidence and level of experience.

AI can fill this gap. It can empower radiographers and other healthcare professionals with its classification of pathologies, identifying exactly what has been found on the image, based on research and training on very large datasets, with high accuracy.

Radiographers may have the upper hand in reading the clinical indication on the request form and seeing the patient physically, which is undoubtedly of significant value. However, the red dot system has many variables specific to an individual radiographer’s skills and understanding. It is also limited to giving details of what they have noted to just the radiologist; what about the referring clinician who doesn’t have access to the radiology information system (RIS) where the alert and notes are? Do some radiographers add a text note on the X-ray itself?

Summary 

Yes, AI is a technological advancement of the red dot system and will continue to evolve. It is structured in how it presents findings, and it does so consistently and with confidence, adding value to early intervention and accurate patient diagnosis and contributing to reducing misdiagnoses and near misses. AI is empowering radiographers, radiologists, referring clinicians, and junior doctors by enhancing and leveraging their current knowledge to a level where there are consistent alerts and classified findings that can even be learned from. This doesn’t replace the red dot system but indeed enhances it.

The unique value a radiographer adds to patient care, experience, and physical interaction can be complemented by AI, allowing them to alert with confidence and manage patients, focusing clinician time more effectively.

About Shamie Kumar 

Shamie Kumar is a practicing HCPC Diagnostic Radiographer; she graduated from City University London with a BSc Honours in Diagnostic Radiography in 2009 and is a member of the Society of Radiographers, with over 10 years of clinical knowledge and skills within all aspects of radiography. She has studied further in leadership, management, and counselling, with a keen interest in artificial intelligence in radiology.

The Role of AI in Heart Failure Early Detection

Heart failure affects 6.2 million Americans each year, costing the US healthcare system $30.7 billion. Heart failure occurs when the heart cannot pump enough blood to meet the body’s needs. Early detection is critical in the treatment and management of heart failure. The use of AI in detecting heart failure on chest X-rays has the potential to improve the accuracy and speed of diagnoses significantly.

Heart failure is a severe and potentially life-threatening condition affecting millions worldwide. In the United States it is a serious and growing health concern, affecting 6.2 million Americans. It is the leading cause of hospitalization in those over 65 years of age, contributing to the staggering $30.7 billion the US healthcare system is estimated to spend on heart failure each year. Hospitalization accounts for most of these costs, which are expected to increase to at least $70 billion annually by 2030.

 This condition occurs when the heart cannot pump enough blood to meet the body's needs, leading to shortness of breath, fatigue, and swelling. Despite advances in medical technology and treatments, heart failure remains one of the country’s leading causes of death and hospitalization. 

AI to the resQue

Recent advances in medical technology make early detection and faster time-to-treatment, and with them increased survivability, possible. In addition, by identifying and effectively managing risk factors such as high blood pressure and diabetes, healthcare professionals, patients, policymakers, and technology innovators can work together to help reduce the impact of this debilitating condition and improve the lives of those affected by heart failure.

Output generated by Chest X-ray AI Solution

Enlargement of heart in cases of heart failure

Early Detection

Early detection is critical in managing this condition, as the sooner it is diagnosed, the better the chances of recovery. Chest X-rays have long been used as a diagnostic tool in detecting heart failure, but this process has become much more precise and efficient with the advent of artificial intelligence (AI). 

 Qure's qXR for Heart Failure

Qure.ai’s artificial intelligence algorithm, qXR-HF, helps in the early detection of heart failure on chest X-rays by analyzing and interpreting abnormalities on medical imaging outputs. AI algorithms can identify patterns and features in X-rays that may indicate heart failure, such as an enlarged heart, an abnormal cardiothoracic ratio, or fluid buildup (pleural effusion). These algorithms can process images in less than 60 seconds, allowing for early and efficient diagnoses. Additionally, qXR-HF can help reduce human error and improve accuracy in detection. This is particularly important in the case of heart failure, as early detection can greatly improve the chances of successful treatment and recovery.
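
To make one of these features concrete, the sketch below computes a cardiothoracic ratio (CTR): the heart’s width divided by the internal thoracic width on a frontal chest X-ray, where a CTR above 0.5 conventionally suggests an enlarged heart. The bounding boxes are assumed inputs for illustration, not qXR-HF’s actual interface.

def cardiothoracic_ratio(heart_box, thorax_box):
    """Each box is (x_min, x_max) in pixels on the same frontal image."""
    heart_width = heart_box[1] - heart_box[0]
    thorax_width = thorax_box[1] - thorax_box[0]
    return heart_width / thorax_width

ctr = cardiothoracic_ratio(heart_box=(210, 560), thorax_box=(100, 700))
print(f"CTR = {ctr:.2f} -> {'enlarged' if ctr > 0.5 else 'normal'} heart size")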

A significant advantage of using AI in detecting heart failure is improved accuracy. AI algorithms are less prone to human error and can help reduce the risk of misdiagnosis, which can lead to delayed treatment and potentially serious medico-legal consequences.

 The use of AI in detecting heart failure on chest X-rays has the potential to greatly improve the accuracy and speed of diagnoses. By leveraging the power of AI algorithms, technology can help healthcare professionals make more informed decisions and provide patients with the best possible care. As the field of AI continues to evolve and improve, we will likely see even more advanced applications in the diagnosis and treatment of heart failure and other conditions. 

Prospective Observational Study at Frimley Health NHS Foundation Trust

Introduction

The increasing complexity of disease has led to radiologists reporting growing numbers of studies across different imaging modalities, as well as undertaking specialist clinics, ultrasound lists, and highly complex interventional procedures. The increasing reporting workload has not been matched by a corresponding increase in the number of radiologists needed to ensure timely and accurate reporting across all imaging modalities. The latest guidance indicates that the NHS radiologist workforce is now short-staffed by 33% and that by 2025 the UK’s radiologist shortfall will reach 44% (RCR, 2021).

AI technology has the potential to integrate into the clinical pathway and help Radiologists with the ever-increasing backlog of reporting.

Frimley Health NHS Foundation Trust

Frimley Health NHS Foundation Trust consists of three hospitals, Wexham Park, Heatherwood, and Frimley Park, and serves up to 1.3 million residents. It is a well-performing trust for radiology turnaround times, staying on top of timely chest radiograph reporting. Frimley Health has committed to adopting AI solutions to help the Trust improve workflow efficiency and support clinicians, and ultimately patients. Dr Amrita Kumar, Consultant Radiologist and AI Lead for Frimley Health, will be leading a 6-month pilot with Qure.ai using qXR to support the timely reporting of chest radiographs in the GP and outpatient setting.

6 Month Service Evaluation using qXR 

Chest X-ray is often the first-line imaging for symptoms relating to lung cancer: X-rays are readily available and low cost, have fast acquisition times, and support initial diagnosis prior to further imaging.

This evaluation will test the accuracy of qXR in distinguishing an unremarkable chest X-ray from one with findings in a clinical setting. Qure’s PACS viewer application will be actively used for the first time in the UK in the initial phase, to ensure the readers are blinded to the AI results. The outcomes will be assessed to demonstrate the capability of qXR in identifying unremarkable scans with a high negative predictive value.
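
For reference, negative predictive value (NPV) measures how often a scan the AI calls unremarkable is truly unremarkable. A minimal sketch, with illustrative counts rather than study data:

def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """NPV = TN / (TN + FN): the fraction of AI-'unremarkable' X-rays that the
    reference read also found unremarkable."""
    return true_negatives / (true_negatives + false_negatives)

# e.g. 950 correctly cleared scans and 5 missed abnormal scans -> NPV ~ 0.995
print(negative_predictive_value(950, 5))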

Phase 2 will consist of qXR integration with the hospital information system, EPIC, which will allow a seamless experience for all users. AI findings will be viewed alongside the original X-ray, and data will be collected throughout the study to understand the value of AI in reducing report turnaround time and improving workflow efficiency.

“I think AI has a great potential to help Radiology departments maintain their service levels with increasing workloads, allowing Consultant Radiologists to focus on more complex patient-facing cases." – Dr. Amrita Kumar  

Consultant Radiologist and AI Clinical Lead, Frimley Health NHS Foundation Trust

Burning Issue: Why Opportunistic Screening for Lung Cancer is the need of the hour

'Cancer Cures Smoking'

Did the above line make you look twice and think thrice? Years ago, the Cancer Patients Aid Association published this thought-provoking message, a genuinely fresh view on the relationship between tobacco and cancer. And why not?

Extensive research from across the world indicates that cigarette smoking can explain almost 90% of lung cancer risk in men and 70 to 80% in women. The WHO lists tobacco use as the leading risk factor for cancer. The World Cancer Research Fund International goes a step further and plainly calls out smoking. With lung cancer racking up 2.21 million cases in 2021 and 1.8 million deaths, one can understand why healthcare stakeholders want to focus their efforts on targeting common causes and reducing the incidence of the disease.

Yet, a recent study indicates troubling trends.

Medanta Hospital is one of India’s leading medical facilities. Their research on lung cancer prevalence, conducted over a decade (2012–2022) among 304 patients, threw up a startling statistic: 50% of their lung cancer patient cohort were non-smokers. According to the doctors who conducted the research, Dr Arvind Kumar, Dr Belal Bin Asaf, and Dr Harsh Puri, this was a sharp rise from earlier figures for non-smoking lung cancer patients (10-20%). But there’s more.

The study indicates that, be it smokers or non-smokers, the risk group for lung cancer has expanded to a relatively more youthful population.

The WHO previously flagged a key factor behind the rising trend of young non-smokers being at risk for lung diseases: air pollution. Dr. Tedros Adhanom Ghebreyesus called air pollution a ‘silent public health emergency’ and ‘the new tobacco’. It presents clinicians working to treat and prevent lung cancer with a new conundrum: evaluating risk factors for the disease.

Simply put, how does one tackle the risk of lung cancer in a 25-year-old, non-smoking individual living a reasonably healthy lifestyle when a risk factor could be the simple act of breathing?

According to Dr. Matthew Lungren, the answer could be Opportunistic Screening – which he calls, “… the BEST use case for AI in radiology”

Qure.ai concurs. qXR, our artificial intelligence (AI) solution for chest X-rays, has been tried, tested, and trusted to assist in identifying and reporting missed nodules, which highlights the importance of opportunistic screening for identifying potential lung cancers early.

Our recent studies, including a retrospective multi-center study with Massachusetts General Hospital (MGH), concluded that Qure’s CE-approved qXR could identify critical findings on chest X-rays, including malignant nodules. This strengthens the case that opportunistic screening for indicators of lung cancer and other pulmonary diseases should become the norm.

Qure.ai’s solutions can truly make the difference, augmenting the efforts of clinicians and radiologists any and every time a chest X-ray or chest CT is conducted.

November is Lung Cancer Awareness Month. What better moment than the last day of the month to urge everyone to think outside the box when it comes to demographics, risk factors, screening, and the role of AI in healthcare.

Taking No Chances: Opportunistic Screening’s Role in Early Lung Cancer Detection

Key Highlights

  • Over 20M Chest CTs are performed every year in the USA alone  
  • Every chest CT scan is a potential lung cancer screening opportunity 
  • Chest CT scanning increased significantly during the pandemic 
  • Qure.ai conducted a deep-learning study to use CT scans for COVID to screen for actionable nodules

Introduction

Jackson Brown, Jr. once said that nothing is more expensive than a missed opportunity. Lung cancer is perhaps the ideal example of this because incidental/early detection via opportunistic screening can play a significant role in helping to successfully combat the malady. 

Lung cancer accounts for 1 in 5 cancer deaths yearly and is the leading cause of cancer-related deaths worldwide. It accounts for the greatest economic and public health burden of all cancers annually, approximately $180 billion. This is partly because the prognosis for lung cancer is poor compared to other cancers, largely due to a high proportion of cases being detected at an advanced stage, where treatment options are limited and the 5-year survival rate is only 5-15%. The global pandemic strained healthcare systems worldwide, while also leading to a significant increase in chest CT volumes.

“Earlier we would conduct approximately 300 chest CT scans per month. During the pandemic, this number rose to 7000 per month. It put a severe strain on doctors who must review every scan. Qure’s AI solution, qCT, made a significant difference to us by flagging missed actionable nodules on chest CT scans for further follow-ups & investigations.”
– Arpit Kothari, CEO, bodyScans.in

The large volume of scans during the pandemic allowed Qure.ai to conduct a study using a deep-learning approach towards opportunistic screening for actionable lung nodules.

Methodology

The study uses Qure.ai’s deep-learning approach to identify lung nodules on CT scans from patients who were scanned for COVID-19 at 5 radiology centers across different cities in India.

The scans were sourced from bodyScans.in, a leading radiology service provider in Central India, and Aarthi Scans & Labs, another major diagnostic provider with 40 full-fledged diagnostic centers across India.

2502 scans were randomly selected from chest CTs performed at 5 sites in two specialist radiology chains, Aarthi Scans and bodyScans, during India’s 2nd and 3rd waves of Covid. They were processed by qCT, Qure’s AI capable of detecting and characterizing lung nodules. The radiologist reports of the cases flagged by qCT were investigated for findings suggestive of cancer. Flagged cases for which the nodule was not reported were re-read by an independent radiologist with AI assistance on a web portal. The radiologist was asked to either confirm or reject the flag, rate the nodule for malignancy potential if confirmed, or provide an alternate finding if rejected (see figure).

Results

  • 2502 CT scans were processed in total.  
  • Of these, 23.7% were flagged by qCT and re-read by an independent thoracic radiologist.  
  • In 19.4% of these flagged cases, the radiologist agreed that there were unreported actionable nodules.  
  • There were 19 cases where radiologists did not rule out the risk of malignancy and 2 out of these were rated as probably malignant.  

Conclusion

In the study, Qure.ai’s AI tool assisted in reporting missed nodules, which highlights the importance of opportunistic screening for identifying potential lung cancers early. The need to improve the efficiency and speed of clinical care continues to drive multiple innovations into practice, including AI. With the increasing demand for superior healthcare services and the large volumes of data generated daily from parallel streams, streamlining clinical workflows has become a pressing issue. In our study, by using AI as a safety net, we found 21 chest CTs that should have warranted follow-up management for the patients.

“Early detection plays a critical role in successfully treating Lung Cancer. Yet, there are several factors which contribute to the significant risk of these nodules getting missed in chest CT scans. Qure’s AI solution, qCT is immensely useful because it acts as a safety net; another pair of eyes to ensure that we clinicians can identify those patients who need immediate help. Eventually, AI can augment our efforts to defeat the disease.”
– Dr. Arunkumar Govindarajan, Director, Aarthi Scans & Labs

AI-Based Gaze Deviation Detection to Aid LVO Diagnosis in NCCT

Introduction

Strokes occur when the blood supply to the brain is interrupted or reduced, depriving brain tissue of oxygen and nutrients. It is estimated that a patient can lose 1.9 million neurons each minute a stroke goes untreated, so stroke treatment is a medical emergency that requires early intervention to minimize brain damage and complications. Furthermore, a stroke caused by emergent large vessel occlusion (LVO) requires even more prompt identification to improve clinical outcomes.

Neurointerventionalists need to activate their operating rooms as soon as possible to prepare candidates identified for endovascular therapy (EVT). As a result, identifying imaging findings on non-contrast computed tomography (NCCT) that are predictive of LVO would aid in identifying potential EVT candidates. We present and validate gaze deviation as an indicator to detect LVO using NCCT. In addition, we offer an Artificial Intelligence (AI) algorithm to detect this indicator.

What is LVO?

Large vessel occlusion (LVO) stroke is caused by a blockage in one of the following brain vessels:

  1. Internal Carotid Artery (ICA) 
  2. ICA terminus (T-lesion; T occlusion) 
  3. Middle Cerebral Artery (MCA) 
  4. M1 MCA 
  5. Vertebral Artery 
  6. Basilar Artery

Image source: ScienceDirect

LVO strokes are considered one of the more severe kinds of strokes, accounting for approximately 24% to 46% of acute ischemic strokes. For this reason, acute LVO stroke patients often need to be treated at comprehensive centers that are equipped to handle LVOs. 

Endovascular Treatment (EVT)

EVT is a treatment given to patients with acute ischemic stroke, in which clots in large vessels are removed, helping deliver better outcomes. EVT evaluation needs to be done as early as possible for patients who meet the criteria and are eligible; early access to EVT improves patient outcomes. The timeframe to perform it is usually between 16 and 24 hours in most acute ischemic cases.

Image Source: PennMedicine

Goal for EVT

Since it is important to perform this procedure as early as possible, how do we get there?

LVO detection on NCCT

There are three signs to consider for this:

  1. Absence of blood
  2. Hyperdense vessel sign or dot sign
  3. Gaze deviation (often overlooked on NCCT) 

Gaze deviation and its relationship with acute stroke

Several studies suggest that gaze deviation is largely associated with the presence of LVO [1,2,3].

Stroke patients with eye deviation on admission CT have higher rates of disability/death and hemorrhagic transformation. Consistent assessment and documentation of radiological eye deviation on acute stroke CT scan may help with prognostication [4].

AI algorithm to identify gaze deviation

We developed an AI algorithm that reports the presence of gaze deviation given an NCCT scan; such AI algorithms have tremendous potential to aid in this triage process. The algorithm was trained on a set of scans to identify the gaze direction and the midline of the brain. Gaze deviation is calculated by measuring the angle between the gaze direction and the midline of the brain. We used this AI algorithm to identify the clinical sign of ipsiversive gaze deviation in stroke patients with LVO treated with EVT. The AI algorithm has a sensitivity of 80.8% and a specificity of 80.1% for detecting LVO using gaze deviation as the sole indicator. The test set had 150 scans with LVO-positive cases where thrombectomy was performed.
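
A minimal sketch of the angle computation described above, assuming the gaze direction and the brain midline have already been extracted as 2D vectors on an axial slice (the post does not detail how they are estimated):

import numpy as np

def gaze_deviation_deg(gaze_vec, midline_vec):
    """Angle in degrees between the gaze direction and the brain midline."""
    g = np.asarray(gaze_vec, dtype=float)
    m = np.asarray(midline_vec, dtype=float)
    cos_theta = np.dot(g, m) / (np.linalg.norm(g) * np.linalg.norm(m))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Example: a gaze vector rotated ~15 degrees off a vertical midline
print(gaze_deviation_deg([0.26, 0.97], [0.0, 1.0]))  # ~15.0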

Discussion

Ipsiversive gaze deviation on NCCT is a good predictor of LVO due to proximal vessel occlusions at the ICA terminus and in M1. However, it is a poor predictor of LVO due to M2 occlusion. We report an AI algorithm that can identify this clinical sign on NCCT. These findings can aid in the triage of LVO patients and expedite the identification of EVT candidates.

We are presenting this AI method at SNIS 2022, Toronto. Please attend our oral presentation on 28th July 2022 at 12:15 PM (Toronto time).

Upadhyay, U., Golla, S., Kumar, S., Szweda, K., Shahripour, R., & Tarpley, J. (2022). Society of NeuroInterventional Surgery (SNIS).

Need for Speed: AI, AstraZeneca, and early lung cancer diagnosis

The AstraZeneca-Qure partnership

A journey of a thousand miles begins with a single step. In 2020, Qure.ai and AstraZeneca took the first step together to integrate advanced artificial intelligence (AI) solutions to identify lung diseases early in patients across AstraZeneca’s Emerging Markets region – Latin America, Asia, Africa, and the Middle East. In the past 2 years, the partnership has made significant progress, incorporating the use of AI technology with chest X-rays for multi-disease screening, including tuberculosis and heart failure along with lung cancer.

Lung Cancer: The need for early detection

In more than 40% of people suffering from lung cancer, it is detected at Stage 4, when their likelihood of surviving 5 years is under 10%. Only 20% are diagnosed at Stage 1, when the survival rate is between 68-92%. That’s why lung cancer is responsible for 1 in 5 cancer deaths worldwide.

Though early detection facilitates early diagnosis and better patient outcomes, the disease’s silent progression to advanced stages makes it a challenge like none other. Low Dose CT (LDCT) remains the most effective means of screening for lung cancer. In LMICs, however, CTs can be prohibitively expensive, priced between USD 500 – 700, limiting their access. But there is some hope.

Chest X-rays are one of the most routinely performed exams in the world, representing 40% of the approximately 3.6 billion imaging tests performed annually. As a non-invasive diagnostic test with easy access and low costs, the chest X-ray is a valuable first-line test to screen for radiological indications of issues in the lungs, heart, ribs, and more. Acquiring chest X-ray scans takes only minutes, but reading and analyzing them requires expert radiologists.

Augmenting X-rays with the power of AI

qXR, Qure.ai's AI-powered chest X-ray interpretation tool, can automatically detect and localize up to 30 abnormalities, including indicators of Lung Cancer, TB, and COVID-19. This is particularly impactful when millions of scans are examined using qXR to report any abnormalities that could otherwise be missed due to:

  • Lack of experienced personnel
  • Increased workloads, limiting access and time for detailed reads of abnormal scans
  • Incidental nodules indicative of Lung Cancer being missed because physicians are only looking at the results for which the X-rays were ordered and not at incidental findings.

How Qure is making a difference

1. Working with grassroot level healthcare professionals

A. Leveraging Primary Care GP clinics in Malaysia

Qualitas Medical Group (QMG) is a chain of integrated general practice (GP) clinics, dental clinics, medical imaging centers, and ambulatory care management centers that play an integral role in Malaysia’s health system. Along with Lung Cancer Network Malaysia, QMG uses qXR to triage all chest X-rays taken of local workers, identifying incidental lung nodules that may be indicative of lung cancer for further testing. qXR has also helped GPs reduce their dependency on radiologists for second reads and has reduced the reporting turnaround time for chest X-rays from 2 days to the same day.

“Qure.ai’s state of the art deep learning technology is a potential game changer that will enhance and expedite diagnosis with rapid referral to relevant specialty,” said Dr Anand Sachithanandan, President, Lung Cancer Network Malaysia.

B. Empowering Primary Care Physicians in Latin America

Primary care centers are the first medical care touchpoint and are crucial stakeholders for early diagnosis in disease care pathways. In collaboration with Lung Ambition Alliance, Latin America, Qure is empowering primary care physicians in 12 different countries with AI-enabled smart phone-based chest X-ray analysis and lung nodule screening.

In the absence of digital X-rays, physicians only need to click a picture of the X-ray film against a lightbox and upload it on the app to receive instant qXR analysis. Based on the results, they can guide the patient to the next appropriate steps.

2. Collaborating with Cancer Care Foundations

Assam is called India’s Cancer Capital, as the state’s average cancer incidence rate is double the national average. The high cancer burden, low public awareness, and a lack of specialised health-care infrastructure led the Govt. of Assam to partner with Tata Trusts and build the Assam Cancer Care Foundation (ACCF).

Potential lung cancer cases are identified via door-to-door screenings as well as via a screening kiosk set up at the Fakhruddin Ali Ahmed Medical College and Hospital, Barpeta, where ACCF has built a specialised cancer care unit. Chest X-rays of these individuals will be screened for suspicious lung nodule(s) using qXR. Based on the result, they will either be called back for an LDCT/biopsy or an oncology consultation.

3. Surveillance of all chest X-rays taken in a tertiary care hospital

VPS Lakeshore, Kerala is a tertiary care hospital and a centre of excellence in oncology and other specialities. It is well equipped to take up large-scale screening programs and facilitate the required care continuum for high-risk, suspected, and confirmed disease cases. The hospital has a program in place in which a tool surveys all chest X-rays taken, to facilitate the early detection of lung cancer.

Through our partnership with AstraZeneca, we have deployed qXR to scan all chest X-rays performed at the hospital to pick up possible early cases of lung cancer. If any abnormal or nodule-indicative cases are picked up by the software, they are instantly flagged to the radiologist/referring physician, who can then guide the patient along the next steps in the patient care pathway.

4. Public screening road shows

The Ministry of Public Health, Thailand, along with the AstraZeneca team, initiated the “Don’t Wait. Get Checked” Lung Cancer Campaign in April ’22 at Central World Mall, in partnership with Banphaeo General Hospital, the Digital Economy Promotion Agency (DEPA), and the Central Group. On the occasion of World No Tobacco Day, Qure.ai’s qXR was used to screen close to 200 people. The objective of this program was to directly impact Thailand’s public health policies revolving around lung cancer.

Way Forward

“Building health systems that are resilient and sustainable will require finding new ways to prevent disease, diagnose patients earlier, and treat them more effectively. The benefits of the technology that Qure.ai offers align well with our corporate values, ultimately supporting our strategic objective to reshape healthcare delivery, close the cancer care gap and better chronic disease management, especially in low-to-middle income countries. We believe that innovative technology has the potential to transform patients’ outcomes, enabling more people to access care in timely, reliable and affordable ways, regardless of where they live”, said Pei-Chieh Fong, Medical VP, AstraZeneca International.

At the Davos World Economic Forum 2022, AstraZeneca pledged to join the WEF EDISON Alliance and committed to screening 5 million patients for lung cancer by 2025 in partnership with Qure.ai.

With the support of AstraZeneca Turkey, Qure.ai collaborated with Mersin University Hospital on a landmark study for the use of AI in Heart Failure detection, using our qXR suite. This study is an important indicator for the future of AI in healthcare and the use of technology to augment the efforts of physicians in the early detection of other diseases.

Improving performance of AI models in presence of artifacts

Our deep learning models have become very good at recognizing hemorrhages on head CT scans. However, real-world performance is sometimes hampered by several external factors, both hardware-related and human-related. In this blog post, we analyze how acquisition artifacts are responsible for performance degradation and introduce the two methods that we tried to solve this problem.

Medical Imaging is often accompanied by acquisition artifacts which can be subject related or hardware related. These artifacts make confident diagnostic evaluation difficult in two ways:

  • by making abnormalities less obvious visually by overlaying on them.
  • by mimicking an abnormality.

Some common examples of artifacts are

  • Clothing artifact - due to clothing on the patient at acquisition time. See fig 1 below, where a button on the patient’s clothing looks like a coin lesion on a chest X-ray (marked by the red arrow).

Fig 1. A button mimicking a coin lesion on a chest X-ray, marked by the red arrow. Source.

  • Motion artifact - due to voluntary or involuntary subject motion during acquisition. Severe motion artifacts due to voluntary motion would usually call for a rescan. Involuntary motion like respiration or cardiac motion, or minimal subject movement, could result in artifacts that go undetected and mimic a pathology. See fig 2, where subject movement has resulted in motion artifacts that mimic a subdural hemorrhage (SDH).

Fig 2. Artifact due to subject motion, mimicking a subdural hemorrhage in a head CT. Source.

  • Hardware artifact - see fig 3. This artifact is caused by air bubbles in the cooling system. There are subtle irregular dark bands in the scan that can be misidentified as cerebral edema.

Fig 3. A hardware-related artifact mimicking cerebral edema, marked by yellow arrows. Source.

Here we investigate motion artifacts that look like SDH in head CT scans. These artifacts result in an increase in false positive (FP) predictions by subdural hemorrhage models. We confirmed this by quantitatively analyzing the FPs of our AI model deployed at an urban outpatient center: the FP rates were higher for this data than for our internal test dataset.
The reason for these false positive predictions is the lack of variety of artifact-ridden data in the training set used. It is practically difficult to acquire and include scans containing all varieties of artifacts in the training set.

Fig 4. The model identifies an artifact slice as SDH because of the similarity in shape and location; both are hyperdense areas close to the cranial bones.

We tried to solve this problem in the following two ways.

  • Making the models invariant to artifacts, by explicitly including artifact images in our training dataset.
  • Discounting slices with artifacts when calculating the probability of a bleed in a scan.

Method 1. Artifact as an augmentation using Cycle GANs

We reasoned that the artifacts were misclassified as bleeds because the model had not seen enough artifact scans during training.
The number of images containing artifacts is relatively small in our annotated training dataset, but we have access to many unannotated scans containing artifacts, acquired from various centers with older CT scanners. (Motion artifacts are more prevalent when using older CT scanners with poor in-plane temporal resolution.) If we could generate artifact-ridden versions of all the annotated images in our training dataset, we would be able to effectively augment our training dataset and make the model invariant to artifacts.
We decided to use a Cycle GAN to generate new training data containing artifacts.

Cycle GAN [1] is a generative adversarial network that is used for unpaired image-to-image translation. It serves our purpose because we have an unpaired image translation problem, where the X domain contains our training-set CT images with no artifacts and the Y domain contains artifact-ridden CT images.

Fig 5. Cycle GAN was used to convert a short clip of a horse into that of a zebra. Source.
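
The intended augmentation step is simple once such a generator exists. A minimal sketch, assuming a trained generator mapping the no-artifact domain X to the artifact domain Y is available as a plain function (the names here are placeholders, not our training code):

import numpy as np

def augment_with_artifacts(images, labels, generator_xy, fraction=0.5, seed=0):
    """Append artifact-ridden copies of a random subset of annotated slices.

    `generator_xy` maps a no-artifact CT slice to an artifact-ridden version.
    Because the translation should not add or remove a bleed, each generated
    image keeps the label of the slice it was generated from.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    fake_images = [generator_xy(images[i]) for i in idx]
    fake_labels = [labels[i] for i in idx]
    return list(images) + fake_images, list(labels) + fake_labels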

We curated an A dataset of 5000 images without artifacts and a B dataset of 4000 images with artifacts, and used these to train the Cycle GAN.

Unfortunately, the quality of the generated images was not very good. See fig 6.
The generator was unable to capture all the variety in the CT dataset, while introducing artifacts of its own, thus rendering it useless for augmenting the dataset. The Cycle GAN authors state that the generator performs worse when the transformation involves geometric changes (e.g. dog to cat, or apples to oranges) than when it involves color or style changes. Introducing artifacts is a bit more complex than a color or style change because it has to introduce distortions to the existing geometry. This could be one of the reasons why the generated images contain extra artifacts.

Fig 6. A sampling of images generated using Cycle GAN. real_A are input images and fake_B are the artifact images generated by Cycle GAN.

Method 2. Discounting artifact slices

In this method, we trained a model to identify slices with artifacts, and we show that discounting these slices made the AI model identifying subdural hemorrhage (SDH) robust to artifacts.
A manually annotated dataset was used to train a convolutional neural network (CNN) model to detect whether a CT slice had artifacts. The original SDH model was also a CNN, which predicted whether a slice contained SDH. The probabilities from the artifact model were used to discount the slices containing artifacts, so that only the artifact-free slices of a scan were used in computing the score for the presence of a bleed.
See fig 7.

Fig 7. Method 2: using a trained artifact model to discount artifact slices while calculating the SDH probability.
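
A minimal sketch of the discounting step, under the assumption that the scan-level score is the maximum SDH probability over the slices the artifact model does not flag (the post does not spell out the exact combination rule):

import numpy as np

def scan_bleed_score(sdh_probs, artifact_probs, artifact_threshold=0.5):
    """Scan-level SDH score from per-slice outputs of the two CNNs.

    Slices whose artifact probability exceeds `artifact_threshold` are
    discounted; the score is the max SDH probability over the rest.
    """
    sdh_probs = np.asarray(sdh_probs, dtype=float)
    artifact_probs = np.asarray(artifact_probs, dtype=float)
    keep = artifact_probs < artifact_threshold  # artifact-free slices
    if not keep.any():
        return float(sdh_probs.max())  # fallback: every slice was flagged
    return float(sdh_probs[keep].max())

# An artifact-ridden slice with a high SDH probability no longer drives the score:
print(scan_bleed_score([0.1, 0.9, 0.2], [0.1, 0.8, 0.2]))  # 0.2, not 0.9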

Results

Our validation dataset contained 712 head CT scans, of which 42 contained SDH. The original SDH model predicted 35 false positives and no false negatives. Quantitative analysis of the FPs confirmed that 17 (48%) of them were due to CT artifacts. Our trained artifact model had a slice-wise AUC of 96%. The proposed modification to the SDH model reduced the FPs to 18 (a decrease of 48%) without introducing any false negatives. Thus, using method 2, all scan-wise FPs due to artifacts were corrected.

In summary, using method 2, we improved the precision of SDH detection from 54.5% (42 true positives out of 77 positive predictions) to 70% (42 out of 60) while maintaining a sensitivity of 100 percent.

Fig 8. Confusion matrices before and after using the artifact model for SDH prediction.

See fig 9 for model predictions on a representative scan.

Fig 9. Model predictions for a few representative slices in a scan falsely predicted as positive by the original SDH model.

A drawback of method 2 is that if SDH and an artifact are present in the same slice, it is probable that the SDH could be missed.

Conclusion

Using a cycle GAN to augment the dataset with artifact ridden scans would solve the problem by enriching the dataset with both SDH positive and SDH negative scans with artifacts over top of it. But the current experiments do not give realistic looking image synthesis results. The alternative we used, meanwhile reduces the problem of high false positives due to artifacts while maintaining the same sensitivity.

References

  1. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks by Jun-Yan Zhu et al.