Can We Upskill Radiographers through Artificial Intelligence?

Shamie Kumar describes how AI fits into a radiology clinical workflow and her perspective on how clinical radiographers could use this to learn from and enhance their skills.

AI in radiology and workflow 

We all know that AI is already here: it is actively being implemented and used in many trusts, showing real-world value in supporting radiology departments to solve current challenges. Often the focus is on benefits to radiologists, clinicians, reporting radiographers, patients, and cost savings, but what about the clinical, non-reporting radiographers who undertake the X-rays or scans – can AI benefit them too? Let's think about how AI is implemented and where the AI outputs are displayed.

If the AI findings are seen in PACS, how many radiographers actually log into PACS after taking a scan or X-ray? Good practice is to have PACS open to cross-check the images sent from the modality. Often this doesn't happen, for various reasons, but perhaps it should be part of routine radiographic practice, just as post-documentation is.

Can Radiographers Up-Skill?

Assuming it does happen, radiographers have the opportunity to look at the AI outputs and potentially take away learnings: did the AI find something they didn't see initially, or was there a very subtle finding? People learn through experience, exposure, and repetition, so if the AI is consistently picking up true findings, the radiographer can learn from it too.

But what about when AI is incorrect – could it fool a radiographer, or will it empower them to research and understand the error in more detail?

As with many things in life, nothing is 100% accurate, and this includes AI, in the form of false positives and false negatives. Radiographers have the opportunity to research erroneous findings in more detail to enhance their learning, but do they actually have the time to undertake the additional learning and steps needed to interpret AI?

CPD, self-reflection, learning through clinical practice are all key aspects of maintaining your registration, and self-motivation is often key to furthering yourself and your career. The question remains: are radiographers engaged and self-motivated to be part of the AI revolution and use it to their professional benefit with potential learnings at their fingertips?

There have been a few recent publications that share insight into how AI is perceived by radiographers, what their understanding of it is, and what their training and educational needs are.

Many universities, like City University London, and AI companies are taking the initial steps towards understanding this better, and making active efforts to fill the gaps in knowledge, training and understanding of AI in radiology.

Radiographers, who are a key part of any radiology pathway, are yet to see real-world evidence on whether AI can upskill them, but there is no doubt this will unfold with time.

About Shamie Kumar

Shamie Kumar is a practicing HCPC Diagnostic Radiographer. She graduated from City University London with a BSc (Hons) in Diagnostic Radiography in 2009 and is a member of the Society of Radiographers, with over 12 years of clinical knowledge and skills across all aspects of radiography. She has studied further in leadership, management, and counselling, with a keen interest in artificial intelligence in radiology.


Akudjedu, T. K. K. N. M., 2022. Knowledge, perceptions, and expectations of Artificial intelligence in radiography practice: A global radiography workforce survey. Journal of Medical Imaging and Radiation Sciences.

Coakley, Y. M. E. C. M. M., 2022. Radiographers' knowledge, attitudes and expectations of artificial intelligence in medical imaging. Radiography International Journal of Diagnostic Imaging and Radiation Therapy, 28(4), pp. 943-948.

Malamateniou, K. P. W. H., 2021. Artificial intelligence in radiography: Where are we now and what does the future hold?. Radiography International Journal of Diagnostic Imaging and Radiation Therapy, 27(1), pp. 58-62.

Kumar, D., 2022. CoR endorsed CPD Super User Training. [Online] Available at:



Is Artificial Intelligence a glorified red dot system?

Shamie Kumar describes her perspective on how radiography has evolved over time, the impact radiographers can have in detecting abnormal X-rays, and how she views fast-approaching AI as a way of advancing current skills.

 The red dot system 

Often one of the first courses a newly qualified radiographer attends is the red dot course. It demonstrates the pathologies and abnormalities often seen on X-rays, some obvious, others not, giving radiographers the confidence to alert the referring clinician and/or radiologist that they have seen something abnormal.

The red dot system is a human alert system: two pairs of eyes are often better than one and help prevent near misses. How it works in practice varies between hospitals. In the era of film, the radiographer would place a red dot sticker on the film itself before returning it to the clinician or radiologist. In the world of digital imaging it is often done during 'post-documentation', the step after the X-ray is finished in which the radiographer completes the rest of the patient documentation to indicate the examination is complete, ready to be viewed and reported. As part of this process the radiographer can change the status of the patient to urgent, along with a note describing what has been observed. From this the radiologist knows the radiographer has seen something urgent on the image; the patient appears at the top of the reporting worklist, and the radiologist can view the radiographer's notes.

 The Role of AI in Radiology 

Artificial Intelligence (AI) is moving at pace within healthcare and fast approaching radiology departments, with algorithms showing significant image-recognition capability in the detection, characterisation and monitoring of various diseases. AI excels at automatically recognising complex patterns in imaging data and providing quantitative assessments of radiological characteristics. With the number of diagnostic imaging requests forever increasing, many AI companies are focusing on how to ease this burden and support healthcare professionals.

AI triage is performed by the algorithm on the basis of abnormal and normal findings, and is used to create an alert for the referring clinician or radiologist. It can be customised for the radiologist, for example with colour-coded flags: red for abnormal, green for normal, with red-flagged patients appearing at the top of the radiologist's worklist. For referring clinicians who don't have access to the reporting worklist, the triage result would be viewed on the image itself, with an additional text note indicating abnormal or normal.
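The worklist behaviour described above can be sketched in a few lines. This is an illustrative sketch only, not any vendor's API: the field names and flag values are hypothetical, and real PACS/RIS integrations are far more involved.

```python
# Hypothetical worklist entries: each study carries an AI triage flag
# ("red" = abnormal, "green" = normal) and an arrival order.

def prioritise(worklist):
    """Order studies so AI-flagged abnormal cases come first, then by arrival."""
    flag_rank = {"red": 0, "green": 1}
    return sorted(worklist, key=lambda s: (flag_rank[s["ai_flag"]], s["arrived"]))

studies = [
    {"patient": "A", "ai_flag": "green", "arrived": 1},
    {"patient": "B", "ai_flag": "red",   "arrived": 2},
    {"patient": "C", "ai_flag": "red",   "arrived": 3},
]

print([s["patient"] for s in prioritise(studies)])  # ['B', 'C', 'A']
```

Because Python's sort is stable, red-flagged studies keep their arrival order among themselves, mirroring a worklist where urgent cases float to the top without reshuffling the rest.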

What does AI do that a radiographer doesn't already? AI is structured in the way it gives its findings, for example a pre-populated report or an impression summary, and it is consistent, without reader variability. So the question becomes what AI can do beyond the red dot system. Here the explanation is straightforward: often a radiographer wouldn't go to the extent of trying to name what they have seen, especially on more complex X-rays such as the chest, where there are multiple structures and pathologies. A radiographer might mention 'right lower lobe' and not go beyond this, often due to confidence and level of experience.

AI can fill this gap. It can empower radiographers and other healthcare professionals with its classification of pathologies, identifying exactly what has been found on the image, based on research and training on very large datasets, with high accuracy.

Radiographers may have the upper hand in reading the clinical indication on the request form and seeing the patient physically, which is undoubtedly of significant value. However, the red dot system has many variables specific to the individual radiographer's skills and understanding. It is also limited to passing details of what they have noted to the radiologist alone: what about the referring clinician, who doesn't have access to the radiology information system (RIS) where the alert and notes sit? Do some radiographers add a text note on the X-ray itself?


Yes, AI is a technological advancement of the red dot system and will continue to evolve. It is structured in how it gives its findings, and it does so consistently and with confidence, adding value to early intervention and accurate patient diagnosis and contributing to reducing misdiagnosis and near misses. AI is empowering radiographers, radiologists, referring clinicians and junior doctors by enhancing and leveraging their current knowledge, providing consistent alerts and classified findings that can even be learned from. This doesn't replace the red dot system but indeed enhances it.

The unique value a radiographer adds to patient care, experience and physical interaction can easily be supplemented by AI, allowing them to alert with confidence and manage patients, focusing clinician time more effectively.

About Shamie Kumar 

Shamie Kumar is a practicing HCPC Diagnostic Radiographer. She graduated from City University London with a BSc (Hons) in Diagnostic Radiography in 2009 and is a member of the Society of Radiographers, with over 10 years of clinical knowledge and skills across all aspects of radiography. She studied further in leadership, management and counselling, with a keen interest in artificial intelligence in radiology.


The Role of AI in Heart Failure Early Detection

Heart failure affects 6.2 million Americans and costs the US healthcare system $30.7 billion each year. It occurs when the heart cannot pump enough blood to meet the body's needs. Early detection is critical to the treatment and management of heart failure, and the use of AI to detect heart failure on chest X-rays has the potential to improve the accuracy and speed of diagnosis significantly.

Heart failure is a severe, potentially life-threatening condition and a serious and growing health concern in the United States, affecting 6.2 million Americans. It is the leading cause of hospitalization in those over 65 years of age, contributing to the staggering $30.7 billion the US healthcare system is estimated to spend each year on heart failure alone. Hospitalization accounts for most of these costs, which are expected to rise to at least $70 billion annually by 2030.

 This condition occurs when the heart cannot pump enough blood to meet the body's needs, leading to shortness of breath, fatigue, and swelling. Despite advances in medical technology and treatments, heart failure remains one of the country’s leading causes of death and hospitalization. 

AI to the resQue

Leveraging recent advances in medical technology, early detection, and faster time-to-treatment make increased survivability possible. In addition, by identifying and effectively managing risk factors such as high blood pressure and diabetes, healthcare professionals, patients, policymakers, and technology innovators can work together to help reduce the impact of this debilitating condition and improve the lives of those affected by heart failure.

Figure: output generated by a chest X-ray AI solution, showing enlargement of the heart in a case of heart failure.

Early Detection

Early detection is critical in managing this condition, as the sooner it is diagnosed, the better the chances of recovery. Chest X-rays have long been used as a diagnostic tool in detecting heart failure, but this process has become much more precise and efficient with the advent of artificial intelligence (AI). 

Qure's qXR for Heart Failure algorithm, qXR-HF, helps in the early detection of heart failure on chest X-rays by analyzing and interpreting abnormalities in medical images. AI algorithms can identify patterns and features on X-rays that may indicate heart failure, such as an enlarged heart, an abnormal cardiothoracic ratio, or fluid buildup (pleural effusion). These algorithms can process images in less than 60 seconds, allowing for early and efficient diagnosis. Additionally, qXR-HF can help reduce human error and improve detection accuracy. This is particularly important in heart failure, as early detection can greatly improve the chances of successful treatment and recovery.
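One of the features mentioned above, the cardiothoracic ratio (CTR), is simple enough to state exactly: the maximal horizontal width of the heart divided by the maximal internal width of the thorax, with values above roughly 0.5 on a PA film conventionally taken to suggest an enlarged heart. The sketch below is a worked example of that formula with hypothetical pixel measurements; it is not qXR-HF's implementation.

```python
def cardiothoracic_ratio(cardiac_width, thoracic_width):
    """CTR = maximal horizontal cardiac width / maximal internal thoracic width."""
    if thoracic_width <= 0:
        raise ValueError("thoracic width must be positive")
    return cardiac_width / thoracic_width

# Hypothetical measurements (in pixels) from a PA chest X-ray:
ctr = cardiothoracic_ratio(cardiac_width=310, thoracic_width=560)
print(f"CTR = {ctr:.2f}")  # CTR = 0.55
print("suggests cardiomegaly" if ctr > 0.5 else "within normal limits")
```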

A significant advantage of using AI to detect heart failure is improved accuracy. AI algorithms are less prone to human error and can help reduce the risk of misdiagnosis, which can otherwise lead to delayed treatment and potentially serious medico-legal consequences.

 The use of AI in detecting heart failure on chest X-rays has the potential to greatly improve the accuracy and speed of diagnoses. By leveraging the power of AI algorithms, technology can help healthcare professionals make more informed decisions and provide patients with the best possible care. As the field of AI continues to evolve and improve, we will likely see even more advanced applications in the diagnosis and treatment of heart failure and other conditions. 


Prospective Observational Study at Frimley Health NHS Foundation Trust


The increasing complexity of disease has led to radiologists reporting growing numbers of studies across different imaging modalities, as well as undertaking specialist clinics, ultrasound lists, and highly complex interventional procedures. The increasing reporting workload has not been matched by a corresponding increase in the number of radiologists needed to ensure timely and accurate reporting across all modalities. The latest guidance indicates that the NHS radiologist workforce is now short-staffed by 33%, and that by 2025 the UK's radiologist shortfall will reach 44% (RCR, 2021).

AI technology has the potential to integrate into the clinical pathway and help Radiologists with the ever-increasing backlog of reporting.

Frimley Health NHS Foundation Trust

Frimley Health NHS Foundation Trust consists of three hospitals – Wexham Park, Heatherwood and Frimley Park – serving up to 1.3 million residents, and is a well-performing trust in radiology turnaround times, staying on top of timely chest radiograph reporting. Frimley Health has committed to adopting AI solutions to improve workflow efficiency and support clinicians, and ultimately patients. Dr Amrita Kumar, Consultant Radiologist and AI Lead for Frimley Health, will be leading a 6-month pilot using qXR to support the timely reporting of chest radiographs in the GP and outpatient setting.

6 Month Service Evaluation using qXR 

Chest X-ray is often the first-line imaging for symptoms relating to lung cancer, because X-rays are readily available and low cost, have a fast acquisition time, and support initial diagnosis prior to further imaging.

This evaluation will test the accuracy of qXR in distinguishing an unremarkable chest X-ray from one with findings in a clinical setting. Qure's PACS viewer application will be actively used for the first time in the UK in the initial phase, to ensure the readers are blinded to the AI results. The outcomes will be assessed to demonstrate the capability of qXR in identifying unremarkable scans with a high negative predictive value.

Phase 2 will consist of qXR integration with the hospital information system, EPIC, which will allow a seamless experience for all users. AI findings will be viewed alongside the original X-ray, and data will be collected throughout the study to understand the value of AI in reducing report turnaround time and improving workflow efficiency.

“I think AI has a great potential to help Radiology departments maintain their service levels with increasing workloads, allowing Consultant Radiologists to focus on more complex patient-facing cases." – Dr. Amrita Kumar  

Consultant Radiologist and AI Clinical Lead, Frimley Health NHS Foundation Trust


Burning Issue: Why Opportunistic Screening for Lung Cancer is the need of the hour

'Cancer Cures Smoking'

Did the above line make you look twice and think thrice? Years ago, the Cancer Patients Aid Association published this thought-provoking message, a genuinely fresh view on the relationship between tobacco and cancer. And why not?

Extensive research from across the world indicates that cigarette smoking can explain almost 90% of lung cancer risk in men and 70 to 80% in women. The WHO lists tobacco use as the first risk factor for cancer. The World Cancer Research Fund International goes a step further and plainly calls out smoking. With lung cancer racking up 2.21 million cases in 2021 and 1.8 million deaths, one can understand why healthcare stakeholders want to focus efforts on targeting common causes and reducing the incidence of the disease.

Yet, a recent study indicates troubling trends.

Medanta Hospital is one of India's leading medical facilities. Their research on lung cancer prevalence, conducted over a decade between 2012 and 2022 among 304 patients, threw up a startling statistic: 50% of their lung cancer patient cohort were non-smokers. According to the doctors who conducted the research, Dr Arvind Kumar, Dr Belal Bin Asaf and Dr Harsh Puri, this was a sharp rise from earlier figures for non-smoking lung cancer patients (10-20%). But there's more.

The study indicates that, be it smokers or non-smokers, the risk group for lung cancer has expanded to a relatively more youthful population.

The WHO previously flagged a key factor for the rising trend in young, non-smokers being at risk for lung diseases – air pollution. Dr. Tedros Adhanom Ghebreyesus called air pollution a ‘silent public health emergency’ and ‘the new tobacco’. It presents clinicians working to treat and prevent lung cancer with a new conundrum – evaluating risk factors for the disease.

Simply put, how does one tackle the risk of lung cancer in a 25-year-old, non-smoking individual living a reasonably healthy lifestyle when a risk factor could be the simple act of breathing?

According to Dr. Matthew Lungren, the answer could be opportunistic screening, which he calls "… the BEST use case for AI in radiology". qXR, our artificial intelligence (AI) solution for chest X-rays, has been tried, tested and trusted to assist in identifying and reporting missed nodules, which highlights the importance of opportunistic screening for identifying potential lung cancers early.

Our recent studies, including a retrospective multi-center study with Massachusetts General Hospital (MGH), have concluded that Qure's CE-approved qXR can identify critical findings on chest X-rays, including malignant nodules. This raises the possibility that opportunistic screening for indicators of lung cancer and other pulmonary diseases should become the norm. Qure's solutions can truly make the difference, augmenting the efforts of clinicians and radiologists any and every time a chest X-ray or chest CT is conducted.

November is Lung Cancer Awareness Month. What better moment than its last day to urge everyone to think outside the box when it comes to demographics, risk factors, screening, and the role of AI in healthcare?


Role of an AI for Reliable Screening of Abnormality in X-rays

Role of an AI for Reliable Screening of Abnormality in X-rays: A Prospective Multicenter Study on Operational Efficiency using a CE approved solution

Chest X-rays are the most common diagnostic imaging technique used in clinical practice, yet the patient care pathway is significantly hampered in most high-volume healthcare centers. An artificial intelligence system that can quickly, reliably, and accurately identify anomalies in chest X-rays is therefore increasingly seen as essential for enhancing radiological workflow.

Here are some of the key challenges in correctly identifying abnormalities on a chest X-ray (CXR):

  • Scarcity of radiologists with the necessary training  
  • Overwhelming workload in large healthcare facilities  
  • CXR interpretations are highly subjective due to the presence of overlapping tissue structures.  

For example, sometimes even well-trained radiologists find it challenging to differentiate between lesions or to correctly identify very obscure pulmonary nodules.

Objective: To conduct a prospective, multicentre, multi-reader, post-market quality-improvement study evaluating whether a CE-approved artificial intelligence (AI) solution can be used as a chest X-ray screening tool in real clinical settings.

Method: A team of expert radiologists used Qure's CE-approved AI-based chest X-ray screening tool (qXR) as part of their daily reporting routine, reporting consecutive chest X-rays for this prospective multicentre study. The study took place in a large radiology network in India over a period of 10 months, across more than 35 sites, with around 120 expert radiologists.

Study Highlights

  • A total of 65,604 chest X-rays (CXRs) were processed from a network of 35 centers during the study period. 
  • Turnaround Time (TAT) decreased by about 40.63% from pre-AI period to post-AI period. 
  • The high NPV (98.9%) in categorizing normal and abnormal CXR with confidence demonstrates the utility of Qure’s AI as a screening tool in high-volume settings. 
  • The 1.1% missed CXRs were non-critical and non-actionable x-rays which don't need follow-up. 
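The last two bullets are two views of the same quantity: NPV is the fraction of AI-classified-normal studies that are truly normal, so an NPV of 98.9% implies that about 1.1% of "normal" calls were missed abnormals. A quick check with illustrative counts (the study's raw confusion-matrix counts are not reproduced here, so these numbers are hypothetical):

```python
def npv(true_negatives, false_negatives):
    """Negative predictive value: share of AI-'normal' studies that are truly normal."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts consistent with the reported 98.9% NPV:
tn, fn = 9890, 110
value = npv(tn, fn)
print(f"NPV = {value:.1%}")              # NPV = 98.9%
print(f"missed rate = {1 - value:.1%}")  # missed rate = 1.1%
```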

Investigator Comments

“AI-based chest X-ray solution (qXR) screened chest X-rays and assisted in ruling out normal patients with high confidence, thus allowing the radiologists to focus more on assessing pathology on abnormal chest X-rays and treatment pathways.” 

“qXR helped decrease the mean TAT by over 40%, and 99% of the AI reported normal CXRs were actually normal.” 

Dr. Arunkumar Govindarajan,
 Director and Radiologist, Aarthi Scans and Labs


  • The study has prospectively demonstrated that using AI as an assistance tool can be beneficial in high-workload healthcare facilities. 
  • The study showcased how AI can shorten the time patients must wait to receive a final report, particularly for normal studies. In a typical system, normal and abnormal CXRs sit on the same worklist, making it impossible to separate or prioritize normal CXRs without opening each one. 
  • AI can reduce report errors and missed diagnoses by acting as a secondary reader. 
  • Beyond a reduction in reporting time and an improvement in report quality, the use of AI can result in more appropriate treatment of disease. 


Gain CoR endorsed super user training

Qure became the first company to receive an endorsement from The College of Radiographers (CoR) for its CPD Super User Training. Completing the training leads to Qure's Artificial Intelligence Super User Certificate and a CPD certificate. Qure is the first AI company in the UK to receive this kind of endorsement.

The CoR knowledge outcomes are:

[CoR 02] Knowledge base
[CoR 03] Work safely
[CoR 06] Manage knowledge/information
[CoR 09] Interprofessional/agency working or learning
[CoR 11] Workforce development or staff governance

What is CPD and how does it help?

Continuing Professional Development (CPD) helps improve the safety and quality of care provided for patients and the public in the UK.

CPD helps:

  • keep up to date with the latest changes in medical practice 
  • maintain the required professional standards 
  • demonstrate at annual appraisal that the requirements for revalidation have been met 

Health professionals are responsible for:

  1. identifying their CPD needs
  2. undertaking CPD activities that are relevant to their practice and support professional development 

CPD should be focussed on four primary domains: 

  1. knowledge, skills, and performance  
  2. safety and quality  
  3. communication, partnership, and teamwork  
  4. maintaining trust 

Evidence of CPD is vital in the annual appraisals. A CPD portfolio would typically include a selection of activities in at least 3 of the following categories: 

  1. Work-based 
  2. Professional 
  3. Formal 
  4. Self-directed 
  5. Other learning 

Healthcare professionals must undertake 35 hours of Continuing Professional Development (CPD) relevant to their scope of practice over the three years before their registration renewal. 

Who is eligible for this training?

All medical professionals registered with the Health and Care Professions Council (HCPC) are eligible for this training. Registrants must:

  1. Maintain a continuous, up-to-date, and accurate record of their CPD activities. 
  2. Demonstrate that their CPD activities are a mixture of learning activities relevant to current or future practice. 
  3. Seek to ensure that their CPD has contributed to the quality of their practice and service delivery. 
  4. Seek to ensure that their CPD benefits the service user. 
  5. Upon request, present a written profile (which must be their work and supported by evidence) explaining how they have met the Standards for CPD. 

For more details, reach out to us. Qure is committed to protecting and respecting your privacy, and processes your information as described in our Privacy Policy.


    Taking No Chances: Opportunistic Screening’s Role in Early Lung Cancer Detection

    Key Highlights

    • Over 20M Chest CTs are performed every year in the USA alone  
    • Every chest CT scan is a potential lung cancer screening opportunity 
    • Chest CT scanning increased significantly during the pandemic 
    • Qure conducted a deep-learning study using COVID-19 CT scans to screen for actionable nodules


    H. Jackson Brown Jr. once said that nothing is more expensive than a missed opportunity. Lung cancer is perhaps the ideal example, because incidental or early detection via opportunistic screening can play a significant role in successfully combating the malady. 

    Lung cancer accounts for 1 in 5 cancer deaths yearly and is the leading cause of cancer-related death worldwide. It carries the greatest economic and public health burden of all cancers, at approximately $180 billion annually. This is partly because the prognosis for lung cancer is poor compared with other cancers, largely due to the high proportion of cases detected at an advanced stage, where treatment options are limited and the 5-year survival rate is only 5-15%. The global pandemic strained healthcare systems worldwide, also leading to a significant increase in chest CT volumes.  

    “Earlier we would conduct approximately 300 chest CT scans per month. During the pandemic, this number rose to 7000 per month. It put a severe strain on doctors who must review every scan. Qure’s AI solution, qCT, made a significant difference to us by flagging missed actionable nodules on chest CT scans for further follow-ups & investigations.”
    – Arpit Kothari, CEO

    The large volume of scans during the pandemic allowed Qure to conduct a study using a deep-learning approach to opportunistic screening for actionable lung nodules.


    The study used Qure's deep-learning approach to identify lung nodules on CT scans from patients who were scanned for COVID-19 at 5 radiology centers across different cities in India.  

    The scans were sourced from bodyScans, a leading radiology service provider in Central India, and Aarthi Scans & Labs, another major diagnostic provider with 40 full-fledged diagnostic centers across India.

    2,502 scans were randomly selected from chest CTs performed at 5 sites in two specialist radiology chains, Aarthi Scans and bodyScans, during India's 2nd and 3rd waves of COVID-19. They were processed by qCT, Qure's AI capable of detecting and characterizing lung nodules. The radiologist reports of the cases flagged by qCT were investigated for findings suggestive of cancer. Flagged cases for which the nodule was not reported were re-read by an independent radiologist with AI assistance on a web portal. The radiologist was asked to either confirm or reject the flag, rate the nodule for malignancy potential if confirmed, or provide an alternate finding if rejected (see Figure). 


    • 2502 CT scans were processed in total.  
    • Of these, 23.7% were flagged by qCT and re-read by an independent thoracic radiologist.  
    • In 19.4% of these flagged cases, the radiologist agreed that there were unreported actionable nodules.  
    • There were 19 cases where radiologists did not rule out the risk of malignancy and 2 out of these were rated as probably malignant.  
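The percentages above imply approximate absolute counts; since the reported figures are rounded, this back-of-envelope arithmetic is indicative only:

```python
total = 2502
flagged = round(total * 0.237)      # scans flagged by qCT and re-read
confirmed = round(flagged * 0.194)  # flagged cases with unreported actionable nodules
print(flagged, confirmed)           # roughly 593 flagged, 115 confirmed
```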


    In the study, Qure's AI tool assisted in reporting missed nodules, which highlights the importance of opportunistic screening for identifying potential lung cancers early. The need to improve the efficiency and speed of clinical care continues to drive innovations into practice, including AI. With the increasing demand for superior healthcare services and the large volumes of data generated daily from parallel streams, streamlining clinical workflows has become a pressing issue. In our study, by using AI as a safety net, we found 21 chest CTs that should have warranted follow-up management for the patients. 

    “Early detection plays a critical role in successfully treating Lung Cancer. Yet, there are several factors which contribute to the significant risk of these nodules getting missed in chest CT scans. Qure’s AI solution, qCT is immensely useful because it acts as a safety net; another pair of eyes to ensure that we clinicians can identify those patients who need immediate help. Eventually, AI can augment our efforts to defeat the disease.”
    – Dr. Arunkumar Govindarajan, Director, Aarthi Scans & Labs


    AI-Based Gaze Deviation Detection to Aid LVO Diagnosis in NCCT


    Strokes occur when the blood supply to the brain is interrupted or reduced, depriving brain tissue of oxygen and nutrients. It is estimated that a patient can lose 1.9 million neurons each minute a stroke goes untreated, so stroke treatment is a medical emergency that requires early intervention to minimize brain damage and complications. Furthermore, a stroke caused by emergent large vessel occlusion (LVO) requires even prompter identification to improve clinical outcomes.

    Neuro interventionalists need to activate their operating rooms to prepare candidates identified for endovascular therapy (EVT) as soon as possible. As a result, identifying imaging findings on non-contrast computed tomography (NCCT) that are predictive of LVO would aid in identifying potential EVT candidates. We present and validate gaze deviation as an indicator to detect LVO using NCCT. In addition, we offer an Artificial Intelligence (AI) algorithm to detect this indicator.

    What is LVO?

    Large vessel occlusion (LVO) stroke is caused by a blockage in one of the following brain vessels:

    1. Internal Carotid Artery (ICA) 
    2. ICA terminus (T-lesion; T occlusion) 
    3. Middle Cerebral Artery (MCA) 
    4. M1 MCA 
    5. Vertebral Artery 
    6. Basilar Artery

    Image source: ScienceDirect

    LVO strokes are considered one of the more severe kinds of strokes, accounting for approximately 24% to 46% of acute ischemic strokes. For this reason, acute LVO stroke patients often need to be treated at comprehensive centers that are equipped to handle LVOs. 

    Endovascular Treatment (EVT)

    EVT is a treatment given to patients with acute ischemic stroke, in which clots in large vessels are removed, helping deliver better outcomes. EVT evaluation needs to be done as early as possible for patients who meet the criteria and are eligible, as early access to EVT improves patient outcomes. The window in which to perform it usually extends to 16–24 hours in most acute ischemic cases.

    Image Source: PennMedicine

    Goal for EVT

    Since it is important to perform this procedure as early as possible, how do we get there?

    LVO detection on NCCT

    There are three signs to consider for this:

    1. Absence of blood
    2. Hyperdense vessel sign or dot sign
    3. Gaze deviation (often overlooked on NCCT) 

    Gaze deviation and its relationship with acute stroke

    Several studies suggest that gaze deviation is largely associated with the presence of LVO [1,2,3].

    Stroke patients with eye deviation on admission CT have higher rates of disability/death and hemorrhagic transformation. Consistent assessment and documentation of radiological eye deviation on acute stroke CT scan may help with prognostication [4].

    AI algorithm to identify gaze deviation

    We developed an AI algorithm that reports the presence of gaze deviation on an NCCT scan; such algorithms have tremendous potential to aid the triage process. The algorithm was trained on a set of scans to identify the gaze direction and the midline of the brain, and gaze deviation is calculated as the angle between the two. We used this algorithm to identify the clinical sign of ipsiversive gaze deviation in stroke patients with LVO treated with EVT. Using gaze deviation as the sole indicator, the algorithm detects LVO with a sensitivity of 80.8% and a specificity of 80.1%. The test set comprised 150 scans of LVO-positive cases in which thrombectomy was performed.
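    The angle measurement described above can be sketched as follows. This is a minimal illustration only, assuming the gaze direction and brain midline are available as 2D vectors in image space; the function name and example values are hypothetical and do not reflect the published implementation.

```python
import math

def gaze_deviation_deg(gaze_vec, midline_vec):
    """Angle in degrees between the estimated gaze direction and the
    brain midline, both given as 2D (x, y) vectors in image space."""
    dot = gaze_vec[0] * midline_vec[0] + gaze_vec[1] * midline_vec[1]
    norm = math.hypot(*gaze_vec) * math.hypot(*midline_vec)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Illustrative values: midline pointing straight up, gaze tilted laterally.
angle = gaze_deviation_deg((0.34, 0.94), (0.0, 1.0))
```

    A deviation angle above some clinically chosen threshold would then be flagged as a positive gaze-deviation finding.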


    Ipsiversive gaze deviation on NCCT is a good predictor of LVO caused by proximal vessel occlusions (ICA terminus and M1 occlusions), but a poor predictor of LVO caused by M2 occlusion. We report an AI algorithm that can identify this clinical sign on NCCT. These findings can aid in the triage of LVO patients and expedite the identification of EVT candidates.

    We are presenting this AI method at SNIS 2022, Toronto. Please attend our oral presentation on 28th July 2022 at 12:15 PM (Toronto time).


    Upadhyay, Ujjwal; Golla, Satish; Kumar, Shubham; Szweda, Kamila; Shahripour, Reza; Tarpley, Jason (2022). Society of NeuroInterventional Surgery (SNIS).


    Ultrasound AI for Cardiovascular Disease Prevention

    Using ultrasound AI to prevent cardiovascular diseases through early detection of atherosclerosis.

    Key Highlights

    • Cardiovascular diseases (CVD) cause ~32% of deaths worldwide, making them the leading cause of death globally.
    • qVH is the first known solution for AI-guided vascular (carotid) ultrasound that can boost disease prevention using point-of-care ultrasound (POCUS) devices.
    • With qVH, high-risk individuals can be screened by any clinician or remote health worker at any convenient location.
    • Patients can be identified before symptoms appear, allowing for earlier disease management, improved patient outcomes, and reduced costs.

    To maximize the potential of POCUS for disease prevention, we have developed an AI product for vascular health called qVH. It is the first known solution that guides clinicians during a carotid artery scan. Here’s how qVH’s AI works:

    Probe Navigation Guidance: Detects the probe location (CCA, ICA, etc.) and orientation (long/short axis) and recommends the best way to reach the next step of the protocol.

    Plaque Detection & Characterization Guidance: Auto-detects abnormalities in a live scan (video) and quantifies them when a diagnostic-quality image is available.

    Image Quality Guidance: Tracks image quality while scanning, recommends steps to maintain diagnostic quality, and auto-captures images.

    Device Setting Guidance: Detects common errors in ultrasound device settings in Pulse Wave (PW) mode and recommends changes for accurate PW velocity measurements, thereby preventing errors that could lead to misdiagnosis.

    *qVH is not FDA approved/CE marked yet and is currently meant for investigational or research use only.

    The need for qVH

    ~20% of strokes in adults are caused by the narrowing of the carotid artery (see image; bottom). Buildup of plaque (fatty deposits) in the arteries is the root cause for this narrowing; a condition that is known as atherosclerosis (see image; top).

    Plaques can develop in different vessels leading to artery narrowing/clots and hence reduced blood flow to various parts of our body. This leads to critical events such as:

    • Stroke (Carotid artery)
    • Heart Attack (Coronary Artery)
    • Renal Ischemia (Renal Artery), etc.

    Evidence suggests that ~0–3% of the general population have a severe but asymptomatic form of this disease, while ~35% of diabetic patients have carotid plaques with or without symptoms.

    However, preventive measures to reduce disease burden have been limited due to:

    • Lack of clear guidelines
    • Logistical challenges like hospital visits for USG scan
    • Dependence on operator skill for accurate scanning & reporting using USG
    • High cost of vascular USG
    • Lack of sufficiently skilled clinicians to perform vascular USG. 

    Significant developments in the last decade 

    Price: USG machines have become cheaper (by ~80%), more portable (handheld & wireless), and easily accessible with the arrival of point-of-care ultrasound (POCUS).

    Preference: USG is gradually becoming the preferred modality for disease prevention.

    • American Heart Association (AHA) recommends carotid artery duplex scanning in patients with high-risk features undergoing coronary artery bypass graft (CABG) surgery.
    • European Society of Cardiology (ESC) recommends carotid duplex ultrasound for evaluating the extent and severity of carotid stenosis.

    Proof: Real-world evidence suggests benefits in risk-stratification through carotid artery screening.

    • Reduction in mortality rates (~10%), early disease diagnosis (~4.5 yrs), reduction in patient costs (by ~50%). [VIPVIZA]
    • Re-classification of low-risk patients to mid/high-risk based on the presence of carotid plaque. [Swiss AGLA]

    Advantages: Govt. backed reimbursements for using advanced ultrasound technology.

    • US Hospitals can claim NTAP reimbursement of ~$1868 per patient diagnosed using ultrasound guidance technology (Caption Guidance) for CVD prevention.
    • American Medical Association (AMA) has introduced 2 new CPT codes for quantitative USG tissue characterization with ACR proposing an additional $82/scan.

    What is holding back ultrasound-based CVD prevention programs?

    POCUS devices have solved problems related to accessibility and affordability, but they have amplified issues related to operator skill, since these ultra-portable devices can be used in any setting (remote areas, sports grounds, battlefields, etc.) by most clinicians. The existing issues are:

    • Training is needed to perform the ultrasound scan as per the defined protocol.
    • Training is needed to capture diagnostic quality images from a running video (cine loop).
    • There are chances of misdiagnosis due to errors in:
      • Performing ultrasound measurements manually, e.g. plaque length/area, PW PSV/EDV, degree of stenosis.
      • Optimizing device settings manually, e.g. gate size/angle/position, box angle.
      • Detecting & quantifying abnormalities (plaques, stenosis, etc.).
    • Inter-operator variability due to operator dependence for probe navigation, abnormality detection, image capture and for optimization of USG device settings. 
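    To illustrate the kind of manual measurement that is error-prone, degree of stenosis is commonly derived from two diameter measurements using the NASCET convention. The sketch below is a generic illustration of that formula, assuming diameters in millimetres; the function name is hypothetical and this is not qVH's implementation.

```python
def nascet_stenosis_pct(residual_lumen_mm, distal_normal_mm):
    """Degree of stenosis (%) by the NASCET convention:
    (1 - narrowest residual lumen / normal distal diameter) * 100."""
    if distal_normal_mm <= 0:
        raise ValueError("distal diameter must be positive")
    return (1.0 - residual_lumen_mm / distal_normal_mm) * 100.0

# A 2 mm residual lumen against a 5 mm distal segment gives 60% stenosis.
severity = nascet_stenosis_pct(2.0, 5.0)
```

    Small errors in either diameter measurement propagate directly into the reported severity, which is one reason automated quantification helps reduce inter-operator variability.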

    In 2020, POCUS accounted for only ~3–5% of the ultrasound market by revenue. However, global POCUS market revenue is predicted to grow from $2B in 2020 to $4B by 2030.

    qVH has been designed to address all of the issues outlined above. qVH validation has begun at 2 sites in India and Argentina and will expand to 8 sites across Asia, Europe, and North America within the next 3 months. qVH can be used with all existing ultrasound machines; our beta sites use both cart-based and POCUS machines.