AI-Based Gaze Deviation Detection to Aid LVO Diagnosis in NCCT


Strokes occur when the blood supply to the brain is interrupted or reduced, depriving brain tissue of oxygen and nutrients. It is estimated that a patient loses 1.9 million neurons each minute that a stroke goes untreated. Stroke treatment is therefore a medical emergency that requires early intervention to minimize brain damage and complications. Furthermore, a stroke caused by an emergent large vessel occlusion (LVO) demands even more prompt identification to improve clinical outcomes.

Neurointerventionalists need to activate their operating rooms as soon as possible to prepare candidates identified for endovascular therapy (EVT). As a result, identifying imaging findings on non-contrast computed tomography (NCCT) that are predictive of LVO would help flag potential EVT candidates. We present and validate gaze deviation as an indicator for detecting LVO on NCCT. In addition, we offer an Artificial Intelligence (AI) algorithm to detect this indicator.

What is LVO?

Large vessel occlusion (LVO) stroke is caused by a blockage in one of the following brain vessels:

  1. Internal Carotid Artery (ICA) 
  2. ICA terminus (T-lesion; T occlusion) 
  3. Middle Cerebral Artery (MCA) 
  4. M1 MCA 
  5. Vertebral Artery 
  6. Basilar Artery

Image source: ScienceDirect

LVO strokes are considered one of the more severe kinds of strokes, accounting for approximately 24% to 46% of acute ischemic strokes. For this reason, acute LVO stroke patients often need to be treated at comprehensive centers that are equipped to handle LVOs. 

Endovascular Treatment (EVT)

EVT is a treatment given to patients with acute ischemic stroke. In this procedure, clots in large vessels are removed, helping deliver better outcomes. EVT evaluation needs to happen as early as possible for patients who meet the eligibility criteria, since earlier access to EVT improves patient outcomes. In most acute ischemic cases, the window to perform EVT extends up to 16 to 24 hours from symptom onset.

Image Source: PennMedicine

Goal for EVT

Since it is important to perform this procedure as early as possible, how do we get there?

LVO detection on NCCT

There are three findings to consider for this:

  1. Absence of blood
  2. Hyperdense vessel sign or dot sign
  3. Gaze deviation (often overlooked on NCCT) 

Gaze deviation and its relationship with acute stroke

Several studies suggest that gaze deviation is strongly associated with the presence of LVO [1,2,3].

Stroke patients with eye deviation on admission CT have higher rates of disability/death and hemorrhagic transformation. Consistent assessment and documentation of radiological eye deviation on acute stroke CT scan may help with prognostication [4].

AI algorithm to identify gaze deviation

We developed an AI algorithm that reports the presence of gaze deviation on a given NCCT scan; such algorithms have tremendous potential to aid the triage process. The algorithm was trained on a set of scans to identify the gaze direction and the midline of the brain, and gaze deviation is calculated as the angle between the two. We used this algorithm to identify ipsiversive gaze deviation in stroke patients with LVO treated with EVT. On a test set of 150 scans containing LVO-positive cases in which thrombectomy was performed, the algorithm achieved a sensitivity of 80.8% and a specificity of 80.1% for detecting LVO using gaze deviation as the sole indicator.
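The angle measurement itself is standard vector geometry: once the algorithm produces a gaze-direction vector and a midline vector, their angle follows from the dot product. The sketch below is a minimal, hypothetical illustration; the 2D axial-plane coordinates, the example vectors, and the function name are our assumptions for exposition, not Qure's actual implementation.

```python
import math

def gaze_deviation_angle(gaze_vec, midline_vec):
    """Angle in degrees between the gaze direction and the brain midline."""
    dot = sum(g * m for g, m in zip(gaze_vec, midline_vec))
    norm_g = math.sqrt(sum(g * g for g in gaze_vec))
    norm_m = math.sqrt(sum(m * m for m in midline_vec))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_g * norm_m)))
    return math.degrees(math.acos(cos_theta))

# Hypothetical vectors in axial-plane image coordinates:
midline = (0.0, 1.0)  # anterior-posterior midline
gaze = (math.sin(math.radians(20)), math.cos(math.radians(20)))  # gaze rotated 20 degrees

angle = gaze_deviation_angle(gaze, midline)
print(round(angle, 1))  # → 20.0
```

In practice a threshold on this angle (and its direction relative to the affected hemisphere, for ipsiversive deviation) would turn the continuous measurement into a binary flag for triage.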


Ipsiversive Gaze deviation on NCCT is a good predictor of LVO due to proximal vessel occlusions in ICA terminus and M1 occlusions. However, it is a poor predictor of LVO due to M2 occlusion. We report an AI algorithm that can identify this clinical sign on NCCT. These findings can aid in the triage of LVO patients and expedite the identification of EVT candidates. 

We are presenting this AI method at SNIS 2022, Toronto. Please attend our oral presentation on 28th July 2022 at 12:15 PM (Toronto time).


Upadhyay, Ujjwal; Golla, Satish; Kumar, Shubham; Szweda, Kamila; Shahripour, Reza; Tarpley, Jason (2022). Society of NeuroInterventional Surgery (SNIS).


Time is Brain: AI helps cut down stroke diagnosis time in the Himalayan foothills

Stroke is a leading cause of death, and stroke care is limited by the availability of specialized medical professionals. In this post, we describe a physician-led stroke unit model established at Baptist Christian Hospital (BCH) in Assam, India. Enabled with qER, Qure's AI-driven automated CT brain interpretation tool, BCH can quickly and easily determine next steps in treatment and examine the implications for clinical outcomes.

qER at a Stroke unit

Across the world, stroke is a leading cause of death, second only to ischemic heart disease. According to the World Stroke Organization (WSO), 13.7 million new strokes occur each year, and there are about 80 million stroke survivors globally. In India, as per the Health of the Nation's States Report, stroke has an incidence rate of 119 to 152 per 100,000 and a case fatality rate of 19% to 42% across the country.

Catering to tea plantation workers in and around the town of Tezpur, the Baptist Christian Hospital, Tezpur (BCH) is a 130-bed secondary care hospital in the northeastern Indian state of Assam. The hospital is a unit of the Emmanuel Hospital Association, New Delhi. From humble beginnings offering basic dispensary services, it grew to become one of the best healthcare providers in Assam and is heavily involved in academic and research work at both national and international levels.

Nestled below the Himalayas and interspersed with large tea plantations, Assam's indigenous population and tea garden workers show a prevalence of hypertension, the largest single risk factor for stroke, reportedly between 33% and 60.8%. Anecdotal reports and hospital-based studies indicate a huge burden of stroke in Assam, a significant portion of which is addressed by Baptist Hospital. A recent study showed that hemorrhagic strokes account for close to 50% of the cases here, compared to only about 20% of strokes in the rest of India.

Baptist Christian Hospital

Baptist Christian Hospital, Tezpur. Source

Challenges in Stroke Care

One of the biggest obstacles in stroke care is the lack of awareness of stroke symptoms and the late arrival of patients, often at smaller peripheral hospitals that lack the necessary scanning facilities and specialists, leading to delays in effective treatment.

The doctors and nurses of the Stroke Unit at BCH, Tezpur were trained online by specialist neurologists, who in turn trained the rest of the team on a protocol that included stroke clinical assessment, monitoring of risk factors and vital parameters, and other supportive measures such as swallow assessment, in addition to starting the rehabilitation process and advising on long-term care at home. A study done at Tezpur indicated that after the Stroke Unit was established, there was a significant improvement in quality of life, along with a reduction in deaths compared to the pre-Stroke Unit phase.

This is a crucial development in stroke care, especially in low- and middle-income countries (LMICs) like India: it strengthens the smaller peripheral hospitals that lack specialists and are almost always the first stop for patients in emergencies like stroke.

Stroke pathway barriers

This representative image details the acute stroke care pathway. Source

The guidelines for management of acute ischemic stroke involve capturing a non-contrast CT (NCCT) study of the brain, along with CT or MRI angiography and perfusion, and thrombolysis (administration of rtPA, tissue plasminogen activator) within 4.5 hours of symptom onset. Equipped with a CT machine and teleradiology reporting, the physicians at BCH provide primary intervention for these stroke cases after a basic NCCT and may refer them to a tertiary facility, as applicable. They follow a telestroke model: in cases where thrombolysis is required, the ER doctors consult with neurologists at a more specialized center, sharing the NCCT images via phone-based media like WhatsApp for decision making, while severe cases of head trauma are referred to faraway tertiary facilities for further management. Studies of a physician-based Stroke Unit model in Tezpur have shown an improvement in treatment outcomes.

How is qER helping BCH with stroke management?

BCH and Qure have worked closely since the onset of the COVID-19 pandemic, especially at a time when confirmatory RT-PCR kits were in short supply. qXR, Qure's AI-aided chest X-ray solution, proved to be a beneficial addition for identifying COVID-19 suspects, especially asymptomatic ones, and managing their treatment, beyond its role in comprehensive chest screening.

qER messages


In an effort to improve the stroke management and care workflow at the Baptist hospital, qER, an FDA-approved and CE-certified software that can detect 12 abnormalities, was deployed. These abnormalities, including five types of intracranial hemorrhage, cranial fractures, mass effect, midline shift, infarcts, hydrocephalus, and atrophy, are reported within one to two minutes of the CT being taken. qER has been trained on CT scans from more than 22 different CT machine models, making it hardware agnostic. In addition to offering a pre-populated radiology report, the HIPAA-compliant qER solution is also able to label and annotate the abnormalities on the key slices.

Since qER integrates seamlessly with the existing technical framework of the site, deployment of the software was completed in less than an hour, along with the setup of a messaging group for the site. Soon after, within minutes of a head CT being taken, qER analyses were available in the PACS worklist, with messaging alerts sent to the physicians' and medical team's mobile phones for review.

The aim of this pilot project was to evaluate how qER could add value to a secondary care center where the responsibility for determining medical intervention falls on physicians working from a teleradiology report that reaches them in 15 to 60 minutes. As is well established in stroke care, every minute saved is precious.

Baptist Christian Hospital

Physician using qER

At the outset, there were apprehensions among the medical team about the performance of the software and its efficacy in improving the workflow. However, this is what they had to say about qER after two months of operation:

“qER is good as it alerts the physicians in a busy casualty room even without having to open the workstation. We know if there are any critical issues with the patient” – Dr. Jemin Webster, a physician at Tezpur.

He goes on to explain how qER helps draw the attention of emergency room doctors and nurses to critical cases that need intervention or, in some instances, referral. It boosts the confidence of the treating doctors in making the right judgement in the clinical decision-making process. It also helps direct the teleradiology support's attention to the flagged critical scans, as well as to scans of stroke cases that are within the window period for thrombolysis. Dr. Jemin also sees potential for qER in the workflow of high-volume, multi-specialty referral centers, where coordination between multiple departments is required.

The Way Ahead

A technology solution like qER can reduce the time to diagnosis in emergencies like stroke or trauma and boost the confidence of the Stroke Unit, even in the absence of specialists. The qER platform can help stroke neurologists in telestroke settings access high-quality scans even on their smartphones and guide the treating doctors on thrombolysis and further management. Scaling this technology up to stroke units and mobile stroke units (MSUs) can empower peripheral hospitals to manage acute stroke, especially in LMICs.

We intend to conduct an observational time-motion study to analyze the door-to-needle time with qER intervention via instant reports and phone alerts, as we work through the required approvals. Also in the pipeline is a comparison of qER reporting against the radiologist report as ground truth, along with a comparison of clinical outcomes and these parameters before and after the introduction of qER into the workflow. We also plan to extend the pilot project to Padhar Mission Hospital, MP and the Shanthibhavan Medical Center, Simdega, Jharkhand.

The Qure team is also working on a comprehensive stroke platform aimed at improving stroke workflows in LMICs and low-resource settings.


Interview with Dr Mustafa Biviji – Artificial Intelligence and the Future of Radiology

With close to 30 years of radiology experience, Dr Biviji is an eminent radiologist based in Nagpur. He is an authority on developing deep learning solutions to radiology problems and works closely with early-stage healthcare technology innovators.

Q&A with Dr Mustafa Biviji on artificial intelligence in radiology.

How do you see Artificial Intelligence in radiology evolving in the future?

In the future, radiologists and radiographers could be replaced by intelligent machines. CT and MRI machines of the future would be embedded with AI programs capable of modifying scanning protocols on the fly, depending on the disease process initially identified. Highly accurate automated reports would be produced almost instantly. Machines would prognosticate, identify as yet unknown imaging patterns associated with diseases and may also uncover new diseases.

There will be objectivity to the radiology reports with personal bias of the radiologist no longer a factor. Remote and isolated areas of the world will have an equal access to the best diagnostic information. Coupled with this would be better machine navigation during surgeries or probably even complete robotic surgery based on the imaging patterns identified with AI. Through it all, I believe that radiologists will continue to reinvent themselves.

Photo of Dr Mustafa Biviji with quote

How far away is the industry from realizing these goals, and how does Qure compare to similar solutions that you may have seen/ implemented?

These are initial days and the role of AI in Radiology is currently restricted to assistance. While most solutions talk about simplifying workflows, Qure to the best of my knowledge is the only one talking about automated reports with a remarkable degree of accuracy, thereby opening up exciting new prospects for the future. While the perfect radiology AI may be far in the future, at least a promising beginning has been made.

How does Qure help in your radiology practice?

Qure's solutions in radiology now include automated head CT reports, particularly for trauma and strokes. Reporting for these conditions would earlier have necessitated either a sleepless night or a delay in reporting. Automated reports can now be used to assist residents, and help can be sought in case of a doubt or discrepancy. Delayed radiology reports will soon be a thing of the past.

How do you think Qure’s Chest X-ray solution can help or is helping radiologists in their practice?

Qure’s chest X-ray solution presently is best targeted to a general practitioner in a remote or rural location interpreting his own chest radiographs. Qure CXR could help provide radiologist-level accuracy, previously only available at the larger centers in the bigger cities. Better radiology would lead to better treatment outcomes and obviate the need for patients to travel long distances to seek a diagnosis.

How do you think young radiologists should prepare for AI?

AI in the future will radically modify the role of a radiologist. I predict a significant blurring of the roles of the diagnostic radiologist, surgeon, and physician. The radiologist of the future will have to stop behaving like an unseen backroom doctor and reinvent themselves to participate actively in patient management. Image-assisted robotic surgeries and integrated patient care are not too far off in the future.


Interview with Dr Bharat Aggarwal – Artificial Intelligence and the Radiology Workflow

Dr. Bharat Aggarwal is the Director of Radiology Services at Max Healthcare. A distinguished third generation Radiologist, he was previously the promoter and lead Radiologist at Diwan Chand Aggarwal Imaging Research Centre, New Delhi. Dr. Aggarwal is an alumnus of Tata Memorial Hospital, Mumbai, and UCMS, Delhi.

Q&A with Dr. Bharat Aggarwal on artificial intelligence in radiology.

How do you see Artificial Intelligence in radiology evolving in the future?

There is going to be a significant role of AI in the field of imaging, and it will form a critical part of service delivery. There are many gaps in the existing model of service offerings. Some examples where AI will be commonly used include triaging and highlighting critical cases (reporting is done sequentially and a diagnosis requiring urgent intervention could be “at the bottom of the pile”); early diagnosis (pixel resolution of AI vs the human eye); pre-reading to take care of resource crunch, automation in comparisons, objectivization of disease & response to treatment; quality assurance etc.

Photo of Dr Bharat Aggarwal with quote

How far away is the industry from realizing these goals, and how does Qure compare to similar solutions that you may have seen/ implemented?

10-15 years.

How do you think the Chest X-ray solution can help radiologists in their practice?

Triaging normal from abnormal; building efficiency; quality assurance.

What is your advice to young radiologists who are just getting started on their career? How should they think about adopting AI in their practice and should they be doing anything differently to succeed as a radiologist 10-20 years from now?

Yes, adopting AI is a must. Radiologists will not be irrelevant in the world of machines. The role of the radiologists will be to direct research towards clinical gaps, validate AI diagnosis and focus on new problems that will emerge in the AI world. They need to treat AI with healthy competitiveness and build their careers with AI on their team. The opposition is the disease. The goal is health for all.


Interview with Dr Shalini Govil – Training Artificial Intelligence to Read Chest X-Rays

Dr Shalini Govil is the Lead Abdominal Radiologist, Senior Advisor and Quality Controller at the Columbia Asia Radiology Group. Through her years as Associate Professor at CMC Vellore, Dr Shalini Govil has taught and mentored countless radiologists and medical students and continues to do so. Nowadays, she is busy training a new student – an Artificial Intelligence system that is learning to read Chest X-rays. Dr Govil is an accomplished researcher having published 30+ papers, and has won numerous awards for her contributions to Radiology.

Q&A with Dr Shalini Govil on artificial intelligence in radiology.

How do you see Artificial Intelligence in radiology evolving in the future?

Given the accuracy levels being reported across the world for deep learning algorithm diagnosis on imaging, I am sure AI has the potential to emerge as a strong diagnostic tool in the clinical armamentarium.

The only factor that could stand in the way of this progress is the very human fear of being “replaced”, “overtaken” or “made redundant”.

I feel that any crossroad like this in the practice of Medicine is best approached from the point of view of the patient and not from the viewpoint of commerce or market forces.
Medicine is not a “job”…Medicine is “healing”…
Medicine…is a patient trusting you at a vulnerable moment in his/her life.

From that standpoint, it is very simple – if AI is as accurate as a Senior Radiology Resident or even more accurate, let the patient have the benefit of a timely and accurate DRAFT report that can be validated by a physician or radiologist. This would certainly be better than the current practice in many parts of the world where the x-ray is not formally reported by a trained Radiologist or even a Trainee Radiologist.

Photo of Dr Shalini Govil with quote

How far away is the industry from realizing these goals, and how does Qure compare to similar solutions that you may have seen/ implemented?

Even as researchers race to study AI performance on increasingly complex pathology, widespread, parallel clinical testing is the need of the hour, both to build confidence in radiology AI and to obtain the critical mass that will allow the threshold of human fear to be crossed. Qure has come up with a way to “see through the computer’s eyes”. I think this will be a game changer on the road to building confidence in AI. Whenever I have discussed my work on the use of AI in chest x-ray diagnosis with doctors, they tend to get a glassy look that says, “This is impractical…it’s never going to come into clinical use…”

But the minute they see a chest x-ray with the heatmap shading the abnormality that the AI actually “picked up”…the glassy look turns into one of wonder…because it is exactly what the doctor sees himself! I find this happens with lay people as well, even high school kids!

How do you think the Qure CXR solution can help or is helping radiologists in their practice?

Once the algorithm has been trained on a large number of chest x-rays and robust clinical testing has demonstrated a low false negative rate, I think the best use of the CXR solution would be to run all chest x-rays in our practice through the algorithm and obtain a DRAFT report to ease validation by a Radiologist.

What is your advice to young radiologists who are just getting started on their career? How should they think about adopting AI in their practice and what should they learn to succeed as a radiologist 10-20 years from now?

I would tell young Radiologists that help is on the way…that the days of struggling without a mentor when viewing a difficult case are over…that very soon, an “App” will help them derive a keyword tag to the image that has confounded them and that this keyword will then enable them to research and read and provide an articulate and lucid differential diagnosis.

What should they learn?
They should learn Radiology of course…as in-depth and in-breadth as has ever been done…and they possibly can….
But they should also learn the basics of neural networks, deep learning algorithms and keep abreast of evolving AI.
Oh! and another thing – it might be a good idea to brush up on their 12th grade calculus!


Interview with Dr Bhavin Jankharia – Radiologist Perspective on AI

Dr Bhavin Jankharia is one of India’s leading radiologists, former president of the IRIA as well as a renowned educator, speaker and writer. Dr Jankharia’s radiology and imaging practice, “Picture This by Jankharia”, is known for being an early adopter of innovation and for pioneering new technologies in radiology.

Q&A with Dr Jankharia on artificial intelligence in radiology.

How do you see Artificial Intelligence in radiology evolving in the future?

AI is here to stay and will be a major factor to shape radiology over the next 10 years. It will be incorporated in some form or the other in protocols and workflows across the spectrum of radiology work and across the globe.

Photo of Dr Bhavin Jankharia with quote

You have been an early adopter of AI in your practice. What would your advice be to other institutions globally who are considering incorporating AI into their workflow?

It is about questions that need to be answered. At present, AI is good at solving specific questions or extracting numerical data from CT scans of the abdomen and pelvis, with respect to bone density, aortic size, etc. Wherever there is a need for such issues to be addressed, AI should be incorporated into those specific workflows. We still haven’t gotten to the stage where AI can report every detail in every scan, and that may actually never happen.

It may never happen that AI can do what a radiologist does, but looking at the near term (say, the next 3-5 years), what do you think AI can achieve? (For example, what tasks can it automate? Can it improve reporting accuracy?) Where will the biggest value addition be?

Its basic value addition will be to take away drudge work: automated measurements, automated checking of densities and enhancement patterns, perhaps even automated follow-up measurements of abnormal areas already marked out on the first scans, and the like.

Now that you have experienced AI in practice, how would you differentiate this technology from traditional CAD solutions that have been around for a while?

AI learns much faster and the basic approach is different. To the end user though, it matters not, does it, how we get the answer we want…

You have seen several AI companies in Radiology. What should they be doing differently to reach this goal?

At present, all of AI is problem-solving based. And since each company deals with different problems based on the doctors they work with, this approach is fine. The company that figures out a way to handle a non-problem based approach to basic interpretation of scans, the way radiologists do, will have a head-start.

How do you think Qure’s solutions can help or are helping radiologists in their practice?

They are slowly saving time and helping radiologists work smarter and better.

What is your advice to young radiologists who are just getting started on their career? How should they think about adopting AI in their practice and should they be doing anything differently to succeed as a radiologist 10-20 years from now?

I don’t think radiologists per se have to do anything about AI, unless they want to change track and work in the field of AI from a technical perspective. AI incorporation into workflow will happen anyway and like all changes to radiology workflow over the decades, it will become routine and a way of life. They don’t really need to do anything different, except be willing to accept change.


Interview with Dr Vidur Mahajan – Artificial Intelligence and Radiology

As Associate Director at Mahajan Imaging, Dr Vidur Mahajan oversees scientific and clinical research, and pioneers the application of new techniques in radiology. Having collaborated with multiple healthcare tech companies, he is a thought leader in the AI-radiology space. Both doctor and entrepreneur, Dr Vidur Mahajan is passionate about improving access and affordability of high-end medical care across the developing world.

Q&A with Dr Vidur Mahajan on artificial intelligence in radiology.

How do you see Artificial Intelligence in radiology evolving in the future?

AI will keep playing a more important role in radiology as time progresses. The benefits, the way I see them, would be around 2 dimensions: quality and efficiency.

  1. Quality: There are two main ways in which AI will affect the quality of diagnostics, and hence care, delivered to patients:
    • Improving the quality of current radiology practices, i.e. improving the accuracy of radiologists, and bridging the divide between junior / non-specialised radiologists and sub-specialist experienced doctors.
    • Showing completely new features / findings that radiologists were unable to see before. This includes the development of quantitative biomarkers and new image acquisition methods.
  2. Efficiency: This is the most obvious, low-hanging fruit for most radiology AI companies. The premise is simple – a radiologist on average, might report 20 MRI scans a day today; AI can help take this up to 50.

How far away is the industry from realizing this future, and how does Qure compare to similar solutions that you may have seen/ implemented?

We work with several AI companies today and are covered by a plethora of non-disclosure agreements, so I won’t be able to comment on how Qure compares to other solutions. That said, Qure’s pragmatic and user-oriented approach to developing algorithms is a definite plus. Given that Qure is backed by one of India’s best analytics companies, I would be surprised if they don’t end up taking the radiology AI world by storm! On the industry, I strongly feel that it will not be a “sudden” transition. The industry will enter this “AI future” in a step-by-step, incremental way – we may not even notice, and suddenly we’re surrounded by AI!

How do you think Qure’s products can help or are helping radiologists in their practice? Would you be able to indicate specific benefits, such as increased precision or time saved?

Qure’s chest X-ray algorithm has the potential to change the entire paradigm of diagnostics in the developing world. The chest X-ray is the most commonly prescribed radiology investigation, and every day, thousands of X-rays go unreported in the developing world. Qure has the potential to give these patients a proper report and hence impact their treatment outcomes in a very positive way.

Photo of Dr Vidur Mahajan with quote

How well does the Qure solution integrate with your workflow? How can this be made better?

I think Qure has this nailed down completely. The strategy of integrating with OsiriX (the world’s most widely used DICOM viewer) through an extremely user-friendly and straightforward plugin enables instant reach to radiologists all over the world. Additionally, Qure’s ability to automatically email the AI system’s report to its users should also increase its usability across the world.

What are your expectations from AI in radiology software and how well has Qure met these?

While I think I have answered this in Q-1, the primary and most important expectation from AI in radiology is a product that works. Simple. I would be very guarded about promoting a product with less than 90% accuracy, since I would not want potential users to form a negative opinion about Qure’s product based on their initial experience. The stakes in healthcare are too high, and the reliability of an algorithm is paramount. Qure is definitely a leader in the AI space globally, and I am very proud that an Indian company is taking the lead in this.