
A growing number of doctors across the United States are turning to artificial intelligence tools to help guide clinical decisions, review medical research, and manage patient care—often without the patients even realizing it.
One AI platform in particular, called OpenEvidence, has quickly become one of the most widely used medical AI tools in the country, according to a report by NBC News. The company claims that nearly 65% of US physicians used the platform across some 27 million clinical encounters in April alone.
The rise of this technology is reshaping the way doctors access medical knowledge, diagnose conditions and make treatment decisions, while fueling debate about patient safety, privacy and doctors’ over-reliance on AI systems.
What is OpenEvidence?
OpenEvidence is an AI-based medical search and decision support platform designed specifically for healthcare professionals.
Unlike generic chatbots, the platform searches peer-reviewed medical journals, treatment guidelines and clinical databases to answer doctors’ questions in real time.
NBC News reported that doctors use it to:
– Review treatment options
– Identify side effects of medications
– Summarize medical research
– Prepare for medical licensing exams
Doctors access the service through a website or mobile app after verifying their professional details using a government-issued medical identification number.
“Our commitment is that core OpenEvidence will always be free for users,” OpenEvidence CEO Daniel Nadler told NBC News.
Rapid adoption among physicians
Doctors interviewed by NBC News described OpenEvidence as almost ubiquitous in hospitals and clinics.
“Everybody uses it,” said Dr. Anupam Jena, a physician at Massachusetts General Hospital and a professor at Harvard.
“Its growth has really been exponential,” he added.
According to news reports, OpenEvidence is now used by roughly 650,000 physicians in the United States and another 1.2 million internationally.
Many doctors have reported that the tool helps them quickly answer specialized medical questions that would otherwise require lengthy searches in journals or traditional reference systems.
Dr. Paul Sax of Brigham and Women’s Hospital told the outlet that OpenEvidence’s search capabilities “border on the miraculous.”
“The process of finding answers is seamless,” Sax said.
How doctors use this tool
NBC News spoke with doctors across a variety of specialties who described using OpenEvidence during actual patient encounters.
One doctor reportedly used it to determine whether a patient’s dangerously low potassium levels were a normal side effect of medication.
Another doctor used the platform to confirm that a CT scan – not an X-ray – was needed to properly diagnose a spinal fracture.
The system generates answers by summarizing published medical research and linking directly to peer-reviewed studies and clinical guidelines.
Many physicians have reported that this tool saves significant time compared to traditional medical reference platforms such as UpToDate.
Fears of AI errors and ‘hallucinations’
Despite its popularity, experts warned NBC News that OpenEvidence is not infallible.
Like other artificial intelligence systems, it may occasionally produce inaccurate information, exaggerate conclusions from limited studies, or generate incomplete answers.
Some doctors interviewed said they routinely verify studies cited by the AI before acting on its recommendations.
“I usually click through to the referenced documents,” said emergency room physician Dr. Kassel Galata.
Others have expressed concern that junior doctors and medical students could become too dependent on AI tools and lose critical diagnostic reasoning skills over time.
Privacy and data concerns
OpenEvidence claims to be compliant with HIPAA, the US federal health privacy law, and has safeguards in place to handle protected patient information.
However, some hospital systems remain cautious.
NBC News reported that MaineHealth is currently advising doctors not to enter protected health information into the platform.
Several doctors said they include patient details such as age, gender and medical history in their searches while avoiding names or other direct identifiers.