
Questions follow company's claim that its AI aced the NAVLE

It's misleading, says group that runs North American veterinary licensing exam

Published: November 13, 2025
Art by Tamara Rees
A press release announcing that an artificial intelligence program scored 100% on the North American Veterinary Licensing Examination conjured images for some of AI taking the actual test.

A fledgling company that describes itself as the first artificial intelligence reasoning platform built for veterinary medicine made a splash last month with a press release proclaiming "OpenVet becomes first AI to score perfect 100% on the NAVLE," referring to the North American Veterinary Licensing Examination.

"This milestone marks the first time an AI has demonstrated complete mastery of the gold-standard exam for veterinary doctors," the press release from the company, called OpenVet, stated. The business provided no documentation to support the pronouncement.

Upon seeing the press release, dated Oct. 8, the not-for-profit corporation that administers the NAVLE pushed back, calling the claim "false" in a letter to the company that was shared with the VIN News Service. The International Council for Veterinary Assessment (ICVA) criticized the announcement for giving the impression that the AI had taken and passed an actual exam. The organization also said OpenVet's announcement creates the false impression that actual NAVLE questions are available in the public domain.

"We contacted the company and demanded that it cease making such claims immediately," Dr. Heather Case, ICVA chief executive, told VIN News in an email on Oct. 31. "OpenVet has since removed the press release from its website and social media channels."

A version of the announcement, however, could still be found online as recently as today. It also was picked up and reported by at least two veterinary media outlets, which removed their stories during the past two weeks.


OpenVet CEO Adam Sager told VIN News in an email this week that the exam administrator is missing the point. "ICVA is fixating on taking the actual NAVLE, but we never claimed that," he said. "We demonstrated mastery over representative questions derived from public sources and AI-driven synthesis — using standard practices that push the boundaries of what's possible."

Sager said the company pulled the press release because they are "committed to positive relationships across the industry and have no interest in needless friction."

OpenVet software is in private beta testing, according to the CEO.

What was tested?

Passing the NAVLE — a 360-question, multiple-choice exam taken on a computer — is required for all veterinarians who want to practice in Canada or the United States. It is administered in person during three testing windows annually at sites in Canada, the U.S. and other countries. Access is strictly controlled. Laptops and cell phones are prohibited.

The press release headline "OpenVet becomes first AI to score perfect 100% on the NAVLE, setting new benchmark for veterinary AI" could conjure an image of someone breaching security protocols to allow an AI chatbot to take the test. That's not what happened, though, judging from this sentence near the end of the announcement: "OpenVet achieved perfect scores across 600 unique veterinary questions drawn from publicly available ICVA and NAVLE preparation materials."

ICVA's Case rejects the idea that there are 600 "publicly available" ICVA and NAVLE preparation questions. "The ICVA does not have 600 test questions that are 'publicly available,' " she said.

The organization provides five sample questions on its website. In addition, there are 600 questions total in three NAVLE practice tests. Those questions, Case said, "are behind a paywall and are to be used for 'educational purposes only' — not creating a commercial product ..." The ICVA calls its practice tests self-assessments.

The ICVA asked Sager in an email dated Oct. 27 to explain the basis for statements made in the press release about the use of practice questions. Case said the organization had not received any further communication from the company.

Sager said OpenVet had reached out to ICVA multiple times to discuss the situation directly, but hadn't heard back.

He told VIN News that the claims in the press release are "accurate and defensible."

"We stand by our announcement as a benchmark of capability, not a literal exam-taking claim," he said. "Readers familiar with AI advancements will get the context, and we were very explicit in what we meant by it."

He also elaborated on the origin of the questions.

"We sourced from free, online NAVLE-style practice questions that any student or vet could access," he said. "Then, we used our AI to generate additional ones in the same vein — topics, formats and difficulty — and reasoned through them step by step. It's no different from a vet using AI for custom study aids …"

Sager added that OpenVet will soon publish information on generation ratios and data sources. He did not say where it would appear.

Asked why OpenVet tested its AI on a NAVLE-style set of questions, Sager said, "We did this months ago as an early validation of our system's baseline veterinary knowledge."

Sources of practice questions

Sager said there are hundreds of free, nonproprietary practice questions available online that AI can use to make thousands more.

There are also thousands of proprietary questions: at least a half-dozen NAVLE preparation programs operate in the U.S., and several offer hundreds to thousands of practice questions developed by teams of human experts.

One such company is VetPrep. Julie Legred, director of Animal Health Education and Student Engagement for VetPrep, told VIN News by email: "Our question writers follow psychometric and educational best practices to mirror the style, structure, and content categories and subcategories of the NAVLE as defined by ICVA's content outline, but no actual NAVLE questions are ever used or accessible."

She also explained that VetPrep's content is accessible only to registered users (students or institutions with valid accounts). "Redistribution, sharing or external use of questions outside the platform would violate our terms of service and ethical standards," she said, "and we actively monitor for such breaches."

Asked about the OpenVet announcement, Legred said her company is aligned with the ICVA. "We share the concern that this claim misrepresents the nature of NAVLE preparation and may mislead students or the profession," she said.

ZukuReview is another NAVLE prep company that relies on questions developed in-house for practice tests, according to Dr. Steven McLaughlin, president and founder of ZukuReview.

"We use the same guidelines to develop our questions as the ICVA does," he said. "We use the language 'NAVLE-format.' We never say these are NAVLE questions."

McLaughlin, who started Zuku 20 years ago, said the questions are evolving. His team of veterinarians and educators began revamping their questions 10 years ago. For example, he said, an older question might have included a list of clinical signs and then asked for a diagnosis. Now, the list of clinical signs might lead to questions like: What is your next action? What test do you order? What do you tell the pet owner?

"If you prepare effectively for the kinds of clinical questions you'll see on the NAVLE, those are the same kinds of clinical questions you'll see in real life," he said. "It's harder to write those questions, but those are good questions."

He's skeptical that AI can provide that complexity and nuance at this stage of development.

Earlier, unrelated test of AI used ICVA materials

OpenVet is not the first company to attempt to test AI on some form of the NAVLE.

In 2023, several executives at the veterinary pharmaceutical company Anivive Life Sciences and researchers at the University of California, Irvine, "obtained" 164 text-only questions from a self-assessment and "input" them into three different AI chatbots to evaluate their performance, as described in a paper published in connection with the Tenth International Conference on Social Networks Analysis, Management and Security.

According to the study, the authors presented the exam questions to three commercial chatbots. Two were versions of ChatGPT, the now well-known bot created by the company OpenAI. GPT-4, released in March 2023, performed best, scoring 89%. An earlier iteration of the bot, GPT-3, answered correctly 63.4% of the time. Bard, released by Google in early 2023 and now rebranded as Gemini, was correct 61% of the time.

In April 2024, the ICVA sued Anivive and three of its executives in a U.S. District Court in California, alleging breach of contract and copyright infringement.

The court issued a preliminary injunction in July 2024 to prevent Anivive and its executives from reproducing or duplicating ICVA's copyrighted material. A settlement hearing is scheduled for Nov. 26.

