
Anthropic introduced a new application programming interface (API) capability on Thursday that lets developers ground responses generated by artificial intelligence (AI) models. The feature, called Citations, allows developers to tie the output of the Claude family of AI models to source documents, with the aim of improving the reliability and accuracy of AI-generated responses. The AI firm has already provided the feature to companies such as Thomson Reuters (for its CoCounsel platform) and Endex. Notably, the feature is available at no additional cost.
Anthropic introduces new grounding feature
Generative AI models are often prone to errors and hallucinations. This happens because they draw on vast datasets to produce a response to a user query. Adding web search to the equation makes it even harder for large language models (LLMs) that rely on relatively basic retrieval-augmented generation (RAG) mechanisms to avoid surfacing inaccurate information.
AI firms also build professional tools that restrict an LLM's data access to improve accuracy and reliability. Examples include Gemini in Google Docs, the AI-powered writing assistants on Samsung and Apple smartphones, and the PDF analysis tools in Adobe Acrobat. However, because developers build a wide range of tools with differing data requirements, such restrictions cannot be baked into the API itself.
To solve this problem, Anthropic has introduced the Citations feature in its API. Detailed in a newsroom post, the feature grounds Claude's replies in the source documents. This means the Claude AI models can provide detailed references to the exact sentences and paragraphs used to generate an output. The AI firm claims the tool will make AI-generated responses easier to verify and more trustworthy.
With Citations, users can add source documents to the context window, and Claude will automatically cite the relevant passages of those sources in its output. As a result, developers no longer have to rely on complex prompts instructing Claude to include source information, an approach the company says is both inconsistent and cumbersome.
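In practice, a developer enables the feature per document when building the request. Below is a minimal sketch using the anthropic Python SDK; the document text, title, and question are illustrative placeholders, and the exact field names shown are an assumption about the document content-block shape rather than a definitive reference.

```python
# Minimal sketch: pass a source document with citations enabled and read
# back the cited passages. Assumes the anthropic Python SDK is installed
# and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # The source document is added to the context window...
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "Quarterly revenue grew 12 percent year over year.",
                    },
                    "title": "Q3 financial summary",  # illustrative placeholder
                    # ...and citations are switched on for this document.
                    "citations": {"enabled": True},
                },
                {
                    "type": "text",
                    "text": "How much did revenue grow?",
                },
            ],
        }
    ],
)

# Text blocks in the reply carry citations pointing back to the exact
# passages of the source document they were grounded in.
for block in response.content:
    if block.type == "text":
        print(block.text)
        for citation in getattr(block, "citations", None) or []:
            print("  cited:", citation.cited_text)
```

The key design point is that citation behaviour is switched on per document in the request, rather than coaxed out of the model through prompt wording.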
Anthropic claims that with Citations, developers will be able to easily build AI solutions for document summarisation, tools that answer complex queries across long documents, and customer support systems.
Notably, the company said Citations uses Anthropic's standard token-based pricing model, and users will not be charged for the output tokens that return the cited text. However, there may be additional costs for the input tokens used to process the source documents. Citations is currently available for the new Claude 3.5 Sonnet and Claude 3.5 Haiku models.