[Diagram: standard vector embeddings search vs. Hypothetical Document Embeddings (HyDE)]

Revolutionizing Search: How Hypothetical Document Embeddings (HyDE) Can Save Time and Increase Productivity

Tools like the OpenAI Embeddings API are changing the way search is performed. Instead of matching keywords, a language model converts text into vectors: mathematical representations with hundreds or even thousands of dimensions. When searching, matches can then be made on the underlying intent of the query rather than on keywords alone.
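To make the vector comparison concrete, here is a minimal sketch of similarity search over embedding vectors. The three-dimensional vectors are invented stand-ins; real embeddings would come from a model such as one behind the OpenAI Embeddings API.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real ones have hundreds or thousands of dimensions.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "api reference": [0.0, 0.2, 0.9],
}

query_vector = [0.8, 0.2, 0.1]  # would be the embedding of the user's query

best = max(documents, key=lambda name: cosine_similarity(query_vector, documents[name]))
print(best)  # → refund policy
```

The query never has to share a single keyword with a document; it only has to point in a similar direction in the embedding space.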

This is the kind of search the authors of the paper "Precise Zero-Shot Dense Retrieval without Relevance Labels" sought to improve with their method, Hypothetical Document Embeddings (HyDE).

The HyDE hypothesis is that searching with a hypothetical answer yields better results than searching with the question itself.

How it works

The HyDE method retrieves information from a large set of documents in two steps. It starts by having a Large Language Model (LLM), like ChatGPT, generate a document answering a specific question or topic. This document may contain some false information, but it also carries relevance patterns that can be used to find similar documents in a trusted knowledge base.

Next, an embedding model converts the generated document into an embedding vector, which is then used to retrieve the real documents most similar to it.
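The two steps can be sketched end to end. Here `generate_hypothetical_answer` and `embed` are invented stand-ins for real calls to an LLM and an embedding model (both the function names and the canned outputs are assumptions for illustration); the retrieval step itself is an ordinary cosine-similarity search.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def generate_hypothetical_answer(question):
    # Placeholder for an LLM call; the answer may be wrong in detail but is on-topic.
    return "Aspirin is typically dosed at 325 mg every 4 hours for adults."

def embed(text):
    # Placeholder embedding: a crude bag-of-words vector over a tiny fixed vocabulary.
    vocab = ["aspirin", "dose", "mg", "shipping", "refund"]
    words = text.lower().split()
    return [sum(w.startswith(v) for w in words) for v in vocab]

# Trusted knowledge base: only these documents are ever shown to the user.
knowledge_base = [
    "Aspirin dosing guide: standard adult dose is 325-650 mg.",
    "Refund policy: refunds are issued within 14 days.",
    "Shipping guide: orders ship within 2 business days.",
]

def hyde_search(question):
    hypothetical = generate_hypothetical_answer(question)  # step 1: hypothetical answer
    query_vector = embed(hypothetical)                     # step 2: embed and retrieve
    return max(knowledge_base,
               key=lambda doc: cosine_similarity(query_vector, embed(doc)))

print(hyde_search("What is the correct dose of aspirin?"))
```

Even if the hypothetical answer gets the numbers wrong, its vocabulary and structure land close to the right document, and only the trusted document is returned.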

What are the implications?

HyDE can enable language models in more sensitive applications, since the search results are returned directly from a trusted source. This prevents “hallucinations” by the LLM from reaching the user, which matters where exact measurements are necessary or incorrect answers could prove catastrophic, as in medicine.

Better searching of internal documents can save thousands of hours and maximize productivity. The HyDE paper focuses on generating hypothetical answers to questions using the LLM. However, it’s easy to imagine using the LLM to augment similarly incomplete content to find relevant material in applications outside of search, like writing content or programming.

Also, the search results don’t have to be the end of the road. For example, this search can be used behind the scenes to enable a chat interface, like ChatGPT, that utilizes the full context of your knowledge base in its answers without fine-tuning or exceeding the token limit!
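As a sketch of that chat pattern (the function name and prompt wording are assumptions, not a prescribed API), the retrieved documents are simply pasted into the prompt as context ahead of the user's question, so the model answers from the knowledge base without any fine-tuning:

```python
def build_chat_prompt(question, retrieved_docs):
    """Assemble a grounded prompt from search results for a chat-style LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Answer the question using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The retrieved docs would come from a HyDE search over the knowledge base.
docs = ["Refund policy: refunds are issued within 14 days."]
prompt = build_chat_prompt("How long do refunds take?", docs)
print(prompt)
```

Because only the most relevant documents are inserted, the prompt stays well under the model's token limit even when the knowledge base itself is enormous.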

What’s next?

All the language models needed to build a HyDE search system are available from OpenAI.

Do you need help unlocking the full potential of HyDE and other cutting-edge AI technologies?

Understand which AI can be the biggest boost for your business—Schedule a consultation to learn more!
