k-NN Plugin

What is it

The k-NN plugin extends OpenSearch by adding support for a vector data type. This enables a new type of search which utilises the k-Nearest Neighbors algorithm: given a vector as search input, OpenSearch can return the k nearest vectors in an index. This opens up many more use cases for performing categorisation with OpenSearch whenever the data can be converted to vectors.

How to provision

The k-NN plugin can be selected when provisioning a cluster. In the console, simply select the checkbox in the plugins section.

Native Libraries vs Apache Lucene

The k-NN plugin has multiple options for the internal representation of the vectors. The 2.x versions support Apache Lucene as an engine, utilising Lucene 9 features. This is the recommended choice as it is the more natural fit for the OpenSearch architecture. Alternatively, there are two native library engines (nmslib and Faiss) which build their data structures in memory outside of the JVM. These can provide performance benefits and give more control over the internals, at the cost of additional complexity in regard to memory management.

 

You specify the engine at index creation time. Here is a minimal sketch of creating an index that uses Apache Lucene as the engine; the index name, field name, and dimension are placeholders to adapt to your data.
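PUT /my-knn-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "knn_vector",
        "dimension": 4,
        "method": {
          "name": "hnsw",
          "engine": "lucene",
          "space_type": "l2"
        }
      }
    }
  }
}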

 

Pointers and Tips

Size Calculation

If you choose to use the native libraries, additional thought needs to be given to the amount of memory needed on your nodes. Your cluster will be configured with 50% of RAM allocated to OpenSearch. The memory cache used by the native libraries takes another 25% of the RAM, so memory is even more important for these clusters. There are formulas for rough memory estimates for both the HNSW and IVF approaches (see the documentation).

HNSW

1.1 * (4 * dimension + 8 * M) * num_vectors bytes

where M = the number of bidirectional links created per vector (see the documentation)
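For example, 1 million 256-dimensional vectors with M = 16 would need roughly 1.1 * (4 * 256 + 8 * 16) * 1,000,000 ≈ 1.27 GB of native memory.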

IVF

1.1 * ((4 * dimension * num_vectors) + (4 * nlist * dimension)) bytes

where nlist = the number of buckets to partition vectors into (see the documentation)
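For example, the same 1 million 256-dimensional vectors with nlist = 128 would need roughly 1.1 * ((4 * 256 * 1,000,000) + (4 * 128 * 256)) ≈ 1.13 GB of native memory.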

Stats

If you are using the native library engines, the stats call is your friend in giving you insight into the behaviour of your k-NN indices. You can see which native libraries are active and how much memory your indices are using. The memory usage percentage tells you what proportion of the memory cache each index is using. Combine this with the shard count on the node to work out the per-shard cost.
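For example, to retrieve cluster-wide k-NN stats (the call can also be filtered to specific stats, such as graph_memory_usage; see the documentation):

GET /_plugins/_knn/stats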


Warming k-NN Indices

If using native libraries, graphs need to be built in memory for a shard before it can return search results. This can cause lag, which can be prevented by warming the relevant indices. It is important to warm an index before calculating how much memory it will use. It is also important to consider that inactive shards can be removed from memory under a breaker condition or the idle setting for the memory cache.
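A sketch of the warmup call, assuming an index named my-knn-index:

GET /_plugins/_knn/warmup/my-knn-index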


Refresh Interval

The index refresh interval can have a big impact on both the indexing and search performance of the k-NN plugin. Queries run per segment, so the many small segments created by a low refresh interval can add search latency. Disabling refresh entirely will speed up indexing in general, so it is recommended to turn it off where possible, such as during a bulk load into indices using the vector data type.

Turn Off
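A sketch using the index settings API (the index name is a placeholder); a refresh_interval of -1 disables refresh:

PUT /my-knn-index/_settings
{
  "index": {
    "refresh_interval": "-1"
  }
}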


Turn back on for 60 second interval
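Assuming the same placeholder index:

PUT /my-knn-index/_settings
{
  "index": {
    "refresh_interval": "60s"
  }
}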


 

Replicas

Disabling replicas during a bulk load will speed up indexing time; you can then add replicas once the load is complete. Each replica adds to the memory cost, so be careful when adding them while using the native libraries as your indexing engine. You can use the warmup and stats calls mentioned earlier to check whether you have enough memory to add replicas.
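A sketch of disabling replicas before a bulk load and restoring them afterwards (the index name and replica count are placeholders):

PUT /my-knn-index/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

Then, once the load is complete:

PUT /my-knn-index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}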


 

Further Reading

More documentation on these and other aspects of the k-NN plugin can be found in the OpenSearch project's documentation.

 

By Instaclustr Support