
Frontend-only live semantic search with transformers.js.

Semantic search right in your browser! Embeddings and cosine similarity are calculated client-side, with no server-side inference. Your data is private and stays in your browser.
Just copy & paste any text into the text area and hit Find. Set a different chunk size for finer or coarser search.
Even large books can be indexed and searched in less than 2 seconds!
Examples: The Bible (en), Les Misérables (fr), Das Kapital (de), Don Quijote (es), Divina Commedia (it), Iliad (gr), IPCC Report 2023 (en). The full catalogue of pre-indexed examples is on Huggingface. Contribute the indices of documents you have indexed, or open a request on GitHub with a source URL.
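The client-side flow described above (embed each chunk, then rank by cosine similarity against the query embedding) can be sketched as follows. The `embedFn` parameter stands in for a transformers.js feature-extraction pipeline; the function names here are illustrative assumptions, not the app's exact code.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank text chunks by similarity to the query. `embedFn` maps a string
// to an embedding vector (in the app this would wrap a transformers.js
// feature-extraction pipeline running in the browser).
async function rankChunks(query, chunks, embedFn) {
  const queryVec = await embedFn(query);
  const scored = await Promise.all(chunks.map(async (chunk) => ({
    chunk,
    score: cosineSimilarity(queryVec, await embedFn(chunk)),
  })));
  return scored.sort((x, y) => y.score - x.score);
}
```

In the browser, `embedFn` might wrap `pipeline('feature-extraction', ...)` from the `@xenova/transformers` package with mean pooling and normalization; the exact model and options are the app's choice.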

Model Selection
Chunking Settings
App Settings
Include Words
Exclude Words
Import Local Index File
Import Remote Index File
Export Index File
Style Preferences

    Dimensionality Reduction (New🔥)

    Run a search as usual or load an index, then hit "Dim-Reduction" in the advanced settings. More iterations yield better results but take more time to compute. If the points are too small, increase the radius. The visualization uses a fast WASM implementation of Barnes-Hut t-SNE (wasm-bhtSNE).


    Chat (Retrieval Augmented Generation, RAG)

    Enter a question to be answered. The model is automatically prompted with the top search results in the form:
    "Based on the following input, answer the question: [your question] [top search results]".
    If you encounter errors, the input is probably too long: too many results, overly long results, or an overly long prompt. Also make sure to select the right prompting style! Xenova/LaMini-Flan-T5-783M is by far the best quantized model currently available and delivers good results, while the others produce nonsense in most cases. At some point, Falcon and Mistral/Zephyr models will probably become available here.
    Attention: loads very large models with more than 1.5 GB (!) of resources.
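Assembling the prompt from the template quoted above can be sketched like this. The generation call in the comment shows typical transformers.js usage; the generation options are illustrative assumptions, not the app's exact settings.

```javascript
// Build the RAG prompt from the question and the top search results,
// following the template "Based on the following input, answer the
// question: [your question] [top search results]".
function buildPrompt(question, topResults) {
  return `Based on the following input, answer the question: ${question} ${topResults.join(' ')}`;
}

// Usage sketch with transformers.js (model as recommended above):
// import { pipeline } from '@xenova/transformers';
// const generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
// const answer = await generator(buildPrompt(question, topResults), { max_new_tokens: 100 });
```

Keeping the prompt builder separate makes the "input too long" failure mode easy to debug: you can measure the assembled string before it ever reaches the model.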


    Summary (Retrieval Augmented Generation, RAG)

    Summarizes the top search results. Works best with non-fictional texts and longer text chunks (>200 chars).
    Attention: loads very large models weighing hundreds of MB!
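Gathering the top results for summarization can be sketched as below. The length filter mirrors the ">200 chars" advice above; the model name and options in the comment are illustrative assumptions.

```javascript
// Collect the top search results into one summarization input,
// dropping chunks at or below the minimum length (short chunks
// summarize poorly).
function collectSummaryInput(topResults, minChars = 200) {
  return topResults.filter((t) => t.length > minChars).join('\n');
}

// Usage sketch with transformers.js:
// import { pipeline } from '@xenova/transformers';
// const summarize = await pipeline('summarization', 'Xenova/distilbart-cnn-6-6');
// const [{ summary_text }] = await summarize(collectSummaryInput(results));
```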

