Sample workloads

Recommendation feed

  1. Store embeddings for each piece of content in a dedicated table.
  2. Use the pgvector cosine_distance operator to rank candidates per user.
  3. Persist recommendations in a cache table so you can audit what was shown to each user.

A few practical tips:

  • Use a lightweight embedding model (such as text-embedding-3-small) off-chain.
  • Push vectors through the Filehub API or any backend worker.
  • Combine similarity search with full-text indexing to keep results relevant and deterministic.
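The ranking in step 2 can also be prototyped off-chain before wiring it into the database. The sketch below mimics pgvector's cosine_distance semantics (distance = 1 − cosine similarity) and breaks ties by content ID so results stay deterministic; the content IDs and vectors are made-up illustrations, not part of any real schema:

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity, matching pgvector's cosine_distance semantics
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def rank_candidates(user_vec, candidates):
    # candidates: list of (content_id, embedding); smaller distance = better.
    # Ties are broken by content_id so repeated runs return the same order.
    return sorted(candidates, key=lambda c: (cosine_distance(user_vec, c[1]), c[0]))

user = [1.0, 0.0]
items = [
    ("doc-b", [0.0, 1.0]),   # orthogonal to the user vector
    ("doc-a", [0.9, 0.1]),   # close to the user vector
    ("doc-c", [1.0, 0.0]),   # identical direction
]
top = rank_candidates(user, items)  # doc-c ranks first, doc-b last
```

In the database itself, the same ordering would come from an `ORDER BY` on the distance expression plus a stable tiebreaker column.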

RAG pipelines

Connect your knowledge base by:

  • Ingesting documents into Filehub (for binary data) and referencing them from the database tables.
  • Chunking the documents and storing embeddings per chunk.
  • Serving RAG answers through Postchain REST endpoints guarded by ACL logic in Rell.
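The chunking step above can be sketched as a simple sliding window. This is a minimal illustration, not a Filehub or Postchain API; the chunk size and overlap values are arbitrary, and real pipelines often split on sentence or token boundaries instead of characters:

```python
def chunk_text(text, size=200, overlap=50):
    # Fixed-size character windows with overlap, so content that spans a
    # chunk boundary still appears whole in at least one chunk.
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

chunks = chunk_text("x" * 500, size=200, overlap=50)
# Each chunk would then be embedded and stored alongside a reference
# to its source document.
```

Storing one embedding per chunk (rather than per document) keeps retrieval granular enough that the Postchain endpoint can return only the passages relevant to a query.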

Continue experimenting by mapping these steps to your domain-specific schema, or by prototyping directly with the CLI cookbook templates.