Sample workloads
Recommendation feed
- Store embeddings for each piece of content in a dedicated table.
- Use the pgvector `cosine_distance` operator to rank candidates per user.
- Persist the recommendations in a cache table so you can audit what was shown to each user.
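The ranking step can be sketched in plain Python. The `cosine_distance` helper below mirrors pgvector's definition (1 minus cosine similarity); the candidate IDs and vectors are made-up illustration data, not part of any real schema:

```python
import math

def cosine_distance(a, b):
    # Same definition pgvector uses: 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def rank_candidates(user_vec, candidates):
    # candidates: list of (content_id, embedding); smaller distance = better match.
    return sorted(candidates, key=lambda c: cosine_distance(user_vec, c[1]))

user = [1.0, 0.0]
items = [("a", [0.0, 1.0]), ("b", [1.0, 0.1]), ("c", [-1.0, 0.0])]
print([cid for cid, _ in rank_candidates(user, items)])  # → ['b', 'a', 'c']
```

In production the database performs this ordering for you (e.g. `ORDER BY` on the distance), so only the top-k rows leave Postgres; the sketch just shows what that ordering computes.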
AI-assisted search
- Use a lightweight embedding model (such as `text-embedding-3-small`) off-chain.
- Push vectors through the Filehub API or any backend worker.
- Combine similarity search with full-text indexing to keep results relevant and deterministic.
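One simple way to combine the two signals is a weighted blend of keyword overlap and vector similarity. The `alpha` weight and the scoring formula below are illustrative assumptions for a sketch, not a prescribed API:

```python
def hybrid_score(query_terms, doc_text, vec_distance, alpha=0.5):
    # Blend full-text overlap with vector similarity (1 - cosine distance).
    # alpha is an assumed tuning knob: 1.0 = pure vector, 0.0 = pure keyword.
    terms = set(query_terms)
    words = doc_text.lower().split()
    keyword = sum(1 for w in words if w in terms) / max(len(terms), 1)
    return alpha * (1.0 - vec_distance) + (1 - alpha) * min(keyword, 1.0)
```

Because the keyword component is exact-match, ties and near-ties resolve the same way on every run, which is what keeps the combined ranking deterministic.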
RAG pipelines
Connect your knowledge base by:
- Ingesting documents into Filehub (for binary data) and referencing them from the database tables.
- Chunking the documents and storing embeddings per chunk.
- Serving RAG answers through Postchain REST endpoints guarded by ACL logic in Rell.
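The chunking step above can be sketched as fixed-size character windows with overlap, so that context spanning a chunk boundary is not lost; the window and overlap sizes are arbitrary example values:

```python
def chunk_document(text, size=200, overlap=50):
    # Split text into overlapping fixed-size chunks; each entry is
    # (start_offset, chunk_text) so embeddings can be traced back to
    # the source document position.
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append((start, chunk))
        if start + size >= len(text):
            break
    return chunks
```

Each `(offset, chunk)` pair would then be embedded and stored as one row, keyed by the parent document's Filehub reference, so a retrieved chunk can always be traced back to its source.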
Continue experimenting by mapping these steps to your domain-specific schema or by prototyping directly in the CLI cookbook templates.