AI Inference extension
Chromia empowers developers by integrating AI inference capabilities directly into decentralized applications (dapps). This integration enables intelligent, real-time decision-making within the on-chain environment. By leveraging externally trained models for optimized inference, developers can seamlessly embed AI-driven logic into their dapps while upholding the core principles of blockchain: trustlessness and transparency. This fusion of blockchain security with artificial intelligence unlocks a new era of responsive and adaptive applications.
Configuring the blockchain in the dapp
To use this extension, add the required modules and engine settings to your blockchain configuration in `chromia.yml`. Here's a sample configuration:
```yaml
blockchains:
  ai_inference_dapp: # Ensure this name is unique. Rename it if you encounter duplicate issues during deployment.
    module: main
    config:
      sync_ext:
        - "net.postchain.hybridcompute.HybridComputeSynchronizationInfrastructureExtension"
      gtx:
        modules:
          - "net.postchain.hybridcompute.HybridComputeGTXModule"
      hybridcompute:
        engine: "net.postchain.gtx.extensions.ai_inference.AiInferenceComputeEngine"
        load_timeout_seconds: 600 # Timeout in seconds for initial model loading
        compute_timeout_seconds: 600 # Timeout in seconds for each inference
      ai_inference:
        model_url: "https://djl-misc.s3.amazonaws.com/test/models/gpt2/gpt2_pt.zip" # Specify model URL here
        tokenizer_name: gpt2 # Specify tokenizer name here
        max_sequence_length: 60 # Max sequence length, see https://javadoc.io/static/ai.djl/api/0.32.0/ai/djl/modality/nlp/generate/SearchConfig.html#setMaxSeqLength(int)
        max_length: 512 # Length to truncate and/or pad sequences to, see https://javadoc.io/static/ai.djl.huggingface/tokenizers/0.32.0/ai/djl/huggingface/tokenizers/HuggingFaceTokenizer.Builder.html#optMaxLength(int)
```
Integrating with Rell
Incorporate the AI Inference extension library into your configuration as follows:
```yaml
libs:
  hybridcompute:
    registry: https://gitlab.com/chromaway/postchain-chromia
    path: chromia-infrastructure/rell/src/lib/hybridcompute
    tagOrBranch: 3.27.6
    rid: x"DF9EF3C9333D497F501B2D230CA0B9FD12F5288FA4E1D9F5DDBE916506543887"
    insecure: false
  ai_inference:
    registry: https://gitlab.com/chromaway/core/ai-inference-extension
    path: rell/src/lib/ai_inference
    tagOrBranch: 0.1.10
    rid: x"0B9D2011C51E750CC9639B3EA285EC6F92BF947B4445BDA54397A362627CEE2E"
    insecure: false
compile:
  rellVersion: 0.14.9
```
Next, run `chr install` to download the configured libraries:
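```bash
chr install
```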
Minimal implementation example
Here's a minimal implementation example in Rell:
```rell
module;

import ai: lib.ai_inference;

operation submit_inference_request(id: text, prompt: text) {
    ai.submit_inference_request(id, prompt);
}

query fetch_inference_result(id: text): ai.inference_result? = ai.fetch_inference_result(id);

@extend(ai.on_inference_result)
function (id: text, result: ai.inference_result) {
    if (result.result != null) {
        log("Inference with id %s finished successfully.".format(id));
    } else {
        log("Inference with id %s failed: %s.".format(id, result.error));
    }
}
```
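In a real dapp, you will often want to persist results on-chain rather than only log them. Here is a minimal sketch of that idea, assuming `inference_result` exposes the nullable `result` and `error` fields used above; the `completed_inference` entity is a hypothetical name introduced for illustration:

```rell
// Hypothetical entity for persisting successful inferences on-chain.
entity completed_inference {
    key id: text;
    result: text;
}

// An additional handler; extendable functions accept multiple extensions.
@extend(ai.on_inference_result)
function (id: text, result: ai.inference_result) {
    // Copy the nullable field to a local so the null check narrows its type.
    val output = result.result;
    if (output != null) {
        create completed_inference(id = id, result = output);
    }
}
```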
Deployment to testnet
Leasing a container
If you are unfamiliar with leasing a container, follow these steps.
Ensure you select the AI Inference extension when leasing the container.
Deployment
Follow the deployment steps.
You can find the BRID for the Testnet Directory Chain in the explorer.
The deployment section in your `chromia.yml` will look like this:
```yaml
deployments:
  testnet: # Deployment target name
    brid: x"6F1B061C633A992BF195850BF5AA1B6F887AEE01BB3F51251C230930FB792A92" # Blockchain RID for the Testnet Directory Chain
    url: https://node0.testnet.chromia.com:7740 # Target URL for one of the nodes in testnet
    container: 4d7890243fe710...08c724700cbd385ecd17d6f # Replace with your container ID (example: 15ddfcb25dcb43577ab311fe78aedab14fda25757c72a787420454728fb80304)
    chains:
      ai_inference_dapp: x"9716FBFF3...663FEC673710" # Replace with the actual BRID after the dapp gets deployed. You can find it in the terminal during deployment.
```
Simple test
In the examples provided, `ai_inference_dapp` and `testnet` refer to the deployment parameters defined above.
To send a request, use:
```bash
chr tx -bc ai_inference_dapp -d testnet submit_inference_request 'a1' 'Ai inference started?'
```
To get a response, execute:

```bash
chr query -bc ai_inference_dapp -d testnet fetch_inference_result id=a1
```

If the inference has not finished yet, the query returns `null` (the return type is nullable), so retry after a short wait.
Output:

```
[
  "error": null,
  "result": "Ai inference started?\n\nThe answer is yes.\n\nThe first step is to find the source of the inference.\n\n
]
```
Explore the example source code
You can access the source code and additional information about the AI Inference extension in the official repository.