# Changelog

New updates and improvements.

All notable changes to Veah are documented in this file. See the Veah website and Veah Twitter for more updates.
## [Unreleased]

### Added
- New fine-tuning pipeline for on-chain smart contract context (beta)
- Support for batch queries via `/v1/query` (send multiple prompts in one request)
- Webhook callback enhancements (retry logic, better error responses)
- Metrics endpoint expansion: added breakdown by mode (`text`/`onchain`/`analytics`)
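The batch request shape is not spelled out in this changelog; a minimal sketch, assuming `/v1/query` accepts a `prompts` array alongside the documented `mode` field (the field name `prompts` is an assumption):

```python
import json

# Hypothetical batch payload for POST /v1/query. The "prompts" field name
# is an assumption for illustration; "mode" values come from this changelog.
def build_batch_payload(prompts, mode="text"):
    """Bundle multiple prompts into a single /v1/query request body."""
    return {"mode": mode, "prompts": list(prompts)}

payload = build_batch_payload(
    ["What is Veah?", "Summarize the last block"], mode="onchain"
)
body = json.dumps(payload)
# To send: POST https://api.veahllm.com/v1/query with this JSON body,
# e.g. requests.post(url, data=body, headers={"Authorization": "Bearer <key>"})
```

One batched request counts against rate limits once, which matters under the tightened per-minute caps.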
### Changed

- Default `temperature` parameter decreased from 0.5 to 0.3 for more deterministic responses
- Streaming response behavior refined: initial token delay minimized
- SDKs updated: improved error wrapping and unified response schema
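If your application depended on the old default `temperature` of 0.5, pin it explicitly rather than relying on the server default (now 0.3). The `temperature` field name comes from this changelog; the rest of the payload shape below is illustrative:

```python
# Sketch: pin temperature client-side so the server-default change
# (0.5 -> 0.3) does not silently alter your outputs. Only "temperature"
# is confirmed by the changelog; other fields are assumptions.
def query_payload(prompt, temperature=0.5):
    return {"mode": "text", "prompt": prompt, "temperature": temperature}

p = query_payload("Explain gas fees in one sentence")
```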
### Fixed

- A bug with missing timestamps in `onchain` mode responses
- Rate-limit edge case when switching between free and pro tiers
- Memory leak when reusing an API client instance in Node.js environments
## [v0.3.0] - 2025-10-01

### Added
- Support for `mode = analytics` query mode (ecosystem-level insights)
- New `/v1/models` endpoint to list available model variants
- `/v1/metrics` endpoint for live usage and performance metrics
- Public API documentation, with domain updated to veahllm.com
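A minimal sketch of consuming `GET /v1/models`, assuming the response carries a `models` array of objects with an `id` field (that shape is an assumption, not confirmed here):

```python
import json

# Hypothetical /v1/models response handling. The response shape
# {"models": [{"id": ...}]} is assumed for illustration.
def list_model_ids(raw_json):
    """Extract model variant IDs from a /v1/models response body."""
    data = json.loads(raw_json)
    return [m["id"] for m in data.get("models", [])]

sample = '{"models": [{"id": "veah-base"}, {"id": "veah-onchain"}]}'
ids = list_model_ids(sample)  # -> ["veah-base", "veah-onchain"]
```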
### Changed

- API base URL renamed from `api.veah.ai` to `api.veahllm.com`
- Twitter/social links updated to x.com/veahllm
- Rate limits tightened: free tier capped at 20 req/min, pro at 200 req/min
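To stay under the tightened limits, a client-side sliding-window throttle helps avoid 429s. This is a generic sketch, not part of any Veah SDK; the default of 20 calls per 60 seconds matches the free tier above:

```python
import time
from collections import deque

# Generic client-side throttle for the free-tier cap (20 req/min).
class RateLimiter:
    def __init__(self, max_calls=20, window_seconds=60.0):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of recent calls

    def acquire(self):
        """Block until another request may be sent within the window."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            time.sleep(self.window - (now - self.calls[0]))
        self.calls.append(time.monotonic())

limiter = RateLimiter()
limiter.acquire()  # call before each request to stay under the cap
```

Switch `max_calls` to 200 for the pro tier.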
### Fixed

- Incorrect error code for unauthorized requests (now returns 401 consistently)
- Minor JSON parsing bug when reading large numeric values in responses
## [v0.2.0] - 2025-08-15

### Added

- Initial public API: `POST /v1/query` with `text` and `onchain` modes
- JavaScript (Node.js) SDK version 0.1.0
- Python client library v0.1.0
- Open metrics dashboard (internal version)
### Changed

- Internal inference engine upgraded; latency reduced by ~20%
- Default `max_tokens` reduced from 1024 to 512
### Fixed

- Edge-case error when the prompt is an empty string
- Stabilized authentication header parsing
## [v0.1.0] - 2025-05-30

### Added

- Project initialization: core model server and inference pipeline
- Basic prompt processing and response generation
- Open-source repo and MIT license
- README, basic documentation, and website launch at veahllm.com
### Fixed

- Initial minor bugs in tokenization logic
- Startup scripts and environment variable loading
## Notes / Upgrade Guide

- When upgrading from v0.2.x to v0.3.x, update your client config to point to `api.veahllm.com`.
- The `analytics` mode is new; if you rely only on `text` or `onchain`, no changes are needed.
- SDK users may need to update to the latest versions to pick up the improved error handling and unified response schema.
- Check the [Unreleased] section for upcoming features and breaking changes before your next upgrade.
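A migration sketch for the v0.2.x to v0.3.x base-URL change. The config key `base_url` is illustrative, not from a specific Veah SDK; only the two hostnames come from this changelog:

```python
# Swap the deprecated API host for the new one in a client config dict.
# "base_url" is a hypothetical config key used for illustration.
OLD_BASE_URL = "https://api.veah.ai"
NEW_BASE_URL = "https://api.veahllm.com"

def migrate_config(config):
    """Return a copy of the client config pointing at the new API host."""
    updated = dict(config)
    if updated.get("base_url", "").startswith(OLD_BASE_URL):
        updated["base_url"] = updated["base_url"].replace(
            OLD_BASE_URL, NEW_BASE_URL, 1
        )
    return updated

cfg = migrate_config({"base_url": "https://api.veah.ai/v1"})
# cfg["base_url"] is now "https://api.veahllm.com/v1"
```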