Changelog

New updates and improvements

Improved

  • Enhanced Streaming Responses – Reduced latency on streamed results, providing near-instant feedback for on-chain queries.

  • Optimized On-Chain Reasoning – Veah now parses Solana transaction data and validator statistics more accurately, yielding better insights.

  • SDK Usability Enhancements – Unified response schema, better error handling, and easier integration for JavaScript and Python developers.

  • Batch Query Support – Send multiple prompts in a single API request to improve efficiency and reduce network overhead.

  • Improved Metrics Endpoint – Added detailed breakdowns for each query mode (text, onchain, analytics) with live performance stats.

  • Fine-Tuning Pipeline Updates – On-chain context fine-tuning is now faster and more memory-efficient.

  • Developer Dashboard UX – Updated web console for clearer API key management, usage stats, and model selection.

Fixed

  • Timestamps in On-Chain Mode – Responses now always include a timestamp, fixing the missing-timestamp issue in onchain queries.

  • Authentication Consistency – Unauthorized requests now consistently return a 401 error.

  • Memory Leak in Node.js SDK – Resolved a memory leak when reusing API client instances.

  • Rate Limit Edge Cases – Fixed rate-limit handling when switching between free and pro tiers, preventing spurious request failures.

  • Empty Prompt Handling – API no longer crashes or returns malformed responses for empty strings.

  • JSON Parsing Issues – Fixed errors parsing large numeric values in API responses.

  • Startup Script Stability – Environment variable loading and server initialization now work reliably across platforms.


All notable changes to Veah are documented in this file. See Veah Website and Veah Twitter for more updates.

[Unreleased]

Added

  • New fine-tuning pipeline for on-chain smart contract context (beta)

  • Support for batch queries via /v1/query (send multiple prompts in one request)

  • Webhook callback enhancements (retry logic, better error responses)

  • Metrics endpoint expansion: added breakdown by mode (text / onchain / analytics)
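As a sketch, a batch call to /v1/query would bundle several prompts into one request body. The field names below (prompts, mode) are illustrative assumptions, not a confirmed schema:

```python
import json

# Hypothetical batch request body for POST /v1/query.
# The field names ("prompts", "mode") are assumptions for
# illustration, not a documented schema.
def build_batch_request(prompts, mode="text"):
    """Bundle several prompts into a single /v1/query request body."""
    if not prompts:
        raise ValueError("at least one prompt is required")
    return json.dumps({"mode": mode, "prompts": list(prompts)})

body = build_batch_request(
    ["Summarize the last Solana epoch", "Top validators by stake"],
    mode="onchain",
)
```

Sending one request with several prompts is what cuts the per-request network overhead the changelog refers to.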

Changed

  • Default temperature parameter decreased from 0.5 to 0.3 for more deterministic responses

  • Streaming response behavior refined: initial token delay minimized

  • SDKs updated: improved error wrapping and unified response schema

Fixed

  • Missing timestamps in onchain mode responses

  • Rate-limit edge case when switching between free and pro tiers

  • Memory leak when reusing API client instance in Node.js environments


[v0.3.0] — 2025-10-01

Added

  • New analytics query mode (mode = analytics) for ecosystem-level insights

  • New /v1/models endpoint to list available model variants

  • /v1/metrics endpoint for live usage and performance metrics

  • Public API documentation, now hosted at veahllm.com

Changed

  • Renamed API base URL from api.veah.ai to api.veahllm.com

  • Twitter/social links updated to x.com/veahllm

  • Rate limits tightened: free tier capped at 20 req/min, pro at 200 req/min
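Clients can stay under the documented caps (free: 20 req/min, pro: 200 req/min) by pacing requests locally. The tier figures come from the changelog; the pacing helper itself is an illustrative sketch, not part of the SDK:

```python
import time

# Requests per minute, as documented per tier.
TIER_LIMITS = {"free": 20, "pro": 200}

class RequestPacer:
    """Spaces outgoing requests so a tier's per-minute cap is not exceeded."""

    def __init__(self, tier, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / TIER_LIMITS[tier]  # seconds between requests
        self.clock = clock
        self.sleep = sleep
        self.next_slot = clock()

    def wait(self):
        """Block until the next request slot is available."""
        now = self.clock()
        if now < self.next_slot:
            self.sleep(self.next_slot - now)
            now = self.next_slot
        self.next_slot = now + self.interval
```

Call `pacer.wait()` before each API request; on the free tier this enforces a 3-second gap between calls.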

Fixed

  • Incorrect error code for unauthorized requests (now returns 401 consistently)

  • Minor JSON parsing bug when reading large numeric values in responses
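Independent of the server-side fix, clients handling very large on-chain integers and small fees can avoid float rounding by parsing JSON numbers as Decimal. A minimal sketch (the field names in the sample payload are hypothetical):

```python
import json
from decimal import Decimal

# Large on-chain values can exceed what a float represents exactly;
# parsing numbers as Decimal preserves every digit.
def parse_response(raw):
    return json.loads(raw, parse_float=Decimal, parse_int=Decimal)

raw = '{"lamports": 18446744073709551615, "fee": 0.000005}'
data = parse_response(raw)
```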


[v0.2.0] — 2025-08-15

Added

  • Initial public API POST /v1/query with text and onchain modes

  • JavaScript (Node.js) SDK version 0.1.0

  • Python client library v0.1.0

  • Open metrics dashboard (internal version)

Changed

  • Internal inference engine upgraded; latency reduced by ~20%

  • Default max_tokens reduced from 1024 to 512
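With the default max_tokens now 512, callers that need longer completions would set the parameter explicitly. A minimal sketch of a single-query request body, with field names assumed rather than confirmed:

```python
import json

# Hypothetical single-query body for POST /v1/query. max_tokens
# defaults to 512 server-side, so it is only sent when overridden.
# Field names are illustrative assumptions.
def build_query(prompt, mode="text", max_tokens=None):
    body = {"mode": mode, "prompt": prompt}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return json.dumps(body)
```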

Fixed

  • Edge-case error when the prompt is an empty string

  • Stabilized authentication header parsing
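The SDK's actual header handling is not shown in the changelog; as an illustrative sketch, tolerant parsing of an `Authorization: Bearer <token>` header typically accepts a case-insensitive scheme and stray whitespace:

```python
# Illustrative sketch of tolerant "Authorization: Bearer <token>"
# parsing: case-insensitive scheme, surrounding whitespace ignored.
# Not the SDK's actual implementation.
def extract_bearer_token(header):
    """Return the bearer token, or None if the header is malformed."""
    if not header:
        return None
    parts = header.strip().split(None, 1)
    if len(parts) != 2 or parts[0].lower() != "bearer":
        return None
    return parts[1].strip() or None
```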


[v0.1.0] — 2025-05-30

Added

  • Project initialization: core model server and inference pipeline

  • Basic prompt processing and response generation

  • Open-source repo and MIT license

  • README, basic documentation, and website launch at veahllm.com

Fixed

  • Initial minor bugs in tokenization logic

  • Startup-script and environment-variable loading issues


Notes / Upgrade Guide

  • When upgrading from v0.2.x to v0.3.x, update your client config to point to api.veahllm.com.

  • The analytics mode is new — if you rely on only text or onchain, no changes needed.

  • For SDK users: update to the latest SDK versions to pick up the unified response schema and improved error handling.

  • Check the [Unreleased] section for upcoming features and breaking changes before your next upgrade.
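The base-URL migration in the upgrade guide amounts to a one-line config change. The URLs below are from the changelog; the config keys are hypothetical:

```python
# Hypothetical client config illustrating the v0.2.x -> v0.3.x
# base-URL migration. The "base_url" key is an assumption; the
# hostnames are the ones documented above.
OLD_BASE_URL = "https://api.veah.ai"
NEW_BASE_URL = "https://api.veahllm.com"

def migrate_config(config):
    """Point a pre-v0.3.0 client config at the new API host."""
    if config.get("base_url", "").startswith(OLD_BASE_URL):
        config = dict(config, base_url=NEW_BASE_URL)
    return config
```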

