
Chuks v0.0.7 — The Ecosystem Release

Chuks v0.0.7 is the biggest release yet — 80 files changed, +7,806 lines. This release adds a full package manager with supply chain security, a watch mode for development, route groups with middleware, 83 new math functions, and a complete generic monomorphization system in the AOT compiler.

Chuks now has a built-in package manager. No separate tool needed — it’s part of the chuks CLI.

chuks add chuks_redis # Add a package to your project
chuks install # Install all dependencies from chuks.json
chuks remove chuks_redis # Remove a package
chuks update chuks_redis # Update a package to latest
chuks info chuks_redis # Show package details
chuks list # List installed packages
chuks publish # Publish your package to the registry

Packages are declared in chuks.json and installed into a local chuks_packages/ directory. The package manager resolves dependencies from the Chuks package registry, validates versions, and manages the full lifecycle — add, install, remove, update, publish.

Version requirements in chuks.json support the full semver constraint syntax:

{
  "dependencies": {
    "chuks_redis": "^1.2.0",
    "chuks_kafka": "~2.1.0",
    "chuks_csv": ">=1.0.0"
  }
}

Constraint   Meaning
^1.2.3       >=1.2.3, <2.0.0 (compatible with major)
~1.2.3       >=1.2.3, <1.3.0 (compatible with minor)
>=1.0.0      Any version 1.0.0 or higher
<2.0.0       Any version below 2.0.0
1.2.3        Exact version only
*            Any version

Caret constraints handle 0.x versions correctly — ^0.2.3 resolves to >=0.2.3, <0.3.0 since pre-1.0 versions treat the minor as the compatibility boundary.
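
The caret rule is easy to get wrong, so here is the bound computation as a minimal Python sketch (illustrative only — not the actual resolver, which runs server-side):

```python
def caret_upper_bound(version: str) -> str:
    """Exclusive upper bound for a ^version constraint.

    For >=1.0.0 versions the major is the compatibility boundary;
    for 0.x versions the minor is, so ^0.2.3 resolves to <0.3.0.
    """
    major, minor, _patch = (int(p) for p in version.split("."))
    if major > 0:
        return f"{major + 1}.0.0"
    return f"0.{minor + 1}.0"

print(caret_upper_bound("1.2.3"))  # 2.0.0
print(caret_upper_bound("0.2.3"))  # 0.3.0
```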

Resolution happens server-side. The CLI sends the constraint to the registry, which returns the best matching version. This means the registry can account for yanked versions and package status without the CLI needing that logic locally.

When you install a package, the package manager automatically discovers and installs its dependencies — and their dependencies, recursively. A breadth-first resolution pass walks the full dependency tree, detects cycles, and locks every transitive dependency into chuks.lock.
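
The resolution pass is conceptually simple. A minimal Python sketch of the breadth-first walk (illustrative, not the real implementation — the real pass also applies version constraints at each step):

```python
from collections import deque

def resolve(root: str, deps: dict[str, list[str]]) -> list[str]:
    """Breadth-first walk of the dependency graph. The visited set
    doubles as cycle detection: an already-seen package is never
    enqueued again, so cyclic graphs still terminate."""
    order, seen, queue = [], {root}, deque([root])
    while queue:
        pkg = queue.popleft()
        order.append(pkg)
        for dep in deps.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return order

graph = {"app": ["chuks_redis"], "chuks_redis": ["chuks_csv", "app"]}  # contains a cycle
print(resolve("app", graph))  # ['app', 'chuks_redis', 'chuks_csv']
```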

Each transitive dependency requires the same permission consent as a direct dependency. Nothing gets installed silently.

Every chuks.lock file is signed with an HMAC-SHA256 signature using a machine-local key stored at ~/.chuks/lockfile.key. The key is generated automatically on first use (256-bit random, owner-only permissions).

When chuks install runs, it verifies the lockfile signature before proceeding. If the lockfile has been manually tampered with — or modified outside the CLI — the signature check fails and installation is blocked:

⚠ lockfile signature mismatch — chuks.lock may have been tampered with
Run 'chuks install --resign' to re-verify and re-sign.

This prevents an attacker from editing chuks.lock to swap a trusted package version for a compromised one. The signing is non-fatal if the key file is unavailable (e.g., fresh CI environment), but emits a warning.
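
The signing scheme itself is standard HMAC-SHA256. This Python sketch shows the verify-before-install idea (illustrative — key storage and the lockfile format are simplified here):

```python
import hashlib
import hmac
import secrets

def sign(lockfile: bytes, key: bytes) -> str:
    return hmac.new(key, lockfile, hashlib.sha256).hexdigest()

def verify(lockfile: bytes, key: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(lockfile, key), signature)

key = secrets.token_bytes(32)  # 256-bit machine-local key
content = b'{"chuks_redis": "1.3.0"}'
sig = sign(content, key)
print(verify(content, key, sig))                      # True
print(verify(b'{"chuks_redis": "0.0.1"}', key, sig))  # False -> install blocked
```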

The package manager enforces a multi-layered security model at every stage — install, build, and runtime.

Every package declares what system capabilities it needs (file system access, network binding, database access, process execution, etc.). When you install a package for the first time, the CLI shows an interactive consent prompt:

📦 chuks_redis v1.3.0
  Status: active
  Integrity: sha256:a1b2c3d4e5f6...
  Permissions requested:
    ✔ net.connect   Connect to external services
    ✔ net.bind      Listen on network ports
  ⚠ Elevated:
    ⚠ db.access     Full database control
  Outbound domains:
    → redis.example.com
Accept? [y/n]

Elevated permissions (file system writes, process execution, database access, install scripts) are marked with ⚠️ so they stand out. You approve once; the decision is recorded in chuks.lock.

Approved permissions aren’t just informational — they’re enforced.

At compile time: before building your project, the compiler scans every installed package to verify it only imports standard library modules consistent with its declared permissions. If chuks_redis declares net.connect but also imports std/fs, the build fails:

✗ permission violation — chuks_redis uses std/fs but fs.read was not granted

At runtime (AOT): AOT-compiled binaries embed a startup check that re-validates all package permissions when the binary launches. Even if the compile-time check is bypassed, the binary refuses to run if permissions don’t match:

✗ runtime permission violation — package chuks_redis uses net.bind
but none of its required permissions were granted

This is defense-in-depth. The lockfile is the source of truth, and enforcement happens at two independent stages.

Every package installation includes a SHA-256 content hash verification against the registry. The registry returns the expected hash for the exact version being installed, and the CLI compares it against the downloaded content:

✗ integrity check failed for chuks_redis@1.3.0
Expected: sha256:a1b2c3d4...
Got: sha256:x9y8z7w6...
This could indicate a supply chain attack.

If the hash doesn’t match, installation is blocked. This catches scenarios where package contents are modified after publication — whether by a compromised registry, a man-in-the-middle, or a storage-level attack.
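
The check is plain content addressing, sketched in Python (the `sha256:<hex>` format follows the error output shown above; this is an illustration, not the CLI's code):

```python
import hashlib

def check_integrity(package: bytes, expected: str) -> bool:
    """Compare a downloaded package's SHA-256 digest against the
    registry-supplied hash of the form 'sha256:<hex>'."""
    algo, _, digest = expected.partition(":")
    return algo == "sha256" and hashlib.sha256(package).hexdigest() == digest

data = b"package tarball bytes"
expected = "sha256:" + hashlib.sha256(data).hexdigest()
print(check_integrity(data, expected))         # True
print(check_integrity(data + b"!", expected))  # False -> install blocked
```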

When you run chuks update, the package manager compares the new version’s permissions against what you previously approved. If the update adds new permissions, you’re prompted to re-consent before the upgrade proceeds:

📦 chuks_redis 1.3.0 → 2.0.0
New permissions requested:
⚠ fs.write Write to the file system
⚠ proc.exec Execute system commands
Accept? [y/n]

This prevents a trusted package from silently gaining new capabilities in a minor or major update.
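
The re-consent check reduces to a set difference between previously approved and newly requested permissions — sketched in Python:

```python
def new_permissions(approved: set[str], requested: set[str]) -> set[str]:
    """Permissions the new version requests beyond what was approved
    for the old version; a non-empty result triggers re-consent."""
    return requested - approved

approved = {"net.connect", "net.bind"}
requested = {"net.connect", "net.bind", "fs.write", "proc.exec"}
print(sorted(new_permissions(approved, requested)))  # ['fs.write', 'proc.exec']
```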

The registry assigns a status to every package: active, under_review, rejected, or yanked. The CLI checks this status before installation and blocks anything that isn’t active:

✗ package chuks_badpkg has status 'rejected' — cannot install

Yanked versions are excluded from version resolution entirely.

The CLI detects whether it’s running in an interactive terminal or a CI/CD pipeline. In non-interactive mode (no TTY), chuks install requires an existing chuks.lock — it never prompts for permission consent. This prevents CI pipelines from hanging on a [y/n] prompt and ensures that production installs always use pre-approved lockfiles.
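
The decision can be thought of as a small predicate. In this Python sketch, the TTY check mirrors what the CLI does; the `CI=true` environment check is an assumption of the sketch (a common CI convention), not documented Chuks behavior:

```python
def can_prompt(stdin_is_tty: bool, env: dict) -> bool:
    """Whether an interactive [y/n] consent prompt is allowed.

    Prompting requires a real terminal on stdin; honoring CI=true is
    this sketch's assumption, not a documented Chuks rule.
    """
    return stdin_is_tty and env.get("CI") != "true"

print(can_prompt(True, {}))              # True  -> may prompt
print(can_prompt(False, {}))             # False -> require an existing chuks.lock
print(can_prompt(True, {"CI": "true"}))  # False -> require an existing chuks.lock
```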

Chuks now supports API tokens for automated publishing from CI/CD pipelines. Instead of interactive OAuth login, generate a scoped, expiring token from the dashboard and use it in your pipeline.

Tokens are scoped — you choose exactly what each token can do:

Scope                  What it allows
packages:read          List packages, view download stats
packages:publish       Publish new package versions
packages:yank          Yank and restore versions
packages:permissions   Update package permissions

Pass a token directly:

chuks publish --token chuks_pk_a1b2c3d4...

Or set an environment variable:

export CHUKS_TOKEN=chuks_pk_a1b2c3d4...
chuks publish
In a GitHub Actions workflow, for example:

name: Publish Package
on:
  push:
    tags: ["v*"]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: curl -fsSL https://chuks.org/install.sh | bash
      - run: chuks publish --token ${{ secrets.CHUKS_TOKEN }}

Security: tokens are stored as SHA-256 hashes — the raw token is shown once at creation and never stored. Tokens can be revoked instantly from the dashboard, and each token shows its last-used date for auditing.

See the full CI/CD Publishing guide for GitLab CI, Bitbucket Pipelines, and security best practices.

chuks watch monitors your source files and automatically rebuilds + restarts on every save:

chuks watch src/main.chuks

Under the hood it listens for filesystem events and manages child processes with proper isolation: on each change, the running server is killed and restarted cleanly. Hot reload for Chuks development.

You can now organize routes into groups with shared prefixes and middleware chains:

var api = app.group("/api", authMiddleware)
api.get("/users", getUsers)
api.post("/users", createUser)
// Nested groups inherit parent middleware
var admin = api.group("/admin", adminMiddleware)
admin.get("/stats", getStats) // has both auth + admin middleware

Route-level middleware is eagerly flattened at registration time — when a route is added to a group, its middleware chain is computed once by merging parent middleware with group middleware. No per-request traversal.
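
Eager flattening means a group's effective chain is built once from its parent's chain. A Python sketch of the merge (middleware represented as plain strings for illustration; `flatten` is not a Chuks API):

```python
def flatten(parent_chain: list, group_mw: list, route_mw: list) -> list:
    """Build a route's effective middleware chain once, at registration:
    parent chain first, then group middleware, then route-level middleware."""
    return [*parent_chain, *group_mw, *route_mw]

api_chain = flatten([], ["authMiddleware"], [])            # /api group
admin_chain = flatten(api_chain, ["adminMiddleware"], [])  # nested /api/admin
print(admin_chain)  # ['authMiddleware', 'adminMiddleware']
```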

This works in both VM and AOT modes with identical behavior.

The math standard library now has 83 new functions covering:

  • Trigonometric: sin, cos, tan, asin, acos, atan, atan2
  • Hyperbolic: sinh, cosh, tanh, asinh, acosh, atanh
  • Exponential/Logarithmic: exp, exp2, expm1, log, log2, log10, log1p, logb
  • Rounding: trunc, roundToEven
  • Special: erf, erfc, erfinv, erfcinv, gamma, lgamma
  • Bessel: j0, j1, jn, y0, y1, yn
  • Utility: cbrt, copysign, dim, hypot, mod, remainder, nextafter, ldexp, frexp, ilogb, scalbn
  • Classification: isInf, isNaN, signbit, inf, nan

import { math } from "std/math"
var angle: float = math.pi / 4.0
println(math.sin(angle)) // 0.7071067811865476
println(math.gamma(5.0)) // 24
println(math.erf(1.0)) // 0.8427007929497149

The AOT compiler now fully monomorphizes generic classes. When you write Box<int>, the compiler generates a dedicated specialized type with concrete typed fields — no dynamic dispatch, no runtime type switches.

class Box<T> {
  var value: T

  constructor(value: T) {
    this.value = value
  }

  get(): T {
    return this.value
  }
}

var intBox = new Box<int>(42)
var strBox = new Box<string>("hello")

The AOT compiler:

  1. Scans the program for all generic instantiations (Box<int>, Box<string>)
  2. Generates specialized types with concrete field types
  3. Rewrites all references to use the specialized types

This means generic code compiles to the same efficient native code as hand-written specialized classes. The specialization system handles nested generics, generic superclass instantiation, and method-level generic parameters.

Task.all() runs multiple async tasks in parallel and waits for all of them to complete:

var results = await Task.all([
  fetchUsers(),
  fetchProducts(),
  fetchOrders()
])

All tasks run concurrently and results are collected in order. If any task fails, errors are aggregated.
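
The semantics are analogous to `asyncio.gather` in Python — concurrent execution with results ordered by call order, not completion order. A runnable sketch of that behavior (not Chuks code):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return name

async def main() -> list[str]:
    # Results come back in call order even though "orders"
    # finishes first here.
    return await asyncio.gather(
        fetch("users", 0.02),
        fetch("products", 0.01),
        fetch("orders", 0.0),
    )

print(asyncio.run(main()))  # ['users', 'products', 'orders']
```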

The std/channel module has been refactored from standalone exported functions to a class-based API:

import { channel } from "std/channel"
var ch = channel.create(1)
channel.send(ch, "hello")
var msg = channel.receive(ch)

The typechecker also gained several improvements:

  • Route group types — the typechecker now understands app.group() return types and validates middleware + handler signatures on grouped routes
  • Channel types — proper type tracking through channel send/receive operations
  • Method-level generic parameters — methods can define their own generic type parameters independent of the class
  • Generic superclass instantiation — class Foo extends Bar<int> correctly resolves the parent’s generic types
  • Variadic route handler typing — route methods accept variable numbers of middleware + handler arguments with correct type checking

The old monolithic bytecode compiler (1,177 lines) has been replaced by an IR-based pipeline. The new architecture is:

  1. Parser → AST
  2. IR Generator → intermediate representation
  3. Optimizer → dead code elimination (now handles AddressOf, Deref, IterKeys nodes)
  4. IR Compiler → bytecode

This also powers the new OP_ITER_KEYS opcode for for-in loops over maps, and deterministic map output (keys are now sorted alphabetically in string() representation).
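
Deterministic map output just means iterating keys in sorted order when stringifying. A Python sketch of the idea (illustrative only):

```python
def map_repr(m: dict) -> str:
    # Iterate keys in sorted order so the string form is deterministic,
    # regardless of insertion order.
    return "{" + ", ".join(f"{k}: {m[k]}" for k in sorted(m)) + "}"

print(map_repr({"b": 2, "a": 1}))  # {a: 1, b: 2}
print(map_repr({"a": 1, "b": 2}))  # {a: 1, b: 2}  (same output either way)
```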

  • Hover information expanded significantly (+452 lines) — richer type info, documentation hints, and signature details in the VS Code extension
  • Completion improvements for the new language features
  • ASI (Automatic Semicolon Insertion) fixed for complex nested expressions — the lexer now tracks block scope depth for parentheses and brackets separately
  • Cross-platform testing — CI now runs on ubuntu-latest, macos-latest, and windows-latest
  • Code coverage — new Codecov integration for tracking test coverage
  • Manual dispatch — CI can now be triggered manually via workflow_dispatch

The AOT runtime has been upgraded with significant performance improvements — a new garbage collector, faster hash maps, and improved memory allocation. These benefits apply automatically to all AOT-compiled Chuks programs.

The std/db module now provides a complete, production-grade SQL toolkit. Every feature works across all four supported databases (SQLite, PostgreSQL, MySQL, MSSQL) in both VM and AOT modes — 540 database tests passing.

No more manual begin/commit/rollback boilerplate. db.transaction() auto-commits on success and auto-rolls back on error:

import { db, DbDriver } from "std/db"
var conn = db.open(DbDriver.Postgres, databaseUrl)
var result = db.transaction(conn, (tx) => {
  db.txExec(tx, "INSERT INTO orders (userId, total) VALUES (?, ?)", [1, 99.99])
  db.txExec(tx, "UPDATE users SET balance = balance - ? WHERE id = ?", [99.99, 1])
  return db.txQuery(tx, "SELECT balance FROM users WHERE id = ?", [1])
})
// If anything throws, both the insert and update are rolled back
println(result[0]["balance"])

Monitor your connection pool at runtime:

db.setPool(conn, "maxOpenConns", 25)
db.setPool(conn, "maxIdleConns", 10)
var stats = db.poolStats(conn)
println(stats["openConnections"]) // total open connections
println(stats["inUse"]) // currently in use
println(stats["idle"]) // idle connections
println(stats["maxOpen"]) // configured max
println(stats["waitCount"]) // total waits for a connection

Insert multiple rows in a single call, or upsert (insert-or-update on conflict):

import { QueryBuilder } from "std/db/query"
var qb = new QueryBuilder(conn)
// Bulk insert
qb.table("products").insertMany([
  { "name": "Widget", "price": 9.99 },
  { "name": "Gadget", "price": 24.99 },
  { "name": "Gizmo", "price": 14.99 }
])

// Upsert — insert or update on conflict
qb.table("products").upsert(
  { "name": "Widget", "price": 12.99, "stock": 50 },
  ["name"] // conflict columns
)

Repository equivalents: createMany() and upsert().

The Query Builder and Repository now have built-in sum(), avg(), min(), and max():

var totalRevenue = await qb.table("orders").sum("amount")
var avgPrice = await qb.table("products").avg("price")
var cheapest = await qb.table("products").min("price")
var mostExpensive = await qb.table("products").max("price")

New query methods for common SQL patterns:

// Range queries
qb.table("products").whereBetween("price", 10, 50).all()
qb.table("products").whereNotBetween("price", 100, 999).all()
// IN / NOT IN
qb.table("users").whereIn("role", ["admin", "moderator"]).all()
qb.table("users").whereNotIn("status", ["banned", "suspended"]).all()
// Raw WHERE for complex expressions
qb.table("orders").whereRaw("total * quantity > ?", [1000]).all()
// Distinct results
qb.table("orders").distinct().select(["customerId"]).all()
// HAVING for aggregate filters
qb.table("orders")
  .select(["customerId", "SUM(total) as totalSpent"])
  .groupBy(["customerId"])
  .having("SUM(total)", ">", 500)
  .all()

Process large datasets efficiently:

// Paginate — returns { data, total, page, perPage, lastPage }
var page = await qb.table("products").orderBy("name").paginate(1, 20)
println(page["total"]) // total matching rows
println(page["lastPage"]) // last page number
println(page["data"]) // array of rows
// Chunk — process rows in batches (memory-efficient)
await qb.table("logs").orderBy("id").chunk(100, (batch) => {
  for (var i = 0; i < length(batch); i++) {
    processLog(batch[i])
  }
})

Execute arbitrary SQL through the repository:

var results = await userRepo.raw(
  "SELECT u.*, COUNT(o.id) as orderCount FROM users u LEFT JOIN orders o ON u.id = o.userId GROUP BY u.id"
)
var single = await userRepo.rawOne("SELECT COUNT(*) as total FROM users")
await userRepo.rawExec("TRUNCATE TABLE sessions")

Create and drop indexes at runtime:

// Add a regular index
await UserSchema.addIndex(conn, ["email"], false)
// Add a unique composite index
await UserSchema.addIndex(conn, ["firstName", "lastName"], true)
// Drop an index
await UserSchema.dropIndex(conn, ["email"])

Schemas now support auto-timestamps and soft deletes:

const UserSchema = db.define<User>(conn, "users", (schema) => {
  schema.pk("id").auto()
  schema.string("name").notNull()
  schema.timestamps()   // adds createdAt + updatedAt
  schema.softDeletes()  // adds deletedAt (records are "soft deleted", not removed)
})
// Soft-deleted records are hidden by default
var users = await userRepo.all() // excludes soft-deleted
var all = await userRepo.withTrashed().all() // includes soft-deleted
var deleted = await userRepo.onlyTrashed().all() // only soft-deleted
await userRepo.where("id", 1).restore() // un-delete
await userRepo.where("id", 1).forceDelete() // permanently remove

Override hooks in your repository for cross-cutting concerns:

class UserRepo extends Repository<User> {
  constructor() { super(UserSchema) }

  override beforeCreate(data: any): any {
    data["createdBy"] = getCurrentUserId()
    return data
  }

  override afterCreate(data: any, result: any): void {
    sendWelcomeEmail(data["email"])
  }

  override beforeDelete(): void {
    logDeletion("users")
  }
}

Verify database connectivity:

if (db.ping(conn)) {
  println("Database is reachable")
}

By default, Chuks uses all available CPU cores. On shared servers where Chuks runs alongside other services (Node.js, databases, etc.), you can limit core usage with the --cpus flag:

# Limit to 4 cores
chuks run --cpus 4 src/main.chuks
# Equals syntax also works
chuks run --cpus=2 src/main.chuks

For AOT-compiled binaries, use the CHUKS_CPUS environment variable:

# Limit compiled binary to 4 cores
CHUKS_CPUS=4 ./build/my-app

Priority: --cpus flag > CHUKS_CPUS env var > default (all cores).

Chuks now supports bool() as a built-in type conversion function, joining int(), float(), and string(). It evaluates the truthiness of any value:

var a: bool = bool(1) // true
var b: bool = bool(0) // false
var c: bool = bool("hello") // true
var d: bool = bool("") // false
var e: bool = bool(null) // false

This is especially useful when working with dynamic data from json.parse(), where field types are any and need to be explicitly converted:

var parsed: any = json.parse(req.body)
if (parsed.done != null) {
  todo.done = bool(parsed.done)
}

bool() works in both VM and AOT modes.

The HTTP server now handles headers case-insensitively, following the HTTP/1.1 specification (RFC 7230). Previously, the server expected title-case headers like Content-Length, but many HTTP clients (including Node.js fetch) send lowercase headers like content-length. This caused request body parsing to silently fail when headers didn’t match the expected casing.

All header matching — Content-Length, Transfer-Encoding, and Expect — is now case-insensitive.

Fixed body parsing when Content-Length is the last header before the blank line separator. The header value parser previously relied on finding a \r delimiter after the value, which doesn’t exist for the final header. This caused the content length to be silently ignored, resulting in empty request bodies.

This release includes several important fixes to the AOT (ahead-of-time) compiler that improve correctness and VM/AOT parity.

Methods that reference module-level global variables no longer incorrectly shadow them with local declarations. Previously, a method like:

var hookLog: string = ""

class ItemRepo extends Repository<Item> {
  override beforeCreate(data: any): any {
    hookLog = hookLog + "beforeCreate,"
    return data
  }
}

would generate a local variable inside each method body in the AOT output, meaning writes to hookLog were discarded when the method returned. The global was never modified. Now, the compiler detects that the variable is a known module-level global (not a new var declaration) and skips the local declaration — assignments target the module-level variable directly.

Intentional local shadows still work correctly. If a method declares var hookLog = "local", the explicit declaration scopes it to the method, so subsequent assignments reference the local as intended.

The devirtualization (devirt) dispatch system — which converts interface method calls into direct concrete-type calls for performance — had an issue with inferred return types. When a method returns any but the compiler’s inference pass determines it returns a closure, the devirt code incorrectly assumed the native method signature had that concrete return type and skipped the type assertion. This caused compilation errors at the native code generation stage.

The fix restricts inferred-type usage in devirt to only “return this” patterns — where the method signature genuinely returns the concrete type — and leaves all other inferred types (closures, computed values) with proper type assertions.

Array literals containing maps (e.g. [{"name": "Alice"}, {"name": "Bob"}]) were incorrectly narrowed to a map-specific array type instead of []any. The narrowed type is not assignable to []any at the native code level, causing compilation failures at call sites expecting the broader type. The fix ensures map-element arrays always use the []any representation.

Fixed a bug where AOT-compiled closures that capture variables from their enclosing scope could fail to properly capture certain variable types, particularly when the closure was used as a callback in class methods or event handlers. The compiled output now correctly captures all referenced variables regardless of their type.

This release brings full array method parity between the VM and AOT compiler, and adds three new methods.

  • clear() — removes all elements from an array (VM + AOT)
  • keys() — returns an array of indices (VM + AOT)
  • flat() in AOT — was VM-only, now works in both runtimes

var nums = [10, 20, 30];
nums.clear();
println(nums.length); // 0
var items = ["a", "b", "c"];
println(items.keys()); // [0, 1, 2]
var nested: [][]int = [[1, 2], [3, 4]];
println(nested.flat()); // [1, 2, 3, 4]

Array callback methods (find, map, filter, forEach, some, every, reduce, flatMap, findIndex, findLast, findLastIndex) now consistently pass two arguments — (element, index) — in both VM and AOT. Previously the VM passed three arguments (element, index, array) while the AOT passed two, causing behavioral differences between runtimes.

The callback signature is now (element, index?) — the index is always optional:

var nums = [1, 2, 3, 4, 5];
// Simple — just the element
var found = nums.find((val: int): bool => {
  return val > 3;
});

// With index
nums.forEach((val: int, idx: int) => {
  println(idx, ":", val);
});

// Reduce — (accumulator, element, index?)
var sum = nums.reduce((acc: int, val: int): int => {
  return acc + val;
}, 0);

Metric                    Count
Files changed             80+
Lines added               7,806+
Lines removed             2,119+
Golden tests (VM + AOT)   268
Database tests            540 (4 DBs × VM + AOT)
New math functions        83
New CLI commands          9 (watch, add, install, remove, update, info, list, publish, publish --token)
Platforms supported       5 (macOS arm64/amd64, Linux arm64/amd64, Windows amd64)