* You get back the redacted text *and* metadata about what was detected
* Each span has position, confidence score, and type label
* The placeholders are typed and numbered, so you can track entities across documents
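For concreteness, here is the shape of a `/redact` response sketched as a Python dict. The `redacted_text` and `summary` fields match the snippets later in this post; the exact span fields and placeholder format are illustrative assumptions, not the server's guaranteed schema:

```python
# Illustrative response for: "My name is Alice Smith, email alice@smith.com"
# Span fields and placeholder names ([PERSON_1], [EMAIL_1]) are assumptions.
response = {
    "redacted_text": "My name is [PERSON_1], email [EMAIL_1]",
    "spans": [
        {"start": 11, "end": 22, "type": "private_person", "confidence": 0.98},
        {"start": 30, "end": 45, "type": "private_email", "confidence": 0.99},
    ],
    "summary": {"private_person": 1, "private_email": 1},
}

# Typed, numbered placeholders let you track the same entity across documents
assert "[PERSON_1]" in response["redacted_text"]
```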
#### What It Detects
The model catches 8 categories of PII:
| Type | Example |
| ----------------- | ---------------------------- |
| `private_person` | Names |
| `private_email` | Email addresses |
| `private_phone` | Phone numbers |
| `private_address` | Street addresses |
| `private_url` | URLs with PII |
| `private_date` | Birth dates, sensitive dates |
| `account_number` | SSN, account numbers |
| `secret` | Passwords, API keys |
### Why This Matters for Multi-Agent Systems
#### Defense in Depth
You're already doing encryption at rest and TLS in transit. But what about **in-memory**? What about **logs**? What about **agent context windows**?
Redaction adds a layer that works regardless of where the data flows next. It's like sanitizing inputs at the edge -- except now the "web" is your entire agent infrastructure.
#### Compliance Without Friction
GDPR, CCPA, HIPAA -- they all say the same thing: **don't store PII unless you need to.**
Most agent systems store *everything* by default. Conversation history, tool call logs, error traces -- it's all there, unredacted, waiting for an audit.
With `privacy-python-server`, you redact before you log. Before you cache. Before you pass to the next agent. Compliance becomes architectural, not procedural.
#### Agent-to-Agent Hygiene
In multi-agent setups, Agent A passes context to Agent B, which calls Tool C, which logs to Service D. That's four hop points where PII can leak.
Put the redactor at the **inter-agent communication layer**, and every hop gets clean data. Agent B never saw Alice's email. Tool C never received her phone number. Service D only logged placeholders.
### Why Use a Server Instead of Importing the Model Directly?
You could just `pip install privacy-filter` and call it from your agent code. That works fine for simple cases. But there are real reasons to run it as a separate service:
#### Sharing Across Many Services
When you have multiple agents, multiple microservices, or multiple teams building on the same infrastructure, you want consistent PII handling. If each service imports the model directly, you get:
* Multiple copies of the same \~1.5GB model in memory
* Inconsistent configuration (different thresholds, different versions)
* Each service responsible for updating the model independently
* No centralized logging or monitoring of what's being detected
With a shared server, one service handles redaction for everything. Update the model version once, change confidence thresholds in one place, monitor detection rates from a single dashboard.
#### Resource Constraints: Edge, Serverless, Small Devices
This is the bigger reason.
Not every service that needs PII redaction can afford to download and run a 1.5GB machine learning model. Consider:
* **Serverless functions** (AWS Lambda, Cloudflare Workers) -- you hit package size limits and cold start times balloon
* **Edge computing** (Cloudflare Workers, Fastly Compute) -- limited memory, no persistent storage for model caching
* **Small containers** -- maybe your agent service runs in a resource-constrained environment where adding 1.5GB isn't feasible
* **Client-side applications** -- browser or mobile apps that can't bundle ML models at all
In these cases, offloading redaction to a dedicated server makes sense. Your lightweight service sends text over HTTP, gets back redacted text, and moves on. The heavy lifting happens somewhere with enough resources.
#### Operational Benefits
Beyond sharing and resource constraints, running it as a server gives you:
* **Auth and rate limiting** built-in -- control who can call it and how often
* **Health checks** -- know when the service is down before your agents start leaking data
* **Centralized logging** -- see what PII is being detected across your entire system
* **Independent scaling** -- if redaction becomes a bottleneck, scale this service without touching your agents
* **Language agnostic** -- your agents can be Python, TypeScript, Go, Rust, whatever. They all speak HTTP.
### Architecture
```
User -> Your Agent Infrastructure -> Airline API
                  |
         privacy-python-server
               /redact
                  |
             Clean Logs
            Clean Context
            Clean Handoffs
```
The server is intentionally minimal:
* **FastAPI** backend (\~200 lines of Python)
* **OpenAI privacy-filter** model (runs locally, \~1.5GB) [\[1\]](#sources)
* **Optional auth** via Bearer tokens
* **Rate limiting** built-in
* **CORS support** for browser-based agents
* **Docker-ready** for deployment
Run it locally for development:
```bash
uv sync
cp .env.example .env
DEV_MODE=true uv run python server.py
```
Or with Docker:
```bash
docker build -t privacy-filter .
docker run -p 8000:8000 --env-file .env privacy-filter
```
First request downloads the model from HuggingFace (\~1.5GB), then caches locally. Subsequent requests are fast -- typically under 500ms for short texts.
### Integration Patterns
#### Logging Middleware
```python
import httpx

async def log_interaction(text: str) -> str:
    # Redact before logging; the caller still gets the original text
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "http://localhost:8000/redact",
            json={"text": text},
            headers={"Authorization": f"Bearer {AUTH_KEY}"},
        )
    redacted = response.json()["redacted_text"]
    logger.info(f"User message: {redacted}")
    return text
```
#### Inter-Agent Communication
```python
async def send_to_agent(agent_url: str, context: dict):
    redacted_context = {}
    async with httpx.AsyncClient() as client:
        for key, value in context.items():
            if isinstance(value, str):
                resp = await client.post(
                    "http://localhost:8000/redact",
                    json={"text": value},
                )
                redacted_context[key] = resp.json()["redacted_text"]
            else:
                redacted_context[key] = value
        return await client.post(agent_url, json=redacted_context)
```
#### Pre-Storage Sanitization
```python
def store_conversation(conversation_history: list):
    sanitized = []
    for msg in conversation_history:
        resp = httpx.post(
            "http://localhost:8000/redact",
            json={"text": msg["content"]},
        )
        result = resp.json()
        sanitized.append({
            "role": msg["role"],
            "content": result["redacted_text"],
            "pii_summary": result["summary"],
        })
    db.insert(sanitized)
```
### When to Use This (And When Not To)
**Use `privacy-python-server` when:**
* You have multiple services that need PII redaction
* Some of your services run in constrained environments (serverless, edge, small containers)
* You want centralized control over redaction behavior
* Your stack is polyglot and you don't want every language binding its own ML model
* You need operational features like auth, rate limiting, health checks
**Just import the model directly when:**
* You have a single monolithic service
* Resources aren't a concern
* You don't need cross-service consistency
* You want the simplest possible setup with no network hop
Neither approach is wrong. They're different trade-offs for different situations.
### Get Started
**Repository:** [github.com/thegreataxios/privacy-python-server](https://github.com/thegreataxios/privacy-python-server)
**Quick start:**
```bash
git clone https://github.com/thegreataxios/privacy-python-server.git
cd privacy-python-server
uv sync
cp .env.example .env
DEV_MODE=true uv run python server.py
```
**Test it:**
```bash
curl -X POST http://localhost:8000/redact \
-H "Content-Type: application/json" \
-d '{"text": "My name is Alice Smith, email alice@smith.com"}'
```
MIT licensed.
***
Sources
1. OpenAI, "privacy-filter" model, HuggingFace. [https://huggingface.co/openai/privacy-filter](https://huggingface.co/openai/privacy-filter)
import Footer from '../../snippets/_footer.mdx'
## Wake Me Up When the Price Hits $5,000
Encrypted transactions protect data in transit. Conditional Transactions (CTXs) allow encrypted data to be executed when a condition is met.
This is a new EVM primitive. CTXs enable top-tier automations and workflows: onchain poker, automated liquidations for lending & perps, limit orders on AMMs, sealed-bid auctions, and private agent negotiations where terms stay hidden until reveal time.
### What CTXs Enable
Standard EVM contracts store plaintext. Anyone with a block explorer can read state. CTXs change this — contracts store encrypted data and trigger decryption via callbacks.
The pattern:

1. User or agent submits encrypted data to a contract. It stays encrypted in storage.
2. When conditions are met, the contract calls `submitCTX()`
3. Validators batch all decryption requests from block *N*
4. Block *N+1* executes all `onDecrypt()` callbacks with decrypted data
:::info
The target is N+1 execution, but if block gas is exhausted on a given SKALE chain, decryption can slip to a later block. SKALE's horizontal scaling mitigates this — dedicated chains avoid the congestion that causes delays. A financial application could run its own SKALE chain to build a unique limit order book with dynamic AMM usage, all powered by CTXs.
:::
This is a two-block operation. The callback executes with an ephemeral sender address — not the original submitter.
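If the two-block model feels abstract, here is a toy Python simulation of the scheduling semantics. This is not SKALE's implementation, only the ordering guarantees described above; XOR stands in for threshold encryption:

```python
# Toy model of CTX scheduling: submit in block N, execute in block N+1.
KEY = 0x42

def encrypt(pt: bytes) -> bytes:
    return bytes(b ^ KEY for b in pt)   # stand-in for committee encryption

def decrypt(ct: bytes) -> bytes:
    return bytes(b ^ KEY for b in ct)   # stand-in for threshold decryption

class Chain:
    def __init__(self):
        self.block = 0
        self.queue = []  # (callback, encrypted_args) submitted this block

    def submit_ctx(self, callback, encrypted_args):
        # submitCTX() analog: nothing decrypts now, it queues for next block
        self.queue.append((callback, encrypted_args))

    def next_block(self):
        # All requests from block N decrypt together in block N+1,
        # processed in order of creation
        self.block += 1
        batch, self.queue = self.queue, []
        for callback, args in batch:
            callback([decrypt(a) for a in args])

chain = Chain()
results = []
chain.submit_ctx(lambda args: results.append(args[0]), [encrypt(b"$5000")])
assert results == []            # same-block read is impossible
chain.next_block()
assert results == [b"$5000"]    # onDecrypt fires in block N+1
```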
### Comparison: Chainlink Automation
If you've worked with [Chainlink Automation](https://chain.link/automation), the mental model is similar. Both enable condition-triggered smart contract execution. The key difference: CTX adds encryption.
| Aspect | Chainlink Automation | SKALE CTX |
| --------------- | ----------------------------------- | --------------------------------------- |
| Pattern | checkUpkeep() → performUpkeep() | submitCTX() → onDecrypt() |
| Condition Check | Off-chain DON simulation (OCR3) | Onchain threshold decryption |
| Privacy | No — upkeep data is public | Yes — data encrypted until decryption |
| Dependency | External oracle network | Native chain infrastructure |
| Latency | Variable (depends on DON consensus) | N+1 blocks (target) |
| Failure Mode | performUpkeep can fail/revert | Atomic execution (subject to block gas) |
| Cost | Requires LINK token payments | Free (zero gas on SKALE) |
### Breaking EVM Assumptions
EVM developers assume atomicity. You call a function, state changes, the transaction succeeds or reverts — all in one block. CTXs break this assumption intentionally.
When `submitCTX()` executes, the decryption does not happen immediately. It queues for the next block's batch decryption. This means:
* You cannot read decrypted results in the same transaction
* Logic must be structured as callbacks, not synchronous reads
* Multiple decryption requests from the same block batch together
The batching is the feature. All requests from block *N* decrypt simultaneously in block *N+1*, processed in order of creation. If block gas is exhausted, remaining decryptions roll over to the next block. This eliminates timing advantages — a sealed-bid auction where all bids decrypt together is fundamentally fairer than sequential reveals.
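The sealed-bid claim can be sketched in a few lines of Python. XOR is again a stand-in for threshold encryption, and the names are made up; the point is that every bid submitted in block *N* opens in the same *N+1* batch, so no bidder can observe and outbid another before reveal:

```python
# Sealed-bid auction under the batching model: seal in block N, open in N+1.
KEY = 0x5A

def seal(amount: int) -> bytes:
    # Placeholder for threshold encryption of a bid amount
    return bytes(b ^ KEY for b in amount.to_bytes(8, "big"))

def open_bid(ciphertext: bytes) -> int:
    # Placeholder for the committee's batch decryption
    return int.from_bytes(bytes(b ^ KEY for b in ciphertext), "big")

# Block N: bids arrive as ciphertext only -- nothing readable mid-block
sealed_bids = {"alice": seal(300), "bob": seal(450), "carol": seal(425)}

# Block N+1: the whole batch decrypts at once, in order of creation
opened = {who: open_bid(ct) for who, ct in sealed_bids.items()}
winner = max(opened, key=opened.get)
assert winner == "bob" and opened[winner] == 450
```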
### The Supplicant Interface
Contracts that receive CTX callbacks implement a standard interface:
```solidity
interface IBiteSupplicant {
    function onDecrypt(
        bytes[] calldata decryptedArguments,
        bytes[] calldata plaintextArguments
    ) external;
}
```
When the committee decrypts, every registered supplicant's `onDecrypt()` fires. This enables complex multi-party workflows where multiple contracts react to the same decryption event.
*[source](https://github.com/skalenetwork/bite-solidity) — [docs](https://docs.skale.space/developers/bite-protocol/conditional-transactions)*
### Use Cases for AI Agents
**Encrypted Strategy Parameters.** A trading agent stores rebalancing thresholds, price triggers, and position limits encrypted onchain. The contract only decrypts and executes when market conditions match. Competitors see that something triggered — not what the thresholds were.
**Sealed Agent-to-Agent Negotiations.** Two agents submit encrypted terms for a task. The contract decrypts both simultaneously when both sides have submitted, executing on the overlap. Neither agent sees the other's terms until reveal — eliminating first-mover disadvantage.
**Conditional Autonomous Payments.** Payment triggers fire only after decryption confirms a condition — delivery of data, completion of compute, verification of a result. This is "if/then" financial logic enforced by the chain.
**Time-Locked Reveals.** Encrypted data that automatically decrypts at a specific block number. Research swarms publish findings simultaneously. Prediction markets seal predictions until events resolve.
### CTXs and Encrypted Transactions
| | Encrypted Transactions | CTXs |
| ------------------- | ---------------------------------------- | -------------------------------------------- |
| What's encrypted | Transaction data (calldata, destination) | Smart contract storage (state, parameters) |
| When it's decrypted | After block finalization | When contract triggers CTX |
| Protection model | Transit encryption | Conditional execution on encrypted state |
| Developer interface | Transparent (automatic) | Callback pattern (`onDecrypt()`) |
| Execution model | Same-block | Two-block (request in *N*, execute in *N+1*) |
Together they form complete privacy: encrypted in transit, encrypted state executed only when conditions are met.
### Want to Build with CTXs?
If you're building something that needs conditional execution on encrypted state, reach out.
import Footer from '../../snippets/_footer.mdx'
## Solving the Barista Test: A Private Money Solution
The barista test: when you buy coffee, the barista doesn't see your bank balance. Confidential tokens bring this privacy to blockchain — private balances, encrypted amounts, and transfers that reveal nothing to observers.
This is not a new primitive. Confidential tokens are an *application* — a pattern that combines encrypted transactions, conditional transactions, and re-encryption into a single use case. eUSDC is one implementation. The pattern works for any token.
### What Confidential Tokens Demonstrate
Standard ERC-20 tokens expose everything. Balances. Transfer amounts. Transaction history. Anyone with a block explorer can trace flows, identify whales, and analyze spending patterns.
Confidential tokens change this:
| Standard Token | Confidential Token |
| ------------------------ | ------------------------------------- |
| Balances public | Balances encrypted |
| Transfer amounts visible | Amounts encrypted |
| History traceable | Only sender/recipient see their flows |
| Contract sees all data | Contract executes on encrypted state |
| No privacy guarantees | Threshold cryptography secures data |
:::info
**How amounts stay private today:** The transfer amount is encrypted at the client level using `encryptTransaction` — the entire transaction object (including the amount) gets encrypted before hitting the chain. Once onchain, the amount can be re-encrypted to the recipient's viewer key. Encrypted amounts inside contract storage (fully encrypted balances) are being actively explored.
:::
The technology isn't new — it's the combination of three existing primitives applied to a standard token interface.
### The Three Primitives

**1. [Encrypted Transactions](/blog/programmable-privacy-encrypted-transactions.mdx)**
The client encrypts the entire transaction — including the transfer amount — before submitting to the chain. The mempool sees only ciphertext. Validators reach consensus without knowing transfer values.
```typescript
// Standard ERC-20: amount is plaintext
transfer(to, amount)
// Confidential: SDK encrypts the full transaction object
// Amount, destination, and calldata all encrypted
bite.encryptTransaction({ to, data, value: amount })
```
**2. [Conditional Transactions](/blog/programmable-privacy-conditional-transactions.mdx)**
Balance updates execute via CTX callbacks. The contract stores encrypted balances. When a transfer triggers, `submitCTX()` queues the state update. Next block, `onDecrypt()` executes the balance changes.
This ensures atomicity across multiple encrypted state changes. A transfer debits the sender and credits the recipient in the same decryption batch.
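A toy sketch of that atomicity claim, with XOR as a stand-in cipher and all names illustrative: both legs of a transfer apply in the same decryption batch, so an observer never sees a half-applied state.

```python
# Debit and credit land in one batch -- a conceptual model, not the protocol.
KEY = 0x3C

def enc(v: int) -> bytes:
    return bytes(b ^ KEY for b in v.to_bytes(8, "big"))

def dec(c: bytes) -> int:
    return int.from_bytes(bytes(b ^ KEY for b in c), "big")

class ConfidentialLedger:
    def __init__(self, balances):
        self.balances = {k: enc(v) for k, v in balances.items()}
        self.batch = []  # queued encrypted transfers

    def transfer(self, sender, recipient, enc_amount):
        # submitCTX() analog: queue both legs, applied together next block
        self.batch.append((sender, recipient, enc_amount))

    def on_decrypt(self):
        # Both updates execute in the same batch: atomic debit + credit
        for sender, recipient, enc_amount in self.batch:
            amount = dec(enc_amount)
            self.balances[sender] = enc(dec(self.balances[sender]) - amount)
            self.balances[recipient] = enc(dec(self.balances[recipient]) + amount)
        self.batch = []

ledger = ConfidentialLedger({"alice": 100, "bob": 25})
ledger.transfer("alice", "bob", enc(40))
ledger.on_decrypt()
assert dec(ledger.balances["alice"]) == 60
assert dec(ledger.balances["bob"]) == 65
```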
**3. [Re-encryption](/blog/programmable-privacy-re-encryption.mdx)**
Once the encrypted transaction executes, the transfer amount can be re-encrypted to the recipient's viewer key onchain. Only the recipient can decrypt their updated balance client-side.
### Example: eUSDC
eUSDC is an encrypted representation of Bridged USDC from Base on SKALE. It's not pegged to the dollar — it maintains 1:1 with the underlying deposit asset:
* **Deposits:** Bridged USDC (Base) → encrypted eUSDC via bridge
* **Transfers:** Amounts and balances stay encrypted
* **Withdrawals:** Encrypted eUSDC → Bridged USDC (Base) with amount reveal
:::info
This isn't limited to bridged assets. The same pattern works with a native stablecoin — any onchain token can have encrypted balances and transfers applied to it.
:::
The SDK usage is straightforward:
```typescript
import { mpp } from '@skalenetwork/mpp/client'

const method = mpp.charge({
  chain: 'bite-sandbox',
  currency: 'eUSDC',
  extensions: {
    skale: { encrypted: true, confidentialToken: true },
    gasless: 'eip3009'
  }
})
```
`encrypted: true` — Transaction amount encrypts onchain.
`confidentialToken: true` — Uses eUSDC with encrypted balances.
`gasless: 'eip3009'` — No gas fees (EIP-3009 permit signature).
### Integration with MPP and x402
Confidential tokens combine naturally with MPP (Machine Payments Protocol). I covered this in depth in [Confidential MPP on SKALE](/blog/confidential-mpp-on-skale.mdx) — the summary:
AI agents paying for services via MPP can use confidential tokens to hide both the payment amount and their remaining balance. Only the agent and the service provider see transaction details.
This matters for:
* **Agent spending patterns** — exact amounts hidden from competitors
* **Budget constraints** — total holdings not visible onchain
:::info
Confidential tokens encrypt amounts and balances, but the from/to addresses remain public. Transaction counterparties can still be seen onchain.
:::
The same applies to [x402](/blog/the-gasless-flow-behind-x402.mdx) payments. When an agent pays for a tool or API via x402, confidential tokens ensure the payment amount and the agent's balance stay private. The HTTP 402 response triggers a gasless payment — and with encryption, the chain only sees ciphertext.
### The Pattern, Not The Product
eUSDC is one example. The confidential token pattern applies broadly:
**Private Payroll.** Companies pay employees without revealing salaries to the world. Each employee sees only their own transfers.
**Agent-to-Agent Payments.** AI agents transact without broadcasting financial positions. Trading bots settle without revealing strategies via payment patterns.
**Sensitive Transfers.** Donations, legal settlements, and personal payments execute without public scrutiny.
**Compliance-Ready Privacy.** Re-encryption enables selective disclosure to auditors without revealing data to the public chain. Threshold decryption can trigger for authorized compliance keys.
Any token can be made confidential using this pattern. These primitives are native to SKALE — no custom VMs, no circuit languages, no rewrites required.
### What This Enables
Confidential tokens prove that programmable privacy isn't theoretical. The primitives work together to solve real problems:
1. Encrypted transactions hide amounts in transit (client-side encryption)
2. Re-encryption delivers private balances to recipients onchain
3. Conditional transactions enable encrypted state updates (being explored)
The result is a token with the same functionality as ERC-20 but privacy properties that match traditional banking. The barista test — passed.
### Want Confidential Tokens?
If you're issuing a token and want to add privacy to it, reach out.
import Footer from '../../snippets/_footer.mdx'
## Encrypting Intent: The Agent Infrastructure Gap
::authors
MEV bots consume over **50% of gas** on leading L2s while paying less than 10% of fees. [\[1\]](#sources) But the infrastructure cost is only half the story — the real damage is to users. Every transaction you submit broadcasts your intent in plaintext to the [mempool](https://ethereum.org/en/developers/docs/transactions/#the-transaction-pool): amounts, destinations, and strategies visible to every searcher, every validator, every [MEV](https://ethereum.org/en/developers/docs/mev/) bot. Encrypted transactions change this entirely.
[SKALE](https://skale.space) embeds [threshold encryption](https://docs.skale.space/learn/advanced-features/encrypted-transactions) into [consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/). Validators see ciphertext, not [calldata](https://ethereum.org/en/developers/docs/data-availability/). Amounts, destinations, and intents remain encrypted until a supermajority collaborates to decrypt. This is not a [Layer 2](https://ethereum.org/en/developers/docs/scaling/) add-on or an external oracle — it's protocol-level privacy with zero [Solidity](https://docs.soliditylang.org/) changes required. No TEEs, no external infrastructure.
### The MEV Problem
MEV (Maximal Extractable Value) is profit extracted from transaction ordering. The mempool is a dark forest — every pending transaction is visible and exploitable. But the risk isn't just abstract market inefficiency. It's your wallet.
#### How Users Get Hurt
| Attack Type | What Happens to You | Source |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
| Sandwich attacks | You swap 1 ETH for USDC. A bot front-runs your trade, buys the same token, and back-runs — you get a worse price. The bot pockets the spread. [\[1\]](#sources) | Flashbots, Jun 2025 |
| Front-running | Your limit order is visible before execution. Bots place identical orders ahead of yours, filling at better prices and leaving you with slippage. [\[2\]](#sources) | Oblivious Labs + Flashbots |
| Back-running | After your large trade moves the market, bots immediately trade against the price impact — extracting value you created. [\[2\]](#sources) | Oblivious Labs + Flashbots |
| Intent exposure | Your trading strategy, position size, and timing are public before execution. Competitors and arbitrageurs react in real time. | — |
| Censorship risk | Transactions can be filtered based on destination or calldata content before inclusion. Validators see everything. | — |
The mechanics are straightforward. A bot sees your swap transaction in the mempool. It front-runs with the same trade, driving up the price. Your transaction executes at the worse rate. The bot back-runs to capture the spread. This is a sandwich attack — profitable, legal, and entirely extractive. You pay the cost every time.
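To make the cost concrete, here are illustrative numbers on a constant-product (x * y = k) pool. The pool and trade sizes are invented for this sketch, not drawn from the cited research:

```python
# Worked sandwich-attack numbers on an x * y = k pool (fees ignored).
def swap_out(x_reserve: float, y_reserve: float, dx: float) -> float:
    """USDC received for selling dx ETH into the pool."""
    k = x_reserve * y_reserve
    return y_reserve - k / (x_reserve + dx)

x, y = 1_000.0, 2_000_000.0   # hypothetical 1,000 ETH / 2,000,000 USDC pool

# No bot: victim swaps 10 ETH
honest_out = swap_out(x, y, 10)

# Sandwich: bot front-runs with 50 ETH, the victim then executes at the
# worse post-trade price, and the bot back-runs to capture the spread
bot_out = swap_out(x, y, 50)
victim_out = swap_out(x + 50, y - bot_out, 10)

assert victim_out < honest_out   # the victim's execution is strictly worse
```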
#### The Problem Is Getting Worse
Private mempools were supposed to solve this. Instead, they created a new arms race. Flashbots' 2025 research shows that MEV bots now consume **over 50% of gas** on top rollups [\[1\]](#sources) — they probe the chain with hundreds of failed transactions for every successful extraction, pushing fees up for everyone. A single arbitrage on Base requires \~350 failed transactions (\~132M gas). The same outcome that costs 200K gas on Ethereum costs 130M gas on Base — a **650x efficiency gap**. The infrastructure cost of MEV is passed directly to users through higher fees and worse execution.
[Flashbots' "MEV and the Limits of Scaling"](https://writings.flashbots.net/mev-and-the-limits-of-scaling) [\[1\]](#sources) concluded that MEV has become "the dominant limit to scaling blockchains" — and the primary victim is the end user.
### What Encrypted Transactions Solve
Encrypted transactions make the mempool opaque. Validators reach consensus on encrypted payloads. No one sees amounts, destinations, or function calls until after finalization.
Threshold encryption uses:
* **AES encryption** for transaction data
* **BLS threshold encryption** for committee-based decryption
* **Supermajority consensus** — >2/3 of validators required
* **Automatic decryption** post-finalization
The result: full mempool privacy during consensus. No front-running. No sandwich attacks. No back-running.
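The "supermajority decrypts, no one alone" property comes from threshold cryptography. The real protocol uses BLS threshold encryption; the sketch below shows only the underlying sharing idea with plain Shamir secret sharing over a prime field, and the validator counts are illustrative:

```python
# Minimal t-of-n Shamir secret sharing -- illustrates the threshold
# property, not SKALE's BLS scheme.
import random

P = 2**127 - 1  # prime field modulus (Mersenne prime M127)

def share(secret: int, t: int, n: int):
    # Random degree-(t-1) polynomial with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Hypothetical committee: 16 validators, threshold 11 (> 2/3 supermajority)
key = 0xC0FFEE
shares = share(key, t=11, n=16)
assert reconstruct(shares[:11]) == key   # supermajority recovers the key
assert reconstruct(shares[:10]) != key   # below threshold: no information
```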
### How It Works

The top shows the vulnerability: transactions broadcast in plaintext to the mempool, visible to MEV bots that can sandwich attack. The bottom shows encrypted transactions: amounts and destinations remain hidden until finalization.
The committee public key encrypts. The committee private key — split across validators via threshold cryptography — decrypts. No single validator can decrypt alone. No external infrastructure required.
From a developer perspective, the change is minimal:
```typescript
import { bite } from '@skalenetwork/bite'

const encryptedTx = await bite.encryptTransaction({
  to: contractAddress,
  data: calldata,
  value: amount
})

await wallet.sendTransaction(encryptedTx)
```
That's it. No contract rewrites. No circuit languages. No proving systems.
*[source](https://github.com/skalenetwork/bite-ts) — [docs](https://docs.skale.space/developers/sdks/skalenetwork-bite)*
### Use Cases
**Private DeFi.** Traders execute without broadcasting their positions. Strategies remain hidden until execution. No MEV extraction possible.
**Confidential Intents.** Users express desired outcomes — "swap ETH for USDC at market rate" — without revealing amounts or slippage tolerances. Solvers compete without seeing each other's bids.
**MEV-Resistant Settlement.** DCA (Dollar-Cost Averaging), TWAP (Time-Weighted Average Price), and other automated strategies execute without broadcasting timing or quantities.
**Censorship Resistance.** Encrypted transactions cannot be selectively filtered based on content. Validators see only ciphertext until consensus is reached.
***
Sources
1. Flashbots, "MEV and the Limits of Scaling," June 16, 2025. [https://writings.flashbots.net/mev-and-the-limits-of-scaling](https://writings.flashbots.net/mev-and-the-limits-of-scaling)
2. Oblivious Labs + Flashbots, "Scalable Oblivious Accesses to Blockchain Data," June 2, 2025. [https://writings.flashbots.net/scalable-oblivious-accesses-to-blockchain-data](https://writings.flashbots.net/scalable-oblivious-accesses-to-blockchain-data)
3. Flashbots, "Decentralized Building: Wat Do?," February 14, 2026. [https://writings.flashbots.net/decentralized-building-wat-do](https://writings.flashbots.net/decentralized-building-wat-do)
import Footer from '../../snippets/_footer.mdx'
## Storing Private Data Onchain
You encrypted data for one party. Now you need to share it with another — without revealing it to the world. Re-encryption makes this possible onchain.
SKALE's re-encryption primitive allows encrypted data to be forwarded across blocks and selectively shared in two ways:
1. With a viewer key (e.g. each player's poker hand is encrypted to that specific player)
2. With the public BLS key (e.g. the validators can decrypt and re-encrypt information across blocks)
The former suits selective information that needs to be shared but not acted on within smart contracts. The latter is perfect for information that needs to be stored privately onchain and used within Solidity without leaking it to the world.
### The Problem
Once data is encrypted to a specific key, standard encryption traps it. Either the holder decrypts and re-encrypts manually (exposing plaintext) or the data stays locked to the original recipient.
Onchain, this is worse. Smart contracts normally can't hold secrets — any decrypted state is visible to all validators.
### What Re-Encryption Enables
Re-encryption uses threshold cryptography to transform ciphertext encrypted to one key into ciphertext decryptable by another — the committee decrypts via CTX, then re-encrypts to the new recipient.
### Use Cases
**Onchain Information Sharing.** Medical records, financial data, or proprietary research stored encrypted onchain can be selectively shared with new parties. The original holder authorizes — the chain executes the re-encryption without seeing content.
**Private Data Delegation.** An agent holding encrypted data can delegate access to another agent without transferring the decryption key. The delegator specifies the new recipient; validators handle the cryptographic transformation.
**Confidential Tokens.** Token balances encrypted to holder keys. Transfers re-encrypt to the recipient's viewer key. The contract never sees plaintext balances — only the sender and recipient can decrypt their own data.
**Encrypted Messaging Patterns.** Messages encrypted onchain that can be forwarded to new participants. Group membership changes without decrypting message history.
### How It Works
The protocol provides two encryption modes:
* **Threshold Encryption (TE)** — `BITE.encryptTE()` encrypts data to the validator committee. Only a supermajority of validators can decrypt, triggered via a CTX callback. Think of it as a sealed envelope that only the network can open.
* **Viewer Key Encryption (ECIES)** — `BITE.encryptECIES()` encrypts data to a specific public key (secp256k1). Only the holder of the corresponding private key can decrypt — client-side, no committee required. Think of it as a sealed envelope addressed to one person.
Re-encryption bridges these. Data encrypted via TE can be decrypted through a CTX callback, then re-encrypted to any viewer key within the same callback.
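A conceptual Python sketch of that bridge, with XOR standing in for both TE and ECIES and all key names invented: committee-decryptable ciphertext goes in, viewer-key ciphertext comes out, and plaintext never leaves the callback.

```python
# Conceptual re-encryption flow: TE ciphertext -> viewer-key ciphertext.
COMMITTEE_KEY = 0x7E  # stand-in for the committee's threshold key

def encrypt_te(plaintext: bytes) -> bytes:
    return bytes(b ^ COMMITTEE_KEY for b in plaintext)

def threshold_decrypt(ciphertext: bytes) -> bytes:
    # Stand-in for the supermajority decryption triggered via CTX
    return bytes(b ^ COMMITTEE_KEY for b in ciphertext)

def encrypt_to_viewer(viewer_key: int, plaintext: bytes) -> bytes:
    # Stand-in for ECIES encryption to a recipient's public key
    return bytes(b ^ viewer_key for b in plaintext)

def on_decrypt(ciphertext: bytes, viewer_key: int) -> bytes:
    # Inside the CTX callback: decrypt, then immediately re-encrypt
    return encrypt_to_viewer(viewer_key, threshold_decrypt(ciphertext))

stored = encrypt_te(b"ace of spades")       # threshold-encrypted onchain
alice_key = 0x19
delivered = on_decrypt(stored, alice_key)   # re-encrypted in the callback
assert delivered != b"ace of spades"        # never exposed as plaintext
assert bytes(b ^ alice_key for b in delivered) == b"ace of spades"
```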

The committee holds threshold shares of the private key. No single validator can decrypt alone. A supermajority must collaborate to decrypt via CTX, and the contract's `onDecrypt` callback handles re-encryption to the recipient's viewer key.
```solidity
// Store threshold-encrypted data
bytes memory encryptedData = BITE.encryptTE(BITE.ENCRYPT_TE_ADDRESS, plaintext);
// Later: submit CTX to decrypt and re-encrypt in callback
BITE.submitCTX(BITE.SUBMIT_CTX_ADDRESS, gasLimit, encryptedArgs, plaintextArgs);
// The onDecrypt callback then calls BITE.encryptECIES() with the viewer key
```
*[source](https://github.com/skalenetwork/bite-solidity) — [docs](https://docs.skale.space/developers/bite-protocol/conditional-transactions)*
### Real Example: Confidential Poker
The [live poker demo](https://confidential-poker.vercel.app) demonstrates re-encryption in practice — [read the full walkthrough here](/blog/programmable-privacy-rebuilding-poker). Here's how it works:
When cards are dealt, each hole card is:
1. Threshold encrypted (`encryptTE`) for the showdown reveal
2. ECIES encrypted (`encryptECIES`) to the player's viewer key for client-side viewing
During the hand, only the player can see their hole cards — decrypted via their private key. At showdown, a CTX decrypts the threshold-encrypted version to evaluate the winner. No single entity ever sees all cards unencrypted until the protocol triggers reveal.
This pattern applies beyond poker:
* Private document sharing with viewer-key access
* Confidential escrow where terms encrypt to mediator and participants
* Selective disclosure for compliance (re-encrypt to auditor keys)
### Re-Encryption + CTXs
Re-encryption becomes more powerful combined with Conditional Transactions. A contract can:
1. Store threshold-encrypted data
2. Wait for conditions (time, payment, approval)
3. Trigger a CTX that re-encrypts to a new recipient
4. The new recipient receives without ever touching plaintext onchain
This is "encrypted info block over block" — data flows through the chain, transforming in encrypted form, only decrypting at final destinations.
### Building with Re-Encryption
If you're building applications that need private data sharing — onchain info sharing, data delegation, confidential tokens — re-encryption is live on SKALE chains. No external infrastructure. No TEEs. Standard Solidity patterns.
Reach out if you want to better understand how re-encryption and programmable privacy could help your business.
***
Sources
1. SKALE BITE Documentation, [https://docs.skale.space](https://docs.skale.space)
2. SKALE BITE Solidity Library, [https://github.com/skalenetwork/bite-solidity](https://github.com/skalenetwork/bite-solidity)
3. Confidential Poker Implementation, [https://github.com/TheGreatAxios/confidential-poker](https://github.com/TheGreatAxios/confidential-poker)
import Footer from '../../snippets/_footer.mdx'
## I Built Poker That Actually Works Onchain
Poker failed onchain because cards were visible. Every attempt — state channels, commit-reveal schemes, TEEs — compromised on speed, trust, or decentralization. Threshold encryption fixes this.
The live demo is running now: [confidential-poker.vercel.app](https://confidential-poker.vercel.app). AI agents play Texas Hold'em with encrypted hole cards, threshold-encrypted community cards, and onchain settlement. This is what programmable privacy enables — fair gaming with no trusted intermediaries.
### Why Poker Needs Privacy
Texas Hold'em has three information layers:
1. **Hole cards** — private to each player until showdown
2. **Community cards** — public after each dealing phase
3. **Actions** — public (bets, folds, raises)
On a standard blockchain, storing hole cards onchain means publishing them. Commit-reveal schemes add friction — players must come back online to reveal. State channels require liveness guarantees and dispute periods.
Dual encryption solves this:
* **Threshold encryption (TE)** — Cards encrypt to the committee for later decryption
* **ECIES encryption** — Cards encrypt to each player's viewer key for immediate viewing
Hole cards are dealt once, encrypted both ways. Players see their cards immediately via ECIES decryption. The contract holds TE-encrypted versions for showdown. Community cards deal via CTX callbacks — encrypted until the protocol reveals them.
### Technical Implementation
#### Smart Contracts
Two contracts run on SKALE Base Sepolia:
* **PokerGame.sol** — Game state, betting rounds, hand orchestration
* **HandEvaluator.sol** — Onchain hand ranking and winner determination
The core dealing function:
```solidity
function dealHoleCards(address player, bytes32 cardHash) external {
    // Threshold encrypt for showdown reveal
    bytes memory teEncrypted = BITE.encryptTE(cardHash, committeePublicKey);

    // ECIES encrypt to player's viewer key
    bytes memory eciesEncrypted = BITE.encryptECIES(cardHash, playerViewerKeys[player]);

    // Store both versions
    playerCards[player] = PlayerCards(teEncrypted, eciesEncrypted);
}
```
#### Community Card Dealing
The flop, turn, and river deal via CTX callbacks:
```solidity
function dealFlop() external {
    // Three cards, encrypted to committee
    bytes[3] memory encryptedCards = encryptCards(3);

    // Submit CTX for batch decryption
    BITE.submitCTX(
        BITE.SUBMIT_CTX_ADDRESS,
        gasLimit,
        abi.encode(encryptedCards),
        abi.encode(uint8(3)) // card count
    );
}

function onDecrypt(bytes[] calldata decryptedArguments, bytes[] calldata plaintextArguments) external {
    // Executed in next block with decrypted cards
    uint8[52] memory liveDeck = abi.decode(decryptedArguments[0], (uint8[52]));
    communityCards[0] = liveDeck[deckPosition];
    communityCards[1] = liveDeck[deckPosition + 1];
    communityCards[2] = liveDeck[deckPosition + 2];
    emit FlopDealt(communityCards[0], communityCards[1], communityCards[2]);
}
```
Same pattern for turn (1 card) and river (1 card). Each phase queues in one block, decrypts and executes in the next.
*[source](https://github.com/TheGreatAxios/confidential-poker/blob/main/packages/contracts/src/PokerGame.sol)*
### Architecture Overview

The server handles game orchestration because card dealing requires sequential operations that don't fit single transactions. The contracts hold truth — game state, encrypted cards, betting balances. The server translates between player actions and onchain state.
Six AI agent personalities run in the server layer — each with distinct aggression levels, bluff frequencies, and playing styles. They evaluate hand strength, pot odds, and opponent patterns through a unified decision engine.
### Dual Encryption in Practice
When hole cards deal:
1. Server generates random cards (52-card deck, shuffled via SKALE RNG)
2. Cards encrypt via `BITE.encryptTE()` — committee decrypts at showdown
3. Cards encrypt via `BITE.encryptECIES()` — player decrypts client-side
4. Both versions stored onchain in `playerCards` mapping
Players see their cards immediately via client-side ECIES decryption. The contract can't see hole cards — only the TE-encrypted blobs. At showdown, a CTX triggers committee decryption, `onDecrypt()` evaluates hands, and winners receive payouts.
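The `onDecrypt()` showdown step feeds decrypted cards into the onchain evaluator. As a rough illustration of what hand ranking involves (nothing like the full `HandEvaluator.sol` logic, and the card encoding here is my own assumption), a simplified TypeScript analogue:

```typescript
// Assumed encoding: cards 0-51 where rank = card % 13 (0 = deuce ... 12 = ace)
// and suit = floor(card / 13). Only a few hand classes are detected here.
type Rank = "high-card" | "pair" | "two-pair" | "trips" | "flush";

function rankFive(cards: number[]): Rank {
  const ranks = cards.map((c) => c % 13);
  const suits = cards.map((c) => Math.floor(c / 13));

  // All five cards share a suit
  if (suits.every((s) => s === suits[0])) return "flush";

  // Count how many times each rank appears
  const counts = new Map<number, number>();
  for (const r of ranks) counts.set(r, (counts.get(r) ?? 0) + 1);
  const tallies = [...counts.values()].sort((a, b) => b - a);

  if (tallies[0] === 3) return "trips";
  if (tallies[0] === 2) return tallies[1] === 2 ? "two-pair" : "pair";
  return "high-card";
}
```

A full evaluator also handles straights, full houses, quads, and kicker comparison, which is exactly the part that is expensive to express in Solidity.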
This pattern — dual encryption with different reveal conditions — applies beyond gaming:
* **Sealed-bid auctions** — Bidders see their own bids via ECIES; contract reveals all via TE at close
* **Private voting** — Voters confirm their vote via ECIES; results reveal via TE after deadline
* **Time-locked data** — Immediate access via ECIES; public reveal via TE at trigger time
### What This Proves
Confidential poker demonstrates that programmable privacy works at scale:
* **Real-time gameplay** — No commit-reveal delays, no dispute periods
* **Fair dealing** — Cards encrypted onchain, decrypted by protocol rules
* **AI agents** — Six distinct personalities making encrypted decisions
* **Live demo** — Playable now, not theoretical
The cryptography is production-ready. The UX matches Web2 expectations. The agents operate autonomously with private state.
This is the foundation for fair onchain gaming — not just poker, but any game with hidden information. Bridge, Hearts, Mahjong, strategy games with fog of war. All become possible when encrypted state is a native primitive.
### Repository and Demo
* **Live Demo:** [confidential-poker.vercel.app](https://confidential-poker.vercel.app)
* **Stack:** Vite, React 19, Hono, Foundry, Programmable Privacy
* **Status:** Complete and deployed
* **Want to play?** DM me on X [@thegreataxios](https://x.com/thegreataxios) for tokens to try it out
The repository is open source. If you're building confidential games, agent systems, or encrypted data flows, the patterns here are ready to adapt.
I'm actively building on SKALE daily. DM me if you want to walk through the implementation, integrate threshold encryption into your game, or explore confidential agent patterns.
***
Sources
1. SKALE BITE Documentation, [https://docs.skale.space](https://docs.skale.space)
2. SKALE BITE Solidity Library, [https://github.com/skalenetwork/bite-solidity](https://github.com/skalenetwork/bite-solidity)
3. Confidential Poker Source Code, [https://github.com/TheGreatAxios/confidential-poker](https://github.com/TheGreatAxios/confidential-poker)
***
import Footer from '../../snippets/_footer.mdx'
## Proof-of-Encryption in the Cloud
This article explores the BITE Protocol, which implements Proof of Encryption using threshold cryptography and multi-party signatures to enable fully encrypted blockchain transactions resistant to MEV attacks. Unlike traditional trusted execution environments, BITE embeds encryption directly into consensus through provable mathematics, requires zero Solidity changes, and offers cloud API accessibility for encrypted transactions from any programming language, with the FAIR L1 blockchain pioneering the implementation ahead of broader SKALE Chain adoption.
**BITE** is an innovative protocol from the SKALE Network ecosystem, launching first on the new **FAIR Layer 1 blockchain**. Designed for seamless integration and massive potential, BITE enables a wide range of critical functions—ushering in a new era of encrypted, private, and MEV-resistant blockchain usage.
The following post explores the key benefits of BITE, FAIR, and the upcoming SKALE Network upgrade, including a **unique way to attain Proof of Encryption (PoE) with zero changes required from developers**.
### The Benefits of BITE
While some of these benefits can arrive sooner depending on SDK implementation and adoption, I’ve organized them into **short**, **mid**, and **long-term** buckets.
#### 🟢 Short Term
* Fully encrypted transactions with 100% protection against MEV, including back-running
* Onchain traditional finance tools: private and FAIR TWAPs, DCA, and market orders
* Censorship resistance
* Simple integration with **zero changes to Solidity**
#### 🟡 Mid Term
* AI-powered onchain trading via enhanced encrypted financial tools
* End-to-end encryption with re-encryption inside a TEE (Trusted Execution Environment), enabling data forwarding to specific parties for private decryption
#### 🔵 Long Term
* Fully encrypted private state
* Onchain healthcare and banking use cases
* Fully encrypted **parallel execution** within the EVM
***
### How Proof of Encryption Works
**Proof of Encryption (PoE)** embeds encryption into the consensus layer of a blockchain. Unlike Layer 2 solutions (e.g. Unichain) that use TEEs in isolation, PoE **does not depend on decentralization alone**—it relies on **provable mathematics**.
> The SKALE Network core team has over seven years of experience building the world’s fastest leaderless BFT consensus. They’ve combined real-world application with rigorous mathematical proofs to pioneer PoE.
#### 🧠 How It Works
PoE uses:
* **Threshold schemes** +
* **BLS threshold encryption** +
* **Multi-party threshold signatures** +
* **Economic PoS security**
This combo allows encrypted transaction propagation, leaderless/asynchronous consensus, and decryption via supermajority—all secured cryptographically and economically.
The result? **Private, MEV-resistant, decentralized consensus**—unlocking trillions in new financial use cases.
***
### How to use BITE
**BITE Protocol** is the implementation of PoE when used with a compatible blockchain like FAIR or (soon) SKALE Chains.
The best part? **Zero changes to your Solidity contracts**.
#### Example Using BITE TypeScript/JavaScript Library

```bash
npm add @skalenetwork/bite
```
The library makes it easy to encrypt both transaction data and the `to` address in just a few lines of code.
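Conceptually, the client-side step looks like the sketch below. To be clear, this is not the `@skalenetwork/bite` API surface; the encryptor here is a stub standing in for BLS threshold encryption to the chain's committee key, just to show which transaction fields get hidden.

```typescript
import { randomBytes } from "node:crypto";

// Minimal transaction shape for illustration
interface Tx { to: string; data: string; value: bigint }

// Stub: real encryption is BLS threshold encryption to the committee key,
// so no single node can decrypt before consensus fixes the ordering.
function encryptForCommittee(plain: string): string {
  return "0x" + Buffer.from(plain).toString("hex") + randomBytes(4).toString("hex");
}

// What the BITE client conceptually does: replace `to` and `data`
// with ciphertext before the transaction is signed and broadcast.
function encryptTransaction(tx: Tx): Tx {
  return {
    ...tx,
    to: encryptForCommittee(tx.to),    // destination hidden from the mempool
    data: encryptForCommittee(tx.data) // calldata (intent) hidden as well
  };
}

const swap: Tx = { to: "0xRouter", data: "0xswapCalldata", value: 0n };
const encrypted = encryptTransaction(swap);
// `encrypted` reveals neither the target contract nor the call intent;
// consensus decrypts only after ordering is final, blocking front-running.
```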
***
#### What's with the Cloud?
Over the last several years of working in blockchain, I’ve realized one thing: **an innovative product is only useful if it’s easy to implement**. That’s why I collaborated with [@0xRetroDev](https://x.com/0xRetroDev) to build a simpler, cloud-based design for broader adoption.
#### Background
If you’ve heard of **Flashbots**, **CoW Swap**, or **Jito**, you know they’re tied to **MEV** (Maximal Extractable Value). If not, here’s a simplified breakdown:
* **MEV** is profit gained by reordering or inserting transactions.
* **Bad MEV** = front-running, sandwich attacks, back-running.
* **Good MEV** = arbitrage, liquidations that help price stability or protocol solvency.
* **Some firms (e.g. Jito)** make validators more profitable via MEV.
* **Others (e.g. CoW Swap)** attempt to *protect users* from MEV.
> **Bottom line:** MEV is mostly harmful and extracts value away from users.
#### Simplifying Adoption
Widespread usage builds a **network effect**. Just as Jito dominates Solana validators and MEV-blocker RPCs like CoW Swap are spreading, we aim for BITE to be universally accessible—across stacks, devices, and languages.
#### Phase I: BITE API
A PoC implementation is already live thanks to [@0xRetroDev](https://github.com/0xRetroDev):\
🔗 [BITE API GitHub Repo](https://github.com/0xRetroDev/bite-api)
This API allows any transaction to be encrypted by calling the endpoint. It’s ideal for:
* Environments without native BITE SDKs
* Languages outside JavaScript/TypeScript
* Setting up early MPC experiments or agentic flows
> ⚠️ **Note:** Because `eth_estimateGas` can't work properly with encrypted transactions, this can unintentionally leak user intent if used via 3rd-party RPCs.
A production-ready version will soon be hosted via [Eidolon.gg](https://eidolon.gg) for the FAIR + SKALE Communities.
***
#### Phase II: Private BITE API
To fully solve the **privacy problem**, we propose a unique infrastructure setup modeled on how FAIR and SKALE operate.
##### Infrastructure
1. Run a TEE (Trusted Execution Environment)
2. Generate a private key *inside* the TEE
3. Expose the **public** key via API
##### SDK Flow
4. Client requests public key
5. Client encrypts transaction payload using public key
6. TEE decrypts using internal private key
7. TEE re-encrypts using FAIR/SKALE BLS committee key
8. Returns encrypted payload to client
9. Client signs + broadcasts
This allows **any client**—C++, Kotlin, IoT, etc.—to securely use BITE without needing full Web3 tooling or native SDK support.
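Steps 1 through 6 of that flow can be exercised locally. Below is a sketch using RSA-OAEP from Node's crypto module as a stand-in for the TEE's internal keypair; the real design would then re-encrypt the recovered payload to the FAIR/SKALE BLS committee key (steps 7 and 8), which is omitted here.

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt, constants } from "node:crypto";

// Steps 1-2: the TEE generates a keypair internally; only the public key
// ever leaves it. (Here everything runs in-process for illustration.)
const tee = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Steps 4-5: the client fetches the public key and encrypts its payload
const payload = Buffer.from(JSON.stringify({ to: "0xContract", data: "0xCalldata" }));
const encryptedForTee = publicEncrypt(
  { key: tee.publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
  payload
);

// Step 6: only the TEE, holding the private key, can recover the payload
// before re-encrypting it to the BLS committee key and returning it (7-8)
const recoveredInsideTee = privateDecrypt(
  { key: tee.privateKey, padding: constants.RSA_PKCS1_OAEP_PADDING },
  encryptedForTee
);
```

Because the private key never leaves the enclave, a C++, Kotlin, or IoT client only needs generic public-key encryption to participate, no Web3 tooling required.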
Yes, there are risks and trade-offs here. But I believe this is a great **early-stage design** for broader PoE adoption.
***
### 👋 About Me
Hi, I’m **Sawyer**, a software engineer, developer relations lead, and operational consultant with a background in healthcare and blockchain.
import Footer from '../../snippets/_footer.mdx'
## Scaling Authority on the EVM
This technical guide addresses the scaling challenges that authoritative servers face in blockchain applications due to the EVM's sequential nonce requirement. By implementing a Signing Pool architecture with HD-wallet-derived child signers, developers can avoid nonce collisions and scale from a handful of concurrent requests to hundreds per second, complete with automatic gas balance management and dynamic signer selection for high-throughput applications on zero-gas-fee networks like SKALE.
The [Ethereum Virtual Machine](https://ethereum.org/en/developers/docs/evm/) (EVM) is a distributed, decentralized environment that executes code across all nodes in an EVM network, like [Ethereum](https://ethereum.org/) and [SKALE](https://skale.space/). To ensure that transactions cannot be replayed, the EVM uses a nonce value per account.
The account — often known as the wallet or the private key — must send transactions with sequential nonces for execution to succeed. This is a direct limitation when designing an application's architecture. Since blockchain is only one piece of a broader architectural stack for many teams, it's no surprise that many developers lean on centralized services operated by their own team to "manage" their application. These centralized services are best referred to as **Authoritative Servers**.
#### Authoritative Servers
Servers that help manage and maintain the state of an application are a necessary evil. There are exceptions, where an application can build a suite of smart contracts that doesn't rely on an external manager; in most cases, however, the technical overhead is too large.
Running one or many servers to manage authority within a game brings its own set of complications. Traditional CRUD APIs built with Python + Flask or Node.js + Express typically fall prey to a number of issues, including race conditions, weak security, and rate limits. Blockchain CRUD APIs, through a combination of 3rd-party resources (e.g., the blockchain itself) and race conditions, hit an additional problem: scaling accounts and their nonces.
* **Race Conditions:** A race condition is a software error that can occur when multiple processes or threads access the same data without coordination.
* **Lack of Security:** APIs should require some form of authentication. Oftentimes blockchain engineers don't build authentication and authorization layers around user wallets, which opens the door to spam against various routes and can even drain server gas tokens and funds.
* **Rate Limits:** When linking to 3rd-party services, whether a cloud database or a blockchain, rate limits during surges in platform usage can cause real headaches.
* **Unscalable Nonces:** The blockchain-specific issue occurs on EVM chains, which require sequential nonces. During contract execution, the next nonce is usually set by the pending value from the chain itself. A single account trying to handle hundreds or thousands of requests at the same time can be overloaded, causing nonces to collide.
#### Pitfalls of a Single Account
The use of a single account to manage a server is very common, however, not designed for scalability. Imagine the following Node.js + Express controller:
```ts
// controller.ts
import { Request, Response } from "express";
import { Contract, JsonRpcProvider, Wallet } from "ethers";

type RequestBody = {
  gameId: string;
  userWalletAddress: `0x${string}`;
}

export default async function(req: Request, res: Response) {
  // Access to gameId str and userWalletAddress ethereum address
  const { gameId, userWalletAddress }: RequestBody = req.body;

  try {
    // Provider connects to SKALE Calypso Mainnet
    const provider = new JsonRpcProvider("https://mainnet.skalenodes.com/v1/honorable-steel-rasalhague");

    // Wallet is for the server (one key) and uses the Calypso provider
    const wallet = new Wallet("...privateKey", provider);

    // Contract connects to a contract on-chain that stores on-chain game analytics
    // This contract uses the wallet and provider above
    const contract = new Contract("0x...", [...abi], wallet);

    await contract.logPlay(gameId, userWalletAddress);

    return res.status(200).send("Event Logged");
  } catch (err) {
    // Avoid sending private information to the client
    return res.status(500).send("Internal Server Error");
  }
}

// router.ts
import controller from "./controller";
import { Router } from "express";

const router = Router();

router.post("/games/play", controller);

export default router;
```
In the above code, a single wallet executes transactions for every request that hits the `POST /games/play` endpoint. If multiple requests come in at the same time, the blockchain requests will begin to error out, since the *pending nonce* would be the same for each of them, at which point only the first would succeed.
One solution that has worked well for many of the projects I've worked with is a queue system. This keeps nonces sequential, but it slows down responses to the client under heavy load.
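A minimal version of that queue approach: chain every send onto a single promise so transactions from one account are strictly serialized, trading latency under load for correct nonces.

```typescript
// Serialize all sends through one promise chain so the single account's
// nonce only ever advances by one in-flight transaction at a time.
class TxQueue {
  #tail: Promise<unknown> = Promise.resolve();

  enqueue<T>(send: () => Promise<T>): Promise<T> {
    const next = this.#tail.then(send, send); // run even after a failed send
    this.#tail = next.catch(() => undefined); // keep the chain alive on errors
    return next;
  }
}

// Usage sketch: handlers enqueue their sends instead of calling the wallet directly
const queue = new TxQueue();
const order: number[] = [];
await Promise.all(
  [1, 2, 3].map((i) =>
    queue.enqueue(async () => {
      order.push(i); // with unserialized sends these could race on the nonce
      return i;
    })
  )
);
```

Callers still `await` their own result, but internally every send waits for the previous one to settle, which is exactly why responses slow down during surges.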
#### Upsides of a Pool
The concept of a **Pool of Signers** came about while I was doing solutions architecture for a Web2-to-Web3 game transition. The game at the time was fully operational on Android and had a backend already built as part of its Web2 v2 build. Moving into the v3 build, the goal was to bring greater on-chain visibility into the actual game and then use that visibility to manage and validate incentives.
During the initial design of the v3, it became clear that one of the biggest limitations was the existing and actively growing user-base. Since critical actions for the user were gated by the in-game token, it made sense to push as many of those actions as possible to the client for greater scalability and utilization of the blockchain. However, the server acted as a gateway to Web3 for guest accounts as well as offering critical authority based on more traditional API calls from client to server.
It became clear that a single signer on a single server just would not be efficient. From there, two different designs came about. The first was to use a pool of multiple signers to handle higher load, allowing each account in the pool to send one transaction at a time before the next signer was selected. With some strategic decisions to abstract the Signing Pool into a separate resource that all the controllers (or underlying services) could call into, scalability went from a few requests per second to hundreds of requests per second with no issue.
```ts
// engineManager.ts
import { HDNodeWallet, JsonRpcProvider, TransactionReceipt, TransactionRequest, Wallet } from "ethers";

type InternalSigner = {
  wallet: Wallet;
  nonce: number;
  active: boolean;
  checks: {
    gas: boolean;
  }
}

class SigningManager {
  #currentSignerIndex = 0;
  #baseWallet: HDNodeWallet;
  #rpcUrl: string;
  #signers: { [key: number]: InternalSigner } = {};

  protected baseProvider: JsonRpcProvider;

  constructor(seed: string, signerCount: number = 1, rpcUrl: string) {
    this.#rpcUrl = rpcUrl;
    this.baseProvider = new JsonRpcProvider(rpcUrl);
    this.#baseWallet = Wallet.fromPhrase(seed, this.baseProvider);
    this._initializeWallets(signerCount);
  }

  private async _initializeWallets(signerCount: number) {
    let addresses = [];
    for (let i = 0; i < signerCount; i++) {
      const _wallet = new Wallet(this.#baseWallet.deriveChild(i).privateKey, new JsonRpcProvider(this.#rpcUrl));
      this.#signers[i] = {
        wallet: _wallet,
        nonce: await _wallet.getNonce(),
        active: true,
        checks: {
          gas: false // not yet flagged by the gas check
        }
      };
      addresses.push(_wallet.address);
    };
    if (process.env.NODE_ENV === "development") console.log("Signing pool wallets:", addresses.join(",\n"));
  }

  public async sendTransaction(request: TransactionRequest): Promise<TransactionReceipt | null> {
    const signerIndex = this.selectSignerIndex();
    const signer = this.#signers[signerIndex];
    const balance = await this.baseProvider.getBalance(signer.wallet.address);

    // Deactivate signers that can't cover the transaction and retry with the next one
    if (balance === BigInt(0) || balance < BigInt(request.value ?? 0)) {
      this.#signers[signerIndex] = {
        ...signer,
        active: false,
        checks: {
          gas: true // flagged: this signer needs a gas top-up
        }
      };
      return await this.sendTransaction(request);
    }

    const tx = await signer.wallet.sendTransaction({
      gasPrice: 100_000, // set for SKALE to maintain lowest gas consumption
      ...request
    });

    return await tx.wait();
  }

  private get signerCount() {
    return Object.keys(this.#signers).length;
  }

  private selectSignerIndex() {
    const signerIndex = this.#currentSignerIndex;
    if (signerIndex + 1 === this.signerCount) {
      this.#currentSignerIndex = 0;
    } else {
      this.#currentSignerIndex++;
    }
    return signerIndex;
  }
}

// Seed and RPC URL come from the environment; size the pool to your expected load
export default new SigningManager(process.env.SIGNER_SEED!, 10, process.env.RPC_URL!);
```
```ts
// controller.ts
import { Request, Response } from "express";
import { Interface } from "ethers";
import SigningManager from "./engineManager";

type RequestBody = {
  gameId: string;
  userWalletAddress: `0x${string}`;
}

// The controller only needs the ABI to encode calldata; signing is delegated
const gameInterface = new Interface([...abi]);

export default async function(req: Request, res: Response) {
  // Access to gameId str and userWalletAddress ethereum address
  const { gameId, userWalletAddress }: RequestBody = req.body;

  try {
    await SigningManager.sendTransaction({
      to: "0x...contractAddress",
      data: gameInterface.encodeFunctionData(
        "logPlay",
        [gameId, userWalletAddress]
      )
    });

    return res.status(200).send("Event Logged");
  } catch (err) {
    // Avoid sending private information to the client
    return res.status(500).send("Internal Server Error");
  }
}
```
```ts
// router.ts
import controller from "./controller";
import { Router } from "express";

const router = Router();

router.post("/games/play", controller);

export default router;
```
The addition of the signing manager not only makes the controllers cleaner, but allows a single engine manager to scale to a large number of signers (HD derivation supports up to 2³¹ child keys per path level), subject of course to local resources on the machine. Every signer in the pool must hold the necessary amount of gas; especially when designing solutions like this on SKALE, you can have contract calls top up the signers on every transaction so they never run out.
The solutions listed above aren't for every developer. You can modify this in a number of ways, including adding different managers per route for maximum scalability. You should also use a different seed per server to avoid conflicting nonces across multiple machines.
Additionally, it is important to note that this solution does **NOT** use Account Abstraction/ERC-4337 in any way. That functionality can be useful for client operations, but this design is for secured server-side authority. The code examples above show how to design a highly scalable authority layer for your next Web3 project.
***
For builders interested in taking advantage of this, make sure to head over to [https://docs.skale.space](https://docs.skale.space) and start building now.
import Footer from '../../snippets/_footer.mdx'
## SKALE Governance Update - July 7, 2025
This governance update examines SKALE Network's remarkable achievement of surpassing 1 billion cumulative transactions while maintaining zero gas fees and instant finality. The analysis covers the SKALE DAO's hybrid governance model combining onchain economic voting with offchain technical consensus, and explores how the upcoming FAIR L1 blockchain addresses critical ecosystem challenges by enabling permissionless DeFi deployment and reducing operational costs through a synergistic gas-fee architecture that captures value within the SKALE ecosystem.
I've been building in the SKALE Ecosystem for somewhere in the range of 4-5 years now.
In that time, I've worked with a lot of projects in the ecosystem and Web3 as a whole.
I have prepared a quick update from my perspective regarding SKALE, active governance initiatives, and the new L1 coming to support the SKALE Ecosystem from the SKALE team called FAIR.
You can read the forum post [here](https://forum.skale.network/t/skale-governance-update-with-a-note-on-fair/658) or read directly here on my blog.
SKALE is one of the most innovative blockchain networks in the world. FAIR is designed to help grow the SKALE ecosystem. Adding a real Layer 1 network to the SKALE ecosystem—if executed correctly—will create a synergistic effect. It also allows the SKALE project to continue its history of innovation while bringing a critical component the network has struggled to attract: decentralized value.
### Background
To the decentralized SKALE Network Community of SKL token holders, SKL delegators, validators, builders, core developers, and users:
SKALE is an open-source, community-driven project that has operated with a clear North Star for over seven (7) years: bringing the power of Ethereum to billions.
Over the years, SKALE has achieved a variety of incredible innovations, including but not limited to:
* The launch of the world’s first natively EVM multi-chain scaling solution in 2020, **the SKALE Network**
* The launch of the world’s first network of EVM blockchains capable of interchain communication through the **SKALE V2** upgrade
* The launch of **SKALE V3** in 2024, which doubled throughput (TPS) and cut block mining time in half, making the already performant network even faster
All of these key industry-changing events and upgrades were done across a decentralized network of operators running hundreds of nodes. Additionally, SKALE brought the world a variety of innovations that the rest of the blockchain space has struggled to replicate without centralization or high fees such as:
* Trusted Execution Environment (TEE)-based security
* Onchain Machine Learning
* Provable Random Number Generation
* Decentralized file storage and Content Delivery Network (CDN)
* Decentralized TEE-protected oracle
* Multi-transaction Mode
* Decentralized and fully autonomous bridging
Arguably the most important and well-known innovation that SKALE has brought to the world—and continues to dominate with to this day—is the zero gas fee model, backed by high collateral, high performance, and a sophisticated economic model.
### Update on SKALE
The past year has been an explosive period of growth and excitement for the SKALE Network.
On the ecosystem side, SKALE hit **1B+ cumulative transactions and 100M+ transactions in a single month** ([source](https://skale.space/blog/skale-ecosystem-recap---april-2025)). Gaming adoption continues to flourish, with launches of amazing games like Gunnies and Data2073 being highly successful, as well as established SKALE Network games like World of Dypians, Pixudi, and BitHotel continuing to grow and push more and more on SKALE. SKALE also rolled out a $2M Indie Game Accelerator, became the first and only blockchain in Unity’s Publisher Program, and more recently onboarded projects in other key areas like AxonDAO—a unique DeSci project focused on enhancing the value and privacy of health data—and many others like XO (AI) and ReneVerse (Advertising).
On the technical front, SKALE announced **BITE Protocol**. BITE, which stands for Blockchain Integrated Threshold Encryption, is the basis for the future of a private and secure EVM that sacrifices nothing in terms of performance or decentralization. This shift will give developers access to trustless privacy by default with all the tools they know and use.
SKALE has also made major technical strides with the announcement of FAIR, the world’s first MEV-resistant Layer 1 blockchain that brings encrypted, asynchronous execution to the EVM. It will pioneer the use of BITE Protocol and set the stage for SKALE Chains to adopt and upgrade to the FAIR SDK.
Supporting tools and infrastructure like the SKALE Portal—which recently upgraded to v4.1—and the SKALE Explorer also saw major UX and infrastructure upgrades alongside a major overhaul to the [SKALE Network Documentation](https://docs.skale.space) from a combination of network constituents, including core developers and 3rd-party contributors.
With FAIR unlocking DeFi and liquidity for SKALE while also offering a key enhancement for network operations, SKALE is well positioned to be the most dominant network of blockchains in the world.
### Refresher on the SKALE DAO and SKALE Network Onchain Governance
The SKALE DAO design mirrors the most successful Layer 1 ecosystems in the world, like Ethereum and Solana, which both utilize an offchain forum and development process backed by various entities, core teams, foundations, and other 3rd-party contributors to develop the network.
One difference these projects have from SKALE is that Ethereum and Solana do not have any onchain governance. All network decisions are made ultimately by those who can execute pull requests in GitHub. There is no voting, just conversation and ultimately a decision made by project leaders and contributors.
The SKALE DAO further decentralizes the above process by allowing the SKL delegators to directly collaborate on the economic direction of the network by voting on network economic parameters such as inflation, slashing, chain pricing, etc. It empowers SKL token holders—specifically delegators—to shape the network’s economic direction by proposing initiatives and voting directly on key economic parameters.
It is very important to note that many project decisions that lie outside of the direct economic factors as mentioned above are intentionally excluded from onchain voting and are instead determined through conversations and discussions amongst a large group of stakeholders, concluding with an offchain consensus. Said otherwise, SKALE makes decisions outside of direct economic factors in the same way Ethereum and Solana make decisions. Decisions involving roadmap, product development, engineering project planning and prioritization, grants, marketing strategy, and business development continue to fall under the purview of key network contributors such as validators, dApp developers, and core team contributing entities like SKALE Labs.
You can read more about the DAO governance here:
* [https://snapshot.box/#/s\:skale.eth/proposal/0xebbc76cf6bd1afd7e1271f4339c7c04703dbe8dda78b1a731ffaf126772c0051](https://snapshot.box/#/s\:skale.eth/proposal/0xebbc76cf6bd1afd7e1271f4339c7c04703dbe8dda78b1a731ffaf126772c0051)
* [https://forum.skale.network/t/a-proposal-for-skale-networks-on-chain-governance-system/447](https://forum.skale.network/t/a-proposal-for-skale-networks-on-chain-governance-system/447)
A good example of this in action is comparing chain pricing decisions and recent product roadmap decisions.
**Chain Pricing**: The core team received a number of requests from key stakeholders to increase pricing of chains to capture more economic value. There was discussion in the forum followed by many discussions between validators, dApp developers, and the core team. Ultimately, a governance proposal was formally submitted and voted on, and the specific outcome was an economic change within a smart contract that changed pricing.
**Broader Roadmap**: Conversely, product roadmap decisions are made in the same manner Ethereum and Solana make decisions—not by onchain voting. In the case of FAIR, many key stakeholders, including validators, developers, and stakers, brought forward feedback to core contributors that SKALE needed to capture more economic value. A consistent idea brought forward was launching a SKALE ecosystem Layer 1 chain. This was publicly discussed on the forum last November: [https://forum.skale.network/t/ideas-from-the-community-the-evolution-of-skale/533](https://forum.skale.network/t/ideas-from-the-community-the-evolution-of-skale/533).
Based on the positive feedback, the core contributors had many discussions with dApp devs, validators, stakers, infrastructure partners, and more. The result of these discussions was that the roadmap should include a Layer 1 chain—but it would need to be a true L1 chain in order to give the ecosystem a real opportunity to capture TVL. This meant that the L1 would need its own native token and could not use the bridged SKL token as its genesis token. This is because the highest point of security of the L1 would be the bridge and not the blockchain—if you hacked the bridge, then the entire chain would be compromised.
A new idea was then brought forward to make the new L1 a dual-token network. This would increase the utility of the SKL token through burning functions in the L1 while enabling the L1 to truly be an L1 that is secured by a native token. This premise was then discussed by numerous stakeholders and contributors, more feedback was integrated, and it was then added to the roadmap and announced in June. It was also announced with the caveat that any changes needing to be made to the SKALE Network smart contracts and core economic functions would first need to be ratified by an onchain vote before being finalized.
### SKALE DAO Initiatives
The SKALE DAO is actively exploring a number of key initiatives, which I’ll outline here:
**SIP-3 Performance Chains**: Already out in the open, SIP-3 is very exciting, and I believe it is nearly ready to bring to the DAO. I’m working with the SKALE team and various potential chain buyers to ensure we are coming in at a price that is competitive with the broader market while also ensuring that validators are properly compensated for the security they bring—compared to Layer 2s and Layer 3s, which provide no economic security or decentralization.
**FAIR**: While the roadmap itself falls outside the purview of the SKALE DAO, various future actions regarding the synergy between FAIR and SKALE may come to the SKALE DAO—such as the location of key network components related to economics.
Additionally, multiple threads are already open in the SKALE Forum for features and ideas, requested by various SKALE developers and teams, that FAIR solves:
**Permissionless SKALE Performant Technology**: SKALE is incredibly performant and highly stable. Most of the developers I’ve worked with, once they start using SKALE, don’t want to leave. However, developers have been asking to do things like token launchpads, onchain messaging, permissionless DeFi and token creation—things that don’t align well with the containerized design of SKALE Chains, which are generally not designed to support the level of state that permissionless chains do.
**Enhanced DeFi with Gas Fees**: While much of blockchain—especially on the EVM—can be done without a native gas token, as proven by the SKALE Network, many DeFi protocols and key infrastructure components build on the native gas token directly, or at least by default support using it and wrapping automatically. I proposed in the forum a gas fee SKALE Chain and it was met with pretty open feedback. I think FAIR—which combines both gas fees and the permissionless blockchain layer—makes a ton of sense toward solving this proposal.
I believe the SKALE DAO and broader SKALE developer community are in a fantastic position. The next six (6) to twelve (12) months are going to be an incredible time to get involved on the forum, with the DAO, or to come build on SKALE if you are not already. While the above are some of the active initiatives I’m working through currently with various teams in the SKALE Community, there are also a number of other topics being researched based on community requests, such as offsetting inflation through SKL burning.
Interested in building on SKALE or contributing to the DAO but not sure where to start? DM me on Twitter, Telegram, or Discord at TheGreatAxios or tag me in a forum post.
### A FAIRer Future
I’m also very excited to share with you a quick update on FAIR and how it fits in with SKALE. I believe FAIR is going to be one of the most important components of helping SKALE succeed both in the short and long term. The following are my opinions from collaborating with SKALE Labs and the NODE Foundation on the design and sharing my goals for this chain:
#### Solving the DeFi Puzzle
DeFi and liquidity are both critical factors developers use to evaluate what network to build on or use. Time and time again, we see developers choose blockchains that seem promising on paper—but fall short in practice. The reason? They’re propped up by millions in inactive TVL that doesn’t actually support real usage.
When launched in 2020, SKALE chose to focus on high-performance applications with a focus on gaming due to the unique network design. While it has paid off quite well and allowed SKALE to consistently win developers building games and looking for the fastest blockchain, SKALE has struggled to attract DeFi and TVL.
FAIR is an opportunity for the SKALE Network to address the value issue by offering a fully permissionless chain where anybody can deploy tokens, RWAs (Real World Assets), stablecoins, NFTs (non-fungible tokens), and protocols—without needing to work through a SKALE Chain owner or operator to attain permission to deploy.
#### Solving SKALE’s Scaling Block
While there are many blockchains being created today, most of them are really just glorified servers running an EVM. They lack the decentralization, the fault tolerance, and the economic security collateral that a network like SKALE has to offer.
However, two areas that SKALE currently struggles with are operational costs and value capture. With SKALE Manager running on Ethereum and base liquidity being sourced from Ethereum as well, key network operation costs for users can often be $5–$15 in ETH when gas prices are low—and easily stretch into the dozens or hundreds of dollars when gas spikes during congestion. Compute-intensive operations like creating SKALE Chains, which cost over 0.5 ETH (over $1,350 at time of writing), are not feasible for more cost-effective SKALE Chains. Additionally, all gas fees spent on operations are lost to the Ethereum ecosystem and not captured by the SKALE ecosystem.
FAIR has the opportunity to solve both problems at once—with both cheaper fees (the chain will have a gas token), while also allowing a synergistic chain to capture the revenue spent instead of bleeding to a competitor.
### Conclusion
FAIR is, in my opinion, the biggest upgrade coming to SKALE since V2. It is a true technical innovation that other blockchains simply can’t compete with: the native MEV resistance and future privacy features coming with BITE Protocol—alongside the direct benefits that SKALE will attain—are exciting.
Ultimately, my goal is to help bring these ideas to life and contribute what I can to the vision. I can honestly say that everyone I’ve talked to about this is incredibly excited and uses phrases like “it makes total sense” when hearing about the FAIR and SKALE synergy.
Validators, developers, and token holders alike are excited—and I’m excited to work with everyone to bring this vision to life.
import Footer from '../../snippets/_footer.mdx'
## SKALE's Secret Sauce for Game Developers
This comprehensive guide explores how SKALE Network revolutionizes game development by providing blockchain infrastructure that replaces traditional servers, databases, and storage systems with zero-gas, high-performance alternatives. Supporting 500-13,000 transactions per second with instant finality and native random number generation, SKALE enables developers to build asset-based games, real-time multiplayer experiences, leaderboards, and autonomous worlds while eliminating the cost barriers that make blockchain impractical for gaming on other networks.
SKALE offers developers solutions to streamline game development, reduce server management costs, and remove scaling challenges. With features like zero gas fees and instant transaction finality, this blockchain network empowers you to create robust multiplayer experiences and manage in-game assets efficiently, keeping your focus on crafting exceptional games.
A big thank you to [Ben Miller, Head of Partner Marketing at SKALE Labs](https://x.com/benjmiller88) for all his incredible feedback and edits on this detailed blog.

### Game Development with SKALE
***
#### SKALE Primer
SKALE is a blockchain network home to many Layer 1 blockchains. You can think of a blockchain as a hybrid compute machine \[kind of like a cloud server] that offers compute and storage to a developer without requiring the developer to host the server directly. These machines are operated by 3rd parties known as validators. The term validator comes from “someone who validates”, i.e., the one building the chain.
If you are familiar with Web3 as a whole, a nice analogy for a SKALE Chain is **a mini Ethereum with super powers**: all the capabilities of the first programmable blockchain, plus the super powers outlined below.
##### Super Powers of a SKALE Chain
**Zero Gas Fees**
Historically, blockchains have charged fees at the transaction level—called **gas fees** or **transaction fees** depending on the ecosystem you are in—where every **write** or **action** costs the sender some amount in fees. Gas fees are one of the most common pain points for gamers, who have a very fair complaint that gas fees prohibit them from focusing on the game.
SKALE eliminates gas fees entirely with an innovative Blockchain-as-a-Service (BaaS) model. [Learn more about SKALE Chain Pricing in the SKALE Forum](https://forum.skale.network/t/enhancing-the-evolution-of-skale-chain-pricing-and-moving-into-network-growth-phase-2/468).
**Instant Finality**
Traditional databases have instant settlement and finality. *What does this mean?* When you send a **write** to \[most] databases, it takes a single cycle before it can be **read** back and is final in the database. The majority of blockchains do not work this way. They either require many cycles, or **blocks**, to become final, OR they rely on other chains to prove their finality. This makes them highly inadequate and inefficient for gaming.
SKALE Chains and the underlying consensus operate in a similar manner to traditional server and database systems. When a transaction is sent on a SKALE Chain it is immediately known whether it will be successful or not. After submission to the chain, it takes a single cycle (i.e. 1 block) to ensure that it is final and will be fully readable back. These blocks generally take around one (1) second, however, they can be even faster as a chain is put under more load.
**High Throughput**
Every computer and software system in the world has limitations. Traditional systems and blockchain systems share many similarities and differences. One of the most common similarities is that reads outnumber writes, so systems are optimized accordingly. On average, a blockchain will experience a minimum of 5–10 reads for every one write due to the number of calls made to access key information needed to execute and check a transaction.
SKALE Chains by default are highly fault tolerant by making use of 16 high performance machines operated by 3rd party validators. Each of these machines is easily capable of handling tens of thousands of concurrent requests while simultaneously building the chain through **execution of functions and storage of information** at a minimum rate of **500 calls per second** and a theoretical maximum of **\~13,000 calls per second**.
> For those familiar with blockchain, calls are equal to transactions.
**Native Random Number Generation**
Random numbers are one of the most commonly used features within software development. They are especially prominent within game development. SKALE Chains offer native RNG capabilities directly in Solidity that allow for the **infinite creation of provable random numbers** to be used for anything the developer sees fit.
Random values can be used for map creation, asset allocation, randomized selection, seed generation, and more. SKALE is the only blockchain that offers RNG functionality directly at the chain level **for free**. Other chains rely on 3rd party services like Chainlink or Pyth, which can be highly centralized, slow, costly, and complicated to develop with.
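To illustrate the “infinite creation of provable random numbers” idea, here is a minimal TypeScript sketch that hash-chains a single seed into an unbounded, re-derivable stream of values. The seed value below is a made-up placeholder standing in for a random value supplied by SKALE’s native RNG, and the use of SHA-256 is illustrative, not a claim about how the chain derives its randomness:

```typescript
import { createHash } from "node:crypto";

// Placeholder for a 32-byte random value fetched from the chain's RNG.
const onChainSeed =
  "9f2b4c6d8e0a1b3c5d7e9f0a2b4c6d8e9f2b4c6d8e0a1b3c5d7e9f0a2b4c6d8e";

// Derive an unbounded stream of values by hashing the seed with an index.
// Anyone holding the seed can re-derive the exact same stream, so every
// value is verifiable after the fact.
function derive(seed: string, index: number): bigint {
  const digest = createHash("sha256")
    .update(seed)
    .update(String(index))
    .digest("hex");
  return BigInt("0x" + digest);
}

// e.g. roll a 0-99 loot value for each of three drops
const rolls = [0, 1, 2].map((i) => Number(derive(onChainSeed, i) % 100n));
console.log(rolls);
```

Because every value traces back to one published seed, a player can recompute the stream and confirm no roll was tampered with.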
***
#### Popular Web3 Gaming Approaches
The following approaches are some of the most common game types and higher-level mechanics that make sense to bring onchain.
1. **Asset-Based Games (e.g., Farmers, Clickers, Strategy Games)**
SKALE is ideal for handling in-game assets such as inventory, upgrades, and items in asset-based games. By leveraging blockchain technology, developers can create a transparent, player-owned economy, where players truly own their in-game assets. This opens up new opportunities for trading, crafting, and evolving the game world over time, all while maintaining a seamless player experience.
2. **First-Person Shooters (FPS) and Real-Time Games**
Traditional FPS games rely on local servers for player interactions, typically grouping players based on geographical proximity to reduce lag. With SKALE, developers can utilize decentralized blockchain infrastructure to handle crucial gameplay elements in real-time, such as loadouts, player positioning, statistics, and map configurations. This allows for a more dynamic and interactive experience, especially in large-scale multiplayer games.
3. **Leaderboards and Rankings**
SKALE is well-suited for managing competitive elements like leaderboards and rankings. Blockchain’s immutability ensures that rankings are transparent and tamper-proof, giving players confidence that their achievements are accurately represented and securely stored. Moreover, with SKALE’s scalability, even large-scale leaderboards can be handled efficiently, ensuring that players from all around the world can compete in real-time without lag or delays.
4. **Player Lobbies and Matchmaking**
Managing player lobbies and matchmaking in real-time can be a logistical nightmare for developers using centralized services. SKALE enables decentralized matchmaking systems, where player data and session information are securely stored and easily accessible across a distributed network. This ensures a fair and transparent matchmaking process while allowing for seamless lobby creation and management.
5. **Massively Multiplayer Online (MMO) Games with Dynamic Economies**
MMOs are perhaps the best example of a game type that benefits from decentralized infrastructure. With SKALE, developers can extend their games’ economic systems by enabling decentralized marketplaces, dynamic item economies, and player-driven world-building. The scalability of SKALE ensures that even in large MMO worlds, player interactions and in-game economies can be managed securely and efficiently, without the bottlenecks associated with centralized servers.
6. **Turn-Based Games**
In turn-based games, player moves and game states must be securely stored and shared in a way that ensures fairness and transparency. SKALE enables developers to store turn data and game states on the blockchain, allowing for decentralized decision-making and eliminating the need for centralized server management. This leads to a more player-driven, secure, and transparent gaming experience.
7. **Open and Autonomous Worlds (Minecraft-Style Games)**
Blockchain technology’s decentralized nature and transparency make it ideal for creating open, player-driven worlds that can be modified, extended, and evolved autonomously. Similar to Minecraft, players in SKALE-powered worlds can develop, build, and create content in a persistent environment where the game’s code and assets are publicly accessible. This allows for community-driven mods, world extensions, and dynamic, player-controlled content. The blockchain ensures that these modifications are secure, transparent, and permanent, fostering a rich, collaborative environment that grows with the community over time.
***
#### Development Mechanics
In addition to the high level approaches above, there are some lower level mechanics that developers can mix and match when looking to enhance their games with blockchain. The following is designed based on the technology of the SKALE Network since it takes into account critical features such as zero gas fees, instant finality, native RNG, and high throughput.
##### Digital Collectibles
Arguably the place where Web3 gaming got its start is digital collectibles, commonly known as Non Fungible Tokens \[or NFTs] within the blockchain space. These assets come in many different forms; however, the general goal is to allow assets to be represented on a chain and owned directly by users.
The great part about digital collectibles is that, from an operational perspective, they can be created and used in many different ways, including both valuable and in-game-only collectibles. Collectibles can also be made non-transferable (i.e., soulbound). Lastly, collectibles can be made highly customizable: you can use multiple collectibles to create others, or even make a single asset fungible through additional tokenization.
For instance, if you already have collectible items in-game (items, weapons, etc.), these can be converted into digital collectibles and stored on-chain.
##### In Game Currencies
In game currencies are incredibly popular within most games. Blockchain can be used to create both soft and hard currencies with many different flexible mechanics. These can also be specifically modified to guarantee they stay off of exchanges and other “trading” platforms so that they are only usable within a game, on a specific chain, or within a certain subset of users.
> One of the nice parts about using digital collectibles or in-game currencies on blockchain is the automation mechanics available. Ensuring that users are paid out rewards or achievements based on something else is very simple thanks to smart contracts.
You can learn more about deploying collectibles and in-game currencies with [Sequence](https://sequence.skale.space/landing), one of the most popular gaming providers on SKALE.
##### Efficient Analytics
The Ethereum Virtual Machine (EVM) is uniquely designed to be highly efficient at processing and emitting events to many clients in parallel. This can be useful for building programs like leaderboards and lobby systems. The following explores the basics of using the blockchain for analytics and how you can connect that with your game and its players.
There are three ways a blockchain can be used for analytics:
1. The EVM has a specific type of action called an **Event**. Events can take many pieces of information and emit them so that they can be listened to by many different clients. Publishing information through events allows many players or games to share data with each other.
2. Storage in the EVM can be set to public or private by the creator of the program. This means that developers who want to make statistics available to their community for easy access for building modifications, extensions, or DLCs can do so with blockchain. Exposing state through public read only functions will enable others to build smart contracts that can extend and access the information in a safe and secure way.
3. Onchain analytics are also great for achievement systems. Examples include:
* For every N kills per player in an FPS, automatically mint the player some special random assets
* For every Nth occurrence of some event (i.e., the 1st, 10th, 25th, 50th, 100th, 500th, 1000th, …, Nth), mint an achievement that is permanent on chain
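As a sketch of the milestone idea above, the trigger logic might look like the following. The thresholds are hypothetical, and on SKALE this check would live in a smart contract; plain TypeScript is used here purely for illustration:

```typescript
// Milestone thresholds at which a permanent onchain achievement
// would be minted (hypothetical values matching the example above).
const MILESTONES = [1, 10, 25, 50, 100, 500, 1000];

// Returns the milestone hit at this event count, or null if none.
function achievementFor(eventCount: number): number | null {
  return MILESTONES.includes(eventCount) ? eventCount : null;
}

// Simulate a player racking up kills; collect the milestones hit.
const unlocked: number[] = [];
for (let kills = 1; kills <= 120; kills++) {
  const m = achievementFor(kills);
  if (m !== null) unlocked.push(m); // the contract would mint here
}
console.log(unlocked); // [1, 10, 25, 50, 100]
```

Because the counter and the mint both live onchain, the achievement record is tamper-proof and publicly verifiable.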
Interested in adding onchain achievements to your game? Contact [Eidolon Labs](https://eidolon.gg/) to work with the experts who built one of the first onchain achievement systems in the form of Untitled Platform.
##### Blockchain for Compute
Blockchains that have many nodes are unique in that they act as a combination of compute and data storage. However, when comparing them to traditional compute types, it’s important to understand that it’s a bit of a hybrid design. For example, there are two common compute types seen in cloud computing today: traditional server-based compute and serverless/edge compute.
**Traditional Servers**
Traditional servers, often referred to as Virtual Private Servers (VPS), are generally considered inefficient for smaller applications without consistent load, and ideal for high-performance applications and businesses that need consistent compute. SKALE Chains can directly replace traditional server and database requirements thanks to both **read** and **write** capabilities.
This is ideal for applications and games of all sizes, but especially beneficial for those just starting out. There is no need to worry about infrastructure, maintaining nodes, securing servers, etc., when the SKALE Chain does it for you. Additionally, scalability \[no pun intended] is one of the most common things even veteran teams struggle with. Going from one (1) server to hundreds during short bursts of high capacity is incredibly complicated.
Delegating server-based actions to a blockchain can help guarantee that you are set up for success. For example, let’s say you want to provably generate a random map for a PVP match (in the multiplayer line of thought): you can utilize some basic data structures like multiple arrays (i.e., a matrix) and SKALE RNG to create as many 10x10 grids as you want and return them. Additionally, thanks to compute and storage being in the same place, you can optionally take those grids and link them to a game with the PVP state.
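The map-generation idea can be sketched in a few lines. This is a self-contained TypeScript version in which a simple linear congruential generator stands in for the chain’s native RNG, so the sketch runs anywhere; on SKALE the seed (and potentially the whole routine) would come from the chain itself:

```typescript
// Build a 10x10 tile grid deterministically from a random seed.
// A classic LCG substitutes for SKALE's onchain RNG in this sketch.
function makeGrid(seed: number, size = 10): number[][] {
  let state = seed >>> 0;
  const next = () => {
    state = (state * 1664525 + 1013904223) >>> 0; // standard LCG constants
    return state;
  };
  return Array.from({ length: size }, () =>
    Array.from({ length: size }, () => next() % 4) // 4 tile types
  );
}

const grid = makeGrid(42);
console.log(grid.length, grid[0].length); // 10 10
```

Because the grid is a pure function of the seed, both the match server and every client can regenerate and verify the same map, and the seed can be stored alongside the PVP state onchain.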
**Serverless**
There is a newer paradigm that has gained a lot of popularity in the last few years. Serverless is the practice of not running an explicit server, but instead having lightweight, executable functions that can run at a moment’s notice as close as possible to the user. The idea is that you don’t pay for compute you don’t use, so it’s considered very cost effective for projects just starting out. It’s also highly scalable, since the “scaling up” portion of the stack is delegated to the cloud provider instead of managed by you. While this might help with scalability, it’s common to see companies that start with serverless switch to traditional long-running servers once they hit a certain amount of usage.
**Blockchain**
Enter blockchain. Technically, it acts as a hybrid compute option that retains the best parts of traditional servers and serverless. By default it is long running, and with SKALE’s unique economic model it’s incredibly cost efficient. However, like serverless, it runs across many computers by default (i.e., the SKALE Nodes), which can permissionlessly execute the functions deployed to a chain. This gives the blockchain capabilities that allow for both short-term cost efficiency and long-term scalability all in one.
##### Multiplayer
While already discussed throughout many of the above approaches and mechanics, blockchain thrives for real-time multiplayer gaming when:
* The blockchain can handle sufficient throughput \[like SKALE]
* The blockchain can handle real time requests
* The blockchain can offer instant finality
* The blockchain can process highly complex game functions
* The blockchain can handle a high number of simultaneous connections
* The blockchain has no fees so that more compute can be done onchain
With SKALE checking all of the needed boxes, multiplayer mechanics are a great way to utilize blockchain as a tool without having to be a crypto or Web3 game.
##### Blockchain as a Database
In game development, handling large volumes of data efficiently and at scale is crucial. This typically involves using a combination of **databases** (SQL/NoSQL), **caches** (Redis, Memcached), and **file storage systems** (AWS S3, Google Cloud Storage) to manage game data such as player information, game states, asset storage, and user interactions. However, blockchain technology — especially a blockchain with zero gas fees, huge compute limits (268 million block gas limit), and larger contract sizes (64 KB) — can offer a more integrated, secure, and decentralized alternative to these conventional systems.
Traditionally, databases are used to store game data such as player profiles, game progress, statistics, inventory, and in-game assets. However, blockchain can act as an immutable, decentralized database for these types of data, providing key benefits:
* **Immutability and Security:**
Data stored on the blockchain can be made immutable, meaning it cannot be altered once recorded. This is ideal for critical game data such as player achievements, transaction records, or inventory items. By using blockchain for this purpose, developers can ensure data integrity and transparency without worrying about data tampering or corruption.
* **Decentralized Data Ownership:**
In traditional database systems, data is stored on centralized servers owned by a third party, creating a potential vulnerability. Blockchain, on the other hand, distributes data across a network of nodes, ensuring that players themselves have control over their data. This is particularly important in asset-based games or games that involve unique digital items or currencies (e.g., NFTs), where players want real ownership.
* **Scalability and High-Volume Data Handling:**
With a \~268 million block gas limit and zero gas fees, SKALE is capable of handling the massive data throughput required by modern games. This allows for the storage of millions of player profiles, inventory data, game stats, and other dynamic data without performance degradation.
> **Example:** In an MMO, players’ inventory data, equipped items, and character stats can be stored on the blockchain. Each player would have a decentralized, immutable record of their assets and progress that could be accessed and verified without reliance on centralized servers.
##### Blockchain as a Cache
In traditional game architecture, caches are used to store frequently accessed data (e.g., player profiles, game state, leaderboard rankings) to speed up retrieval times. With blockchain, especially one with zero gas fees like SKALE, this need can be eliminated:
* **Instant Access to Onchain Data:**
Data can be retrieved directly from the blockchain without needing an intermediary caching layer. Since SKALE operates with zero gas fees, developers don’t have to worry about the transaction costs typically associated with writing to the storage layer and reads are always free. Players can access data in real time without the latency or costs of traditional caching systems. *Additionally, the use of blockchain sync nodes placed strategically around the world can greatly reduce latency for gamers.*
* **No More Cache Invalidation Issues:**
Traditional caches have to handle cache invalidation (ensuring outdated data is refreshed), which can be complex and error-prone. SKALE data is **always up-to-date thanks to instant finality**, and since every transaction or update is public and verified on the chain, there is no need for additional systems to ensure data freshness.
* **Reduced Need for Expensive Caching Services:**
As the need for complex caching systems is removed, developers can save on infrastructure costs. The blockchain itself serves as a dynamic, high-performance store for frequently accessed game data.
> Example: In an FPS or real-time strategy game, player stats and leaderboard rankings can be stored on-chain and accessed in real time without the need for separate caching infrastructure. The blockchain’s low-cost, high-speed retrieval replaces the need for a dedicated caching system like Redis or Memcached.
##### Blockchain for File Storage, Replication, and Availability
Games often require file storage for assets such as textures, 3D models, animations, and other game data. SKALE offers two ways to store assets directly onchain.
1. **Smart Contract Storage**: While this is technically doable on ALL blockchains, low block gas limits, small contract sizes, and variable gas fees can make it both difficult and costly. SKALE eliminates those barriers, allowing text-based files like JSON to be stored directly onchain for free.
2. **Native Filestorage**: a feature native only to SKALE is SKALE Filestorage. SKALE Chains upon creation can optionally allocate some portion of their nodes to filestorage. This allows files to be uploaded to the chain and replicated across all the nodes and served directly from the blockchain.
The following explores in greater depth multiple ways that SKALE can be used to store and serve files.
* **Smart Contracts as Asset Containers:**
Instead of relying on cloud storage solutions like AWS S3 to hold game assets, developers can store these assets directly within smart contracts. SKALE’s large contract size (64 KB) allows for storing more data on-chain, making it a viable option for smaller game assets. For example, textures or small game models can be encoded into the blockchain, ensuring transparency and verifiability. Additionally, many files can be dynamically manipulated on chain so that they can be used in conjunction with other smart contracts to manipulate their data.
* **Cost-Effective File Storage:**
With zero gas fees, SKALE makes storing and accessing game assets on-chain more affordable compared to traditional cloud storage models, where developers are often charged based on the amount of data stored and the frequency of access.
* **Decentralized CDN:**
Traditional CDNs are one of the most solutions developers use to speed up their applications. While boasting incredible speed; CDNs can be very costly. SKALE allows for decentralized file access and availability, reducing the risk of data loss, tampering, or centralization while also enabling CDN capabilities that are pre-paid \[inclusive of egress charges]
> **Example:** In a collectible card game (CCG), each card could be an asset stored on the blockchain. Instead of storing card images and metadata on external servers, they can be securely stored within the blockchain itself. The card’s metadata (e.g., stats, images, abilities) could be embedded in the blockchain, ensuring transparency, ownership, and ease of access.
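As a rough sketch of the CCG example, the card metadata below (all values invented) is serialized to JSON and checked against the 64 KB figure mentioned earlier; treating that contract-size limit as a per-blob storage budget here is purely illustrative:

```typescript
// Illustrative budget: the article's 64 KB max contract size,
// used here as a simple per-blob sanity check.
const MAX_BYTES = 64 * 1024;

interface Card {
  name: string;
  attack: number;
  defense: number;
  abilities: string[];
  imageUri: string; // could point at SKALE Filestorage
}

// A made-up card; in practice this metadata would be written onchain.
const fireDrake: Card = {
  name: "Fire Drake",
  attack: 7,
  defense: 5,
  abilities: ["flying", "burn"],
  imageUri: "filestorage://example/fire-drake.png",
};

const blob = JSON.stringify(fireDrake);
const bytes = Buffer.byteLength(blob, "utf8");
console.log(bytes <= MAX_BYTES); // true
```

Text metadata like this is tiny, so the real constraint is usually large binary assets, which is where SKALE Filestorage comes in.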
##### Summary
If you read this entire article, whether in one sitting or many, thank you for spending the time. I hope you learned something valuable and, most importantly, recognize that with the right technology you don’t have to be a “Web3” game or launch a token to be a blockchain game. There are many amazing ways to use blockchain, and more specifically SKALE, to level up your game.
For game developers interested in adding blockchain mechanics to your game, head over to the SKALE Indie Game Accelerator at [https://skale.space/skale-indie-games-accelerator](https://skale.space/skale-indie-games-accelerator) to learn more.
import Footer from '../../snippets/_footer.mdx'
## The Gasless Design Behind x402
This article explores the gasless design behind x402, a protocol for internet-native payments that enables seamless transactions across any web service without the need for API keys or accounts.
[x402](https://x402.org) leverages the existing [HTTP 402](https://docs.cdp.coinbase.com/x402/core-concepts/http-402) "Payment Required" status code, which indicates that a payment is necessary to access a resource.
Today, x402 is used primarily with stablecoins, chiefly [USDC](https://usdc.com), which allows payments to move at blockchain speed instead of through traditional financial institutions.\
One key component of USDC is the use of [EIP-3009](https://eips.ethereum.org/EIPS/eip-3009), which enables the transfer of fungible assets through a signed authorization.
This article explores **Transfer with Authorization**, forwarding patterns for existing blockchains, new design opportunities for chains like [SKALE on Base](https://docs.skale.space/welcome/skale-on-base/), and why not all blockchains are created equal when it comes to meta-transaction patterns.
### What is EIP-3009?
For those unfamiliar, EIP stands for Ethereum Improvement Proposal. EIPs are a way for the Ethereum community to propose and discuss new ideas for the protocol. EIP-3009 defines a method to transfer fungible assets through signed authorizations that conform to the [EIP-712](https://eips.ethereum.org/EIPS/eip-712) typed message signing specification.
This enables a user to:
* Delegate the execution and payment of gas fees to another party
* Cover gas fees in the token being authorized for transfer
* Perform a series of actions in a single atomic transaction
* Enable the receiver of a transfer to execute the transfer
* Create simplified batching mechanics
Additionally, one of the key benefits of EIP-3009 is that it **does not** require sequential nonces, making it far simpler to implement and process actions on behalf of a user without worrying about transaction ordering.
| Feature | EIP-3009 | EIP-2612 |
| ----------------------------- | -------- | -------- |
| Sequential Nonces | No | Yes |
| Pre-Approval (approve/permit) | No | Yes |
| Simple Authorization Flow | Yes | No |
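The "no sequential nonces" property is worth making concrete. The sketch below is a hypothetical offchain model of the nonce bookkeeping EIP-3009 specifies (`authorizationState` is the accessor the EIP defines), not Circle's actual contract code:

```typescript
// Hypothetical sketch of EIP-3009-style nonce tracking (not Circle's code).
// Each authorizer has a set of used 32-byte nonces; ordering never matters.
import { randomBytes } from "node:crypto";

type Hex = string;

class NonceRegistry {
  private used = new Map<string, Set<Hex>>();

  // Random nonces: no coordination needed between signers or relayers.
  newNonce(): Hex {
    return "0x" + randomBytes(32).toString("hex");
  }

  // Per EIP-3009: returns true if this (authorizer, nonce) pair was consumed.
  authorizationState(authorizer: string, nonce: Hex): boolean {
    return this.used.get(authorizer)?.has(nonce) ?? false;
  }

  // Mimics the replay check a token contract performs on settlement.
  markUsed(authorizer: string, nonce: Hex): void {
    if (this.authorizationState(authorizer, nonce)) {
      throw new Error("authorization already used");
    }
    if (!this.used.has(authorizer)) this.used.set(authorizer, new Set());
    this.used.get(authorizer)!.add(nonce);
  }
}
```

Because nonces are random rather than sequential, a facilitator can settle a user's second authorization before their first without either transaction failing.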
### EIP-3009 and x402
The on-chain transfer portion of x402, in its first version, is built around the use of EIP-3009. While only a few tokens natively support EIP-3009, such as USDC and EURC from Circle and AUSD from Agora, the pattern lends itself well to a "permit-to-use" off-chain approach.
The flow of an x402 payment is as follows:
**Alice**: the user or AI agent, the buyer\
**Bob**: the web service or another agent, the seller\
**Carol**: the facilitator responsible for verifying and settling the payment
1. **Alice** requests a resource from a web service or another agent.
2. **Bob** returns a `402 Payment Required` response that includes a list of accepted payment options.
3. **Alice** chooses to pay in ERC-3009 compliant AxiosUSD on SKALE Base Chain and signs an authorization using EIP-712 for $0.01. **Alice** then requests the resource again, including the `X-Payment` header with her signature base64 encoded.
:::note
This can be done through a Web3 library like [Viem](https://viem.sh), an invisible wallet like [Privy](https://privy.io), a Web3 wallet like [MetaMask](https://metamask.io), or a custodial wallet such as [Coinbase Developer Platform Server Wallets](https://www.coinbase.com/developer-platform/products/wallets).
:::
4. **Bob** checks for the authorization on every request and, when found, contacts **Carol** to `/verify`.
5. **Carol** verifies the payment authorization against the payment scheme and network and returns a verification response to **Bob**.
6. **Bob** receives the verification response and begins the creation/inference process. If using **Carol** to help settle, **Bob** also tells **Carol** to `/settle`.
7. **Carol** settles the payment on-chain by executing the transfer on behalf of **Bob** and responds with a payment execution response.
8. **Bob** receives the response from **Carol** and responds with the `X-Payment-Response`.
:::note
While this may seem complicated, most of the work is actually done by the facilitator (**Carol**), who handles the majority of on-chain operations.
:::
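The authorization Alice signs in step 3 is an EIP-712 typed message using the `TransferWithAuthorization` struct from EIP-3009. A minimal sketch of assembling that payload follows; the domain values (token name, chainId, contract address) and the helper name are illustrative placeholders, not a real deployment:

```typescript
// Sketch of the EIP-712 payload for an EIP-3009 TransferWithAuthorization.
// Domain values below are illustrative, not a real token deployment.
const types = {
  TransferWithAuthorization: [
    { name: "from", type: "address" },
    { name: "to", type: "address" },
    { name: "value", type: "uint256" },
    { name: "validAfter", type: "uint256" },
    { name: "validBefore", type: "uint256" },
    { name: "nonce", type: "bytes32" },
  ],
};

function buildAuthorization(params: {
  token: { name: string; version: string; chainId: number; address: string };
  from: string;
  to: string;
  value: bigint;       // $0.01 of a 6-decimal stablecoin would be 10000n
  validSeconds: number;
  nonce: string;       // random 32-byte hex, never sequential
}) {
  const now = Math.floor(Date.now() / 1000);
  return {
    domain: {
      name: params.token.name,
      version: params.token.version,
      chainId: params.token.chainId,
      verifyingContract: params.token.address,
    },
    types,
    primaryType: "TransferWithAuthorization" as const,
    message: {
      from: params.from,
      to: params.to,
      value: params.value.toString(),
      validAfter: "0",
      validBefore: String(now + params.validSeconds),
      nonce: params.nonce,
    },
  };
}
```

The resulting object is the shape a library like Viem's `signTypedData` expects; the base64-encoded signature then travels to Bob in the `X-Payment` header.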
### x402 Across Blockchains
The goal of an open protocol like x402 is to foster adoption and interoperability. The Ethereum Virtual Machine (EVM) version uses EIP-3009 and is extendable across today's ecosystem of many EVM blockchains.\
A Solana implementation also exists and will be explored in a future article.
It is important to understand the ability to use x402 across blockchains and the ways it can be enabled:
#### Native ERC-3009 Implementation
This is the default and preferred method to implement x402; from the protocol's perspective it behaves identically to the [Bridged ERC-3009 Implementation](#bridged-erc-3009-implementation) discussed below.\
This is how USDC, EURC, and AUSD are implemented on Base and other EVM chains today. While not all tokens are natively ERC-3009 compatible, existing tokens can upgrade if the pattern is supported and the issuer allows it.
For tokens that cannot or prefer not to upgrade, the options are ERC-3009 Forwarding or Bridged ERC-3009 Implementation (preferred).
#### ERC-3009 Forwarding
ERC-3009 Forwarding is a pattern that existed conceptually but had limited implementation until I tested x402 on FAIR Testnet in late September.\
For an example, see my [EIP-3009 Forwarder](https://github.com/TheGreatAxios/eip3009-forwarder) contract.
:::note
This enables x402 for any token on any EVM blockchain, but it requires user approval for the forwarding contract (an allowance) so it can spend tokens on the user's behalf. While not ideal, it allows x402 to work with minimal server and facilitator changes. With account abstraction becoming more prevalent, this limitation may lessen over time.
:::
#### Bridged ERC-3009 Implementation
The preferred way to expand x402 support is to create ERC-3009 compatible tokens on the target chain. The following guidelines generally work for L2s/L3s/AppChains. However, if a chain lacks a technical moat compared to its parent chain or is primarily parasitic, this approach may not make sense.
**Requirements**:
* A secure, fully programmable bridge
* An ERC-3009 compatible token on the new chain that supports the bridge
**Recommended chain characteristics**:
* A liquidity-based bridge enabling TVL accrual for apps
* Programmable bridge functionality for AI agents to interact with the bridge and tokens
* ERC-3009 compatible bridge hooks to settle x402 transactions directly from the parent chain
* A technical advantage over the parent chain (e.g., zero gas fees, instant finality)
SKALE Network is ideal for this. See setup and implementations [here](https://github.com/TheGreatAxios/skale-chain-setup) for SKALE Base Sepolia.\
This setup allows a SKALE chain to natively support x402 for any token bridged from Base.
### Gas Blockchains, Gasless Flows, and the Facilitator
Most blockchains today are gas-based, requiring fees to execute transactions. This conceptually aligns with x402, which allows a single transaction instead of pre-paying for resources. However, many blockchains struggle with transaction spikes, leading to highly variable fees.
Various meta-transaction patterns and account abstraction proposals exist (EIP-3009, EIP-2612, EIP-4337, EIP-7702), but all share the same core problem: **someone must pay the gas fees**.
In x402, verification and settlement are often delegated to a facilitator, offloading complexity and gas fees. However, this makes the facilitator responsible for paying the gas fees for executed transactions.\
As usage grows, this can become a bottleneck unless the facilitator runs their own blockchain and profits from transaction execution, as seen with Coinbase/Base/USDC. This alignment creates a win-win-win-win scenario for clients, servers, facilitators, and blockchains.
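To see why this becomes a bottleneck, a back-of-the-envelope sketch of a facilitator's daily gas exposure helps. Every number here is illustrative (the ~80k gas figure is a rough assumption for an EIP-3009 transfer, not a measured value):

```typescript
// Back-of-the-envelope facilitator gas spend; every input is illustrative.
function facilitatorDailyGasUSD(opts: {
  settlementsPerDay: number;
  gasPerSettlement: number; // assumed ~80k gas for an EIP-3009 transfer
  gasPriceGwei: number;
  gasTokenPriceUSD: number;
}): number {
  // Convert gas units * price (gwei) into the gas token, then into USD.
  const tokenPerSettlement =
    (opts.gasPerSettlement * opts.gasPriceGwei) / 1e9;
  return opts.settlementsPerDay * tokenPerSettlement * opts.gasTokenPriceUSD;
}
```

At 1M settlements a day, 80k gas each, 1 gwei, and a $3,000 gas token, the facilitator is paying roughly $240k per day in gas for payments that may each be worth a cent, which is why alignment with the chain's own revenue matters.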
#### The Solution is Truly Gasless
SKALE Network recently announced [SKALE Expand](https://forum.skale.network/t/skale-growth-manifesto/726?u=thegreataxios), a growth initiative enabling SKALE Manager to deploy its app-like design to other blockchains beyond Ethereum.\
This allows truly gasless x402 flows on other chains while solving finality/rollback problems across L1s/L2s/L3s and appchains.
SKALE Chains are self-contained EVM-compatible blockchains with high-performance C++ EVM implementations, scalable node architecture, and instant finality.
:::note
SKALE Expand brings truly gasless x402 to any blockchain ecosystem. The native IMA bridge is one of the fastest liquidity bridges globally, secured by the SKALE Chain consensus. For example, deploying on Base turns SKALE into an app that accrues TVL and supports new EIP-3009 tokens while remaining gasless. Other ecosystems could request SKALE deployment to achieve the same setup.
:::
### Conclusion
x402 is powerful, and I have been building on it for a couple of months now. Combining x402 with ERC-8004 trustless agents on a blockchain designed for the machine economy presents exciting possibilities.
Expanding that further into the broader world of agentic systems and the machine economy, I think there is a massive opportunity to bring many businesses and people onchain.
Reach out if you are building on x402 or ERC-8004 and want to collaborate, share ideas, or just get some feedback on your project.
import Footer from '../../snippets/_footer.mdx'
## The Hidden Cost of Agentic Commerce
As the agentic economy emerges and AI agents begin transacting at scale, the infrastructure we choose for these economic interactions will determine whether machine-to-machine commerce thrives or stalls. While the conversation often centers on payment protocols like [x402](/blog/the-role-of-pay-per-tool-in-an-agentic-world.mdx), there's a more fundamental question that gets less attention: **What is the actual cost of enabling billions of agentic transactions?**
There is a hidden cost today to agentic commerce running onchain with x402 rails. What is it? Variable Costs.
:::note
The hidden cost isn't the payment protocol or the integration complexity—it's the unpredictability of transaction costs at scale. When agents transact billions of times daily, variable gas costs and surge pricing make economic models impossible to predict.
:::
There are always complaints about how expensive traditional PSPs are: roughly 2.x% plus a fixed fee (e.g., $0.30) per transaction, which leads many software sellers and physical merchants to set minimums on credit card payments, such as $5.
| | Traditional PSP | EVM Chains | Solana | SKALE |
| ------------------ | --------------------- | --------------------------- | ----------------------------------- | ---------------------------- |
| **Base Cost** | 2.x% + $0.30 per tx | Gas token value × gas used | Priority fee × compute units | Pre-paid credits (fixed) |
| **Predictability** | Fixed per transaction | Variable (token + capacity) | Variable (local fee markets) | Fixed until expansion needed |
| **Under Load** | Same cost | Global surge pricing | Local surge pricing on hot accounts | Add capacity, no surge |
| **Finality** | Days (settlement) | Minutes to hours | \~12-15 seconds | \~1 second |
| **Best For** | Large transactions | Occasional use | High-frequency single-app | Agent-to-agent scale |
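The "Base Cost" row of the table can be made concrete with a toy calculation. All rates below are illustrative, chosen only to show the shape of each cost model:

```typescript
// Toy per-transaction cost models for the comparison above (illustrative rates).
const pspCost = (amountUSD: number) => amountUSD * 0.029 + 0.30;

// Gas model: cost floats with both the gas price and the gas token's USD value.
const evmGasCost = (gasUsed: number, gasPriceGwei: number, tokenUSD: number) =>
  ((gasUsed * gasPriceGwei) / 1e9) * tokenUSD;

// Pre-paid credits: a flat budget amortized across however many txs it covers.
const skaleCreditCost = (creditsUSD: number, txCount: number) =>
  creditsUSD / txCount;
```

For a $0.01 agent payment, the PSP model alone charges roughly 30x the payment amount; the gas model moves with two market prices at once; the credit model stays fixed per transaction regardless of either.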
Blockchains have historically used a valuable gas token, which introduces two sources of variability into the cost of consuming block space:
#### 1. Asset Value Variability
When the price of the gas token goes up, it becomes more expensive for all parties to participate.
#### 2. Capacity-Based Surge Pricing
When the chain approaches capacity, it protects itself against overuse by raising the gas price, producing surge pricing. This in turn makes it harder or more expensive to land transactions and has generally worked as a deterrent: most non-power users go elsewhere or trickle down to cheaper blockchains (historically).
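Ethereum's EIP-1559 is the canonical version of this mechanism: each block, the base fee moves up or down by at most 12.5% depending on how full the previous block was. A simplified sketch (constants vary by chain; this shows only the feedback loop):

```typescript
// Simplified EIP-1559-style base fee update (the surge mechanism above).
// Real chains differ in constants; this illustrates the feedback loop only.
function nextBaseFee(baseFee: number, gasUsed: number, gasTarget: number): number {
  // delta ranges from -1 (empty block) to +1 (block at 2x target, i.e. full).
  const delta = (gasUsed - gasTarget) / gasTarget;
  // Divide by 8 to cap the per-block move at +/-12.5%.
  return baseFee * (1 + delta / 8);
}
```

Run this over a sustained stretch of full blocks and the fee compounds: 1.125^20 is roughly 10.5x, which is why demand spikes translate into order-of-magnitude cost swings on gas-metered chains.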
### Why SKALE Makes Sense for x402 Merchants
Why does SKALE make so much more sense for merchants and facilitators as the primary location to accept x402 payments in stablecoins and cryptocurrency?
#### The Credit System: Predictable Costs
SKALE Base, the first chain in SKALE's new expand program, uses a credit system that is backward compatible with EVM gas fees. This solves point 1 above by reducing variability in the acquisition of compute resources, and it aligns with how traditional cloud providers rent resources for a flat fee. If 100,000,000 developers suddenly want to run OpenClaw bots on AWS t3.medium EC2 instances, AWS still doesn't surge price the instances; I personally have never seen it happen.
#### Surge Protection Without the Pain
The second point is more nuanced. SKALE has the same surge-pricing protection that other blockchains do. The differences?
**a. Higher Compute Thresholds**
High-performance chains with massive compute limits (compared to traditional EVMs) can handle a much higher load before the protective increase kicks in. This makes it easier to avoid the small, choppy surges that can be painful on your wallet.
**b. Horizontal Scalability**
The truly powerful part is the horizontal scalability. This is how cloud giants maintain capacity (and are often able to make prices cheaper over time). The ability to continuously supply more compute means there is no hard cap when demand starts to grow, unlike what we often see with other chains.
### How does this work?
SKALE is natively multi-chain through an application called SKALE Manager, a set of smart contracts deployed on Ethereum (SKALE Ethereum) and Coinbase's Base L2 (SKALE Base). These contracts have a pool of validator Supernodes registered to them, which are used to dynamically provision SKALE Chains when the contracts ask for them.
With the new credit system, it's possible that as the agentic economy grows, agents themselves will buy SKALE Chains directly and continue adding support for more chains to keep their costs to a minimum and maximize profit.
### The Economics of Expansion
This model isn't unique to blockchain infrastructure—it's how successful businesses scale in the physical world as well.
#### Physical Retail Expansion
Consider a grocery store that reaches capacity during peak hours. They don't surge price their milk when the checkout lines get long. Instead, they open more registers, expand their footprint, or open new locations. The upfront cost of expansion pays dividends in the form of captured demand that would otherwise go elsewhere.
The store that expands when demand rises captures the growing market. The store that stays fixed-size loses customers to competitors who can serve them.
#### Cloud Provider Economics
The same logic powers AWS, Cloudflare, and other cloud giants. When demand spikes, they don't just raise prices—they provision more servers, build new data centers, and expand capacity. This allows them to:
* Capture growing demand rather than turning customers away
* Achieve economies of scale that often let them *lower* prices over time
* Provide predictable pricing that attracts long-term customers
#### SKALE's Expansion Model
SKALE operates on this same principle. When agentic transaction volume grows, the network can provision additional SKALE Chains to handle the load—without surge pricing that would drive users away.
For merchants and facilitators building on SKALE, this means predictable costs at scale. The infrastructure grows *with* your business, not against it.
### Conclusion
The agentic economy will be built on infrastructure that can handle billions of transactions predictably. Variable costs—whether from gas token volatility or capacity-based surge pricing—are a hidden tax on agent-to-agent commerce that makes economic modeling nearly impossible.
SKALE's credit system and horizontal scalability offer a path forward: predictable costs that scale with demand, not against it. Like cloud providers and successful retailers, the winning strategy is expansion, not extraction.
As I explored in [The Role of Pay-Per-Tool in an Agentic World](/blog/the-role-of-pay-per-tool-in-an-agentic-world.mdx), the future of agentic systems depends on infrastructure that can scale sustainably. SKALE provides that foundation for x402 payments.
Building with x402 on SKALE? DM me and I'll help you integrate.
## The Power of Random
This article explores SKALE Network's native random number generation system that uses threshold signatures from consensus nodes to create provably random numbers at zero gas cost. Unlike external oracles like Chainlink VRF, SKALE's RNG is free, synchronous, and built directly into the blockchain, enabling developers to generate unique NFT attributes and implement innovative game mechanics like Shape Storm's single-ownership roguelike where players can only hold one randomly-generated shape at a time.
[Shape Storm by Eidolon](https://shapestorm.eidolon.gg/) is a roguelike that uses the blockchain for optional ownership and analytics. As roguelikes lean heavily on randomization, it was a natural fit to explore using the blockchain for provable randomness. SKALE offers a native random number generation endpoint that allowed the Eidolon team to take Shape Storm to a whole new level: all the core attributes of a player's shape are randomly generated on-chain and stored as a playable NFT. This also lends itself to a future exploration of survival mechanics with upgradeable random values.
Additionally, the unique random values lend themselves to the single-ownership system, where a user can only own a single NFT from Shape Storm at a time. They can choose to send it elsewhere or remove it, but if they get rid of their current shape there is no guarantee the next one will be better.
Read on for a deep dive into RNG on SKALE and the implementation within the Shape Storm smart contract.
***
#### SKALE RNG
Every [SKALE](https://skale.space/) Chain has a random number generator contract pre-compiled and available. A new random number is generated for every block based on the threshold signature of that block. As SKALE Chain consensus is asynchronous and leaderless, each block must be signed by at least 11 of 16 nodes \[on the SKALE Chain]. The signatures from the nodes are glued together so that no single node can influence the resulting signature. This process ensures that the results cannot be manipulated by a single entity.
The process for actually attaining the random number generation looks like this:
1. The signatures for the current block are retrieved
2. The BLAKE3 hash of the signatures is created
3. The resulting hex RNG is presented and consumable in Solidity
Because it is available through a pre-compiled contract on every chain, a major advantage over a third-party RNG like [Chainlink’s](https://chain.link/) VRF is that the random number is directly available as a read: it does not need to be set, shared, or consumed in a callback, and it requires no additional payment. It’s free, as gas fees on SKALE are 100% free!
> *A quick reminder that SKALE RNG only works on SKALE.*
The function in Solidity looks like this:
```solidity
// Reminder, this is Solidity (.sol)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

contract A {
    function getRandom() public view returns (bytes32 addr) {
        assembly {
            // Use the free memory pointer as scratch space for the result
            let freemem := mload(0x40)
            let start_addr := add(freemem, 0)
            // Call the RNG precompile at address 0x18; abort on failure
            if iszero(staticcall(gas(), 0x18, 0, 0, start_addr, 32)) {
                invalid()
            }
            // Load the 32-byte random value the precompile wrote
            addr := mload(freemem)
        }
    }
}
```
#### RNG Package
The above code, while simple enough to use for a single random number, requires some additional work to generate **many** random numbers in a single function. To make this easy to consume, [Dirt Road Development](https://dirtroad.dev/) has created a utility package called [skale-rng](https://docs.dirtroad.dev/skale/skale-rng). This NPM package can be added to your codebase and offers a number of pre-built utilities to quickly iterate on the RNG value to grab many random numbers. It also helps with selecting and maintaining ranges for the random numbers.
#### Shape Storm & RNG
In the code below you will notice a few things:
1. The first use of the random number value is **getRandomRange(4)**, where the value is ternary checked to ensure that it is never 0. With 0 being the default “empty” value in the EVM, it made more sense to start at 1 for this array, so the number is expected to be between 1 and 4.
2. After this, you will notice the next function used is **getNextRandomRange(X, Y)**. This function was chosen to ensure that the one random number in the block could be \[bitwise] operated on and re-hashed to generate more random numbers. The X value can be any number on which some bitwise action will occur in relation to the original RNG value, generating a new integer that can be hashed and re-cast to a uint256 to give us a new random number. The Y value is the maximum in-range value (inclusive). This function is used over and over to generate a whole bunch of random numbers, at no cost, all in one shot.
The end result of this is that every shape is represented as a unique NFT in ERC-721 format!
```solidity
// Reminder, this is Solidity (.sol)
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.13;

import "@dirtroad/skale-rng/contracts/RNG.sol";

contract ShapeStorm is RNG {
    function _mint(address to) internal {
        // First draw: shape type, forced into the 1-4 range (0 is the EVM's empty value)
        uint8 rng = uint8(getRandomRange(4));
        uint8 shapeNumber = rng == 0 ? 1 : rng;
        if (currentTokenId + 1 > maxTokenSupply) revert MaxSupplyReached();
        ShapeStats memory baseShapeStats = baseStats[shapeNumber];
        // Derive every stat from the same block RNG via perturb-and-rehash
        uint8 rotateSpeed = _validateNumber(MINIMUM_ROTATE_SPEED, uint8(getNextRandomRange(3, baseShapeStats.rotateSpeed)));
        uint8 maxSpeed = _validateNumber(MINIMUM_MOVEMENT_SPEED, uint8(getNextRandomRange(4, baseShapeStats.maxSpeed)));
        uint8 dashSpeed = _validateNumber(DASH_SPEED_BOOST, uint8(getNextRandomRange(5, baseShapeStats.dashSpeed)));
        uint8 bulletDamage = _validateNumber(BULLET_DAMAGE, uint8(getNextRandomRange(6, baseShapeStats.bulletDamage)));
        uint8 shootCooldown = _validateNumber(SHOOT_COOLDOWN, uint8(getNextRandomRange(7, baseShapeStats.shootCooldown)));
        uint8 shieldCapacity = _validateNumber(SHIELD_CAPACITY, uint8(getNextRandomRange(8, baseShapeStats.shieldCapacity)));
        uint256 newTokenId = currentTokenId++;
        tokenStats[newTokenId] = ShapeStats(baseShapeStats.shape, rotateSpeed, maxSpeed, dashSpeed, bulletDamage, shootCooldown, shieldCapacity);
        _safeMint(to, newTokenId);
    }
}
```
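The perturb, re-hash, bound pattern behind **getNextRandomRange** can be sketched offchain. This is a conceptual illustration only: it substitutes SHA-256 for the EVM's keccak256, and skale-rng's exact mixing may differ:

```typescript
// Conceptual offchain sketch of deriving many bounded randoms from one seed.
// Mirrors the perturb -> re-hash -> bound idea; not the skale-rng implementation.
import { createHash } from "node:crypto";

function nextRandomRange(
  seed: bigint,
  salt: number,
  max: number
): { value: number; seed: bigint } {
  // Perturb the running seed with the salt, then re-hash to a fresh 256-bit value.
  const mixed = (seed ^ BigInt(salt)).toString(16).padStart(64, "0");
  const digest = createHash("sha256").update(mixed, "hex").digest("hex");
  const next = BigInt("0x" + digest);
  // Bound into 1..max inclusive (0 is avoided, as in the shape-type draw above).
  return { value: Number(next % BigInt(max)) + 1, seed: next };
}
```

Each call consumes the previous hash as the new seed, so a single onchain random value fans out into an arbitrary stream of bounded attributes at no extra cost.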
import Footer from '../../snippets/_footer.mdx'
## The Rise of the Machine Economy
This article examines how blockchain infrastructure, particularly SKALE Network's zero-gas, high-performance platform, will enable the emergence of a machine-driven economy powered by autonomous AI agents. By combining technologies like x402 for programmable payments, small language models for efficient AI processing, and decentralized identifiers for verifiable interactions, we can create seamless workflows where machines transact, collaborate, and execute economic activities without human intervention.
### What is SKALE?
SKALE is a network of Ethereum-compatible blockchains designed for speed, efficiency, and scale. It offers zero gas fees, native multi-chain functionality, and a high-performance Ethereum Virtual Machine (EVM) implementation built in C++.
What makes SKALE unique is the range of features built directly into the network. Developers get onchain random number generation (RNG), a native oracle, a fully decentralized bridge connecting Ethereum and SKALE Chains, and onchain file storage—all without relying on external services. These features make SKALE a true decentralized cloud for compute and storage.
On top of that, SKALE delivers the only single-slot finality EVM in production today. Consensus is mathematically provable, fully asynchronous, and leaderless, allowing transactions to finalize in about a second with strong security guarantees. This combination of speed, scale, and native capabilities sets SKALE apart as one of the most advanced blockchain platforms available today.
**ELI5 Analogy:** Imagine Ethereum (and most L1s and L2s) are like busy cities with one main highway. SKALE builds an entire network of highways that are just as safe but much faster, and every car on them gets free gas. Not only that, each highway comes with built-in tools like storage garages, toll-free bridges, and even random dice rollers for games. It's like giving blockchain apps their own superhighway to run smoothly without traffic jams.
#### A Focus on Compute
SKALE set its sights on being home to **high-performance, compute-intensive decentralized applications**, especially in areas like onchain gaming, DePIN, and real-time data processing.
The screenshot below is from [dAppRadar](https://dappradar.com) -- one of the leading data and analytics platforms in the blockchain space -- on 8/22/25 and shows that 3 of the top 5 games in blockchain are on SKALE. If you double-click into each, you will see that all three do a significant amount, if not the majority, of their compute on SKALE.

The architecture—featuring customizable, zero-gas chains with high throughput and low latency—makes it the go-to blockchain platform for apps that literally live in the world of millions of transactions. The above dApps are built across the [Nebula](https://portal.skale.space/chains/nebula) and [Calypso](https://portal.skale.space/chains/calyspo) SKALE Hub Chains.
When people talk about SKALE being built for heavy workloads, the best example is **Exorde**. Exorde is a decentralized data and sentiment-analysis protocol that depends on millions of transactions every single day. Contributors across the world continuously crawl tens of millions of URLs and submit data to the chain, which translates into **over 2 million daily transactions** and nearly **1,000,000,000 (billion) total transactions** onchain so far. On any gas-metered chain this level of activity would cost hundreds of millions of dollars, making the model completely unsustainable. On SKALE, those same transactions are processed at **zero gas cost to users**.
This isn't just a matter of being "cheaper." Without SKALE's zero-gas model, [Exorde](https://exorde.network) simply could not exist in a decentralized way. Running millions of writes per day would be financially impossible on Ethereum mainnet or even on most L2s, where gas adds up fast. What SKALE provides is effectively a decentralized compute cloud that can handle workloads which are normally reserved for centralized servers. It shows that SKALE's architecture isn't just optimized for high throughput—it unlocks entirely new categories of applications that only make sense when gas costs are removed from the equation.
#### The Blockchain Wars
Over the past 6-12 months the blockchain wars (yes I'm calling it that) have really been heating up. The number of blockchains that have either been announced or launched is continuing to increase. In addition, there are more [Rollup-as-a-Service (RaaS)](https://www.alchemy.com/overviews/what-are-rollups-as-a-service-appchains) providers and more application-chain networks being spun out, including [Base Appchains](https://www.coinbase.com/developer-platform/discover/launches/base-appchains).
Time for some opinions. These are my personal opinions and do not reflect the stance of any of the companies I am contracted by.
**#1 - Most blockchains will die. Most tokens will die. Those that will succeed need differentiation. Speed is not differentiation.**
**#2 - Layer 2s will continue to be cannibalized by Base, and Ethereum scaling the L1 will kick off a mass extinction event for L2s.**
**#3 - Big Layer 1s that are just forks of Geth, have slow consensus, and are racing to the bottom for cheaper fees will all die.**
A summary of my opinions and why they matter: the majority of blockchains you see today have no usage. Appchains on average have even less, with many going unused for weeks, months, or even years at a time.
#### Who will buy Appchains?
I think that the biggest buyers of application chains over the next 5 years will be large corporations and governments. With compute becoming cheaper, it's feasible for a company to have a fleet of blockchains doing different things, in different locations, with different ownership structures, and different access points.
A fantastic read from Toyota, [this research report](https://www.toyota-blockchain-lab.org/library/mon-orchestrating-trust-into-mobility-ecosystems) dives into how they plan to use multiple Avalanche subnets to coordinate identity, information, payments, and data. While they chose Avalanche for their Proof-of-Concept, they call out the following as important:
"We chose Avalanche because its design centered on multiple L1s (formerly Subnets), its fast finality, and its native ICM align with MON's philosophy of *building locally, collaborating globally.*" -- [Toyota Blockchain Lab](https://www.toyota-blockchain-lab.org/library/mon-orchestrating-trust-into-mobility-ecosystems#:~\:text=We%20chose%20Avalanche%20because%20its%20design%20centered%20on%20multiple%20L1s%20\(formerly%20Subnets\)%2C%20its%20fast%20finality%2C%20and%20its%20native%20ICM%20align%20with%20MON's%20philosophy%20of%20building%20locally%2C%20collaborating%20globally.)
Based on the above, SKALE is and will remain a top contender thanks to being the only multichain network with instant finality and zero gas fees that also has sustainable mechanics.
### Collaborative Technologies
#### What is x402?
[x402](https://www.x402.org) revives the HTTP 402 "Payment Required" status and turns it into a painless, real-time payments system using stablecoins. It was introduced by Coinbase to enable [internet-native payments](https://www.coinbase.com/developer-platform/discover/launches/x402). It allows APIs, agents, and applications to transact without juggling API keys or subscriptions. Think of it as embedding payments directly into the web with zero friction—no fees, instant settlement, and blockchain-agnostic at its core.
If you want to dive deeper, check out this [research paper on multi-agent economies](https://arxiv.org/abs/2507.19550). It explores how autonomous agents can use x402 for discovery and payments, enabling seamless HTTP-based micropayments backed by blockchain.
#### What are Small Language Models (SLMs)?
When I talk about SLMs, I'm talking about the smaller, lighter versions of big language models. They're compact, fast, and cheap to run, but still surprisingly capable. Because they don't need massive cloud compute, they're great for things like edge devices, personal assistants, or any use case where privacy matters. They're basically a practical way to get a lot of AI power without the huge overhead.
They retain the general capabilities of LLMs. For this reason, as individuals and companies look to gatekeep resources, APIs, MCP access, and agent access in order to profit, or at a minimum cover their costs, SLMs could be a huge unlock offchain.
I do believe there is the potential to explore running SLMs on SKALE Chains directly as well, but I'll cover that in a different write-up.
#### What is DID?
Decentralized Identifiers ([DIDs](https://w3c-ccg.github.io/did-primer/)) are, I think, a missing piece of the puzzle to connecting blockchain to the broader internet. Years ago when I was building Lilius, it was one of the areas we were doing heavy research and exploring to help bolster user identity.
There has been a significant amount of research and growth in this area:
* [https://ethereum.org/en/decentralized-identity/](https://ethereum.org/en/decentralized-identity/)
* [https://github.com/decentralized-identity/ethr-did-resolver](https://github.com/decentralized-identity/ethr-did-resolver)
* [https://github.com/uport-project/ethr-did-registry](https://github.com/uport-project/ethr-did-registry)
One of the outstanding questions with both x402 and agentic collaboration is identity and proof of action. With SKALE's zero-gas-fee nature, storing DIDs onchain could be done in some cases for free and in others for flat rates (to avoid DoS attacks).
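To make DIDs concrete: the ethr DID method used by the registries linked above derives an identifier directly from an Ethereum address, optionally namespaced by network. A sketch follows; the "skale" network label is illustrative, not a registered network name:

```typescript
// Sketch of ethr-method DID construction (per the ethr-did-resolver scheme).
// The "skale" network label used in examples is illustrative only.
function toEthrDid(address: string, network?: string): string {
  // An ethr DID wraps a checksummed or lowercase 20-byte Ethereum address.
  if (!/^0x[0-9a-fA-F]{40}$/.test(address)) {
    throw new Error("not an Ethereum address");
  }
  return network ? `did:ethr:${network}:${address}` : `did:ethr:${address}`;
}
```

Because the identifier is just the account itself, any agent that can sign with its key can prove control of its DID, with no registration transaction required until attributes need to be stored onchain.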
### A Machine-compatible Future
This section will read a bit like a movie. Let's start with the ending.
**Blockchains are built for machines, not people. A decentralized UX that is actually good is nearly impossible to come by, and what exists today is not feasible for the average human being.**
The broader positioning of blockchain is very interesting. Over the years the "pitch" to the average user has evolved from ideas like **own your own money** and **be your own bank** to **blockchain is the next version of the internet, i.e., Web3**.
I think the latter has always been the area that was more interesting to me. Building applications for the traditional *Web2* world can at times be very frustrating due to the large number of hoops a developer has to jump through to access information, payment rails, etc. Additionally, many things in Web2 that a developer needs to make a successful application have a high cost of either entry or use.
The perfect example is payment processing, which often charges a minimum of 2.3% + $0.30 per transaction, or private application stores like Apple's, which have historically charged 15-30% on transactions.
The promise of blockchain to a developer is the ability to side-step many of these hidden fees and linear costs in favor of something more equitable to both you and your users.
#### The Pitfalls
Blockchains are built for machines, not humans, and that creates some real headaches for developers and users alike.
1. **UX Friction** – Wallets, gas fees, confirmations, and failed transactions make even simple interactions frustrating. Humans usually just want to click a button and see instant results, but blockchains make this difficult.
2. **Cost Barriers** – Transaction fees and the overhead of smart contract execution can make small-scale applications prohibitively expensive. Even simple micropayments or automated interactions become costly if you're relying on general-purpose blockchains with variable gas fees.
> FAIR is not grouped in the general purpose category directly for me in the sense of cost barriers because of its inherent differentiation with Proof-of-Encryption. I'm willing to pay a premium for enhanced security on a general-purpose L1.
3. **Complexity in Automation** – If you want agents or APIs to act autonomously, you quickly run into problems. Without verifiable credentials, you have no proof that actions were executed by the right system. Without programmable money, you have no way to allocate funds to machines or automate workflows without constant human oversight, considering that traditional payment rails cannot settle instantly and often charge dozens to hundreds of basis points per transaction.
4. **Security Risks** – Autonomous systems can behave unpredictably or "hallucinate" in edge cases. Without immutable guardrails, you risk agents misallocating funds or performing unintended actions.
5. **Slow Interoperability** – Moving value or data between chains or off-chain systems can be slow and expensive, making it hard to scale applications that rely on multiple networks or financial platforms.
#### My Blue Sky Future
Now imagine solving these problems with a stack of modern tools and a little architectural elegance:
1. **Seamless UX for Humans and Machines** – Humans continue to interact via a simple browser or app interface, while machines (agents, APIs, MCP servers) interface directly with **SKALE** and **FAIR**. Humans never see the complexity, but autonomous agents can execute actions, settle payments, and respond in real-time.
2. **Verifiable Actions via DIDs** – Each agent or API can carry a **Decentralized Identifier (DID)** with verifiable credentials. This proves that every action—whether a payment, API call, or task completion—was executed correctly and securely, creating trust without intermediaries.
3. **Tokenized Workflows with x402** – With **x402**, payments and tokens flow seamlessly between humans and machines and machine-to-machine. Onramps, exchanges, and an expanding variety of stablecoins allow unique allocation strategies: your AI agent can earn, hold, and spend money autonomously while staying under human-enforced rules.
4. **Immutable Guardrails on SKALE** – Smart contracts on SKALE can enforce spending limits or rules for machines (agents, APIs), preventing accidental "hallucinations" or misallocations. APIs and traditional servers can automatically receive payments, then dynamically route funds back to agents or financial applications. The reason SKALE shines here is **instant finality and zero gas fees**, letting agents operate continuously without bottlenecks.
5. **Expanded Financial Access with FAIR** – Suppose your agent or MCP server earned $1,000 today through x402 payments. With FAIR L1 integration, those funds can instantly be deposited into a decentralized AMM, lending platform, or other financial service—turning autonomous work into real, deployable capital in real time.
This is the vision of a **machine-compatible future**: humans enjoy smooth experiences, agents act autonomously but verifiably, and money and data flow instantly and securely across decentralized networks. By combining DIDs, SLMs, x402, [SKALE](https://skale.space), [FAIR](https://fairchain.ai), and MCP servers, we can finally build applications where humans, AI, and financial systems interact seamlessly—without friction or unnecessary intermediaries.
### Conclusion: Appchains for the Machine Economy
This brings us to the inevitable conclusion. The future of blockchain is not a single, congested superhighway but a sprawling, interconnected network of specialized application chains. As the internet evolves into a truly machine-compatible ecosystem, the demands on this infrastructure will be relentless. Autonomous agents running on SLMs will need to transact millions of times a day, verified by DIDs and settled instantly via protocols like x402. For these systems, gas fees are not just a cost; they are a critical point of failure.
This is where SKALE's architecture transitions from a competitive advantage to a fundamental necessity. Its zero-gas, instant-finality model is not merely a feature; it is the native habitat for high-frequency, compute-intensive applications. The very multi-chain design that a company like Toyota seeks for its complex data and mobility ecosystems is the core principle SKALE has already perfected.
As enterprises and developers move beyond speculation and begin building the high throughput applications of tomorrow, they will not be looking for the cheapest chain, but the only one where their business model is economically viable. SKALE is not just a contender in the appchain race; it is the logical endgame. It is the decentralized cloud where the machine economy will finally be built.
import Footer from '../../snippets/_footer.mdx'
## The Role of Pay-Per-Tool in an Agentic World
The role of agents and AI tooling is expanding very quickly. With new releases, tools, models, and innovations coming out every day, it's important to understand the role of tools in agentic systems and why the consumption model may need an economic change. This blog introduces the concept of a tool, walks through an example, and explores why pay-per-tool is the next logical step.
### What is a tool?
Tools are pieces of software that perform a specific task and are designed to be called by language models. A tool can provide access to complex programming logic, external APIs, or even other models.
One of the more common tools that is used across many agentic applications is the ability to search the web. With models being trained on specific sets of data, they tend to have a *cutoff date*, which is when the information used to train the model was last updated.
This means that for an LLM to have access to the most recent information, it needs to be able to crawl the web and retrieve up-to-date info.
Looking at [OpenAI](https://platform.openai.com/docs/guides/tools), their standard set of tools includes web search, calling to remote MCPs, file search, image generation, code interpreter, and more.
In summary, tools are a critical component of agentic systems that allow large and small language models to have access to more information and functionality that may not be available in the model directly.
### Why is Pay-Per-Tool the next logical step?
As AI continues to grow in both complexity and usage, a major question is the cost of services and information. It's no secret that the cost of operations for artificial intelligence is enormous, with a significant amount of subsidization and free usage being offered by many of the top companies.
However, as the technology continues to mature and become more mainstream, specifically mainstream for agentic use cases beyond prompt-based LLMs like ChatGPT and Claude, there is a need for agents to have access to more functionality and information—but at whose expense?
Cloudflare introduced [pay per crawl](https://blog.cloudflare.com/introducing-pay-per-crawl) to address the changing landscape of consumption. While the motives were slightly different, the end goal is the same: to allow compensation for access to resources.
The next logical step is to explore, at a high level, how pay-per-tool applies across a range of tools and how it can be used to create a more dynamic and scalable agentic system.
### How does Pay-Per-Tool work?
Pay-per-tool describes a simple flow in which an agent calling a tool pays for the resource at the moment of use, and the provider of that resource receives compensation directly.
Using x402, tools can be paid for per use. This means that the user only pays for the tool they use rather than signing up for hundreds of subscriptions. Additionally, this allows tooling providers to properly pass on the costs as the internet changes and pay-per-crawl becomes a reality.
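The flow above can be sketched as a toy simulation. This is not the real x402 wire format: the `X-PAYMENT` header name mirrors the protocol, but the payload, the pricing object, and both functions are illustrative stand-ins for a real server and client.

```typescript
// A toy simulation of the pay-per-tool flow; details are illustrative.
type ToolResponse = {
  status: number;
  body: string;
  accepts?: { amount: string; asset: string };
};

// A tool endpoint that requires payment before serving its result.
function toolEndpoint(headers: Record<string, string>): ToolResponse {
  if (!headers["X-PAYMENT"]) {
    // First call: respond 402 with the payment requirements.
    return {
      status: 402,
      body: "Payment Required",
      accepts: { amount: "0.01", asset: "USDC" },
    };
  }
  // Payment attached: serve the tool result.
  return { status: 200, body: "search results for: skale x402" };
}

// An agent-side client that pays on demand and retries.
function callTool(): ToolResponse {
  const first = toolEndpoint({});
  if (first.status === 402 && first.accepts) {
    // Stand-in for constructing and signing a real payment payload.
    const paymentProof = `paid:${first.accepts.amount}:${first.accepts.asset}`;
    return toolEndpoint({ "X-PAYMENT": paymentProof });
  }
  return first;
}

console.log(callTool().status); // 200 after paying
```

The key property is that neither side needs an account relationship: the price travels with the 402 response, and the payment travels with the retry.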

More specifically, pay-per-tool is already being explored through a number of open protocols, including the [Agent Payment Protocol](https://ap2-protocol.org) from Google, [x402](https://x402.org) from Coinbase, and [Agentic Commerce Protocol](https://www.agenticcommerce.dev) from OpenAI and Stripe.
### x402: The Default Solution for Pay-Per-Tool
x402 is the internet-native payments protocol from Coinbase, designed to work within the traditional web by utilizing the previously mostly dormant HTTP 402 `Payment Required` status code.
This allows existing web services like APIs and websites to easily adopt the protocol and start charging for access. We have already seen x402 explored in collaboration with other protocols like AP2 from Google and the ERC-8004 trustless agents framework being put forward by the Ethereum Foundation.
The reason I believe x402 is so key and will play such a big role is a combination of the simplicity and extensibility of the protocol itself. The current design has already allowed a number of services providing tools for agents like Firecrawl and Freepik to start enabling agentic access without the need to build a new API or develop a complex payment system.
### Blockchain Scalability and Costs
One of the larger value propositions that x402 brings to the table is the ability to have payments move at the speed of blockchains instead of traditional financial institutions. In reality, even Ethereum with \~12-minute finality is still faster than most traditional credit card payment settlement and ACH/cross-border processing times.
As you start to explore alternative Layer 1 blockchains like Solana, the speed of settlement becomes even more apparent with \~12–15 seconds of finality.
My belief is, and always will be, that today's fastest option will never be good enough and will always be too slow. As more entities begin utilizing blockchain infrastructure for their operations, settlement times will only continue to come down.
Based on the above, the best blockchain in the world today for real-time payments, especially for agentic micropayments, is [SKALE](https://skale.space).
With instant single-slot finality, consensus is completed in just a single block that is processed and executed in around one second on the current architecture. This can be sped up by improving consensus, resizing chains, changing node location (see Hyperliquid), and even improving hardware of the nodes.
However, even if all of that were done (see Solana), settlement speed alone is still not enough: agentic tools will need to be priced and settled at the scale of billions of requests per day.
The words "it's cheap enough" have been spoken by teams building on blockchain for the last 5+ years, and so far they have proven true. Why? Outside of the occasional spike, operating costs have been consistently good enough for current use cases on the order of hundreds of transactions per second.
#### A Scalability Scenario
Cloudflare, as of 2023, was serving 46 million HTTP requests per second according to [Alex Bocharov's blog post](https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare/).
Breaking this down, let's assume 0.01% of these requests are agents searching the web. That equates to 4,600 requests per second, a load a single SKALE Chain is fully capable of handling.
Using an arbitrary cost of 70,000 gas units per micropayment, the chain would need roughly 322,000,000 gas units per block just to cover the micropayments, excluding any other usage. SKALE, already having one of the largest block gas limits at \~268,000,000 gas units per block, would be able to handle this by raising the gas limit modestly.
Most EVM chains tend to limit the gas limit to \~30–50 million gas units per block, as they lack the capacity to handle this amount of compute.
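The back-of-the-envelope math in this scenario can be checked in a few lines. The figures are the scenario's assumptions, not measurements:

```typescript
// Back-of-the-envelope check of the scalability scenario above.
const totalRequestsPerSecond = 46_000_000; // Cloudflare, 2023
const agentShare = 0.0001;                 // 0.01% assumed to be agent web searches
const gasPerMicropayment = 70_000;         // arbitrary per-payment gas cost

// Round to avoid floating-point noise from the percentage multiply.
const agentRequestsPerSecond = Math.round(totalRequestsPerSecond * agentShare);
const gasPerSecond = agentRequestsPerSecond * gasPerMicropayment;

console.log(agentRequestsPerSecond); // 4600
console.log(gasPerSecond);           // 322000000
```

At \~1-second blocks, gas per second is effectively gas per block, which is where the \~322M figure comes from.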
Additionally, with most blockchains being designed to increase the base gas fee per block as the demand for compute increases from block space being filled, the cost of operations would continue to spike and stay elevated on most blockchains under this type of load.
Even Solana, with local fee markets, would see elevated costs due to constant interaction with specific accounts.
SKALE, on the other hand, operates more like a decentralized cloud provider—offering pre-paid compute resources and the ability to quickly add more compute to handle spikes in demand. SKALE is capable of handling the above scenario with zero spikes in cost due to the pre-paid nature of the network.
This means that while other networks would come to a standstill and see massive volatility and potential stability issues, SKALE would be able to handle the needed load to serve a significant portion of the onchain settlement of micropayments via x402 for agents on just a single SKALE Chain.
Additionally, SKALE can add a nearly unlimited number of SKALE Chains. This means that as more agents come online, more chains can be procured to handle the ever-increasing demand: forty million requests per second could be handled by 4,000 sChains, each serving 10,000 requests per second.
With thousands or even tens of thousands of servers in data centers and locations from many providers scattered all over the world, this is an incredibly feasible scenario for SKALE to be positioned to handle for the agentic world.
#### Reducing Costs to Increase Demand
One of the biggest challenges facing the adoption of pay-per-tool is operating cost combined with onchain costs and limits. With gas fees on chains like Solana and Base, two of the more popular x402 chains today, sitting around \~$0.001 per transaction, that is roughly 10% of the \~$0.01 that is the most common price of an x402 payment today.
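The overhead ratio is simple to verify, using the rough per-transaction figures cited above:

```typescript
// Gas overhead as a share of the payment, using rough current figures.
const gasFeeUsd = 0.001;  // approx. per-transaction fee on Solana/Base
const paymentUsd = 0.01;  // most common x402 payment size today

const overhead = gasFeeUsd / paymentUsd; // 0.1, i.e. 10% of the payment is gas
console.log(`${(overhead * 100).toFixed(0)}% overhead`);
```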
This is a major barrier to entry for providers and users alike: both operating costs and onchain overhead are too high.
With the reduction or removal of gas fees, the cost of operations can be scaled down significantly. This is especially relevant for many x402-enabled endpoints that are currently being used to serve basic content like images, text, price feeds, and more that can arguably cost a fraction of a penny to serve.
By scaling the costs down, agents can now afford to do more without the fear of running out of funds or being throttled by the human-set budget.
With zero gas fees and instant finality, the cost of a payment on a SKALE Chain can be scaled down to literally one wei in any token, while still enabling far more complex settlement and management contracts.
### Conclusion
The future of agentic systems is looking bright, but the cost of operations and the ability to scale is a major concern. The vision of pay-per-tool is key to ensuring that the future of agentic systems is capable of scaling to meet the needs of the growing market while being built in a way that is sustainable for providers and users alike.
If you are interested in deploying your own MCP server, agent, or resources for the agentic world on SKALE, reach out to me with the information below.
import Footer from '../../snippets/_footer.mdx'
## Vercel AI TypeScript SDK: Privacy Middleware
::authors
### The Problem: PII Leaks Through Abstractions
Your AI application uses the Vercel AI SDK. It's clean — `generateText`, `streamText`, a few lines of code, done. But every user message flows through your system intact:
> "I'm Alice Smith from Acme Corp. My email is `alice@example.com` and my phone is `555-0123`."
That string hits your:
* **Logs** — Sentry, Datadog, whatever you're using
* **Context windows** — passed to the LLM, stored in conversation history
* **Tool calls** — maybe forwarded to a CRM or booking API
* **Error traces** — when something breaks, the full message is in the stack
One misconfigured logger, one debug endpoint left open, one analytics pipeline you forgot about — and you've got a data breach waiting to happen.
In production AI systems, **PII isn't a compliance checkbox — it's a systemic vulnerability that travels through every abstraction layer.**
### The Solution: Middleware That Redacts
[ai-sdk-privacy-filter-middleware](https://github.com/TheGreatAxios/ai-sdk-privacy-filter-middleware) is a Vercel AI SDK [\[1\]](#sources) middleware that wraps any language model and automatically redacts PII from messages. It uses OpenAI's [privacy-filter](https://huggingface.co/openai/privacy-filter) model [\[3\]](#sources) running locally via [Transformers.js](https://huggingface.co/docs/transformers.js). [\[4\]](#sources)
No API calls. No cloud dependency. The 1.5B parameter model runs in your process — browser or Node.js.
#### How It Works
> **Model Architecture Note:** The privacy-filter model uses a sparse Mixture-of-Experts (MoE) architecture with 128 experts and top-4 routing per token. [\[3\]](#sources) While the full model is \~1.5GB, only \~50M parameters are active per forward pass — inference is fast once loaded.
Wrap your model with the middleware:
```typescript
import { wrapLanguageModel } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { privacyFilterMiddleware } from 'ai-sdk-privacy-filter-middleware';
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
const model = wrapLanguageModel({
model: openrouter('openrouter/free'),
middleware: privacyFilterMiddleware(),
});
```
Now every `generateText` or `streamText` call automatically:
1. **Detects** PII in user/system messages
2. **Replaces** it with typed placeholders (`[PERSON_1]`, `[EMAIL_1]`)
3. **Restores** original values in the LLM's response
```
User sends: "I'm Alice, my email is alice@corp.com"
LLM sees: "I'm [PERSON_1], my email is [EMAIL_1]"
LLM replies: "Hello [PERSON_1], I'll contact [EMAIL_1]"
User sees: "Hello Alice, I'll contact alice@corp.com"
```
The LLM never sees the real PII. Your logs only contain placeholders. Compliance becomes architectural.
### What It Detects
The model catches 8 categories of PII:
| Type | Placeholder | Example |
| ----------------- | ------------- | ------------------------- |
| `private_person` | `[PERSON_N]` | Alice Smith |
| `private_email` | `[EMAIL_N]` | `` `alice@example.com` `` |
| `private_phone` | `[PHONE_N]` | `+1-555-0123` |
| `private_address` | `[ADDRESS_N]` | 123 Main St |
| `private_date` | `[DATE_N]` | 1990-01-15 |
| `private_url` | `[URL_N]` | `https://example.com` |
| `account_number` | `[ACCOUNT_N]` | 1234-5678-9012 |
| `secret` | `[SECRET_N]` | API keys, passwords |
### The Middleware Pattern
The Vercel AI SDK's middleware system is the right abstraction for this. It sits between your code and the LLM — exactly where redaction belongs.
```
Your Code ──► transformParams (detect + redact PII)
│
▼
LLM sees redacted text
│
▼
LLM response (with placeholders)
│
▼
wrapGenerate/wrapStream (unredact placeholders)
│
▼
User sees original PII restored
```
The middleware implements three hooks:
* **`transformParams`** — redacts user and system messages before they reach the LLM
* **`wrapGenerate`** — unredacts placeholders in non-streaming responses
* **`wrapStream`** — unredacts placeholders in streaming responses via a transform stream
Only user and system messages are redacted. Assistant and tool messages pass through untouched — the LLM never outputs real PII, only placeholders that get restored.
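The hook structure can be sketched with a simplified, self-contained stand-in. The interface below is a toy version of the SDK's middleware type (the real one has more fields and streaming support), and the email regex is a placeholder for the actual privacy-filter model:

```typescript
// Toy version of the redact/unredact hook pattern; not the real SDK types.
type Message = { role: "user" | "system" | "assistant"; content: string };

interface MiniMiddleware {
  transformParams(params: { messages: Message[] }): { messages: Message[] };
  wrapGenerate(doGenerate: () => string): string;
}

function toyPrivacyMiddleware(): MiniMiddleware {
  const map = new Map<string, string>(); // placeholder -> original value
  let n = 0;

  // Stand-in detector: a crude email regex instead of the model.
  const redact = (text: string) =>
    text.replace(/\S+@\S+\.\S+/g, (email) => {
      const placeholder = `[EMAIL_${++n}]`;
      map.set(placeholder, email);
      return placeholder;
    });

  const unredact = (text: string) => {
    let out = text;
    map.forEach((original, placeholder) => {
      out = out.split(placeholder).join(original);
    });
    return out;
  };

  return {
    // Hook 1: redact user/system messages before the LLM sees them.
    transformParams: ({ messages }) => ({
      messages: messages.map((m) =>
        m.role === "assistant" ? m : { ...m, content: redact(m.content) },
      ),
    }),
    // Hook 2: restore placeholders in the model's reply.
    wrapGenerate: (doGenerate) => unredact(doGenerate()),
  };
}

const mw = toyPrivacyMiddleware();
const { messages } = mw.transformParams({
  messages: [{ role: "user", content: "Contact me at alice@example.com" }],
});
console.log(messages[0].content);                        // "Contact me at [EMAIL_1]"
console.log(mw.wrapGenerate(() => "Noted, [EMAIL_1]!")); // "Noted, alice@example.com!"
```

The real middleware follows the same shape: state (the placeholder map) lives in the middleware closure, so the redaction done in `transformParams` can be reversed in `wrapGenerate` and `wrapStream`.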
### Why Local Inference Matters
You could send messages to a cloud PII detection API. Many services offer this. But now you have:
* **Network latency** — every request adds 100-500ms
* **Data residency issues** — user data leaves your infrastructure
* **Rate limits and costs** — per-request pricing adds up fast
* **Availability dependency** — their downtime is your downtime
Running the model locally via Transformers.js eliminates all of this. The model loads once (\~1.5GB), then inference runs in milliseconds on CPU (WASM) or GPU (WebGPU).
#### Runtime Support
| Runtime | Device | Status |
| ------- | ------ | -------------------- |
| Node.js | WASM | Supported |
| Browser | WebGPU | Supported |
| Browser | WASM | Supported (fallback) |
Auto-detects the best available device. WebGPU when available, WASM fallback otherwise.
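A minimal version of that auto-detection logic looks like this (a sketch of the idea, not the library's actual internals):

```typescript
// Prefer WebGPU when the environment exposes it, otherwise fall back to WASM.
function pickDevice(): "webgpu" | "wasm" {
  const hasWebGpu = typeof navigator !== "undefined" && "gpu" in navigator;
  return hasWebGpu ? "webgpu" : "wasm";
}

console.log(pickDevice()); // "wasm" in Node.js, "webgpu" in a WebGPU-capable browser
```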
> **Warning:** WebGPU does not work in Safari. Use WASM fallback or deploy server-side for Safari users.
### Configuration
```typescript
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
const model = wrapLanguageModel({
model: openrouter('openrouter/free'),
middleware: privacyFilterMiddleware({
// Only redact specific entity types (default: all)
entityTypes: ['private_person', 'private_email', 'private_phone'],
// Minimum confidence score (default: 0.8)
minScore: 0.9,
// Whether to unredact LLM responses (default: true)
redactResponses: true,
// Custom placeholder format
placeholderFormat: (type, n) => `<<${type}_${n}>>`,
// Device override ('wasm' | 'webgpu')
device: 'webgpu',
// Model loading progress callback
onProgress: ({ status, progress }) => {
console.log(status, progress);
},
}),
});
```
### Eager Initialization
The model loads lazily on first use, so the first request will be slower (\~5-15s depending on hardware) while the model downloads and initializes. To pre-load:
```typescript
import { createPrivacyFilter } from 'ai-sdk-privacy-filter-middleware';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
const middleware = await createPrivacyFilter({ device: 'webgpu' });
const model = wrapLanguageModel({
model: openrouter('openrouter/free'),
middleware,
});
```
### Multi-Provider Support
Works with any AI SDK provider:
```typescript
// OpenRouter (default example)
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
// OpenAI
import { openai } from '@ai-sdk/openai';
// Anthropic, Gemini, Cohere, etc.
// Any provider that returns the standard AI SDK interface
```
The middleware is provider-agnostic — it operates on messages, not model implementations.
### Browser vs Server: Where to Run It
The middleware works in both environments, but the trade-offs differ.
> **Architecture Note:** The privacy-filter model is a sparse Mixture-of-Experts (MoE) with 128 experts. [\[3\]](#sources) Only \~50M parameters activate per token, making inference lightweight despite the \~1.5GB download size.
| Environment | Device | Model Size | Best For |
| ----------- | ------------------------- | ----------------------------------- | --------------------------------------------------------- |
| **Browser** | WebGPU (or WASM fallback) | Downloaded per user | Client-side apps, data never leaves the device |
| **Server** | WASM | Loaded once, shared across requests | API routes, guaranteed availability, no per-user download |
> **Warning:** WebGPU is not supported in Safari. Use WASM fallback or server-side deployment for Safari users.
#### Browser (WebGPU)
Running in the browser means:
* **Zero server cost** — model inference happens on the user's GPU
* **Maximum privacy** — PII never leaves the device, not even to your server
* **Per-user download** — every user downloads \~1.5GB on first use
* **Hardware variance** — WebGPU support varies; WASM fallback is slower
Ideal for: Chat UIs, client-only apps, compliance requirements that demand on-device processing.
#### Server (Node.js + WASM)
Running on the server means:
* **Predictable performance** — consistent hardware, no WebGPU variability
* **No per-user overhead** — model loads once, serves all requests
* **Standard deployment** — works on any Node.js host (Vercel, Railway, etc.)
* **Slightly slower inference** — WASM vs WebGPU, but still milliseconds
Ideal for: API routes, serverless functions, applications where you control the infrastructure.
Both work. Browser for privacy guarantees; server for operational consistency.
### Get Started
**Repository:** [github.com/TheGreatAxios/ai-sdk-privacy-filter-middleware](https://github.com/TheGreatAxios/ai-sdk-privacy-filter-middleware)
**Install:**
```bash
npm install ai-sdk-privacy-filter-middleware
```
**Quick start:**
```typescript
import { generateText, wrapLanguageModel } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { privacyFilterMiddleware } from 'ai-sdk-privacy-filter-middleware';
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
const model = wrapLanguageModel({
model: openrouter('openrouter/free'),
middleware: privacyFilterMiddleware(),
});
const result = await generateText({
model,
prompt: "My name is Alice Smith and my email is alice@example.com",
});
// PII is redacted before reaching the LLM
// Response placeholders are restored before you see them
console.log(result.text);
```
### Validating Redaction with Debug Logging
To verify the middleware is working correctly, use the debug callbacks to inspect what's happening at each stage:
```typescript
import { generateText, wrapLanguageModel } from 'ai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { privacyFilterMiddleware } from 'ai-sdk-privacy-filter-middleware';
const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY });
const model = wrapLanguageModel({
model: openrouter('openrouter/free'),
middleware: privacyFilterMiddleware({
entityTypes: ['private_person', 'private_email'],
minScore: 0.8,
redactResponses: true,
placeholderFormat: (type, n) => `<<${type}_${n}>>`,
onRedact: ({ original, redacted, entities }) => {
console.log('[onRedact] original:', original);
console.log('[onRedact] redacted:', redacted);
console.log('[onRedact] entities:', JSON.stringify(entities, null, 2));
},
onUnredact: ({ raw, unredacted }) => {
console.log('[onUnredact] raw:', raw.slice(0, 200));
console.log('[onUnredact] unredacted:', unredacted.slice(0, 200));
},
}),
});
const result = await generateText({
model,
prompt: "My name is Alice Smith and my email is alice@example.com",
});
console.log('=== FINAL OUTPUT ===');
console.log(result.text);
```
**Example output:**
```
[onRedact] original: My name is Alice Smith and my email is alice@example.com
[onRedact] redacted: My name is <<private_person_1>> and my email is <<private_email_1>>
[onRedact] entities: [
{
"entity_group": "private_person",
"score": 0.9999969899654388,
"word": " Alice Smith"
},
{
"entity_group": "private_email",
"score": 0.9995789726575216,
"word": " alice@example.com"
}
]
[onUnredact] raw: Hello <<private_person_1>>, I've noted your email <<private_email_1>>
[onUnredact] unredacted: Hello Alice Smith, I've noted your email alice@example.com
=== FINAL OUTPUT ===
Hello Alice Smith, I've noted your email alice@example.com
```
This confirms the LLM only saw placeholders, never the real PII. The `onRedact` callback shows exactly what entities were detected and their confidence scores, while `onUnredact` demonstrates the placeholder-to-original mapping being restored.
### When to Use This
**Use browser deployment when:**
* You're building a client-side chat interface
* PII must never reach your server (maximum privacy)
* Users have modern browsers with WebGPU support
* You can tolerate the per-user model download
**Use server deployment when:**
* You're building API routes or serverless functions
* You need predictable, consistent performance
* You want to avoid pushing 1.5GB to every user
* You're already running Node.js infrastructure
**Don't use this when:**
* You need centralized redaction across multiple languages — use [privacy-python-server](https://github.com/thegreataxios/privacy-python-server) instead
* You can't afford 1.5GB memory per instance (browser or server)
***
### Sources
1. Vercel, "AI SDK" documentation. [https://ai-sdk.dev](https://ai-sdk.dev)
2. OpenRouter, "OpenRouter AI SDK Provider" documentation. [https://openrouter.ai/docs](https://openrouter.ai/docs)
3. OpenAI, "privacy-filter" model card, HuggingFace. [https://huggingface.co/openai/privacy-filter](https://huggingface.co/openai/privacy-filter)
4. HuggingFace, "Transformers.js" documentation. [https://huggingface.co/docs/transformers.js](https://huggingface.co/docs/transformers.js)
import Footer from '../../snippets/_footer.mdx'
## x402 via EIP-3009 Forwarding
x402 is Coinbase's open protocol for internet-native payments that enables seamless blockchain transactions using ERC-3009 USDC. This article explores how to extend x402 to any blockchain through EIP-3009 forwarding contracts, with a focus on SKALE Network's zero-gas infrastructure that enables instant, cost-free transactions across any token on EVM-compatible chains.
x402 is an open protocol for internet-native payments created at Coinbase. Coinbase has been deeply aligned with Circle and chose to build the x402 flow around ERC-3009 native USDC.
With an increasing number of blockchains being created, many of which are utilizing bridged assets instead of official deployments from stablecoin issuers, this article explores ERC-3009, forwarding payments with ERC-3009 Forwarder, implementing with minimal facilitator changes, and more.
While this approach enables x402 on any blockchain, a major goal of the initiative was to bring x402 to SKALE, the only network with effectively infinite block space, zero gas fees, and instant finality.
### Technology Overview
The following section provides an introduction to x402, blockchains, SKALE Network, stablecoins and tokenization, ERC-3009, and the ERC-3009 Forwarder. If you are already familiar with these topics and want to skip to the implementation, click [here](#implementation).
#### x402
[x402](https://x402.org) by Coinbase is an open protocol for internet-native payments. It allows access to resources to be granted in a new manner, without traditional login/registration, OAuth, or complex onboarding, and it bakes monetization directly into the flow for resource access.
Specifically, x402 is blockchain agnostic and runs at the speed of blockchain. The faster the blockchain, the faster your payment. While designed for digital dollars, the standard is technically agnostic to allow for any token to be used as payment.
Additionally, it was built with a number of key use cases in mind such as agentic payments to allow for real time API access to resources, micro transactions for access and creation of content, and more broadly a native integration into any web based service allowing for monetization to occur without a middleman.
#### SKALE
[SKALE](https://skale.space) is a network of blockchains capable of bringing an infinite amount of block space to the world in the form of SKALE Chains. SKALE Chains are EVM compatible blockchains that are run by the pool of SKALE validator nodes. With a unique economic model that allows communities, developers, businesses, enterprises, and governments to purchase their own blockchains for a flat fee, the underlying SKALE Chain can be configured and utilized in whatever way they see fit, including zero gas fee transactions.
With the instant finality, large block sizes, and zero gas fees of a SKALE Chain, it's an ideal fit for bringing zero-cost operations to x402 for every token -- a far greater opportunity for developers and stablecoin issuers than relying on subsidized USDC on certain chains.
#### Stablecoins
If you are unfamiliar with stablecoins, they are cryptocurrencies designed to hold a stable value relative to some other asset. The largest stablecoins today are Tether USD (USDT) and USD Coin (USDC), tokens issued by their respective companies and intended to stay pegged at $1 USD per token.
The usage of stablecoins within x402 makes sense for many of the core use cases called out, such as cloud compute and storage providers. Stablecoins with zero gas fees are an even stronger pull for providers, who don't have to factor the cost of gas into their services.
#### ERC-3009 & Forwarding Contract
ERC-3009 allows the transfer of fungible assets through signed authorizations. Through the use of meta-transactions, signatures are used to approve the movement of assets. This brings with it many unique benefits, which are explored in the official [EIP-3009](https://eips.ethereum.org/EIPS/eip-3009) proposal.
The unique part about ERC-3009 is that it's natively implemented within only a few stablecoins, such as USDC and EURC by Circle. While this limits blockchains and tokens without ERC-3009-native assets, it does not stop us from moving forward.
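The authorization itself is an EIP-712 typed message. A minimal TypeScript sketch of the type set defined by EIP-3009, plus message construction (the helper name and one-hour window are my choices), might look like:

```typescript
import { randomBytes } from "node:crypto";

// The EIP-712 type set defined by EIP-3009 for transferWithAuthorization
const TRANSFER_WITH_AUTHORIZATION_TYPES = {
  TransferWithAuthorization: [
    { name: "from", type: "address" },
    { name: "to", type: "address" },
    { name: "value", type: "uint256" },
    { name: "validAfter", type: "uint256" },
    { name: "validBefore", type: "uint256" },
    { name: "nonce", type: "bytes32" },
  ],
};

// Build an authorization message valid for the next hour, with a random
// 32-byte nonce (EIP-3009 nonces are arbitrary bytes32 values, not sequential)
function buildAuthorization(from: string, to: string, value: bigint) {
  const now = Math.floor(Date.now() / 1000);
  return {
    from,
    to,
    value,
    validAfter: now,
    validBefore: now + 3600,
    nonce: "0x" + randomBytes(32).toString("hex"),
  };
}
```

The token holder signs this message off-chain, and anyone (a relayer) can submit it on-chain, which is what makes gasless transfers possible.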
While there are a number of ways to implement the meta-transactions, to start I chose a **Forwarding** contract for ERC-3009, since that is what the majority of facilitators currently offer. My belief is that while a technology is new we can always explore more complex and fine-tuned designs; however, the easier it is to integrate into existing tooling, the faster we can bring usage over to SKALE so that everyone can benefit from the zero gas fees.
> The current forwarding contract is not audited. This contract is offered without any guarantees or warranties under the MIT License. See the full license [here](https://github.com/TheGreatAxios/eip3009-wrapper/blob/main/LICENSE).
### Implementation
The following section explores all the code written and steps taken to achieve a working implementation.
#### Forwarding Payments with ERC-3009
The entire setup of facilitators currently relies on ERC-3009-compatible tokens, i.e., USDC.
Therefore, to reuse as much of the existing facilitator as possible, we needed to implement a `Forwarding` contract. While slightly less efficient, gas costs are irrelevant on a chain like SKALE, and the extra approval is a price worth paying to achieve my goal (I can iron it out later).
I created the [EIP-3009 Forwarder](https://github.com/TheGreatAxios/eip3009-forwarder), which follows the structure of the original proposal but routes through a forwarding contract. *The difference?* The sender signing the authorization must first approve the forwarder to spend the token.
As mentioned above, I think this is an acceptable tradeoff. This specific forwarding contract is designed to support a single token only; this was again done to mimic the setup and flows of a traditional facilitator.
#### x402 Example Scripts
Once the forwarding contract was built with a small test suite, it made sense to verify it worked end to end. I then set up a `Bun` mono-repo [here](https://github.com/TheGreatAxios/x402-examples) which has a config folder and an ethers-v6 example.
Note the first key step within the script below: checking the allowance and approving as needed.
```typescript
// ===================
// STEP A: User approves ERC-20 (with smart allowance check)
// ===================
console.log("Step A: Checking and setting allowance");
const approveAmount = ethers.parseUnits("1", token.decimals); // 1 token
const currentAllowance = await erc20Contract.allowance(userAddress, FORWARDER_ADDRESS);
const minimumRequired = (approveAmount * 20n) / 100n; // 20% of approve amount
console.log(`Current allowance: ${ethers.formatUnits(currentAllowance, token.decimals)}`);
console.log(`Minimum required (20%): ${ethers.formatUnits(minimumRequired, token.decimals)}`);
if (currentAllowance < minimumRequired) {
  console.log("Insufficient allowance, approving...");
  const approveTx = await erc20Contract.approve(FORWARDER_ADDRESS, approveAmount);
  await approveTx.wait();
  console.log(`✓ Approved! Tx: ${approveTx.hash}`);
} else {
  console.log("✓ Sufficient allowance exists, skipping approval");
}
```
This code ensures some amount is approved to the forwarder in advance, which can then be used for micro-transactions.
> I think this is an acceptable flow for agents, as they can do a single approval for small batches -- i.e., a $10 approval at $0.01 per call gets you 1,000 transactions.
You can deploy the forwarding contract and play with these scripts directly. Here is the example script executing a USDC (without native ERC-3009) payment on SKALE Europa Testnet:
```shell
Step A: Checking and setting allowance
Current allowance: 999999999999.78
Minimum required (20%): 0.2
✓ Sufficient allowance exists, skipping approval
User balance: 49.78 USDC
Step B: User signs authorization for 0.01 token transfer
Nonce: 0x94c1a3e2f911070928b2d1cee1c31736decabe7ac044b020f4b808806b58eb8b
Valid from: 2025-10-04T05:34:14.000Z
Valid until: 2025-10-04T06:34:14.000Z
Nonce already used: false
Domain separator: 0x056b9108f4b1e6aca877b44e3afa782d7a46328ecb25ee6d4eb037c02cfeaaa0
Domain: {
name: "USDC Forwarder",
version: "1",
chainId: 1444673419,
verifyingContract: "0x7779B0d1766e6305E5f8081E3C0CDF58FcA24330",
}
Authorization value: {
from: "0xdEAC50014a531969d76D9236d209912F4B1AacDB",
to: "0xD1A64e20e93E088979631061CACa74E08B3c0f55",
value: "0.01 (10000)",
validAfter: 1759556054,
validBefore: 1759559654,
nonce: "0x94c1a3e2f911070928b2d1cee1c31736decabe7ac044b020f4b808806b58eb8b",
}
✓ User signed authorization: 0xa8e092aea8b4b0001d...
Signature components: v=28, r=0xa8e092aea8b4b0001dd9e1f72c718855e0ff0b91668dd9d92ba3df474b051370, s=0x3f1380d002c243e432f2292aaed3bac5cb64b1f4a494e128d6e3fa33e722cfa7
Step C: Relayer executes the transfer (pays gas)
Final allowance check: 999999999999.78
✓ Transfer executed! Tx: 0x6e0da427fa6976cbc3100f155c77113fc2508249fcb042763baeb3af264370da
✓ Gas paid by relayer: 92297 units
✓ 0.01 tokens transferred from 0xdEAC50014a531969d76D9236d209912F4B1AacDB to 0xD1A64e20e93E088979631061CACa74E08B3c0f55
🎉 Gasless transfer complete!
- User paid 0 gas for the transfer
- Relayer paid the gas fees
- Transfer executed via signed authorization
```
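The EIP-712 domain logged above can be assembled as a plain object. A small sketch (the name and version simply mirror the example output, and the wallet API in the comment is an assumption about the ethers v6 flow):

```typescript
// EIP-712 domain for the forwarder: note that verifyingContract is the
// forwarder's address, not the token's, and name/version come from the
// forwarder deployment (values mirror the example output above)
function buildForwarderDomain(chainId: number, forwarder: string) {
  return {
    name: "USDC Forwarder",
    version: "1",
    chainId,
    verifyingContract: forwarder,
  };
}

// With ethers v6 the user would sign off-chain, roughly:
//   const sig = await wallet.signTypedData(domain, types, authorization);
// and the relayer then submits the signature on-chain, paying the
// (zero, on SKALE) gas.
```

Signing against the forwarder's domain rather than the token's is what lets non-ERC-3009 tokens participate.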
#### Minimal Facilitator Modifications
The facilitator is an optional service within the x402 flow that simplifies the process of verifying and settling payments. While optional, it can help accelerate the adoption and addition of x402 into applications that don't have the experience or the resources to build the necessary functionality.
The two key functions that a facilitator offers are:
1. Verification of payment payloads submitted by clients (buyers).
2. Settlement of payments on the blockchain on behalf of servers (sellers).
This enables any web server to use the blockchain for payments and settlement without a direct blockchain connection in its existing infrastructure.
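A minimal sketch of the verification half, under assumed payload field names -- a real facilitator would also recover the EIP-712 signature and confirm the nonce is unused on-chain:

```typescript
// Hypothetical shape of an EIP-3009-style authorization as a facilitator
// might see it after decoding the client's payment payload
type Authorization = {
  value: bigint;       // amount authorized, in token base units
  validAfter: number;  // unix seconds: authorization not valid before this
  validBefore: number; // unix seconds: authorization not valid after this
};

// Check the time window and amount before attempting settlement
function verifyAuthorization(
  auth: Authorization,
  requiredValue: bigint,
  nowSeconds: number,
): { valid: boolean; reason?: string } {
  if (auth.value < requiredValue) return { valid: false, reason: "insufficient value" };
  if (nowSeconds <= auth.validAfter) return { valid: false, reason: "not yet valid" };
  if (nowSeconds >= auth.validBefore) return { valid: false, reason: "expired" };
  return { valid: true };
}
```

Settlement is then just submitting the already-signed authorization on-chain, so a failed verification costs the facilitator nothing.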
The majority of facilitators (and x402 payments) have focused on Base so far. While Coinbase/Base may be subsidizing gas fees for USDC transactions, making x402 cheap there, there is no guarantee that lasts forever, nor does it extend to non-USDC stablecoins and tokens.
With the core protocol built around USDC to start, facilitators rely on an Ethereum Improvement Proposal (EIP) labeled EIP-3009, which allows transfers with authorization, further extending EIP-712 signatures and meta-transactions directly within the token contract.
However, as not all blockchains can attain a native deployment by Circle, and many existing stablecoins like USDT are not natively EIP-3009 compatible, I set out to ensure that facilitators could work with any token with minimal modifications.
The proposed changes use an [`EIP-3009 Forwarder`](https://github.com/thegreataxios/eip3009-forwarder) smart contract in Solidity, which wallets can approve to spend tokens on their behalf. This design allows any token on any blockchain to be used with almost no changes to an EVM facilitator, as the current flows remain nearly identical.
To prove this, I made a [pull request](https://github.com/faremeter/faremeter/pull/58) to Faremeter by [Corbits](https://corbits.dev) adding support to their facilitator. The majority of changes are additional configuration, as you can see in the following:
```typescript
type NetworkInfo = {
address: Address;
contractName: string;
/* Added Below */
forwarder?: Address;
forwarderName?: string;
forwarderVersion?: string;
};
```
The most involved changes were within the actual EVM facilitator code, though even those were fairly light: the facilitator simply has to defer to a forwarder when one is present, and otherwise fall back to the actual ERC-20 with native ERC-3009 support.
I wound up refactoring the whole file to better reuse code; here is an example:
```typescript
async function createContractConfig(
useForwarder: boolean,
chainId: number,
forwarderVersion: string | undefined,
forwarderName: string | undefined,
forwarderAddress: `0x${string}` | undefined,
publicClient: PublicClient,
asset: `0x${string}`,
contractName: string,
): Promise<{ address: `0x${string}`; domain: TypedDataDomain }> {
const address = getContractAddress(useForwarder, forwarderAddress, asset);
const domain = useForwarder
? generateForwarderDomain(chainId, {
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
version: forwarderVersion!,
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
name: forwarderName!,
// eslint-disable-next-line @typescript-eslint/no-non-null-assertion
verifyingContract: forwarderAddress!,
})
: await generateDomain(publicClient, chainId, asset);
// Validate contract name for non-forwarder cases
if (!useForwarder && domain.name !== contractName) {
throw new Error(
`On chain contract name (${domain.name}) doesn't match configured asset name (${contractName})`,
);
}
return { address, domain };
}
```
This function, taken from [here](https://github.com/faremeter/faremeter/blob/76f2e79ee2906ae4e60330186c350bfd31e520a1/packages/payment-evm/src/exact/facilitator.ts#L81), showcases the dynamic use of `useForwarder`.
When the facilitator is called, it uses the incoming token and chain configuration to determine whether the forwarder is needed. After that, the core facilitation stays 1:1, as the actual EIP-712 signature validation and meta-transaction execution remain identical.
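The defer-to-forwarder decision can be reduced to a small sketch (the type and helper names are mine, following the `NetworkInfo` shape shown earlier):

```typescript
type Eip712Domain = {
  name: string;
  version: string;
  chainId: number;
  verifyingContract: string;
};

// When the network config carries a forwarder, the EIP-712 domain targets
// the forwarder; otherwise it targets the ERC-3009 token contract itself
function selectDomain(
  chainId: number,
  asset: { address: string; name: string; version: string },
  forwarder?: { address: string; name: string; version: string },
): Eip712Domain {
  const target = forwarder ?? asset; // defer to forwarder when present
  return {
    name: target.name,
    version: target.version,
    chainId,
    verifyingContract: target.address,
  };
}
```

Because everything downstream of domain selection is unchanged, this single branch is all that separates a native ERC-3009 token from a forwarded one.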
#### Why x402 on SKALE
> This section is opinionated.
**"Why SKALE?"** is a question that I have been getting asked for over 4.5 years now (as of 10/1/2025). I think as a developer you find your preferred tech stacks and for everyone it's a bit different. SKALE however really is unique. The combination of performance, stability, innovation, and feature set is unmatched across the Web3 space.
In the case of x402 -- there is quite literally no network better suited to dominate. I've been asking developers building what they value most with x402. The answer is always one of two things:
1. The cheapest costs possible (i.e gas fees) which allows facilitators to reduce their opex and not have to pass it onto buyers as service fees
2. Speed. Speed. Speed. They want the chain to be fast and they are prioritizing real finality when possible (i.e Solana > Base).
If you were unaware:
1. SKALE Chains have zero gas fees
> This doesn't mean that SKALE doesn't make money. SKALE Chains are pre-paid monthly by application and chain owners -- no different from many of the most successful cloud models in the world, like Amazon Web Services or Google Cloud.
2. Instant Finality
> Once a transaction is posted, the block and its transactions cannot be reversed. The fork-less nature of a SKALE Chain means that current chains, which operate at around 1-2s block times, are faster than most L1s and retain better finality with lower risk. Additionally, smaller SKALE Chains with co-located nodes (think Hyperliquid style) could reduce this to a fraction of the time, still with instant finality.
Beyond that, the last consideration is scalability. While some blockchains today may handle a few thousand transactions per second, or more at peak, the whole world will never run on a single blockchain (for many reasons).
SKALE also makes it possible to run an unlimited number of blockchains for x402, payments, stablecoins, and the broader onchain finance landscape as it continues to grow.
### Conclusion
I think x402 is one of many recent protocols that is incredibly exciting for the future of the machine economy. I previously wrote [The Rise of the Machine Economy](/blog/the-rise-of-the-machine-economy) which outlined my thoughts about how agentic payments will grow.
As onchain payments are still in their infancy, the growth potential here is massive. While Turing-complete blockchains enable programmable payments, the natural integration with the broader internet makes x402 a potential catalyst to bring many businesses onchain.
With this potential growth, the only network capable of scaling to handle an unlimited number of payments (of any size, including sub-cent) is SKALE. Based on this, I think a SKALE Chain (of variable size) will become a default part of the stack for businesses looking to access x402.
***
import Footer from '../../snippets/_footer.mdx'