Why VM now
The 2026 question every buyer asks — can ChatGPT, Claude, or Copilot reverse this? — has a structural answer: AI assistants pattern-match against transform shapes they’ve seen in training. Static obfuscators with fixed transforms eventually lose. Per-build polymorphism wins because the LLM has no fixed signature to learn.
But polymorphism is still a transform of recognizable JavaScript. A determined human with an execution sandbox and patience can recover the original logic given enough time. For the small set of functions where that recovery cost matters — license validation, anti-tamper checks, proprietary algorithms — we want a stronger answer.
That answer is virtualization. Compile the function to bytecode for a custom VM. Ship the bytecode plus the VM interpreter. The original function’s control flow, identifiers, and structure are gone — what ships is a stream of opcodes that only the VM understands.
How it works, end to end
1. You mark functions for virtualization
Source — you opt in per function
// @virtualize
function calculateLicenseHash(userId, productKey) {
  const seed = userId * 37 + productKey.charCodeAt(0);
  return 'JSO-' + seed.toString(16).padStart(8, '0').toUpperCase();
}
// regular code (NOT virtualized) - keeps native JS speed
function renderUI(state) {
  document.body.textContent = state.label;
}
Only the marked function is virtualized. Everything else compiles through the standard Maximum-mode pipeline (renaming, string encryption, flat control flow). The VM is reserved for the parts where its runtime cost is worth the protection it gives.
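To make the opt-in mechanism concrete, here is a minimal sketch of how a tool might locate marked functions. The marker syntax comes from the example above; the function name and the textual matching are illustrative assumptions (a real implementation would walk the AST rather than use a regex).

```javascript
// Illustrative sketch only: find function names preceded by // @virtualize.
// A production compiler would attach the marker to AST nodes instead.
function findVirtualizeTargets(source) {
  const re = /\/\/\s*@virtualize\s*\n\s*function\s+([A-Za-z_$][\w$]*)/g;
  const names = [];
  let m;
  while ((m = re.exec(source)) !== null) names.push(m[1]);
  return names;
}
```

Run against the source above, this would return `["calculateLicenseHash"]` and leave `renderUI` untouched.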
2. The compiler emits bytecode + a VM interpreter
At protection time, the marked function is parsed to an AST and compiled to a stream of opcodes for a stack-based VM. Common opcodes: PUSH_LITERAL, LOAD_VAR, BINARY_OP, CALL_METHOD, RETURN. The opcode encoding is regenerated per build — the same logical operation has different byte values in different releases.
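The per-build encoding can be pictured as a shuffled opcode table. This sketch (names and structure are assumptions, not the product's real API) assigns each mnemonic a fresh random byte value, so the same logical operation gets different byte values in different builds:

```javascript
// Sketch: per-build opcode remapping. Mnemonics taken from the text above;
// everything else is illustrative.
const MNEMONICS = ["PUSH_LITERAL", "LOAD_VAR", "BINARY_OP", "CALL_METHOD", "RETURN"];

function makeOpcodeTable(rng) {
  const bytes = Array.from({ length: 256 }, (_, i) => i);
  for (let i = bytes.length - 1; i > 0; i--) {        // Fisher-Yates shuffle
    const j = Math.floor(rng() * (i + 1));
    [bytes[i], bytes[j]] = [bytes[j], bytes[i]];
  }
  const table = {};
  MNEMONICS.forEach((m, i) => { table[m] = bytes[i]; });
  return table;
}

const buildA = makeOpcodeTable(Math.random);
const buildB = makeOpcodeTable(Math.random);
// buildA.BINARY_OP and buildB.BINARY_OP almost certainly differ,
// which is exactly what defeats signature learning across releases.
```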
What ships in the protected output (illustrative)
// VM interpreter - shape regenerates per build
(function _vm(){
  var R = new Array(256), SP = 0, PC = 0;
  var BC = _decodeBytecode("...long base64 string...");
  var STR = _decodeStrings("...encrypted constant pool...");
  while(true) {
    switch(BC[PC++]) {
      case 0x4a: R[SP-2] = R[SP-2] * R[SP-1]; SP--; break; // mul
      case 0x91: R[SP++] = STR[BC[PC++]]; break;           // load string
      // ... ~40 other opcodes, dispatcher shape randomized per build
    }
  }
})();
calculateLicenseHash = function(){ return _vmCall(0x12, arguments); };
3. The protected file at runtime
When your application calls calculateLicenseHash(userId, productKey), control transfers into the VM. The VM dispatches opcodes against its internal register file and stack. The function’s observable behaviour is identical — it returns the same hash for the same inputs — but its original structure never ships: recovering it from the bundle means reversing the VM itself.
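The dispatch loop itself is simple to demonstrate. Below is a minimal, runnable stack-VM sketch under assumed opcodes (PUSH, LOAD_ARG, MUL, ADD, RET — a toy encoding, not the product's) that evaluates the seed expression from the earlier example and produces the same result as the native code:

```javascript
// Toy stack VM: illustrative opcode set, not the shipped encoding.
const PUSH = 0, LOAD_ARG = 1, MUL = 2, ADD = 3, RET = 4;

function runVM(bytecode, consts, args) {
  const stack = [];
  let pc = 0;
  while (true) {
    switch (bytecode[pc++]) {
      case PUSH:     stack.push(consts[bytecode[pc++]]); break; // constant pool load
      case LOAD_ARG: stack.push(args[bytecode[pc++]]);   break; // caller argument
      case MUL: { const b = stack.pop(), a = stack.pop(); stack.push(a * b); break; }
      case ADD: { const b = stack.pop(), a = stack.pop(); stack.push(a + b); break; }
      case RET:      return stack.pop();
    }
  }
}

// seed = userId * 37 + productKey.charCodeAt(0), as an opcode stream:
const bc = [LOAD_ARG, 0, PUSH, 0, MUL, LOAD_ARG, 1, ADD, RET];
const seed = runVM(bc, [37], [12345, 'K'.charCodeAt(0)]);
// identical to the native expression 12345 * 37 + 'K'.charCodeAt(0)
```

Note that nothing in the shipped artifacts names `userId`, `seed`, or the multiplication by 37 — they exist only as positions in the opcode stream and constant pool.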
How VM mode differs from Maximum mode today
To make the tradeoff concrete, here’s the same function as the website’s earlier walkthrough:
Maximum mode today — per-build polymorphic decoder, encrypted strings, flat control flow
(function(){var _0xa3=_dec(0x4a);
function _0xa4(_p){var _st=0;
while(_st!==-1){switch(_st){
case 0:if(_p===_dec(0x4b))return!1;
_st=1;break;
case 1:return _p[_dec(0x4c)]>Date[_dec(0x4d)]();
}}}})();
The original control flow shape is preserved (you can still see it’s an if/else returning a comparison) but identifiers, strings, and per-build randomization make it expensive to follow.
Maximum + VM — the same function virtualized
// the function body is gone - replaced by a VM call
calculateLicenseHash = function(){ return _vmCall(0x12, _vmCtx, arguments); };
// what _vmCall(0x12) does is encoded in:
// - the opcode stream (~80 bytes for a small function)
// - the VM dispatcher (regenerates per build)
// - the encrypted constant pool
// none of which contain the original variable names,
// the original control flow structure, the original
// arithmetic constants, or the literal 'JSO-' string.
A reverse engineer trying to recover the original logic now has to: (a) understand this build’s VM dispatcher, (b) extract the bytecode and the constant pool, (c) symbolically execute or simulate the VM with synthetic inputs, (d) reconstruct the original control flow from the opcode trace. That’s an order of magnitude harder than reading polymorphic JavaScript.
Cost, and when to use it
VM execution is meaningfully slower than native JavaScript. The exact factor depends on the function: tight inner loops with cheap per-instruction work pay the most; functions dominated by string operations or method calls amortize the dispatcher cost over more native work and pay much less. Per-call cost is the right metric to plan against, not the slowdown ratio.
For a function called once or twice per session (the recommended use pattern), the cost is microseconds and invisible at the user-experience level. A function called once per network response in a typical handler is still entirely fine. For a tight inner loop running thousands of times per second, do not virtualize.
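Since per-call cost is the metric to plan against, it is worth measuring directly before shipping. A rough sketch (the wrapped function here is a stand-in; substitute your actual virtualized export):

```javascript
// Sketch: estimate average per-call cost in milliseconds.
// Date.now() is coarse but environment-neutral; use performance.now()
// where available for finer resolution.
function msPerCall(fn, iterations) {
  const t0 = Date.now();
  for (let i = 0; i < iterations; i++) fn(i, "KEY");
  return (Date.now() - t0) / iterations; // average ms per call
}
```

If `msPerCall(calculateLicenseHash, 1000)` comes back at, say, 0.005 ms, a once-per-session call costs about 5 microseconds — well inside the "entirely invisible" range described above.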
The right rule: opt-in per function via the // @virtualize comment. Code is virtualized only where you explicitly mark it. The other 99% of your bundle keeps native JS speed and gets Maximum-mode polymorphic protection.
Good fits: license validation, anti-tamper checks, in-app entitlement gates, watermarking, proprietary scoring/ranking algorithms, fingerprinting routines, key-derivation paths.
Bad fits: rendering loops, animation tick handlers, network parsing hot paths, anything in a per-frame budget. These should stay native JS with Maximum-mode protection.
How VM mode interacts with the anti-LLM design
The same per-build polymorphism that defeats LLM pattern-matching on Maximum-mode output applies to VM mode. Every release regenerates the VM — a model that solved one build's shape has nothing transferable to the next. There is no fixed signature for an LLM to learn from.
The honest limits
Things VM mode does not solve:
- Live execution observation. An attacker can still attach a debugger, set a breakpoint inside the VM, and dump the register file mid-execution. VM mode raises the cost of understanding the code without running it; it doesn’t prevent observation of the running machine. Pair with anti-debug + runtime monitoring for that threat.
- VM-aware deobfuscators. If an attacker invests serious time, they can reverse the VM dispatcher, write a decompiler that converts opcode streams back into JS, and apply it to every release. We make this expensive (per-build dispatcher randomization) but not impossible. The economic argument — per-release cost to attacker exceeds value of recovered code — still applies.
- Async / await inside virtualized functions. The current design supports synchronous functions only. Marking an async function as virtualized causes the function to be skipped with a warning rather than virtualized. Truly concurrent async inside a VM has hard semantic issues.
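The skip-with-warning behaviour might look like the following sketch. This is purely illustrative — the function name, the textual `async` check, and the warning format are all assumptions, not the product's actual compiler interface:

```javascript
// Hypothetical sketch of skipping async functions at protection time.
// A real compiler would check the AST node's `async` flag, not the text.
function compileForVM(fnSource, fnName) {
  if (/^\s*async\b/.test(fnSource)) {
    console.warn(`[virtualize] skipping ${fnName}: async functions are not supported`);
    return null; // left as native JS, still covered by Maximum-mode protection
  }
  return { name: fnName, bytecode: [] /* ...compile body to opcodes... */ };
}
```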
Availability
Included in Corporate ($49/mo) and Enterprise ($99/mo) tiers; not in Basic. Free tier is unaffected. See the VM Protection docs for the API contract and how to mark functions for virtualization.