
Simulating NetFUNNEL queues with k6

This guide explains how to generate load directly against NetFUNNEL using k6 to reproduce and measure sequential entry, queueing, TTL cycles, and completion. Because the scripts mimic real client calls, you can safely validate configuration effects and user experience.

Test scenarios:

  • Basic Control: nfStart → (simulate work) → nfStop
  • Section Control: nfStartSection → (Alive Notice 5003 loop) → nfStopSection

Why use this

  • Validate sequential entry and queues: intentionally create waiting conditions to verify that users enter in order and that TTL cycles behave as configured.
  • Observe the waiting room UX: while k6 generates load, open your browser and navigate to a page guarded by the same segment to visually confirm the waiting room and subsequent entry.
  • Dry-run segment configurations: try different inflow rates, entry statuses, and timing to confirm real effects without touching your backend.
  • Capacity and SLO checks: measure opcode latencies (5101, 5002, 5003, 5004) and confirm they meet targets under expected VU levels.
  • Integration testing without risk: exercise only NetFUNNEL until you’re ready to introduce backend load.
  • Incident rehearsal: simulate spikes and queue buildup to verify alerting and dashboards.

Quick way to observe the waiting room:

  1. In the NetFUNNEL console, lower inflow (e.g., TPS) or set entry status to force queueing for your segment.
  2. Run the k6 script for that segment (basic or section control) to create pressure.
  3. In a browser, access a page protected by the same segment; you should see the waiting room, followed by entry when allowed.

What these scripts do (and don’t)

  • Generate load to NetFUNNEL only. They do not call your service logic (Web/WAS/DB). You can add custom calls if needed.
  • Fetch live configuration, request entry keys, optionally wait and retry, then complete (return key).
  • Measure timings per opcode as k6 Trends for quick analysis.

If you need to stress downstream services as well, insert your service calls between successful entry (200) and key return (5004); the full scripts mark this spot with a comment, and a hypothetical snippet follows.
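
For example (the URL and the check label are placeholders; http and check are standard k6 imports):

import http from "k6/http";
import { check } from "k6";

// Inside the default function, after entry is granted (200)
// and before the 5004 completion request:
const svc = http.get("https://your-service.example/protected-page"); // placeholder URL
check(svc, { "backend responded 200": (r) => r.status === 200 });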

Why this simulation matches real users (opcodes explained)

Your k6 virtual users send the same NetFUNNEL opcodes a real client (Agent/SDK) would, so the behavior you see (queues, pacing, admission, completion) matches the end-user experience. A condensed sketch follows the opcode list below.

  • 5101 — Entry request (issue key)

    • End user view: “Let me in.” If capacity exists, the user is admitted; otherwise, they are placed in a queue.
    • Script action: GET 5101; parse response into status (200/201) and key/sticky/ttl.
  • 5002 — Queue poll (wait/retry)

    • End user view: “I’m waiting; check again later.” The client waits ttl seconds and asks again.
    • Script action: sleep(ttl), GET 5002; loop until 200.
  • 5003 — Alive Notice (Section Control only)

    • End user view: “I’m still using the protected section.” Keeps the session alive while inside the section.
    • Script action: repeat (sleep ttl → GET 5003) section_count times.
  • 5004 — Completion (return key)

    • End user view: “I’m done.” Frees capacity for the next user.
    • Script action: after the simulated service duration (btn_click_delay), GET 5004.
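
A condensed, hypothetical sketch of the Basic Control sequence in k6. nfGet() and parseNf() are placeholders: the full scripts on the child pages build the real NetFUNNEL URLs and parse the actual response format.

import http from "k6/http";
import { sleep } from "k6";
import { Trend } from "k6/metrics";

const API = "https://your-netfunnel-domain"; // vars["apiurl"]
const BTN_CLICK_DELAY = 2.5;                 // vars["btn_click_delay"]

const t5101 = new Trend("nf_5101_duration");
const t5002 = new Trend("nf_5002_duration");
const t5004 = new Trend("nf_5004_duration");

function parseNf(res) {
  // Placeholder parser: the full scripts extract status (200/201),
  // key/sticky, and ttl from the real response body.
  return { status: res.status, key: "", ttl: 1 };
}

function nfGet(opcode, trend) {
  const res = http.get(`${API}/?opcode=${opcode}`); // placeholder URL shape
  trend.add(res.timings.duration);
  return parseNf(res);
}

export default function () {
  // 5101: entry request (issue key)
  let r = nfGet(5101, t5101);

  // 5002: poll while queued (201), waiting the server-provided ttl
  while (r.status === 201) {
    sleep(r.ttl);
    r = nfGet(5002, t5002);
  }

  // Entry granted (200): hold the key for the simulated service duration.
  // Insert real backend calls here if you also want downstream load.
  sleep(BTN_CLICK_DELAY);

  // 5004: completion (return key)
  nfGet(5004, t5004);
}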

The request-flow section below walks through these sequences from an end-user perspective (Basic vs. Section).

Prerequisites

  • A reachable NetFUNNEL endpoint (cloud or on‑prem)
  • Project ID (sid) and Segment ID (aid) for
    • Basic Control segment, and/or
    • Section Control segment
  • A Linux/macOS/WSL environment with k6 installed

Install k6

sudo gpg -k || true
sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update && sudo apt-get install -y k6
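
Verify the installation:

k6 version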

Scripts provided

Two ready-to-run scripts are provided; see the child pages for full copy/paste code and usage.

For an extended walkthrough, see the separate how-to guide.

Configuration and setup

Most settings are common to both scripts; Section Control specific fields are noted.

At the top of each script, adjust virtual users and duration:

export const options = {
  vus: 30,
  duration: "600s",
  // iterations: 3000, // optional cap
};

In setup(), set your environment (common):

vars["apiurl"] = "https://your-netfunnel-domain"; // NetFUNNEL base URL
vars["sid"] = "service_1"; // Project ID
vars["aid"] = "basic_control"; // Segment ID
vars["btn_click_delay"] = "2.5"; // Simulated service duration (seconds)
// Section Control only:
// vars["section_count"] = "10"; // Alive notice cycles

k6 options semantics (VUs, duration, iterations)

  • vus

    • What: Number of concurrent virtual users (k6 workers) executing the default function in parallel.
    • Effect: Higher vus increases concurrent requests and likelihood of queue formation (201), stressing 5002/5003 loops.
  • duration

    • What: Wall-clock time to keep VUs running the default function loop.
    • Effect: Longer duration sustains pressure; more 5101/5002/5003/5004 cycles occur across VUs.
  • iterations (optional)

    • What: Caps the total number of iterations (default function executions) across all VUs.
    • Effect: When set, the test may finish before duration elapses once the iteration cap is met; without it, VUs loop until duration elapses.
  • Interaction

    • If only vus + duration are set: open-ended looping for the given time window.
    • If iterations is also set: whichever condition is satisfied first ends the run (iteration cap or duration timeout).
    • Per-iteration pacing is determined by server ttl waits and btn_click_delay; there is no fixed request rate unless you design one.
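
For example, with all three set, the run ends at whichever limit is reached first:

export const options = {
  vus: 30,           // 30 concurrent VUs
  duration: "600s",  // hard stop after 10 minutes
  iterations: 3000,  // ...or earlier, once 3000 total iterations complete
};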

Configuration → flow mapping

  • apiurl

    • What: Base URL for all requests (settings asset, 5101/5002/5003/5004)
    • Where in flow: Every request in both diagrams
  • sid / aid

    • What: Project ID and Segment ID that identify the NetFUNNEL segment to control
    • Where in flow: Included in 5101 (initial entry request)
  • btn_click_delay

    • What: Simulated business processing time while holding the entry key; a client-side hold time before returning the key
    • Where in flow:
      • Basic Control: Only after entry is granted (200) and just before 5004 (complete)
      • Section Control: After the Alive Notice loop (5003 cycles) and just before 5004
    • Not used for: 201 wait cycles; those waits are governed by ttl from the server
    • How to choose: Set to approximate the average time a real user/session would occupy the protected resource (e.g., page processing, short transaction), not a per-request delay
  • section_count (Section Control only)

    • What: Number of Alive Notice (5003) cycles to simulate a session staying inside the controlled section
    • Where in flow: Number of loop iterations for 5003 between entry (200) and completion (5004)
    • Relation to ttl: Each Alive Notice cycle waits for the ttl returned by the server before sending the next 5003
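
As a hypothetical fragment continuing the Basic Control sketch above (nfGet, r, and the Trend names are the same placeholders), Section Control inserts the Alive Notice loop between entry (200) and completion (5004):

const t5003 = new Trend("nf_5003_duration");
const SECTION_COUNT = 10; // vars["section_count"]

// ...after the 5002 polling loop exits with status 200:
for (let i = 0; i < SECTION_COUNT; i++) {
  sleep(r.ttl);           // wait the server-provided ttl
  r = nfGet(5003, t5003); // Alive Notice keeps the session active
}
sleep(BTN_CLICK_DELAY);   // simulated work while still holding the key
nfGet(5004, t5004);       // return the key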

How it works (request flow)

The flows below show the exact request sequences.

Basic Control flow
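
In text form: fetch settings → 5101 (entry request) → if 201: [sleep(ttl) → 5002] until 200 → sleep(btn_click_delay) → 5004 (return key).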

Section Control flow
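
In text form: fetch settings → 5101 (entry request) → if 201: [sleep(ttl) → 5002] until 200 → [sleep(ttl) → 5003] × section_count → sleep(btn_click_delay) → 5004 (return key).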

Concurrent VUs (conceptual)

Impact of VUs/duration/iterations on the flow:

  • More VUs → more simultaneous 5101 requests and potential 201 queues.
  • Longer duration → more retry (5002) and Alive (5003) cycles overall.
  • iterations cap (if set) → limits total flow executions regardless of duration.

Running the tests

# Example filenames (adjust to your saved script names)
k6 run basic-control-script.js
k6 run section-control-script.js
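
You can also override the script's options from the CLI without editing the file:

k6 run --vus 50 --duration 300s basic-control-script.js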

Hands-on scenario:

  1. Console prep: lower Limited Inflow to force queues (keep the segment open, just constrained).
  2. Run: choose VUs >> Limited Inflow (e.g., Limited Inflow ≈ 2 vs. VUs = 20–30) and a 5–10 minute duration.
  3. Observe: in Monitoring, check key metrics including the number of waiting users and related indicators.
  4. UX check: open a guarded page/app while testing; see waiting room → queue position → admission.

Use these scripts to validate NetFUNNEL behavior under load quickly and repeatably.