How to use Web Workers to process data in the WordPress admin (JS)

Why use Web Workers in the WordPress admin?

Modern admin pages often need to process large datasets, parse uploaded files, run expensive calculations, or do client-side search/sort operations. Doing that on the main thread blocks the UI, causing the admin to freeze or become unresponsive. Web Workers let you run JavaScript on background threads so heavy processing does not block the UI. This tutorial shows how to integrate Web Workers into WordPress admin screens safely and efficiently, including examples for creating workers, sending data, using transferables, chunked processing, worker pools, and communicating with WordPress endpoints (REST or admin-ajax).

What Web Workers can and cannot do (quick summary)

  • Can: Run JS on a separate thread, use fetch/XHR, perform CPU-heavy tasks, use importScripts, work with ArrayBuffer/TypedArray and transfer ownership for zero-copy.
  • Cannot: Access the DOM (document/window), use most global objects tied to the browsing context, access jQuery automatically, or directly call PHP/WordPress functions. Communication is via postMessage.

Browser support and fallbacks

Most modern browsers support dedicated Web Workers. Always detect availability and provide a synchronous fallback or progressive enhancement. Use worker-based processing for large jobs; for small tasks, plain main-thread processing is fine.

Where to place worker files in a plugin or theme

Common approaches:

  • Ship a worker.js file in your plugin’s assets/js folder and reference it via plugin_dir_url(__FILE__)
  • Generate an inline worker via a Blob from a string if shipping a separate file is undesirable
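
For the Blob approach, a minimal sketch (names are illustrative) looks like this; once the worker is created you can revoke the object URL:

// Inline worker from a string — useful when you don't want to ship worker.js.
var workerSource = 'self.onmessage = function (e) { self.postMessage(e.data); };';
var blobUrl = URL.createObjectURL(new Blob([workerSource], { type: 'application/javascript' }));
var inlineWorker = new Worker(blobUrl);
URL.revokeObjectURL(blobUrl); // the worker keeps running; the URL is no longer needed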

Step 1 — Enqueue admin scripts and pass the worker URL and nonce

Enqueue a main admin script that will create and control the worker. Pass the worker URL and a REST nonce (or any other needed value) via localization or inline script. Example plugin PHP for admin page:

<?php
function myplugin_enqueue_admin_scripts() {
    wp_register_script( 'myplugin-admin', plugin_dir_url( __FILE__ ) . 'assets/js/admin-main.js', array(), '1.0', true );

    $worker_url = plugin_dir_url( __FILE__ ) . 'assets/js/worker.js';

    wp_localize_script( 'myplugin-admin', 'MyPluginData', array(
        'workerUrl' => $worker_url,
        'restNonce' => wp_create_nonce( 'wp_rest' ), // for REST requests from worker
        'ajaxUrl'   => admin_url( 'admin-ajax.php' ),
    ) );

    wp_enqueue_script( 'myplugin-admin' );
}
add_action( 'admin_enqueue_scripts', 'myplugin_enqueue_admin_scripts' );

Notes

  • Use the admin_enqueue_scripts hook so the script only loads in the admin.
  • Use wp_localize_script or wp_add_inline_script to pass URLs and nonces; workers cannot magically resolve PHP values.

Step 2 — Basic worker creation and lifecycle (JavaScript)

Create a worker from a URL or create an inline worker using a Blob. Always feature-detect.

// File: assets/js/admin-main.js
// This runs in the admin main thread.

(function () {
    // Data passed by PHP
    var workerUrl = window.MyPluginData && window.MyPluginData.workerUrl;
    var restNonce = window.MyPluginData && window.MyPluginData.restNonce;

    if (!window.Worker) {
        console.warn('Web Workers are not supported in this browser. Falling back to main-thread processing.');
        // provide fallback behavior
        return;
    }

    // Option A: instantiate worker from an external file
    var worker = new Worker(workerUrl);

    // Option B: create inline worker via Blob (if you want to avoid shipping worker.js)
    // var code = 'self.onmessage = function (e) { self.postMessage(e.data); };';
    // var blob = new Blob([code], { type: 'application/javascript' });
    // var inlineWorker = new Worker(URL.createObjectURL(blob));

    // Send initial config (e.g., nonce) to worker
    worker.postMessage({ type: 'init', restNonce: restNonce });

    // Listen for messages from the worker
    worker.onmessage = function (event) {
        var data = event.data;
        if (!data || !data.type) return;

        switch (data.type) {
            case 'progress':
                // update progress UI
                console.log('Progress:', data.percent);
                break;
            case 'result':
                // final processed data
                console.log('Result from worker:', data.payload);
                break;
            case 'error':
                console.error('Worker error:', data.message);
                break;
        }
    };

    worker.onerror = function (err) {
        console.error('Worker runtime error:', err.message, err);
    };

    // Example: send a large text payload (CSV) to worker
    function processCsvText(csvText) {
        // Convert text to ArrayBuffer for faster transfer (transferable)
        var enc = new TextEncoder();
        var buffer = enc.encode(csvText).buffer;
        // Transfer ownership of buffer (zero-copy)
        worker.postMessage({ type: 'processCsv', buffer: buffer }, [buffer]);
        // After transfer, buffer is neutered in main thread
    }

    // Expose for demo
    window.MyPluginWorker = {
        worker: worker,
        processCsvText: processCsvText,
        terminate: function () { worker.terminate(); }
    };
})();

Step 3 — Worker code (worker.js)

Worker code runs without access to the DOM. Use self.onmessage and self.postMessage. Use importScripts(…) to include helper libraries if needed.

// File: assets/js/worker.js

// Basic initialization holder
var config = {};

// Called when main thread posts a message
self.onmessage = function (e) {
    var data = e.data;
    if (!data || !data.type) return;

    try {
        switch (data.type) {
            case 'init':
                config.restNonce = data.restNonce;
                break;

            case 'processCsv':
                // data.buffer is an ArrayBuffer transferred from main thread
                handleCsvBuffer(data.buffer);
                break;

            // example: accept tasks that include chunked data
            case 'processChunk':
                processChunk(data.chunk, data.index);
                break;

            default:
                // unknown message
                break;
        }
    } catch (err) {
        postMessage({ type: 'error', message: err.message });
    }
};

function handleCsvBuffer(buffer) {
    // Convert buffer to string
    var decoder = new TextDecoder('utf-8');
    var text = decoder.decode(buffer);

    // Simple CSV parse for demo purposes only (not suitable for all CSVs)
    var lines = text.split(/\r?\n/);
    var total = lines.length;
    var results = [];
    for (var i = 0; i < total; i++) {
        var line = lines[i];
        // simulate heavy work
        var cols = line.split(',');
        // perform processing
        results.push(cols);
        // Report progress every 1000 rows
        if (i % 1000 === 0) {
            postMessage({ type: 'progress', percent: Math.round((i / total) * 100) });
        }
    }

    // Return result (structured cloning will copy it)
    postMessage({ type: 'result', payload: { rows: results.length } });
}

// Example to call WordPress REST endpoint from worker
function postToRestEndpoint(endpoint, data) {
    // Use fetch inside worker. Include nonce as header for authentication.
    return fetch(endpoint, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-WP-Nonce': config.restNonce || ''
        },
        body: JSON.stringify(data)
    })
    .then(function (res) {
        if (!res.ok) throw new Error('Network response was not ok: ' + res.status);
        return res.json();
    });
}

Notes on CSV parsing

  • For complex CSVs, use a robust parser (PapaParse, etc.). You can import libraries in the worker using importScripts(url) or bundle them into the worker file (see the sketch after this list).
  • Use chunked reading for very large files to reduce memory use.
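
As an illustration, loading a bundled copy of PapaParse inside the worker might look like this. The file name and location are assumptions; importScripts resolves relative URLs against the worker script's URL, so adjust the path to wherever your plugin ships the library.

// In worker.js — load a bundled copy of PapaParse (path is an assumption).
importScripts('papaparse.min.js'); // resolved relative to the worker's URL

function parseCsvChunk(csvText) {
    // Papa.parse on a string runs synchronously inside the worker,
    // so the admin UI stays responsive either way.
    var parsed = Papa.parse(csvText, { skipEmptyLines: true });
    return parsed.data; // array of row arrays
}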

Passing data: structured cloning vs transferables

By default postMessage uses structured cloning, which makes a copy. For large binary data, use Transferable objects (ArrayBuffer, MessagePort, OffscreenCanvas, etc.) to move ownership without copying.

// Main thread: transfer an ArrayBuffer
var arr = new Uint8Array(1024 * 1024) // 1 MB
var buffer = arr.buffer
worker.postMessage({ type: 'binary', buffer: buffer }, [buffer])
// After transfer, buffer is neutered in main thread.

Chunked file reading (example for large uploads)

When processing large files, slice into chunks and send chunks to the worker for incremental processing. This reduces memory spikes.

// In admin-main.js - reading file in chunks and sending to worker
function processLargeFile(file, chunkSize) {
    chunkSize = chunkSize || 2 * 1024 * 1024; // 2 MB chunks
    var offset = 0;

    function readNextChunk() {
        var end = Math.min(offset + chunkSize, file.size);
        var slice = file.slice(offset, end);
        var reader = new FileReader();
        reader.onload = function (evt) {
            var text = evt.target.result; // as text
            // Send chunk to worker (structured cloning)
            worker.postMessage({ type: 'processChunk', chunk: text, index: offset });
            offset = end;
            if (offset < file.size) {
                // throttle to avoid event loop starvation
                setTimeout(readNextChunk, 0);
            } else {
                worker.postMessage({ type: 'done' });
            }
        };
        reader.onerror = function (err) {
            console.error('File read error', err);
        };
        reader.readAsText(slice);
    }
    readNextChunk();
}
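
The worker.js listing above dispatches processChunk messages to a processChunk() function that is not spelled out there. A minimal sketch follows, assuming chunks arrive in order and that a case 'done': finishChunks(); break; is added to the worker's switch. It carries lines split across chunk boundaries forward so no row is lost or counted twice.

// In worker.js — illustrative chunk handler; names are illustrative.
var remainder = '';
var rowCount = 0;

function processChunk(chunkText, index) {
    var text = remainder + chunkText;
    var lines = text.split(/\r?\n/);
    remainder = lines.pop(); // the last piece may be an incomplete line
    for (var i = 0; i < lines.length; i++) {
        rowCount++; // replace with real per-row processing
    }
    postMessage({ type: 'progress', rows: rowCount, index: index });
}

function finishChunks() {
    if (remainder.length) rowCount++; // flush the trailing partial line
    postMessage({ type: 'result', payload: { rows: rowCount } });
    remainder = '';
    rowCount = 0;
}

The progress message here reports row counts; the main thread can derive a percentage from file.size if it needs one.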

Worker pools for parallel processing

If an operation can be parallelized, create a pool of workers (but avoid more workers than CPU cores). A basic pool manager distributes tasks and collects results. Note that for the callback mapping below to work, the worker must copy the _taskId it receives back into its reply message.

// Simple worker pool
function WorkerPool(workerUrl, size) {
    this.size = size || navigator.hardwareConcurrency || 2;
    this.workerUrl = workerUrl;
    this.workers = [];
    this.free = [];
    this.queue = [];

    for (var i = 0; i < this.size; i++) {
        var w = new Worker(workerUrl);
        w.onmessage = this._onMessage.bind(this, i);
        w.onerror = this._onError.bind(this, i);
        this.workers[i] = { worker: w, busy: false };
        this.free.push(i);
    }
}

WorkerPool.prototype._onMessage = function (index, event) {
    var wobj = this.workers[index];
    wobj.busy = false;
    if (this.queue.length) {
        var task = this.queue.shift();
        this._runOnWorker(index, task);
    } else {
        this.free.push(index);
    }
    // forward message to task callback if present
    if (event && event.data && event.data._taskId && this._callbacks[event.data._taskId]) {
        this._callbacks[event.data._taskId](event.data);
        delete this._callbacks[event.data._taskId];
    }
};

WorkerPool.prototype._onError = function (index, err) {
    console.error('Worker pool error', index, err);
};

WorkerPool.prototype._runOnWorker = function (index, task) {
    var wobj = this.workers[index];
    wobj.busy = true;
    // tag task for callback mapping
    var id = 't' + Date.now() + Math.random();
    task.payload._taskId = id;
    this._callbacks = this._callbacks || {};
    this._callbacks[id] = task.callback;
    wobj.worker.postMessage(task.payload);
};

WorkerPool.prototype.runTask = function (payload, callback) {
    var task = { payload: payload, callback: callback };
    if (this.free.length) {
        var index = this.free.shift();
        this._runOnWorker(index, task);
    } else {
        this.queue.push(task);
    }
};

// Usage:
// var pool = new WorkerPool(MyPluginData.workerUrl, 4);
// pool.runTask({ type: 'processChunk', chunk: ... }, function (result) { console.log(result); });

Calling WordPress endpoints from a worker

Workers can use fetch to call REST routes. Pass the REST nonce to the worker via the init message. Validate the request server-side with current_user_can and check the REST nonce.

// PHP: register a REST route that the worker can call
add_action( 'rest_api_init', function () {
    register_rest_route( 'myplugin/v1', '/process', array(
        'methods'  => 'POST',
        'callback' => 'myplugin_rest_process',
        'permission_callback' => function ( $request ) {
            return current_user_can( 'manage_options' ); // choose capability
        },
    ) );
} );

function myplugin_rest_process( WP_REST_Request $request ) {
    // Validate and sanitize incoming data
    $params = $request->get_json_params();
    if ( ! is_array( $params ) ) {
        return new WP_Error( 'invalid_data', 'Invalid data', array( 'status' => 400 ) );
    }

    // Process data (server-side) and return result
    $result = array( 'processed' => true, 'input_items' => count( $params ) );
    return rest_ensure_response( $result );
}

How to call REST route from worker

// inside worker.js (after init, where restNonce is set)
postToRestEndpoint('/wp-json/myplugin/v1/process', { some: 'data' })
    .then(function (data) {
        postMessage({ type: 'serverResult', payload: data });
    })
    .catch(function (err) {
        postMessage({ type: 'error', message: err.message });
    });

Error handling and worker debugging tips

  • Use worker.onerror to catch runtime exceptions (see the example after this list).
  • postMessage an error payload from inside try/catch blocks in the worker.
  • Developer tools: most browsers show workers in the DevTools and let you inspect console logs from the worker thread.
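
For example, reusing the worker instance from Step 2 (onmessageerror is a standard event that fires when an incoming message cannot be deserialized):

// Main thread — surface where a worker error came from.
worker.onerror = function (e) {
    // e is an ErrorEvent: message plus the script location that threw
    console.error('Worker error:', e.message, 'at', e.filename + ':' + e.lineno);
};

worker.onmessageerror = function (e) {
    // fires when a message could not be deserialized (rare, but cheap to log)
    console.error('Worker message could not be deserialized', e);
};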

Security considerations

  • Never trust data coming from the client or worker; always validate on server-side routes (REST or admin-ajax) with nonces and capability checks.
  • Do not embed sensitive secrets into client-side code or pass them to worker code.
  • If you call admin-ajax.php from a worker, pass the nonce or necessary auth headers and check them on the server.

When not to use workers

  • For small DOM manipulations — workers can’t touch DOM.
  • When the UI needs frequent, low-latency, small updates — the messaging overhead may offset the benefits.

Performance and UX best practices

  • Show progress updates so users know work is running.
  • Use navigator.hardwareConcurrency to choose a sensible pool size, but cap it to a safe number (see the sketch after this list).
  • Terminate workers when not needed to free resources.
  • Prefer transferables for large binary payloads.
  • Chunk huge work into manageable pieces to avoid large memory spikes.
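
A minimal sketch of sizing and releasing a pool, reusing the WorkerPool from the pool section above; the cap of 4 is an arbitrary example value:

// Choose a conservative pool size and release workers when the job is done.
var poolSize = Math.min(navigator.hardwareConcurrency || 2, 4);
var pool = new WorkerPool(MyPluginData.workerUrl, poolSize);

function shutdownPool() {
    pool.workers.forEach(function (wobj) {
        wobj.worker.terminate();
    });
}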

Fallback strategies

If workers aren’t available, process in small time-sliced chunks on the main thread using setTimeout or requestIdleCallback to avoid long blocking runs.

// Fallback: time-sliced processing
function processArrayTimeSliced(items, processFn, done) {
    var i = 0;
    function slice() {
        var start = Date.now();
        while (i < items.length && (Date.now() - start) < 50) { // 50ms slices
            processFn(items[i], i);
            i++;
        }
        if (i < items.length) {
            setTimeout(slice, 0);
        } else {
            done();
        }
    }
    slice();
}
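
The requestIdleCallback option mentioned above can be wired the same way; here is a hedged sketch that falls back to a 50 ms setTimeout budget when requestIdleCallback is unavailable:

// Fallback variant: work only while the browser reports idle time.
function processArrayIdle(items, processFn, done) {
    var i = 0;
    function step(deadline) {
        var start = Date.now();
        // Use the idle deadline when provided, otherwise a fixed 50ms budget
        while (i < items.length &&
               (deadline ? deadline.timeRemaining() > 0 : (Date.now() - start) < 50)) {
            processFn(items[i], i);
            i++;
        }
        if (i < items.length) {
            schedule();
        } else {
            done();
        }
    }
    function schedule() {
        if (window.requestIdleCallback) {
            window.requestIdleCallback(step);
        } else {
            setTimeout(step, 0);
        }
    }
    schedule();
}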

Useful quick reference table

Feature              | Worker               | Main thread
DOM access           | No                   | Yes
Network (fetch/XHR)  | Yes                  | Yes
Zero-copy transfer   | Yes (transferables)  | No
Blocking UI          | No                   | Yes (if heavy)

Complete practical example summary

  1. Enqueue main admin script and provide worker URL and nonce via wp_localize_script.
  2. In admin script, detect worker support, create worker, and pass initial config.
  3. For file uploads, read file in chunks in the main thread and send chunks to the worker, or transfer ArrayBuffer for large contiguous data.
  4. In worker, parse/process data, post progress regularly, and post final results. Use fetch from worker to hit REST endpoints when server-side operations are needed. Include the nonce in requests and verify on server.
  5. Provide fallback time-sliced processing if workers aren't available.
  6. Consider a pool for parallelizable tasks, but keep worker count reasonable.

Example file list for a plugin implementing this

  • my-plugin.php (main plugin bootstrap)
  • includes/admin-enqueue.php (enqueue and pass data)
  • assets/js/admin-main.js (main admin script that manages worker)
  • assets/js/worker.js (worker code)
  • assets/css/admin.css (UI styles)

Final checklist before shipping

  • Confirm nonces and permission checks for any endpoints used by workers.
  • Test on target browsers and ensure graceful fallback.
  • Test with large real-world datasets to measure memory and CPU usage.
  • Limit workers by CPU cores and implement termination when not needed.
