How to Reduce JavaScript Execution Time for Better INP and LCP
JavaScript execution time is the root cause of most INP failures and a major contributor to slow LCP. When JavaScript runs too long on the main thread, the browser cannot respond to user interactions or paint visible content. The fixes are: break up long tasks, remove unused JavaScript, defer non-critical scripts, replace heavy libraries with lighter alternatives, and optimise framework re-renders. Most sites can reduce JavaScript execution time by 40–70% without rewriting their codebase.
Every millisecond your JavaScript runs on the main thread is a millisecond the browser cannot do anything else.
It cannot respond to a click. It cannot paint a new frame. It cannot process a user's keypress. The browser's main thread is a single resource — JavaScript execution, rendering, and input handling all compete for it.
This is why JavaScript execution time is central to both INP and LCP failures. Long JavaScript tasks block interactions — causing poor INP. JavaScript in the critical loading path blocks rendering — causing poor LCP. Fix JavaScript execution time, and you fix both simultaneously.
If you are new to Core Web Vitals or still working out which metric is failing on your site, start with the complete Core Web Vitals fix guide. If you have already confirmed JavaScript is your problem, you are in the right place.
Why JavaScript execution time matters so much in 2026
Since Google replaced FID with INP in March 2024, JavaScript execution time has become the single most important performance variable for most sites. As explained in INP vs FID: what changed and why it matters for SEO, INP measures every interaction across the entire page session — not just the first click. Every piece of JavaScript running on the main thread is a potential INP failure waiting to happen.
The scale of the problem is larger than most developers realise:
- The median mobile web page ships over 400KB of JavaScript
- On a mid-range Android device, 400KB of JavaScript takes 3–5 seconds to parse and execute
- Every long task created by that JavaScript is a potential INP failure
- Third-party scripts account for approximately 45% of total JavaScript execution time on the average page
The good news: JavaScript execution time is one of the most improvable performance metrics. Unlike server response time (which requires infrastructure changes) or image formats (which require asset pipelines), most JavaScript problems can be fixed with code-level changes that do not require new tools or platforms.
How to measure your JavaScript execution time
Before fixing anything, measure exactly where the time is being spent.
Method 1: PageSpeed Insights
Run your URL through pagespeed.web.dev. In the Diagnostics section, look for:
- "Reduce JavaScript execution time" — lists every JavaScript file with its execution time, parse time, and compile time
- "Avoid long main-thread tasks" — lists tasks over 50ms with their duration
- "Reduce the impact of third-party code" — lists third-party scripts specifically, with blocking time
These three diagnostics together give you a complete picture of your JavaScript execution problem.
Method 2: Chrome DevTools Performance panel
The Performance panel gives you the most detailed view of JavaScript execution:
- Open Chrome DevTools (F12) → Performance panel
- Click the gear icon → enable "CPU throttling: 4x slowdown" (simulates a mid-range Android)
- Click Record, interact with your page (click buttons, open menus, use filters)
- Stop recording
- In the Main thread flame chart, look for:
- Long yellow bars — JavaScript execution (long tasks)
- Red triangles at the top of bars — tasks over 50ms flagged as long tasks
- Purple bars — style recalculation and layout (often triggered by JavaScript)
The flame chart shows you exactly which function is running, which function called it, and how long it took. This is where you identify the specific code causing your INP failures.
Method 3: Chrome DevTools Coverage panel
- Open DevTools → press Ctrl+Shift+P → type "Coverage"
- Click Start instrumenting coverage and reload
- Interact with your page normally for 30–60 seconds
- Stop — the Coverage panel shows every JavaScript file with a percentage of unused code in red
Files with 60%+ unused code are strong candidates for code splitting or removal. You are making every user download and parse code that was never executed.
Method 4: Web Vitals Chrome extension
Install the Web Vitals extension and browse your site normally. It shows an INP measurement after each interaction, updated in real time. When you find an interaction with a high INP value, switch to the Performance panel in DevTools and record that specific interaction to identify the JavaScript causing the delay.
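Long tasks can also be caught in the field with the Long Tasks API rather than only in a DevTools trace. A minimal sketch — the summary helper is plain JavaScript (summariseLongTasks is an illustrative name, not a library function), and the browser-only observer wiring is shown as a comment:

```javascript
// Summarise Long Task entries into the numbers that matter for triage.
// The entry shape mirrors PerformanceLongTaskTiming: duration in milliseconds.
function summariseLongTasks(entries) {
  const durations = entries.map(e => e.duration);
  return {
    count: durations.length,
    worstMs: durations.length ? Math.max(...durations) : 0,
    // Blocking time counts only the portion of each task beyond 50ms
    totalBlockingMs: durations.reduce((sum, d) => sum + Math.max(0, d - 50), 0),
  };
}

// In the browser, feed it from a PerformanceObserver — long task entries
// are only reported for tasks over 50ms:
// new PerformanceObserver(list => console.log(summariseLongTasks(list.getEntries())))
//   .observe({ type: 'longtask', buffered: true });
```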
Fix 1: Break up long tasks with scheduler.yield()
A long task is any JavaScript task that runs for more than 50ms on the main thread. During a long task, the browser cannot process user input. If a user clicks a button while a 200ms task is running, the click response is delayed by up to 200ms — directly contributing to a poor INP score.
The solution is to break long tasks into smaller chunks and yield the main thread back to the browser between them.
The modern approach: scheduler.yield()
// Bad — one long synchronous task blocking the main thread
function filterProducts(products, criteria) {
const results = [];
for (const product of products) {
if (matchesCriteria(product, criteria)) {
results.push(product);
}
}
renderResults(results);
}
// Good — yields between chunks so browser can handle interactions
async function filterProducts(products, criteria) {
const results = [];
const CHUNK_SIZE = 50;
for (let i = 0; i < products.length; i++) {
if (matchesCriteria(products[i], criteria)) {
results.push(products[i]);
}
// Yield every 50 items — gives browser chance to handle interactions
if (i % CHUNK_SIZE === 0 && i > 0) {
await scheduler.yield();
}
}
renderResults(results);
}

scheduler.yield() pauses execution and allows the browser to process any pending user interactions before continuing. The work still gets done — but in a way that does not block the user.
Fallback for broader browser support
scheduler.yield() has good but not universal browser support. Use this feature-detecting fallback for complete coverage:
// Polyfill for scheduler.yield()
function yieldToMain() {
if ('scheduler' in window && 'yield' in scheduler) {
return scheduler.yield();
}
return new Promise(resolve => setTimeout(resolve, 0));
}
// Use in your code
async function processLargeTask(items) {
for (let i = 0; i < items.length; i++) {
processItem(items[i]);
if (i % 50 === 0) {
await yieldToMain();
}
}
}

Using requestIdleCallback for non-urgent work
For work that does not need to happen immediately — analytics processing, prefetching, background data preparation — use requestIdleCallback to defer it until the browser is genuinely idle:
// Schedule non-urgent work for idle time
requestIdleCallback(() => {
prefetchNextPageData();
updateAnalyticsQueue();
preloadSecondaryImages();
}, { timeout: 2000 }); // run within 2 seconds even if browser never goes idle

Fix 2: Audit and remove unused JavaScript
The fastest JavaScript is JavaScript that never runs. Before optimising your existing code, audit what you can remove entirely.
Step 1: Identify unused scripts
Run the Coverage panel audit (described in the measurement section). Export the results and focus on files with the highest percentage of unused code that are also large in size.
The combination that matters is: large file + high unused percentage = biggest opportunity.
A 500KB file that is 70% unused wastes 350KB. A 10KB file that is 90% unused only wastes 9KB. Prioritise the large files.
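The same prioritisation can be scripted against a Coverage export. A sketch — the { url, totalBytes, usedBytes } entry shape is a simplified assumption for illustration, not the panel's exact export format:

```javascript
// Rank files from a Coverage audit by absolute wasted bytes, so a large
// file at 70% unused outranks a tiny file at 90% unused.
function rankByWastedBytes(entries) {
  return entries
    .map(e => ({
      url: e.url,
      wastedBytes: e.totalBytes - e.usedBytes,
      wastedPct: Math.round(((e.totalBytes - e.usedBytes) / e.totalBytes) * 100),
    }))
    .sort((a, b) => b.wastedBytes - a.wastedBytes);
}

const report = rankByWastedBytes([
  { url: '/vendor.js', totalBytes: 500_000, usedBytes: 150_000 }, // 70% unused
  { url: '/widget.js', totalBytes: 10_000, usedBytes: 1_000 },    // 90% unused
]);
// vendor.js ranks first: 350KB wasted beats widget.js's 9KB
```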
Step 2: Audit your npm dependencies
Use bundlephobia.com to check the size of every npm package in your project. Many packages are far larger than their functionality warrants:
| Package | Full Size (gzipped) | Common Alternative | Size |
|---|---|---|---|
| Moment.js | 72KB | Day.js | 2KB |
| Lodash (full import) | 71KB | Lodash (cherry-picked) | 1–5KB |
| jQuery | 30KB | Vanilla JS | 0KB |
| Axios | 11KB | Fetch API (native) | 0KB |
| Bootstrap JS | 16KB | Alpine.js | 4KB |
| Font Awesome (full) | 80KB | Phosphor Icons (selected) | 2–8KB |
| Chart.js (full) | 60KB | Chart.js (tree-shaken) | 20–30KB |
Replacing Moment.js with Day.js alone saves 70KB of parsed and executed JavaScript — enough to move some sites from a failing INP to a passing one on mobile devices.
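Day.js is close to a drop-in for Moment's API, but for display-only formatting the native Intl APIs can remove the dependency entirely. A sketch of the native route — formatDate is an illustrative helper, not a library function:

```javascript
// Moment-style date formatting with zero dependencies via Intl.DateTimeFormat.
// Replaces e.g. moment(date).format('D MMM YYYY') for display purposes.
function formatDate(date, locale = 'en-GB') {
  return new Intl.DateTimeFormat(locale, {
    day: 'numeric',
    month: 'short',
    year: 'numeric',
    timeZone: 'UTC', // pin the zone so output is deterministic
  }).format(date);
}

formatDate(new Date(Date.UTC(2026, 0, 15))); // "15 Jan 2026" in en-GB
```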
Step 3: Cherry-pick imports instead of importing entire libraries
// Bad — imports entire lodash library (71KB)
import _ from 'lodash';
const sorted = _.sortBy(items, 'price');
// Good — imports only the function you need (2KB)
import sortBy from 'lodash/sortBy';
const sorted = sortBy(items, 'price');

// Bad — imports all icons from Font Awesome
import { library } from '@fortawesome/fontawesome-svg-core';
import { fas } from '@fortawesome/free-solid-svg-icons';
library.add(fas);
// Good — imports only the icons you actually use
import { faSearch, faUser, faShoppingCart } from '@fortawesome/free-solid-svg-icons';
library.add(faSearch, faUser, faShoppingCart);

Fix 3: Code-split your JavaScript bundles
Code splitting divides your JavaScript into smaller chunks that load on demand — only when they are actually needed. Instead of loading your entire application on the initial page load, users download only the code required for the current page.
Dynamic imports (framework-agnostic)
// Bad — loads entire module on page load
import { HeavyChartComponent } from './charts/HeavyChartComponent';
// Good — loads only when the chart section becomes visible
async function loadChart() {
const { HeavyChartComponent } = await import('./charts/HeavyChartComponent');
const chart = new HeavyChartComponent(document.getElementById('chart'));
chart.render(data);
}
// Trigger on scroll into view
const observer = new IntersectionObserver((entries) => {
if (entries[0].isIntersecting) {
loadChart();
observer.disconnect();
}
});
observer.observe(document.getElementById('chart'));

React lazy loading
import { lazy, Suspense } from 'react';
// Bad — loads ProductReviews on every page load
import ProductReviews from './ProductReviews';
// Good — loads ProductReviews only when rendered
const ProductReviews = lazy(() => import('./ProductReviews'));
function ProductPage() {
return (
<div>
<ProductDetails />
<Suspense fallback={<div>Loading reviews...</div>}>
<ProductReviews /> {/* Only loads when this renders */}
</Suspense>
</div>
);
}

Next.js dynamic imports
import dynamic from 'next/dynamic';
// Load heavy components only when needed
const HeavyFilterPanel = dynamic(
() => import('./HeavyFilterPanel'),
{
loading: () => <FilterSkeleton />,
ssr: false // Skip server-side rendering for client-only components
}
);
// Load map component only on client (no SSR needed)
const InteractiveMap = dynamic(
() => import('./InteractiveMap'),
{ ssr: false }
);

Webpack bundle splitting configuration
// webpack.config.js
module.exports = {
mode: 'production',
optimization: {
splitChunks: {
chunks: 'all',
cacheGroups: {
// Separate vendor code into its own chunk
vendor: {
test: /[\\/]node_modules[\\/]/,
name: 'vendors',
chunks: 'all',
},
// Separate large libraries into their own chunks
charts: {
test: /[\\/]node_modules[\\/](chart\.js|d3|recharts)[\\/]/,
name: 'charts',
chunks: 'async', // Only load when actually used
},
},
},
},
};

Fix 4: Defer and delay non-critical JavaScript
Not all JavaScript needs to run immediately. Categorising your scripts by when they actually need to execute is one of the most impactful fixes available — and requires no code changes to the scripts themselves.
The four loading strategies
<!-- 1. Synchronous — blocks rendering (avoid in <head>) -->
<script src="critical-polyfill.js"></script>
<!-- 2. Defer — downloads in parallel, executes after HTML parsed -->
<script src="theme.js" defer></script>
<!-- 3. Async — downloads in parallel, executes immediately when ready -->
<script src="https://www.googletagmanager.com/gtag/js" async></script>
<!-- 4. Module — deferred by default, supports tree shaking -->
<script src="app.js" type="module"></script>

Load non-essential scripts after user interaction
For scripts that are never needed until a user does something — chat widgets, exit-intent popups, advanced personalisation — delay loading until the first interaction:
const scriptsToDelay = [
'https://widget.intercom.io/widget/APP_ID',
'https://static.hotjar.com/c/hotjar-ID.js',
'https://cdn.amplitude.com/libs/amplitude-8.18.4-min.gz.js'
];
let scriptsLoaded = false;
function loadDelayedScripts() {
if (scriptsLoaded) return;
scriptsLoaded = true;
scriptsToDelay.forEach(src => {
const script = document.createElement('script');
script.src = src;
script.async = true;
document.head.appendChild(script);
});
}
// Load after first interaction
['mousedown', 'touchstart', 'keydown', 'scroll', 'wheel'].forEach(event => {
document.addEventListener(event, loadDelayedScripts, { once: true, passive: true });
});
// Fallback — load after 5 seconds even without interaction
setTimeout(loadDelayedScripts, 5000);

This pattern is especially effective for chat widgets and heatmap tools. As covered in fix CLS from dynamic ads and embeds, delaying these scripts eliminates both their CLS impact and their INP impact simultaneously.
Fix 5: Optimise React and framework re-renders
React, Vue, and Angular applications are the most common sources of long JavaScript tasks on interaction. Every state update triggers a reconciliation process — React compares the new virtual DOM with the previous one and updates only what changed. On complex component trees, this reconciliation can take 200–500ms — directly causing INP failures.
React.memo — prevent unnecessary re-renders
// Bad — FilterTag re-renders every time parent state changes
function FilterTag({ tag, onRemove }) {
console.log('FilterTag rendered:', tag); // renders on every parent update
return (
<span className="tag">
{tag}
<button onClick={() => onRemove(tag)}>×</button>
</span>
);
}
// Good — FilterTag only re-renders when its own props change
const FilterTag = React.memo(function FilterTag({ tag, onRemove }) {
console.log('FilterTag rendered:', tag); // only renders when tag or onRemove changes
return (
<span className="tag">
{tag}
<button onClick={() => onRemove(tag)}>×</button>
</span>
);
});

useCallback — stable function references
// Bad — new function created on every render, breaking React.memo
function ProductList({ products }) {
const [filters, setFilters] = useState([]);
// New function reference every render — React.memo on children is useless
const handleRemoveFilter = (tag) => {
setFilters(prev => prev.filter(f => f !== tag));
};
return filters.map(tag => (
<FilterTag key={tag} tag={tag} onRemove={handleRemoveFilter} />
));
}
// Good — stable function reference, React.memo works correctly
function ProductList({ products }) {
const [filters, setFilters] = useState([]);
// Same function reference across renders — React.memo works
const handleRemoveFilter = useCallback((tag) => {
setFilters(prev => prev.filter(f => f !== tag));
}, []); // empty deps — function never changes
return filters.map(tag => (
<FilterTag key={tag} tag={tag} onRemove={handleRemoveFilter} />
));
}

useMemo — cache expensive calculations
// Bad — expensive calculation runs on every render
function ProductGrid({ products, sortBy, filters }) {
// Runs every time any state or prop changes
const processedProducts = products
.filter(p => matchesFilters(p, filters))
.sort((a, b) => compareBy(a, b, sortBy))
.map(p => enrichWithMetadata(p));
return <Grid items={processedProducts} />;
}
// Good — calculation only runs when its inputs change
function ProductGrid({ products, sortBy, filters }) {
const processedProducts = useMemo(() => {
return products
.filter(p => matchesFilters(p, filters))
.sort((a, b) => compareBy(a, b, sortBy))
.map(p => enrichWithMetadata(p));
}, [products, sortBy, filters]); // only recalculates when these change
return <Grid items={processedProducts} />;
}

useTransition — mark non-urgent updates
React 18's useTransition hook lets you mark state updates as non-urgent. The browser can interrupt them to handle user interactions — preventing the update from blocking the main thread:
import { useState, useTransition } from 'react';
function SearchableProductList({ products }) {
const [query, setQuery] = useState('');
const [filteredProducts, setFilteredProducts] = useState(products);
const [isPending, startTransition] = useTransition();
function handleSearch(e) {
const value = e.target.value;
setQuery(value); // urgent — update input immediately
startTransition(() => {
// non-urgent — can be interrupted if user types again
setFilteredProducts(products.filter(p =>
p.name.toLowerCase().includes(value.toLowerCase())
));
});
}
return (
<>
<input value={query} onChange={handleSearch} />
{isPending && <LoadingSpinner />}
<ProductGrid products={filteredProducts} />
</>
);
}

With useTransition, the input stays responsive while the expensive filter operation runs in the background. If the user types again before the filter finishes, React cancels the previous filter and starts a new one — preventing stale renders from blocking interactions.
Fix 6: Reduce DOM size
Large DOM sizes are a hidden JavaScript performance killer. Every time JavaScript modifies the DOM, the browser recalculates styles and layout for every affected element. The larger the DOM, the more work the browser does per modification.
PageSpeed Insights flags pages with more than 1,400 DOM nodes. In practice, complex ecommerce pages often have 3,000–8,000 nodes — causing style recalculations that take 100–300ms per interaction.
How to measure your DOM size
// Run in browser console to count DOM nodes
console.log('Total DOM nodes:', document.querySelectorAll('*').length);
// Find the deepest nesting
function getMaxDepth(element, depth = 0) {
if (!element.children.length) return depth;
return Math.max(...Array.from(element.children).map(
child => getMaxDepth(child, depth + 1)
));
}
console.log('Max DOM depth:', getMaxDepth(document.body));

DOM size reduction strategies
Virtualise long lists: If you are rendering more than 50–100 items in a list, only render the items currently visible in the viewport:
// Install: npm install react-window
import { FixedSizeList } from 'react-window';
// Bad — renders all 10,000 products as DOM nodes
function ProductList({ products }) {
return (
<div>
{products.map(product => <ProductCard key={product.id} product={product} />)}
</div>
);
}
// Good — only renders ~20 products visible in viewport at any time
function ProductList({ products }) {
return (
<FixedSizeList
height={600}
itemCount={products.length}
itemSize={200}
width="100%"
>
{({ index, style }) => (
<ProductCard
key={products[index].id}
product={products[index]}
style={style}
/>
)}
</FixedSizeList>
);
}

Remove hidden elements from the DOM:
Elements with display: none or visibility: hidden still exist in the DOM and contribute to style recalculation time. For elements that are rarely shown (modal dialogs, off-screen panels, conditional sections), remove them from the DOM entirely when not visible:
// Bad — modal always in DOM even when hidden
function Page() {
const [showModal, setShowModal] = useState(false);
return (
<>
<MainContent />
<Modal style={{ display: showModal ? 'block' : 'none' }} />
</>
);
}
// Good — modal only added to DOM when needed
function Page() {
const [showModal, setShowModal] = useState(false);
return (
<>
<MainContent />
{showModal && <Modal onClose={() => setShowModal(false)} />}
</>
);
}Flatten unnecessarily deep nesting:
Every layer of DOM nesting adds to style recalculation cost. Audit your markup for <div> wrappers that serve no structural purpose:
<!-- Bad — 6 levels of nesting for a simple card -->
<div class="outer">
<div class="wrapper">
<div class="container">
<div class="card">
<div class="card-body">
<div class="card-content">
<p>Product name</p>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Good — 2 levels, same visual result -->
<div class="card">
<p class="card-content">Product name</p>
</div>

Fix 7: Optimise third-party JavaScript
Third-party scripts account for nearly half of the total JavaScript execution time on the average web page. They run on your main thread, compete with your own code for CPU time, and you have limited control over what they do.
Run PageSpeed Insights and expand "Reduce the impact of third-party code". This shows every third-party script with its transfer size and main-thread blocking time.
Focus on the scripts with the highest blocking time — not transfer size. A 50KB script that blocks for 800ms is more damaging to INP than a 200KB script that blocks for 100ms.
Partytown is a library that relocates third-party scripts to a web worker — completely off the main thread:
<!-- Install Partytown, then use type="text/partytown" -->
<script>
// Partytown config
partytown = {
forward: ['dataLayer.push', 'fbq', 'gtag']
};
</script>
<script src="/~partytown/partytown.js"></script>
<!-- Third-party scripts run in web worker instead of main thread -->
<script type="text/partytown" src="https://www.googletagmanager.com/gtag/js?id=G-XXXX"></script>
<script type="text/partytown" src="https://connect.facebook.net/en_US/fbevents.js"></script>

Scripts running in a web worker cannot block the main thread — they run in parallel. User interactions are never delayed by analytics or marketing pixel execution.
Standard Google Analytics 4 loads 45KB+ of JavaScript and creates multiple long tasks during a session. For sites where detailed analytics matter less than performance, consider lighter alternatives:
| Analytics Tool | Script Size | Long Tasks |
|---|---|---|
| Google Analytics 4 | 45KB+ | Multiple |
| Plausible Analytics | 1KB | None |
| Fathom Analytics | 2KB | None |
| Simple Analytics | 3KB | None |
| Umami | 2KB | None |
Switching from GA4 to Plausible or Fathom eliminates 43KB of JavaScript and multiple long tasks — a significant INP improvement for analytics-heavy sites.
JavaScript execution time by platform
WordPress
WordPress JavaScript execution problems come from three sources: plugins, themes, and third-party scripts.
Plugin JavaScript audit: Install the Query Monitor plugin. It shows every script loaded on each page with its size. Go through the list and identify scripts from plugins you use on only some pages — exclude those scripts from pages where the plugin is not active.
Use WP Rocket's "Load JavaScript deferred" setting to defer all non-critical scripts automatically. This is the fastest WordPress fix for JavaScript execution time without touching code.
Page builder JavaScript: Elementor, Divi, and WPBakery load 200–500KB of JavaScript on every page. If performance is a priority, consider migrating to a lightweight theme such as GeneratePress or Kadence — both load under 50KB of JavaScript in total.
Shopify
Each installed Shopify app typically adds 50–200ms of JavaScript execution time. A store with 15 apps can therefore accumulate 750–3,000ms of app-related JavaScript execution per page visit.
The highest-impact Shopify fix is app auditing:
- Go to your Shopify admin → Apps
- For each app, ask: is this actively generating revenue or solving a problem?
- Uninstall any app that is not earning its performance cost
- For apps you keep, check if they offer a "lightweight" or "performance mode" in their settings
Next.js and React
Use Next.js's built-in bundle analyser to find oversized chunks:
# Install
npm install @next/bundle-analyzer

// next.config.js
const withBundleAnalyzer = require('@next/bundle-analyzer')({
enabled: process.env.ANALYZE === 'true',
});
module.exports = withBundleAnalyzer({});

# Run
ANALYZE=true npm run build

This opens an interactive treemap of your JavaScript bundles. Find the largest chunks and identify which dependencies are causing them. This is the fastest way to find JavaScript bloat in a Next.js application.
How JavaScript execution time connects to your other Core Web Vitals
Reducing JavaScript execution time has the broadest positive impact of any single optimisation — it improves all three Core Web Vitals simultaneously.
JavaScript → LCP: Render-blocking scripts in the critical path delay the browser from painting the LCP element. Deferring and splitting JavaScript is a direct LCP fix. See how to eliminate render-blocking resources for the full render-blocking fix guide, and how to preload your LCP image for how image loading fits into the same critical path.
JavaScript → INP: Long JavaScript tasks block interactions. Every fix in this article directly improves INP. Combined with the INP vs FID transition context, reducing JavaScript execution time is the primary lever for INP improvement.
JavaScript → CLS: JavaScript that modifies the DOM after the initial render causes layout shifts. Deferring DOM-modifying scripts, fixing dynamic ad injection, and preventing late-loading content are all covered in fix CLS from dynamic ads and embeds, and what causes cumulative layout shift.
JavaScript → Server load: Excessive client-side JavaScript often signals a missing server-side optimisation. For sites relying on client-side rendering for content that could be server-rendered, moving work to the server improves both server response time and client-side JavaScript execution simultaneously.
The complete Core Web Vitals guide ties all three metrics and their JavaScript connections together in one place.
JavaScript execution time checklist
| Fix | Effort | INP Impact | LCP Impact | Works Without a Developer |
|---|---|---|---|---|
| Defer non-critical scripts | Low | High | High | Yes |
| Remove unused third-party scripts | Low | Very high | Medium | Yes |
| Add scheduler.yield() to long tasks | Medium | Very high | Low | No |
| Cherry-pick library imports | Medium | High | Medium | No |
| Replace heavy libraries (Moment → Day.js) | Medium | High | High | No |
| Implement React.memo and useCallback | Medium | Very high | Low | No |
| Code split with dynamic imports | Medium | High | High | No |
| Virtualise long lists | Medium | High | Low | No |
| Move scripts to Partytown web worker | Medium | High | Medium | Yes |
| Switch to lightweight analytics | Low | High | Medium | Yes |
| Reduce DOM size | High | High | Medium | No |
Frequently asked questions
Q1. What JavaScript execution time is considered too slow?
Google flags JavaScript execution time above 2 seconds in PageSpeed Insights. However, any individual task over 50ms is a "long task" and a potential INP failure point. For INP specifically, the entire interaction — input delay + processing time + rendering time — needs to complete within 200ms for a Good score.
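That budget can be expressed as a small helper — inpRating is an illustrative name; the 200ms and 500ms cut-offs are the published INP thresholds:

```javascript
// INP is the sum of three phases; "Good" means the total stays at or under 200ms,
// and anything over 500ms rates "poor".
function inpRating(inputDelayMs, processingMs, presentationMs) {
  const total = inputDelayMs + processingMs + presentationMs;
  if (total <= 200) return 'good';
  if (total <= 500) return 'needs-improvement';
  return 'poor';
}

inpRating(30, 120, 40); // 190ms total → 'good'
```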
Q2. Does JavaScript execution time affect my Google rankings?
Yes, indirectly. JavaScript execution time directly determines your INP score. INP is a Core Web Vital and an official Page Experience ranking signal. Slow JavaScript execution → poor INP → weaker Page Experience signal → potential ranking disadvantage in competitive results.
Q3. My site uses React. Is it always going to have slow JavaScript execution?
No. React can be highly performant when optimised correctly. The issues come from unoptimised re-renders, large bundle sizes, and synchronous state updates. With React.memo, useCallback, useMemo, useTransition, dynamic imports, and proper code splitting, React applications can achieve excellent INP scores.
Q4. How do I reduce JavaScript execution time in WordPress without a developer?
The three no-code WordPress fixes are: install WP Rocket and enable JavaScript deferral, audit and uninstall unused plugins, and switch to a lightweight theme. These three changes alone typically reduce JavaScript execution time by 40–60% on most WordPress sites.
Q5. Is it safe to defer all JavaScript?
No. Some scripts must remain synchronous — particularly scripts that other scripts depend on, and scripts that must modify the page before the first render (A/B testing tools, personalisation scripts). Test thoroughly after deferring scripts and check for broken functionality. As a rule, defer everything in the body first, then cautiously work through head scripts.
Q6. What is the difference between JavaScript execution time and JavaScript transfer size?
Transfer size is how large the JavaScript file is when downloaded. Execution time is how long the browser takes to parse, compile, and run the JavaScript. Both matter, but they are different problems. A large file with simple code can execute quickly. A small file with complex code can execute slowly. PageSpeed Insights reports both separately — focus on execution time for INP fixes and transfer size for LCP fixes.
Q7. How does image optimisation connect to JavaScript execution time?
Image loading and JavaScript execution are separate browser processes — but they compete for network bandwidth and CPU time during the critical load window. A well-optimised LCP image (see what is a good LCP score for the full breakdown) combined with reduced JavaScript execution time gives both metrics the resources they need without competing.
Summary
JavaScript execution time is the root cause of most INP failures and a significant contributor to slow LCP. The fix strategy in priority order:
- Measure first — use PageSpeed Insights and Chrome DevTools Coverage to identify exactly which scripts are causing the most damage
- Break up long tasks — add scheduler.yield() to any synchronous task running over 50ms
- Remove unused JavaScript — audit with the Coverage panel, remove dependencies, cherry-pick imports
- Code split — use dynamic imports and React.lazy to load code only when needed
- Defer everything possible — synchronous scripts in <head> should be the exception, not the rule
- Optimise framework re-renders — add React.memo, useCallback, useMemo, and useTransition
- Audit third-party scripts — remove what you do not need, move what you need to Partytown
Every fix in this list improves INP. Most also improve LCP. Several also improve CLS. JavaScript execution time optimisation has the highest return on investment of any single performance category in 2026.
