
Performance Now

I spent the day at performance.now() in Amsterdam, a conference fully focused on web performance. I have to say, conferences like this really remind me why I love what I do. Being surrounded by people who care deeply about the web, seeing how much expertise and thought goes into solving problems at this level — it’s inspiring. Honestly, this has been the best conference I’ve attended so far. Being in a room full of smart engineers makes me want to dive deeper, learn more, and grow into more specialised areas of the web.

The speakers came from companies like Google and Mozilla and from the wider web-standards community, and the talks reflected that depth. I left with a lot to think about: not just tools to try, but ideas on how to approach performance work more deliberately.

Here are the talks that stood out most, and what I’ll be taking back into my workflow.

Harry Roberts — How to Think Like a Performance Engineer

Harry’s talk was a reminder of something I think many of us forget: performance testing is only useful if it’s set up to reflect reality.

The main point: how you run a test is often more important than which tool you use.

Before I dive into the highlights, a bit of context for those who aren’t familiar with the tools he mentioned:

  • CrUX (Chrome User Experience Report): a dataset of real-user performance metrics collected from Chrome users around the world. It measures things like loading speed, interactivity, and visual stability across real devices and network conditions. This is what Harry meant when he talked about “real-world data” — it shows how actual users experience your site, not how it behaves in an ideal lab environment. (A minimal query against the CrUX API follows this list.)
  • Treo: a tool for visualizing CrUX data. It lets you explore metrics across pages, devices, and regions, giving a clear picture of where performance issues really matter. Instead of guessing, you can see which pages are slow for real users and prioritise improvements accordingly.
  • WebPageTest: a long-standing tool for controlled performance testing. It allows you to run tests on specific devices, connections, and locations. Unlike CrUX, which is observational, WebPageTest is lab-based, but you can combine it with scripts to simulate real user actions, like clicking buttons or pre-populating a shopping cart. This helps replicate real user journeys in a repeatable way.
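Since Harry leaned on CrUX so much, here is a minimal sketch of pulling that real-user data yourself via the public CrUX API. The endpoint and response shape come from Google's CrUX API documentation; the origin and the API key are placeholders:

const response = await fetch(
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin: 'https://www.example.com',  // any origin with enough Chrome traffic
      formFactor: 'PHONE',                // PHONE, DESKTOP, or TABLET
      metrics: ['largest_contentful_paint', 'cumulative_layout_shift'],
    }),
  }
);

const { record } = await response.json();
// CrUX publishes the 75th percentile per metric, plus histogram buckets.
console.log(record.metrics.largest_contentful_paint.percentiles.p75);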

Key takeaways

  • Use CrUX + Treo to understand real-user performance, and WebPageTest to test reproducibly in controlled conditions. Together, they give a full picture of performance.
  • Agree on test conditions upfront: which URLs, devices, connection speeds, and geographic regions you’ll measure. Otherwise results are inconsistent and hard to compare.
  • Core Web Vitals (like LCP, INP, CLS) are essential, but also track supporting metrics like TTFB or DOMContentLoaded, particularly for SPAs where script execution timing matters.
  • Aim for the 95th percentile, not 75th — the slowest experiences matter. You don’t want a significant portion of users left with poor performance.
  • Test realistic scenarios: pre-accept cookie banners, pre-populate shopping carts, include soft navigation interactions. Testing a “cold start” page might look good in a lab, but it doesn’t reflect production.

I realised how often I’ve run “clean” tests that don’t match production. They’re faster, but they’re misleading. Harry’s point is simple but easy to overlook: take the time to test properly, and the results are far more useful.
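To make “test properly” concrete: WebPageTest scripts can encode exactly the realistic steps Harry described. A sketch using standard WebPageTest script commands; the URLs, cookie name, and selector below are invented for illustration:

logData	0
setCookie	https://www.example.com	cookie_consent=accepted
navigate	https://www.example.com/product/123
execAndWait	document.querySelector('#add-to-cart').click()
logData	1
navigate	https://www.example.com/checkout

Everything before logData 1 just sets up state (consent given, an item in the cart) without being measured; only the final navigation to the checkout is recorded. The test then starts from the state a real shopper is actually in, rather than from a cold start.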

Learn more from Harry: https://bsky.app/profile/csswizardry.com

Michael Hladky — Big Data, Zero JS: Virtual Scrolling & CSS Techniques

Michael’s talk focused on how we can improve performance by leveraging the browser more effectively, instead of automatically reaching for JavaScript solutions. It was highly visual, with live demos, which made the key concepts easy to grasp but hard to take literal notes on.

Here’s what’s important for someone who wasn’t there:

  • CSS content-visibility: auto
    This is a relatively new CSS property that tells the browser to ignore elements that are off-screen until they are about to be displayed. Normally, the browser calculates layout, paint, and other steps for the entire page. With content-visibility: auto, it can skip off-screen work, which dramatically improves performance for long lists, dashboards, and feeds.
  • CSS contain
    This property limits the effect of style, layout, and paint changes to a single element instead of letting them ripple across the page. For example, if a component updates dynamically, contain ensures the rest of the page doesn’t have to recalculate or repaint unnecessarily. This can be especially powerful on heavy pages where small changes can otherwise be expensive. (A small snippet combining both properties follows this list.)
  • CSS-Triggers
    This is a tool (https://csstriggers.com/) that shows which CSS properties cause layout, paint, or composite events. It’s incredibly useful to understand which CSS changes are “cheap” and which are expensive in terms of performance.
  • Zoomed-in snippets and demos
    Michael demonstrated simple, controlled CSS experiments to show how small layout changes can scale up to large performance gains. While the demos were visual, the lesson is clear: sometimes performance improvements come from helping the browser understand what’s happening, rather than adding more JavaScript.
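Here is a minimal sketch of both properties in plain CSS. The class names and the placeholder size are my own assumptions, not taken from Michael's demos:

/* Long feed: let the browser skip layout and paint work
   for items that are currently off-screen. */
.feed-item {
  content-visibility: auto;
  /* Rough placeholder size, so the scrollbar stays stable
     while skipped items have no rendered dimensions yet. */
  contain-intrinsic-size: auto 320px;
}

/* Self-updating widget: containment stops its style, layout,
   and paint changes from rippling across the rest of the page. */
.live-widget {
  contain: style layout paint;
}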

The takeaway: modern CSS has capabilities that can replace complex JS solutions in many cases. For developers, this means smarter, simpler, and often faster pages, by letting the browser do what it’s already optimized to do.
The creative approach and angle of the solution also stood out to me. It shows how many different directions and areas there still are in which the web can be improved.

Learn more from Michael: https://bsky.app/profile/michael-hladky.bsky.social

Umar Hansa — Modern Performance Workflows

Umar’s talk was more of a practical masterclass in modern DevTools usage and automated workflows. It was packed with actionable insights that could immediately improve the way you work with performance.

Here’s the context for readers unfamiliar with the tools:

  • Performance → Insights panel
    This panel in Chrome DevTools surfaces important information about layout shifts, long tasks, and other performance bottlenecks in a clear way, without having to dig through flame charts manually.
  • Lighthouse Timespan reports
    Lighthouse isn’t just for initial page loads. Timespan mode lets you record a session and measure performance over interactions — for example, opening a modal, navigating a SPA route, or scrolling through a feed. This is crucial for measuring real interactivity metrics like INP (Interaction to Next Paint). A short script using this mode follows this list.
  • Recorder
    DevTools Recorder lets you record user interactions and replay them later while analyzing performance. This makes it easier to test repeatable flows and catch performance issues that only appear during specific interactions.
  • Network throttling by URL
    Previously, throttling was site-wide. Now you can throttle requests per resource — e.g., slow fonts or third-party scripts — which is much more realistic and helps uncover bottlenecks you might otherwise miss.
  • AI-assisted DevTools features
    Tools like “Debug with AI” provide suggestions or analysis based on context. These are most effective when combined with understanding the workflow — they supplement, not replace, engineering judgment.
  • MCP agents (Model Context Protocol)
    These are automated agents that can interact with your site through DevTools. In CI/CD pipelines, they can measure performance for user journeys, identify regressions, or provide analysis without manual intervention.
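To illustrate the Timespan mode mentioned above, here is a minimal sketch using the user-flow API that recent Lighthouse versions expose for Node, driven by Puppeteer. The flow calls are Lighthouse's documented API; the URL and the selector are placeholders:

// Node (ESM). Requires: npm install lighthouse puppeteer
import { writeFileSync } from 'node:fs';
import puppeteer from 'puppeteer';
import { startFlow } from 'lighthouse';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const flow = await startFlow(page);

// Classic navigation audit for the initial load.
await flow.navigate('https://www.example.com');

// Timespan: measure an interaction instead of a page load.
await flow.startTimespan({ name: 'Open product modal' });
await page.click('#open-modal'); // hypothetical selector
await flow.endTimespan();

// One HTML report covering both the load and the interaction.
writeFileSync('flow-report.html', await flow.generateReport());
await browser.close();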

The main lesson: modern DevTools are more powerful than most developers realise. When used thoughtfully, they allow for automated, repeatable, and meaningful performance testing, reducing the risk that slow experiences reach users.

Personally, I was really energised and full of ideas after this talk. I'd love to experiment with MCPs, new APIs, and feature flags and fresh releases from Chrome. I think there are plenty of places within the browser that we haven't been using to their full potential, even though it has lots of awesome features. It's time to organise a hackathon to discover these grey areas. I strongly believe this can benefit us, the company, and myself.

Barry Pollard — Speculations about Web Performance

One of the absolute highlights for me was the talk by Barry Pollard — a Google engineer who worked directly on the new Speculation Rules API. His depth of knowledge, calm confidence, and ability to clearly explain complex browser architecture was seriously inspiring. You walk out of his session not only understanding a new API, but understanding why it matters and how one engineer can drive global impact.

And for us building high-conversion e-commerce experiences?
This could be quite a game changer. It's just that it's Friday night; otherwise I'd dive straight into the code to benchmark possible improvements.


Why Speculation Matters (some background)

For years, browsers have been working on pre-rendering: <link rel="prerender"> was a thing in Chrome, then died in favor of NoState Prefetch. Helpful, but not instant.
Now? Full pre-rendering is back — smarter, controllable, and about to go cross-browser.

Imagine a user hovering over a product link — and the browser already has the next page fully rendered in the background. They click, and boom, instant navigation. No loading. No delay. No patience-test during checkout.

That’s the promise of the Speculation Rules API.

How It Works

You drop JSON "rules" into your page that tell the browser what to:

  • ✅ Prefetch (just the resources)
  • 🚀 Prerender (load the entire page invisibly!)
<script type="speculationrules">
{
  "prerender": [
    {
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "href_matches": "/admin" } }
        ]
      }
    }
  ]
}
</script>

The browser will pre-render pages just in time, based on real intent signals like hover, focus, and interaction heuristics.

Eagerness Levels

Barry explained the eagerness modes — how aggressively Chrome should prerender:

  • conservative → on touch/press intention
  • moderate → hover ~200ms
  • eager → hover ~10ms
  • immediate → pre-render ASAP

Smart for performance + battery + memory. Chrome even cancels and re-queues silently behind the scenes.
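The eagerness level is just another field on a rule. A small sketch, where the /products/* pattern is my own example rather than one from Barry's talk:

<script type="speculationrules">
{
  "prerender": [
    {
      "where": { "href_matches": "/products/*" },
      "eagerness": "moderate"
    }
  ]
}
</script>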

Full blog post by Barry: https://developer.chrome.com/docs/web-platform/prerender-pages?hl=en

More from Barry: https://bsky.app/profile/tunetheweb.com

Ecosystem Moment

My favourite part? Hearing Barry talk about the web as a global performance platform. He thinks in “ecosystem wins,” not just Chrome features.

And the most quote-worthy moment: Yoav Weiss is implementing this in WebKit for Safari.

Reflections on the conference

The conference was really inspiring. The venue, the Zuiderkerk in the middle of Amsterdam, added a unique atmosphere — surrounded by centuries of history while talking about the future of the web. The organisation by PPK was impressive. From the amazing coffees and snacks to a properly filling lunch (curry, yes!), to even having toiletries available if needed, everything was thought through. It made it easy to focus on the talks, the discussions, and the people around you.

The audience itself was equally inspiring — lots of smart, engaged engineers, a sold-out venue, and speakers who clearly know their craft inside out. There’s something motivating about being surrounded by people who care deeply about the same things you do. It makes you want to learn more, dive deeper, and grow in your own areas of expertise.

If you’re looking for a conference that’s not about hype or “what’s trending,” but about deep, practical learning and being inspired by the best in the field, this one is highly recommended.

Tim Beeren
