I’m not a fan of time metrics or bounce rate – here’s why

How much time do visitors spend on our site?

If we make this change, how much more time will people spend looking at our website?

Did that change have an impact on bounce rate?

As an analyst, do you get these questions a lot? I know I do. And my response is (almost) always the same… I will not report on those metrics, and we should look elsewhere for something more meaningful.

Why? Well… for many reasons. But first some definitions.

Time on page: the difference between the time stamp recorded when a visitor lands on a page and the time stamp of the next pageview (i.e., when they continue on to another page tracked within the same analytics account).
Time on site: the difference between the time stamp recorded when a visitor first lands on a website and the last time stamp recorded before they exit (note: website in this case means pages tracked under the same analytics ID/account).
Bounce rate: the percentage of visitors who land on your site and do nothing else before leaving.
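To make those definitions concrete, here is a minimal sketch of how all three metrics fall out of raw hit timestamps. It assumes a session is simply an ordered list of time-stamped pageviews; the type and function names are illustrative, not any vendor’s actual data model.

```typescript
// Illustrative sketch only - assumes a session is an ordered list of pageview hits.

interface Pageview {
  page: string;
  timestamp: number; // ms since epoch
}

type Session = Pageview[];

// Time on page: the next pageview's timestamp minus this page's timestamp.
// Note the last page in a session gets no value - there is no "next" hit.
function timeOnPage(session: Session, index: number): number | null {
  const next = session[index + 1];
  return next ? next.timestamp - session[index].timestamp : null;
}

// Time on site: last recorded hit minus first recorded hit.
// A one-hit (bounced) session yields 0 - there is nothing to subtract.
function timeOnSite(session: Session): number {
  if (session.length < 2) return 0;
  return session[session.length - 1].timestamp - session[0].timestamp;
}

// Bounce rate: share of sessions with exactly one hit.
function bounceRate(sessions: Session[]): number {
  const bounces = sessions.filter((s) => s.length === 1).length;
  return sessions.length ? bounces / sessions.length : 0;
}
```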

One of my biggest concerns with time metrics is that they only count visitors who don’t ‘bounce’ from your website. So, if you have a website with a 75% bounce rate, your time metrics are calculated from only the remaining 25% of visitors who did not ‘bounce.’ Given this, I’d argue that your time metrics do not accurately represent engagement on your website.
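Here is a toy illustration of that 75/25 split (the numbers are made up for the example): a bounced session records only a single hit, so it contributes nothing to the time calculation, and the “average” time on site ends up describing just a quarter of your visitors.

```typescript
// Toy numbers for the 75% bounce rate case: four sessions, three bounces.
// A bounced session has only one hit, so its measurable time on site is
// undefined and it is excluded from the time metrics entirely.

const sessionDurationsMs = [
  null,    // bounce - no second hit, nothing to subtract
  null,    // bounce
  null,    // bounce
  120_000, // one engaged session: 2 minutes between first and last hit
];

const engaged = sessionDurationsMs.filter((d): d is number => d !== null);
const avgTimeOnSiteMs =
  engaged.reduce((sum, d) => sum + d, 0) / engaged.length;

console.log(avgTimeOnSiteMs); // 120000 - an "average" driven by 25% of visitors
```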

Here are my other top concerns:

1. These metrics haven’t aged well.

When the analytics industry was young, and when web 2.0 was a new phenomenon that made all our websites look the same, these metrics actually carried some meaning. With similar site layouts, content, and engagement patterns, you could compare how pages performed against each other or how changes affected the time spent.

BUT – web design has changed. Our pages look and act very differently. Visitors also look and act very differently.

The web industry has learned (or perhaps catered to) the fact that visitors have ADD. They may not go deep into your site, and they want the relevant info to meet them when they land on a website. Thus, a lot of websites are now long scrolling pages (also a side effect of mobile design – see the next point) with a ton of content on the homepage. Also, many of the actions we ask visitors to take will immediately move them away from our website. Further, much of our content may not even be hosted on our websites any more (instead, it may live on blogs, social media, or other comms channels). All of this together makes it much more difficult to measure time on site or bounce rate. And even when we can measure them accurately, they don’t carry the same meaning or measure of impact given how different the web looks today.

2. They don’t play well with responsive design.

Responsive design has been all the rage for the last couple of years. As a result, many of our websites are now long scrolling pages that conform to the screen size they are viewed on. In many ways this is good for the user – beautiful websites irrespective of screen size, a lot of information up front, and a full site experience on smaller devices. But these are many of the same reasons why it’s now much harder to measure time on site or bounce rate (see the previous point regarding web design).

3. ‘Hacking’ your way into these metrics is not a good solution.

If you do a quick search for measuring bounce rate, you’ll find plenty of articles and blog posts about hacking your way into these metrics. This can be accomplished in many ways, some of the most popular being firing events in the background after a certain amount of time or on page scroll. I’m personally not at all a fan of doing this, for a couple of reasons: First, you will blow up your site event data (sending a lot more events will inflate overall event metrics and make it more difficult to filter down to relevant data). Second, I’m not sure I agree that spending a certain number of seconds on a page or scrolling down a bit counts as interaction (or not bouncing).
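For context, here is roughly what that pattern looks like in practice. This is a sketch of the kind of approach those posts describe, not a recommendation; it assumes a gtag.js (Google Analytics) setup, and the event names and the 15-second / 50%-scroll thresholds are arbitrary examples, not any standard.

```typescript
// Sketch of the "hack" pattern described above - illustrative only.
// Assumes gtag.js is loaded on the page; event names and thresholds are arbitrary.

declare function gtag(command: "event", eventName: string, params?: object): void;

// 1. Fire an event after the visitor has been on the page for 15 seconds,
//    so the session registers an interaction and is no longer counted as a bounce.
setTimeout(() => {
  gtag("event", "time_on_page_threshold", { threshold_seconds: 15 });
}, 15_000);

// 2. Fire an event once the visitor scrolls past half the page height.
let scrollFired = false;
window.addEventListener("scroll", () => {
  if (scrollFired) return;
  const scrolled = window.scrollY + window.innerHeight;
  if (scrolled >= document.documentElement.scrollHeight * 0.5) {
    scrollFired = true;
    gtag("event", "scroll_depth_50", {});
  }
});
```

Every visitor who lingers or scrolls now sends extra hits, which is exactly why this approach inflates event data and muddies what “interaction” actually means.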

There are still many more reasons why I do not believe in reporting on bounce rate or time on site, but these cover my biggest concerns. Michele Kiss also has a good blog post on this topic; check it out for her rants on these metrics.

A final note – there are actually times when these metrics can serve as a directional indicator of impact (and I’ve used them that way), but there are many caveats, including the type of analytics implementation, how the metric is being used, and the quality of incoming traffic, all of which affect how reliable time on site and bounce rate may be. Be careful if and when you do go down this path.
