How to optimize video ad timeouts


I’m going to assume you use DFP because it is by far the most common, but the same concepts apply to all ad servers. I’m also assuming you are using the IMA SDK, which is the de facto way of doing video ads, with a few exceptions.

If your site uses videos but you are not monetizing them, you are missing out on a huge opportunity. Video advertising is more profitable than display ads (read: banners), and modern video players offer advertising as part of their feature set (most of them through their paid version). If you want a free version that kicks ass, you are welcome to try this beauty [Video Block with Ads] (yes, I wrote it #shamelessplug).

Once you start running ads, everything feels great and your ads are firing just dandy. But there is a dirty monster lurking behind the scenes: VAST errors.

Most of the time, when there is a VAST error, you lost money. Plain and simple. The same applies to banner ads; they just happen to load far more consistently. Not all errors can be fixed or are timeout related but, assuming your setup is correct, a lot of them will be (the 301 and 402 timeouts). The issue is that the default timeouts are just not that great. Adjusting your timeouts to address these issues could mean a 10% to 20% revenue improvement!

Most players have these timeouts available, though they may be named slightly differently:

  • VAST Load Timeout (usual default is 5 secs): How long to wait for the ad server to respond to a request after it has been made. This timeout is linked to the IMA SDK (google.ima.AdsRequest).
  • Load Video Timeout (usual default is 8 secs): After the response above is received, how long the player waits for the video ad to start playing. Video ads are regular videos just like yours; they just happen to carry a link. Therefore, the same old-school rules apply, like “is this ad too heavy?”. This timeout is particularly important, and it is linked directly to the IMA SDK (google.ima.AdsRenderingSettings).
  • Max Timeout (defaults are all over the place here): The time the player allows for the steps above combined; it is the ultimate cutoff. Think of it this way: once this timeout has passed, the advertising mechanics are skipped altogether and your video plays. Keep in mind that at this point the ad request has gone through, but the ad will not play.
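As a concrete sketch, here is where the first two timeouts live in the IMA SDK (the property names are the real IMA ones, in milliseconds; the helper function, the plain objects, and the numbers are my own illustration):

```javascript
// Apply the first two timeouts. In production, adsRequest would be a
// google.ima.AdsRequest and renderingSettings a
// google.ima.AdsRenderingSettings; plain objects are used here so the
// helper is easy to exercise on its own.
function applyTimeouts(adsRequest, renderingSettings, timeouts) {
  adsRequest.vastLoadTimeout = timeouts.vastLoad;          // VAST Load Timeout (ms)
  renderingSettings.loadVideoTimeout = timeouts.loadVideo; // Load Video Timeout (ms)
  // The "Max Timeout" is player-specific (e.g. a `timeout` option in your
  // player's ad config), not an IMA SDK property.
}

var adTimeouts = { vastLoad: 5000, loadVideo: 8000 }; // the usual defaults
```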

The key with timeouts is balance: you can wait forever on the ads, sacrificing UX to maximize impressions and minimize errors, OR you can wait very little, maximizing your “video plays” at the cost of impressions. You should also consider your visitors: if they are mostly on mobile devices, timeouts should be higher than for desktop machines to maximize impressions, but mobile users are also less patient.

With that in mind, the first thing you need to ask yourself (and/or your team) is “What is more important, revenue or UX?” … Inevitably, the answer is, of course, both. That is not a good answer. Even though you want both, one is always more important than the other. If you really can’t make up your mind, the answer is revenue. The issue is that nobody wants to say “the hell with UX, we want the money”.

How to

  1. Establish an acceptable error rate (Errors / Page views %). Be realistic! It is impossible to have a 0% error rate. But what is normal, you ask? Good luck with that too. DFP will tell you one thing, a seasoned ad ops person will tell you another, and some article online will tell you something else. I would say it depends on your setup, your visitors’ tech, and your overall goal. Around 20% is decent but can be improved. More than 30% and something is up. Less than 10% … teach me, sensei.
  2. Find a good sample size (how to): how many page views are enough to conduct experiments. Another way to put this is how long the experiment will run. Very high traffic sites can get good numbers after only a couple of hours; lower traffic sites may want to run for a couple of days. Too short/few = bad conclusions; too long/many = wasted time.
  3. Establish a ceiling. Set all the timeouts to something ridiculously high: 30 / 30 / 60 (VAST load timeout / loadVideo timeout / Max timeout). Notice the 60s is 30+30; the same logic would give 20 / 20 / 40. Run this for the designated time (#2). This step will be AWFUL for UX in some cases, because no video will play for a potential 60 seconds, but it will show you the very best you can possibly get impression-wise.
  4. Establish a floor. This is the opposite of the above. Set the timeouts to something extremely low: 5/5/5. Only people with amazing devices and great internet speed will get ads. Your error rate will shoot way up and your plays will too. In an ideal world, ads would be instant, which is what we strive for. Again, run this for the same amount of time or page views you did on #3.
  5. Time to iterate. The ceiling (#3) will show you how good or bad the defaults are. Now move to a middle ground (15/15/15) between ceiling and floor and see how it compares to your acceptable error rate (#1). Run for the designated time (#2), remeasure, and readjust until you are happy. Yes, it is slow and stupid, but it works.

It is imperative you compare using percentages (See #1). Absolute numbers are deceiving and will make you think you are doing great when you aren’t.

Tip: On step #5, adjust the timeouts drastically. Tiny changes will only confuse you because they don’t affect the percentages enough. Let’s say you tried 15/15/15 and your error rate ended up at 18%; then you tried 13/13/13 and the error rate is 17.9% or 18.1%. Is it really better or worse? Such a small change could be a consequence of traffic variance, or video inventory, or god knows what … so many things. But if the second time around you try 8/8/8 and the error rate is 22%, you know you’ve gone too low; if it is 17.5%, you should probably keep the lower timeouts. I would be happy at that point, but it all comes down to the acceptable number you figured out on step #1.

Keep in mind, crazy high timeouts don’t mean that everybody will wait crazy long! If the timeouts are 30/30/30 but your connection is good, you will only wait a couple of seconds for the ad. The ceiling is only about the edge cases that have crappy everything; it establishes the best you can possibly do. The lower end, however, will affect everybody, so make sure you don’t go too low.

Last but not least: in my experience, the VAST Load Timeout (the first one in the post) is hardly ever a problem. Because this one is about your ad server, it is their business to send a response as fast as possible, and they all do; DFP usually responds in milliseconds. Keep this timeout low; 5 seconds is more than enough. The third timeout (Max Timeout) only needs to be about as long as the second (loadVideo timeout), because the first is mostly a non-issue and the second is long enough. 5 / 22 / 20 works nicely for me, but all sites are different.

What did you end up doing?

Adding pageviews to WordPress stats on galleries

Jetpack is awesome. Among many features, by default, it runs WordPress Stats. While it is not Google Analytics by any means, it does give you a nice view of how your site is doing, with a very small footprint. The tracking code is just added to your site’s footer and voila! It just works!

The problem is that your site’s footer doesn’t know anything. Slideshows, galleries, compilations, and one-page apps are good examples of cases where the content of the page changes via JavaScript with a click, or perhaps on a timer. As far as I know, WP Stats doesn’t have a solution for this issue. I, however, have a hack ready to go that works wonders.

NOTE: This code will not work on your site as is! But it will hopefully give you the general idea of how to do it.

Let’s make 2 assumptions: one, you have jQuery available (not strictly necessary), and two, the page is a picture gallery that fires an event “pageChange” every time NEXT or PREV is clicked.

The part that does the pageview, of course, is:
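Something along these lines (the `_stq` queue is what Jetpack’s stats script reads; the blog id, the function name, and the event wiring are my own placeholders, so adapt them to the snippet in your own footer):

```javascript
// Shim so the sketch also runs outside a browser:
var win = typeof window !== 'undefined' ? window : globalThis;

// Re-record a pageview whenever the gallery changes "page".
function recordPageView(postId) {
  win._stq = win._stq || [];
  // 'view' is the same event Jetpack's own footer snippet pushes on page load.
  win._stq.push(['view', { v: 'ext', blog: '12345678', post: String(postId) }]);
}

// Wire it to the custom event the gallery fires (jQuery assumed):
if (typeof jQuery !== 'undefined') {
  jQuery(document).on('pageChange', function (event, postId) {
    recordPageView(postId);
  });
}
```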

This code was added on July 24th. Notice the traffic increase after July 25th.

One of the sites I handle has a lot of ajax-type slideshows. Traffic itself didn’t increase; it just wasn’t being recorded in WP Stats (see chart above)!

Boom! Easy! The same concept applies to one-page applications. Make sure to pass the right URLs and post ids to WordPress Stats.

Run javascript that depends on multiple remote assets

Async JS is the way of the present (and future). We MUST load all of our scripts async. They are getting heavy and slow, and the browser needs help, especially on mobile devices. Why wouldn’t you want your page to load faster? (see this post)

As web pages grow more and more complex, the idea of a single script that does everything is Jurassic. But what if we want to run code that depends on multiple scripts at the same time? If your app/site is complex, you should look into RequireJS. If you need something less daunting … I got you (with jQuery):
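A minimal sketch (the URLs are placeholders, and the `loader` argument is mine; it exists only so the helper can run without jQuery, and it defaults to `$.getScript`):

```javascript
// Resolve once every remote script has loaded. $.getScript returns a
// thenable, so Promise.all can wait on it directly.
function whenAllLoaded(urls, loader) {
  loader = loader || function (u) { return $.getScript(u); };
  return Promise.all(urls.map(function (u) { return loader(u); }));
}

// Usage with jQuery on a real page:
// whenAllLoaded(['https://example.com/a.js', 'https://example.com/b.js'])
//   .then(function () { /* both scripts are now available */ });
```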

Neat, right? … Let’s kick it up a notch. Say we need the script AND some external data file:
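Same trick, mixing `$.getScript` with `$.getJSON` (file names and the `initWithData` function are stand-ins, not anything from a real library):

```javascript
// Wait for a script AND a JSON file, fetched in parallel (jQuery assumed).
function whenScriptAndData(scriptUrl, dataUrl) {
  return Promise.all([$.getScript(scriptUrl), $.getJSON(dataUrl)]);
}

// Usage:
// whenScriptAndData('myscript.js', 'myData.json').then(function (results) {
//   var data = results[1];  // the parsed JSON
//   initWithData(data);     // hypothetical function defined by myscript.js
// });
```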

And now the cherry on top … we want to load myData.json or yourData.json or just nothing conditionally:
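One way to sketch it (the `which` values and file names are made up for the example; in pure jQuery you could return `$.when()`, which resolves immediately, for the “just nothing” case):

```javascript
// Pick myData.json, yourData.json, or no data at all.
function dataFor(which) {
  if (which === 'mine')  return $.getJSON('myData.json');
  if (which === 'yours') return $.getJSON('yourData.json');
  return Promise.resolve(null); // "just nothing": resolve right away
}

// Load the script alongside whichever data file applies (jQuery assumed).
function loadBundle(which) {
  return Promise.all([$.getScript('myscript.js'), dataFor(which)]);
}
```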


I’m sure you can do something similar in vanilla JS … but if you need something like this, chances are you are running jQuery.

How to make SYNC javascript assets work ASYNC

Speed is everything. It always is … there is no such thing as “the page loaded too fast”. To make matters worse, today we have Google PageSpeed Insights to make our lives miserable (a whole different topic). It usually recommends loading your JS asynchronously (async) or in the footer.


JavaScript is render blocking, which means the browser can’t work on displaying the page while a script is being fetched and executed. So, consider this:
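For example (`myscript.js` and `changeText()` are stand-ins):

```html
<script src="myscript.js"></script> <!-- defines changeText(); loads synchronously -->
<div id="text">Hello</div>
<script>changeText();</script>
```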

If myscript.js takes 1 minute to load, the whole page will be delayed 1 minute, no matter how simple it is. Not cool. To fix this, we just add async to the script tag. This makes the browser start loading the asset (myscript.js) but continue parsing HTML. The problem now is that, because the asset is still loading and will arrive in 1 minute, the function changeText() is not available by the time the browser gets to it. Your code will not run, but the page has loaded, which is good news.



You can implement a queue or a callback*. Both work just as nicely, but they have different use cases. (* You could use Promises, but that’s kind of a callback, just real fancy.)

Queues work better if you need to make many calls to the same script in different parts of the page, where each call depends on the previous one in some way. Here you go (notice the change in the js file):
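A sketch of the idea (all the names here are placeholders; it is the same trick analytics libraries use with their command queues). Both “sides” are shown in one file so the flow is easy to follow:

```javascript
// Shim so the sketch also runs outside a browser:
var w = typeof window !== 'undefined' ? window : globalThis;

// --- On the page, before myscript.js has loaded ---
w.myq = w.myq || [];
w.myq.push(function () { changeText('hello'); });

// --- Inside myscript.js (this is the change to the js file) ---
var applied = [];
function changeText(msg) { applied.push(msg); } // stand-in for the real work

var pending = w.myq || [];
// Replace the plain array with an object whose push() runs immediately,
// so anything queued after this point executes right away:
w.myq = { push: function (fn) { fn(); } };
pending.forEach(function (fn) { fn(); });
```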

Tiny change … works like a charm … once myscript.js loads 1 minute later, it’ll execute all the functions pushed to the queue.

Potential pitfall
Let’s say that instead of loading slowly, the script loads super fast. If you are working on the DOM, make sure the elements you want are in place by the time the queue executes. In the example, this is mitigated by adding to the queue after the DOM element ( <div> ).

Implementing a callback is a bit easier, but it works better if you only need to run code once. Notice that no changes are needed in myscript.js from the original version.
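A sketch of the callback approach (the `doc` parameter is only there so the helper can be exercised outside a browser; on a real page you would omit it):

```javascript
// Inject a script tag yourself and run a callback once the script
// has loaded and executed.
function loadScriptThen(src, callback, doc) {
  doc = doc || document;
  var s = doc.createElement('script');
  s.src = src;
  s.async = true;
  s.onload = callback; // fires after the browser runs the loaded script
  doc.head.appendChild(s);
  return s;
}

// Usage: loadScriptThen('myscript.js', function () { changeText(); });
```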

You can make a hybrid of these 2 approaches where the callback is the code that processes the queue … but it feels a bit hacky to me. I’m sure there is some good use case where this would be desirable. The same pitfall as above applies but, as long as you load the script after the DOM element, you’ll be OK.

As always, there is a jQuery way to do this. It is in fact quite nice; what I don’t like about it is having to load jQuery synchronously. That being said, you can probably turn all your jQuery into a queue and now “we all happy“.


When caching is not enough: “Double Buffered” Remote Calls

One of the challenges of running WordPress at scale is dealing with API calls to (insert_external_service_here). wp_remote_get (or cURL) is probably your go-to method for API calls, and it is a fine function for a low traffic site. On a site that gets millions of pageviews, it is just not going to cut it. You will inevitably run into race conditions.

In case you don’t know, a race condition (Wikipedia) here is when person1 is waiting on the server to finish the API call, then person2 makes another call, then person3 … then personX, while person1 is still waiting. If the API server is being slow, there could be a queue of thousands waiting and, at that point, your server has crashed for sure.

Another reason for not calling wp_remote_get on every request is API rate limiting. Some services do not allow more than X calls per second/minute/day. If you make a call for every visit, you will surely reach that limit extremely fast!

Simple Solution: Caching.
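Here is a sketch of the classic transient version (the function name, option key, and endpoint are mine, not from the original post):

```php
// Cache the API response for 5 minutes using a transient.
function get_api_data() {
    $data = get_transient( 'my_api_data' );
    if ( false === $data ) {
        $response = wp_remote_get( 'https://api.example.com/endpoint' );
        if ( is_wp_error( $response ) ) {
            return false;
        }
        $data = wp_remote_retrieve_body( $response );
        set_transient( 'my_api_data', $data, 5 * MINUTE_IN_SECONDS );
    }
    return $data;
}
```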

By caching your call the “traditional” way, you’ve come a long way from where you started. The API call will only happen every 5 minutes, and people will not have to wait for the results because you have them stored already! This is just perfect for medium traffic sites and fast APIs.

The problem with this approach is that at the 5 minute mark you still need to wait for the API to respond. If the response is slow, you can run into the pile-up again, because the cache has been invalidated. It goes like so: person1 triggers the cache invalidation (past 5 minutes) and calls the API; person2 calls the API too, because the cache is not valid; person3, same … person100, same. Then person1’s call finishes and the cache is set for the next 5 minutes, so person101 gets a cached result and everyone is happy from there on; in the meantime, persons 2 through 100 are still waiting on their slow responses. We have somewhat mitigated the problem, but not completely solved it. If the traffic is really high and the API is really slow, your server could crash.

If you have that kind of traffic, you are playing with the big boys. Lazy caching is not going to be enough.

Complex Solution: Double Caching

Instead of just caching the result, you can double cache it. We do the same thing as above, but twice, in a way where cache1 lives for 5 minutes and cache2 lives for 10. When cache1 invalidates, that one person makes the API call and sets a switch so everyone else uses cache2. Now only person1 is slow.
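A sketch of how that could look (the names, durations, and the transient-based “switch” are my own illustration of the idea, and the check-and-set is not perfectly atomic, which is fine for a sketch):

```php
function get_api_data_double() {
    $data = get_transient( 'my_api_cache1' ); // lives 5 minutes
    if ( false !== $data ) {
        return $data;
    }
    // cache1 expired. If someone is already refreshing, serve the backup:
    if ( get_transient( 'my_api_refreshing' ) ) {
        return get_transient( 'my_api_cache2' ); // lives 10 minutes
    }
    set_transient( 'my_api_refreshing', 1, MINUTE_IN_SECONDS ); // the switch
    $response = wp_remote_get( 'https://api.example.com/endpoint' );
    $data     = wp_remote_retrieve_body( $response );
    set_transient( 'my_api_cache1', $data, 5 * MINUTE_IN_SECONDS );
    set_transient( 'my_api_cache2', $data, 10 * MINUTE_IN_SECONDS );
    delete_transient( 'my_api_refreshing' );
    return $data;
}
```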

This is pretty great as is, but we can do better. Let’s say the API starts to malfunction for whatever reason. In that scenario, you’ll only have good data for the 5 minutes until the next time you do the call … not so awesome. Add a data consistency check (which you SHOULD have anyway), and now we are in business:

The only caveat with that approach is that your backup cache can invalidate too. A neat alternative: save the backup in an option:
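Sketched out (the option name and the consistency-check function are hypothetical; the point is that options, unlike transients, never expire):

```php
$response = wp_remote_get( 'https://api.example.com/endpoint' );
$data     = wp_remote_retrieve_body( $response );

if ( my_data_is_valid( $data ) ) { // your consistency check (hypothetical)
    set_transient( 'my_api_cache1', $data, 5 * MINUTE_IN_SECONDS );
    update_option( 'my_api_backup', $data ); // never invalidates
} else {
    $data = get_option( 'my_api_backup' ); // serve the last known good copy
}
```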

“Double Buffer”

Boom! Now even if the API fails, or nobody accesses the page and the cache dies, your site still has content. All you have to do now is find a way to be alerted of the problem.

Overkill? yes … Works? … Really well!

Note: WordPress options have a limited size; if the data you are storing is too big, you may want to consider WP Large Options.

To $(document).ready() or not to $(document).ready() ? that is the question.


If you place your JS/jQuery below the elements, you don’t have to use $(document).ready.


Wrapping all your JavaScript in $(document).ready()? All the cool kids are doing it, and it is safe to do so. However, it doesn’t come without drawbacks. There are also multiple considerations when it comes to loading jQuery itself, but that’s a different conversation.

jQuery SHOULD be loaded in the footer of your <html>. At that point, wrapping or not doesn’t really matter. The magic lies in the fact that, by the time your JS runs, the elements are already on the page (aka above the script you are running).


But let’s say you are forced to load jQuery in the <head> AND you are also forced (or want) to place your JS in the <head> as well … now you HAVE to use $(document).ready( function() {} );, because by the time the JS runs, the elements to be selected haven’t been seen by the browser yet.



Code included inside $( document ).ready() will only run once the page Document Object Model (DOM) is ready for JavaScript code to execute. Code included inside $( window ).load(function() { ... }) will run once the entire page (images or iframes), not just the DOM, is ready.

Here’s where the issue lies. When you use $(document).ready(), you still need to wait for the entire DOM to be parsed, including every synchronous script on the page, before your code fires. So, if you want your JS to run some kind of effect, like a sticky sidebar, or transitions, or anything at all, it will not happen instantly. Sometimes this isn’t an issue, but the more external scripts your page has, the longer it takes for your JS to run. This can be unacceptable, as most of the time it causes your effect to kick in at weird times, making the site … well … act weird.

The solution: instead of wrapping in .ready(), place the <script> just below the elements you want it to work on. Again, by the time your script runs, the elements have already been “seen” by the browser, and the effects kick in even though the page hasn’t completely loaded yet. No need to wait.
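For example (a hypothetical sticky-sidebar effect; `makeSticky` is a stand-in for your own code):

```html
<div id="sidebar">…</div>
<script>
  // #sidebar is already parsed at this point, so no $(document).ready() needed
  makeSticky(document.getElementById('sidebar'));
</script>
```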


Foundation 6 + Block Grid + WordPress Gallery

If you are using Foundation, I’m sure you love block grids, but WordPress galleries produce their own HTML. No worries: throw this snippet into your functions.php (or wherever it belongs in your theme’s structure) and the markup will change to a block grid instead. This works with Foundation 6; for Foundation 5 you’ll need to change the output to ULs and LIs instead of row/column DIVs.
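A sketch of the idea using the `post_gallery` filter (the function name, column counts, and image size are illustrative, so adjust them to your grid):

```php
// Replace the default [gallery] markup with a Foundation 6 block grid.
add_filter( 'post_gallery', 'my_blockgrid_gallery', 10, 2 );
function my_blockgrid_gallery( $output, $attr ) {
    $ids = isset( $attr['ids'] ) ? explode( ',', $attr['ids'] ) : array();
    if ( empty( $ids ) ) {
        return $output; // fall back to the default markup
    }
    $html = '<div class="row small-up-2 medium-up-4">';
    foreach ( $ids as $id ) {
        $html .= '<div class="column">' . wp_get_attachment_image( (int) $id, 'medium' ) . '</div>';
    }
    return $html . '</div>';
}
```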

Easy. You are welcome!


Killer steak sauce (pseudo chimichurri)

Every steak can use this killer recipe! It is not a true recipe, but more a combination of flavors that you can adjust to your own taste. It goes especially well with flank steak or skirt steak. Chicken, pork, and fish also love this sauce.

  • Parsley or Cilantro (aka Coriander) or both. Lots of it!
  • Garlic. 2 or 3 cloves.
  • Cumin powder (or seeds). About half a teaspoon.
  • Red Pepper flakes.
  • Olive oil. A lot (think pesto)
  • Salt and Pepper
  • Optional: white vinegar

Warm up your olive oil and add the pepper flakes and cumin. Just warm it, so the oil gets infused with the spiciness and flavor of the ingredients. DO NOT FRY THEM … just warm it up. Once happy with the flavor, leave it to cool down on the side. (Experiment: put the garlic in the oil as well, until soft. Cooked garlic tastes sweet.)

In the meantime, chop the parsley/cilantro and garlic real small. Mash them together. Add salt and pepper to taste. When the oil is cool (you don’t want to cook the parsley/cilantro), mix it all together. Whisk it. Eat it! It is going to be AWESOME! I promise!


Static classes on WordPress plugins

If you have a WordPress plugin, more than likely you are using add_action and/or add_filter. No problem, until you get this error:

One of the annoyances of WordPress is that tracking bugs in actions or filters can be a bit of a hassle: the error doesn’t tell you where it actually originated, but with a bit of grep-ing and searching you can find the issue. With the error above, however, what’s going on is not so evident. Consider the following plugin that does nothing:
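Something along these lines reproduces it (the names are illustrative; the warning is typically along the lines of call_user_func_array() expecting a valid callback):

```php
class My_Plugin {
    public static function init() {
        // 'self' cannot be resolved when WordPress fires the hook later,
        // outside of this class's context:
        add_action( 'wp_footer', array( 'self', 'do_nothing' ) );
    }
    public static function do_nothing() {}
}
My_Plugin::init();
```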

Nothing seems wrong here; however, it throws the warning above. The right way of doing this is:
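For instance (hook and names are illustrative):

```php
class My_Plugin {
    public static function init() {
        // __CLASS__ expands to the literal class name, which WordPress
        // can resolve when it fires the hook:
        add_action( 'wp_footer', array( __CLASS__, 'do_nothing' ) );
    }
    public static function do_nothing() {}
}
My_Plugin::init();
```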

The key is __CLASS__ instead of self on the add_action line. When hooking static methods, add_action and add_filter don’t like self. If your class is not static and is therefore instantiated, you can use add_action( 'hook', array( $this, 'function' ) ) with no problem. Seems odd, but that’s how it is.