Author: federico_jacobi

How to Write a Game in Under 13 Kb While Taking Care of a Baby – REMIX

If you’d rather skip the “intro” and go directly to the nuts and bolts, that’s ok.

First of all, yes, this title is 100% stolen from this awesome post by Jaime Gonzalez. I read his post a couple of times in total awe of his 13 kbyte creation, and it inspired me to join the JS13K game jam. My intention isn’t to suck the SEO juice from his post but, even though I don’t know Jaime, I could feel his pain … I am also a dad, to an 8-month-old little guy that means the world to me. Taking care of a baby is fascinating, challenging, and above all, exhausting! The one bit of energy left after he goes to sleep must be used for work … and then I had to squeeze out a last tiny bit for the jam!

Needless to say, it was tough, but I made it! So, thank you for inspiring me, dear stranger of the net.

I had already participated in another JS gamejam (here’s my entry), but I thought the “open ended” nature (any engine, any size, any team size, any anything) of a traditional jam wasn’t really my thing. As a lone ranger, it is rather difficult to produce quality and polish in JS compared to teams using Unity, for example. Don’t get me wrong, there are some awesome games produced by a single developer, but I’d much rather take on the heavy restriction of the 13 kb game (that’s what js13k is about, by the way).

On to my game: Angry Temujin.

Genghis Khan

See it live here: https://js13kgames.com/entries/angry-temujin

The theme for this year’s jam was the thirteenth century, which immediately brought me to Genghis Khan. I’ve always been in awe of this man. The Mongol Empire was one of the biggest empires the world has ever seen, if not the biggest. Interestingly, but not surprisingly, his tactics were brutal but effective. In the siege of Nishapur the whole city was destroyed, its citizens murdered; even cats and dogs were killed so there would be no trace of it.

Not a nice guy. Another interesting tactic was to just marry (it is more complicated than that). And there was a whole lot of that: https://www.iflscience.com/fact-check-are-one-in-200-people-descended-from-genghis-khan-65357.

So, yeah, this human was special. But he started alone. He was born Temujin (Genghis Khan is a title, not a name) and, in fact, when his father died from poisoning, his clan left him and his family out for the wolves. Needless to say, he made a comeback.

I wanted to capture that idea … start as one and build an army as you go. At the same time, I was totally and utterly captivated by the awesome simplicity of Vampire Survivors. What if I could create a weapon system like VS, but one that, instead of killing enemies, transforms them into troops?

I went along with that.

The rules of engagement

  • I wanted to do this using ECS (a new thing for me).
  • I wanted to use Behavior Trees for controlling friendly units.
  • I wanted to use esbuild (a new thing for me as well).
  • I did not want to do weird JS tricks, bad variable naming, unreadable code, etc … code golfing.
  • I wanted to write solid, readable code, maybe even document it.

Code Golfing

Just … no … Don’t do it.

I can see why people do this, but there is a whole lot of value in understanding what you are actually writing! Clever lines of code make for horrible debugging and completely break the flow of reading the code once you are past the “optimization”. When you are fully done with the game and are extremely close to the 13k mark, sure, try to golf a few things, but the gains are just not worth it. I’d rather you spend the time making the rest of the code better than optimizing one thing. Keep in mind, the code will be minified and mangled, so most of it will be a whole lot smaller anyway.

If you MUST use code golfing, do it … but I didn’t, and wouldn’t.

ECS

Ah! The holy grail of AAA game development. Entity Component System is an architecture where your game is split into three kinds of parts: entities (basically an id), components (the data), and systems (the logic). Systems do their work on entities by reading and writing their components; entities are just a way to relate things. Among many other benefits, this lets you code something once and apply it to many different things. For example, I have a keyboard control system, which works on entities that have the keyboard control component. In my case, the player entity has a keyboard control component and that is how it moves. But I can just as easily add it to an enemy, or a tree, or a bullet, or any entity!

But, in JavaScript? Well, yes! There are a couple of libraries around, most notably ECSY. It is fine, but the footprint is too big. There is also the excellent https://github.com/kutuluk/js13k-ecs, but I didn’t like that selection was additive only, or at least that’s what it looked like to me at the time. So I just wrote my own.

One of the reasons ECS is the holy grail for game development is performance, but I feel that this critical piece is mostly lost in JS. The magic lies in avoiding CPU cache misses, but in JS we don’t have control over memory allocation. You cannot guarantee where your data will be in memory. I guess the closest thing would be to create an array of elements and pre-fill it with data, but I hate this approach. I am not a down-to-the-bits JS guy, so this is just my opinion. Please correct me if I’m wrong.

My implementation is object-based and rather naive, but it works! Here is what an entity looks like (the method bodies were empty in the original post; I’ve filled them in for clarity, it is just a Map of components, returning this so calls can be chained):

class Entity {
  components = new Map();
  addComponent( key, component ) { this.components.set( key, component ); return this; }
  removeComponent( key ) { this.components.delete( key ); return this; }
}

In hindsight, this was not great … why should a component have a key? But we keep going. This is what a component looks like:

const BodyComponent = function( width, height ) {
	this.width = width
	this.height = height
};

export default BodyComponent;

To add a body to an entity we would:

const exampleEntity = new Entity().addComponent( 'body', new BodyComponent( 16, 16 ) );

There is an entity manager and a component manager that handle creation and querying of data. So a system would do:

class MyPhysicsSystem extends System {
  update( delta ) {
    this.componentManager.query( e => e.components.has( 'body' ) ).forEach( entity => {
      let body = entity.components.get( 'body' );
      // do something with the body component
    } );
  }
}

The whole thing works, and thanks to modern computing, it works fast, but I took a bunch of shortcuts that I’m not too proud of:

  • Entities as well as components MUST be pooled so garbage collection doesn’t kill you. I didn’t pool anything.
  • Queries are run on every iteration even if the results haven’t changed. This is highly inefficient; there should be some kind of query caching or pre-work done so we don’t have to iterate every time. That said, this was querying 75,000 entities at 60 fps. Not incredible, but more than enough for my 13k game. At most I had 30,000 entities, so it was not a problem.
  • I already said it, but adding keys to components is just silly.
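The query caching that second bullet calls for could be sketched like this (the class and its naming are mine, not the game’s actual code): remember each query’s result, and throw the cache away only when the world actually changes.

```javascript
// A sketch of query caching for a naive ECS: cache each query's result array
// and invalidate the cache when entities (or their components) change.
class CachingComponentManager {
  constructor() {
    this.entities = [];
    this.cache = new Map(); // query key -> cached array of matching entities
  }
  invalidate() {
    this.cache.clear(); // also call this from addComponent / removeComponent
  }
  add( entity ) {
    this.entities.push( entity );
    this.invalidate();
  }
  query( key, predicate ) {
    if ( ! this.cache.has( key ) ) {
      this.cache.set( key, this.entities.filter( predicate ) );
    }
    return this.cache.get( key );
  }
}
```

With this, a system calling query( 'body', e => e.components.has( 'body' ) ) every frame only pays the filter cost on frames where something was added or removed.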

It is small. It worked. That’s all I needed.

I cannot stress this enough: calling ECS the holy grail doesn’t mean there aren’t other ways … in fact, I think object-oriented would have been better in this case! But I wanted to give ECS a shot. It is a completely different mindset though. It was strange. It was complicated.

On the bright side, having systems completely isolated from everything else was really nice! Each system does one thing, and because of that, they are super simple to debug.

Behavior Trees

Making NPCs act interestingly can be done in a couple of ways. The most common approaches are finite state machines (FSMs) for simple behaviors, and behavior trees for more complex ones (read more here). There are other ways of course, but I find these two easy to understand.

At first I was going to use BehaviorTree.js, which I had used before. I had built some simple trees and was cruising through testing. When I bundled the project I realized that, while great, the library was just doing too much, and the unused exports were not being eliminated by tree shaking. In other words, the footprint was too big to fit in 13 kb. So I ended up rewriting the whole thing with a similar API. Even though I didn’t have enough time to give the units truly interesting behaviors, I am very proud of this particular btree implementation. It is short and sweet, and works like a charm. Exactly what you need for this jam. I will probably release it as a stand-alone. The whole thing can be found here.

One thing I should mention, in the spirit of self-criticism: the implementation of the Cooldown decorator is rather lazy:

export class CooldownDecorator extends Decorator {
	constructor( args ) {
		super( args );
		this.cooldown = args.cooldown;
		this.timeout = false;
	}

	step( blackboard ) {
		this.status = FAILURE;
		if ( ! this.timeout ) {
			this.timeout = setTimeout( () => {
				this.status = this.child.step( blackboard );
				this.timeout = false;
			}, this.cooldown );
		}
		return this.status;
	}
}

Notice the use of setTimeout … it is just not good. Imagine you throw a fireball and the cooldown kicks in; now you pause the game and just wait. When you resume, the cooldown will have expired! Not great. The right way is to pass in (or check) the game’s timer/delta and compare the elapsed game time against the requested cooldown.
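That delta-based version could be sketched like this (the class name, constants, and constructor shape are mine for illustration, not the shipped code): accumulate the game’s delta instead of relying on wall-clock setTimeout, so pausing the game also pauses the cooldown.

```javascript
// Status constants stand in for whatever enums the real tree uses.
const SUCCESS = 'SUCCESS', FAILURE = 'FAILURE';

class DeltaCooldownDecorator {
  constructor( child, cooldown ) {
    this.child = child;
    this.cooldown = cooldown;  // in ms of *game* time
    this.elapsed = cooldown;   // start "ready", so the first step can fire
  }
  step( blackboard, delta ) {
    this.elapsed += delta;     // delta only advances while the game runs
    if ( this.elapsed < this.cooldown ) return FAILURE;
    this.elapsed = 0;
    return this.child.step( blackboard, delta );
  }
}
```

Because the decorator only ever sees game-time deltas, a paused game simply stops feeding it time and the cooldown freezes along with everything else.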

The build tool

This was one of the simplest parts of the project, but also one I truly enjoyed making.

I wanted to give esbuild a try. If you have ever been annoyed by slow build times with webpack or other bundlers … you’ll be in for a treat with esbuild. It is SO FAST!

The process is actually quite simple: esbuild bundles the code and assets, and the result is pretty compact already, but I ran it through UglifyJS and was able to squeeze out a few more bytes. Then come the big guns: roadroller. That thing is pure compression magic.
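As a rough sketch, the pipeline looks something like this (file names and flags here are assumptions, not my actual build script):

```shell
# 1. bundle and minify with esbuild
npx esbuild src/game.js --bundle --minify --outfile=dist/bundle.js
# 2. squeeze out a few more bytes with UglifyJS
npx uglifyjs dist/bundle.js --compress --mangle --output dist/bundle.min.js
# 3. the big guns: roadroller's compression
npx roadroller dist/bundle.min.js -o dist/index.js
# 4. zip it and check against the 13 kb limit
zip -9 game.zip dist/index.js index.html
```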

With this setup, my zip ended up being 10k (more on that later).

What went wrong?

Even though I am pretty happy with the result, and I surely learned a ton, there is a lot of room for improvement:

  • Time … this was easily the hardest part for me. I ran out of time and didn’t complete everything I wanted. Part of the problem was the custom engine. This is a common pitfall if you are a coder; I knew it and still fell for it. The game is what matters, not the engine behind it.
  • I mentioned it before, but the ECS could be better: more optimized, fewer shortcuts.
  • I wanted to create more complex behaviors for npcs … In fact, I wanted bosses and scripted enemy generation.
  • I created a very simple Tween component, but I couldn’t make it work to give the player easeIn and easeOut on the movement.
  • Sound effects and music are two big misses here. I wanted to add a synthesized mongolian drum with some patterns using zzfx and zzfxm. But I ran out of time and space.
  • When I was “done” with the code, I went for the zip, and no matter what I did, I couldn’t go below 15 kb … not good enough. After spending 2 days trying to figure this out, I realized I wasn’t using the right PNG for the sprites. I had 2 versions: an optimized one, and a “dev” one where I was adjusting colors, positions, etc. As soon as I chose the right image, the bundled zip dropped to 10 kb. Furthermore, after the jam ended, I discovered TinyPNG, which cut the image size by 50% and would have made the zip 8.5 kb … sigh …
  • Last but not least, I would have liked this to run on WebGL. Currently it is only a canvas renderer. I’m pretty sure that if I were to upgrade the ECS architecture with the fixes I mentioned above, the bottleneck would then be the slowness of the canvas element.

Needless to say, now that I know I had another 5 kb available … so much more could have been done.

On the bright side, a decent game with good code and non-cryptic variables can work under 13k.

  • 14 systems
  • 20 components
  • 25k or so entities
  • 60 fps
  • Behavior Trees

I’m happy. Could be better. But I’m happy.

Work From Home – A Guide for the uninitiated

While many people think working from home is the best thing in the world, the recent Coronavirus apocalypse will teach them otherwise. WFH has many benefits but it also has plenty of downsides that may not seem apparent at first. Here is a quick guide on how to do it right.

A couple of things to keep in mind:

  1. I have been working from home EXCLUSIVELY for the last 5-6 years.
  2. This is not some bullshit guide created by your HR department to keep you “happy”. This is straight from the trenches.
  3. This is not some BS slideshow created to make money via page views.
  4. Follow at your own risk! We are all different, with different needs and different space to work with. Though some of these suggestions won’t apply to you, most will … you just don’t know it yet (maybe).

With that in mind, let’s go!

The hardest parts of working from home are distractions and the social disconnect. The former you will feel on day 1, but once you get used to the rules it is no longer a problem. Depending on the type of person you are, the latter will kick in a couple of days or weeks or months later, and it only gets worse over time. We are social animals; we need other people. Personally, I can do without people just fine, but being around them makes my day better … I don’t have to interact with them, but the feeling of not being alone is very important.

Get business ready

You wouldn’t usually go to the office in the same attire you woke up in, would you? Then don’t! Before you jump on your laptop, make sure you get changed. Take a shower, wash your hands and face (minimum 20 secs according to the CDC), brush your teeth … get ready for work! You don’t have to wear a suit or go all out. Just change your clothes and your energy level. You are going to work, so energize yourself!

Personally, my “uniform” is a t-shirt and shorts, but I MUST wear socks and shoes, otherwise my efficiency drops to the floor and I just don’t feel like working. If a suit gets you going, wear it! Everybody is different … just don’t roll out of bed and start half-assing your job. You’ll think it works, but your manager will very quickly notice you are slacking.

You are at work! Act like it

Because you are at home, you will be tempted to lollygag and bs around your place. You are at work! Just because you now have the opportunity to do laundry and dishes and cleaning and polishing and organizing doesn’t mean you have the time to do so. Opportunity and time are not the same thing. Yes, cleaning is important and so is laundry, but do not be tempted. During work hours, work is the priority! If you want to do some chores, that’s ok, but make sure you time them appropriately.

If you start the laundry, then fold the clothes that were already there, then notice the dirty floor in the kitchen, then talk on the phone for 15 minutes, then the laundry is ready, then you load the next batch, etc, etc … soon enough, when you are “ready” to start, it is 2:30 in the afternoon … Good job! (sarcasm for: you wasted the whole day)

Don’t work from your couch

The first day you’ll work from any place. After 3 days of not caring where you sit, your back will remind you there is a reason office chairs were invented. Your kitchen counter or your couch make for terrible offices. Use a table, with a regular chair. Yes, they will suck too, but they are way better than your bed/couch … your back and neck will thank you greatly.

Very carefully take breaks

Just like in the office, you must take breaks. Every so often get up and take a walk, look outside, breathe! At home your coffee machine is exactly-ish 3 steps away from your office space, and unlike the office, there is a zero percent chance of running into one of your co-workers. As annoying as they may be, they provide a 5 minute break, whether you like it or not. At home this is never the case, and while great at first, this allows your stressed-self to be even more stressed.

Similar to “You are at work” do not be tempted to use your break to start doing chores, they will distract you and wreak havoc on your productivity.

Work from somewhere else

Obviously, this doesn’t apply in the Corona apocalypse, but in “normal” times, try to work from a coffee shop or bookstore. Maybe not all day, but just the morning or afternoon. While this is a recipe for distractions, it can also be a great relief. Being home all day can be a pain (not at first, but trust me, it can!).

Set business hours

You will find this in every single list of WFH etiquette. I realize that many of you work considerably longer hours than the now-extinct eight-hour shift, but if you don’t keep an eye on this, where you usually worked 2 extra hours a day, at home you will do 4 or 6 more! You will be exhausted after a couple of days and will hate your home.

“Don’t bring work home” is a common one-liner, but when work is home and home is work, the one-liner doesn’t apply and is stupid. Do everything you can to be done by 5 o’clock (or whatever time you choose). If you go a little over, that’s ok. If you overdo it, hate is sure to follow.

Use a to-do list

If distractions are a real problem for you, start the day with a to-do list. Even better, prepare the to-do list the day before! It is easier to focus and block external agents when you know what you need to do for the day. You can probably finish those things earlier that way, and then jump into those chores that you’ve been itching to do for days.

WFH = W

You shouldn’t see working from home as doing less, you are still working! You are doing the same things (maybe in a different way). It is easy to get that feeling of worthlessness when you are home because it doesn’t feel so busy. Do not fall for this trap. The clock doesn’t run any faster or slower, time your meetings, don’t take more meetings or tasks than you usually would. You still need time to do whatever tasks you have to do. If you feel that you can do more, reach out to your supervisor/manager/boss/partner but do so only after you know for a fact you have time.

Don’t work and hangout in the same area

Depending on how big your home is, try to separate the office space from the living space. If you have very little space, just organize your stuff and pick it up when you are done! If you work and hang out in the same area, you are violating the “You are at work” rule. Also, it will feel like you are always working and that you live in the office. By the same token, the “office” will never feel quite serious because your home life is always in the middle of work.

Virtual meeting manners

Ah! Virtual meetings, the bread and butter of WFH. If not handled properly, they can be even more useless than your usual useless office meeting. Here are some guidelines:

  • Don’t talk over other people: in a regular meeting this is just rude. In a virtual meeting it is catastrophic because all the sound comes from the same place (the speakers) and when 2 (or more) people speak at the same time nobody can hear anybody, yourself included.
  • Mute your microphone: if you are not speaking, just mute yourself. The noise from your house is as bad as the point above, and it is really annoying. Nobody cares what your sink sounds like, or your phone, or spouse, or the street, or your neighbor.
  • Be on time: seems obvious, but unlike regular useless office meetings, with an online meeting the first thought in the attendee’s head is “Am I in the right place?”, followed by “Was this canceled?” and concluded by “I’m outta here!”.
  • Be tech-prepared for the meeting: adjust your mic volume, know where the mute button is (see above), test your video, know where the chat box is, etc. Some meeting services provide a phone number you can use for audio, and in MANY cases it is better than computer audio.

Turn the camera on

Like I said before, we are social beings. Seeing each other is great! I know you don’t want people to see you, but there is something special about seeing people. It connects us. Also, when you have your camera on, be professional (see “Get business ready”, “You are at work!” and “Don’t work from your couch”).

Cook your meals

This is a great way to take a break, and it is far healthier than eating all kinds of crap (you know what I’m talking about) … “But I don’t have time”, yeah you do, you are home! This is one of the benefits of WFH.

Workout!

One of the really awful things about working from home is the lack of movement! It is awesome not having to commute, but taking less than 800 steps in a day will make you feel completely useless, like trash. It is imperative that you work out, or take a walk, or jog, or run. The human body is designed to move … so move!

Bonus: home gives you access to all kinds of food, sweets, and treats! You will be hungrier than usual and the cravings will be all over the place. Be careful or gain 200 pounds, your call. This is also a big reason why “Workout!” is important.

Good luck!

How to optimize video ad timeouts

TL;DR

I’m going to assume you use DFP because it is by far the most common ad server, but the same concepts apply to all of them. Additionally, I’m assuming you are using the IMA SDK, which, with a few exceptions, is the de facto way of doing video ads.

If your site uses videos but you are not monetizing them, you are missing out on a huge opportunity. Video advertising is more profitable than display ads (read banners) and modern video players offer advertising as part of their features (most of them through their paid version). If you want a free version that kicks ass you are welcome to try this beauty [Video Block with Ads] (yes, I wrote it #shamelessplug).

Once you start using ads, everything feels great and your ads are firing just dandy. But there is a dirty monster lurking behind the scenes: VAST errors (*).

Most of the time, when there is a VAST error, you lost money. Plain and simple. The same applies to banner ads; they just happen to load far more consistently. Not all errors can be fixed or are timeout related but, assuming your setup is correct, a lot of them will be (the 301 and 402 timeouts). The issue is that the default timeouts are just not that great. Adjusting your timeouts to address these errors could mean a 10% to 20% revenue improvement!

All (well, most) players have these timeouts available, though they may be named slightly differently:

  • VAST Load Timeout (usual default is 5 secs): how long to wait for the ad server to respond to a request after it has been made. This timeout is linked to the IMA SDK (google.ima.AdsRequest).
  • Load Video Timeout (usual default is 8 secs): after receiving the response above, how long the player waits for the video ad to start playing. Video ads are regular videos just like yours; they just happen to have a link. Therefore, the same old-school rules apply, like “is this ad too heavy?”. This timeout is particularly important. It is linked directly into the IMA SDK (google.ima.AdsRenderingSettings).
  • Max Timeout (defaults are all over the place here): This is the time the player allows for the steps above combined and it is the ultimate cutoff. Think of it this way: once this timeout has passed, the advertising mechanics will be skipped altogether and your video will play. Keep in mind, that at this point the ad request has gone through but the ad will not play.
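The first two timeouts map directly onto IMA SDK objects. The function wrapper below is mine, for illustration (the SDK namespace is passed in only to keep the sketch self-contained); the vastLoadTimeout and loadVideoTimeout properties and their millisecond units come from the IMA HTML5 SDK itself.

```javascript
// A sketch of wiring the two SDK-level timeouts; `ima` is the google.ima
// namespace in a real page, and all values are in milliseconds.
function configureAdTimeouts( ima, vastLoadMs, loadVideoMs ) {
  const adsRequest = new ima.AdsRequest();
  adsRequest.vastLoadTimeout = vastLoadMs;           // "VAST Load Timeout"

  const renderingSettings = new ima.AdsRenderingSettings();
  renderingSettings.loadVideoTimeout = loadVideoMs;  // "Load Video Timeout"

  return { adsRequest, renderingSettings };
}

// usage sketch: configureAdTimeouts( google.ima, 5000, 22000 );
```

The Max Timeout, on the other hand, lives in your player’s configuration, not in the SDK, so where you set it depends on the player you use.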

The key with timeouts is balance: you can wait forever on the ads, sacrificing UX to maximize impressions and minimize errors, OR you can wait very little, maximizing your “video plays” at the cost of impressions. Additionally, consider your visitors: if they are mostly on mobile devices, timeouts should be higher than for desktop machines to maximize impressions, but mobile users are also less patient.

With that in mind, the first thing you need to ask yourself (and/or your team) is “What is more important, revenue or UX?” … Inevitably, the answer is, of course, both. That is not a good answer. Even though you want both, one is always more important than the other. If you really can’t make up your mind, the answer is revenue. The issue being nobody wants to say “the hell with UX, we want the money”.

How to

  1. Establish an acceptable error rate (errors / page views %). Be realistic! It is impossible to have a 0% error rate. But what is normal, you ask? Good luck with that too. DFP will tell you one thing, a seasoned adops person will tell you another, and some article online will tell you something else. I would say it depends on your setup, your visitors’ tech, and your overall goal. Around 20% is decent but can be improved. More than 30%, something is up. Less than 10% … teach me, sensei.
  2. Find a good sample size (how to): how many page views are enough to conduct experiments. Another way to put this is: how long will the experiment run? Very high traffic sites can get good numbers after only a couple of hours; lower traffic sites may want to run for a couple of days. Too short/few = bad conclusions; too long/many = wasted time.
  3. Establish a ceiling. Set all the timeouts to something ridiculously high: 30 / 30 / 60 (VAST load timeout / loadVideo timeout / max timeout); note the max (60) is the sum of the other two, the same way 20 / 20 would give a max of 40. Run this for the designated time (#2). This step will be AWFUL for UX in some cases, because no video may play for up to 60 seconds, but it will show you the very best you can possibly get impression-wise.
  4. Establish a floor. This is the opposite of the above. Set the timeouts to something extremely low: 5/5/5. Only people with amazing devices and great internet speed will get ads. Your error rate will shoot way up, and so will your plays. In an ideal world ads would be instant, which is what we strive for. Again, run this for the same amount of time or page views as #3.
  5. Iterate. The ceiling (#3) will show you how bad or good the defaults are. Now move to a middle ground between ceiling and floor (15/15/15) and see how it compares to your acceptable error rate (#1). Run for the designated time (#2), remeasure, and readjust until you are happy. Yes, it is slow and stupid, but it works.

It is imperative you compare using percentages (See #1). Absolute numbers are deceiving and will make you think you are doing great when you aren’t.
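To make the point concrete, here is a trivial helper (names are mine) showing why rates, not absolute counts, are what you compare:

```javascript
// Error rate as a percentage of page views; guard against division by zero.
function errorRate( errors, pageviews ) {
  return pageviews ? ( errors / pageviews ) * 100 : 0;
}

// A run with MORE absolute errors can still be the better setup:
// errorRate( 1800, 10000 ) is 18%, while errorRate( 2600, 16000 ) is
// 16.25%, so the second run, despite 800 extra errors, performed better.
```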

Tip: On step #5, adjust the timeouts drastically. Tiny changes will only confuse you because they don’t affect the percentages enough. Let’s say you tried 15/15/15 and your error rate ended up being 18%; then you tried 13/13/13 and the error rate is 17.9% or 18.1%. Is it really better or worse? Such a small change could be a consequence of traffic variance, or video inventory, or god knows what … so many things. But if the second time around you try 8/8/8 and the error rate is 22%, you know you’ve gone down too much; if it is 17.5%, you should probably keep the lower timeout. I would be happy at that point, but it all comes down to the acceptable number you figured out in step #1.

Keep in mind, crazy high timeouts don’t mean that everyone will wait a crazy long time! If the timeouts are 30/30/30 but your connection is good, you will only wait a couple of seconds for the ad. It is only the edge cases, the ones with crappy everything, that you are trying to account for. The lower end, however, will affect everybody! Make sure you don’t go too low; you are just trying to establish the best you can do.

Last but not least: in my experience, the VAST Load Timeout (the first one in the post) is hardly ever a problem. Because this one is about your ad server, it is their business to send a response as fast as possible, and they all do; DFP usually responds in milliseconds. Keep this timeout low; 5 seconds is more than enough. The third timeout (Max Timeout) only needs to be about as long as the second (loadVideo timeout), because the first timeout is rarely hit and the second one is long enough. This works nicely for me: 5 / 22 / 20, but all sites are different.

What did you end up doing?

Adding pageviews to WordPress stats on galleries

Jetpack is awesome. Among many features, by default, it runs WordPress Stats. While it is no Google Analytics by any means, it does give you a nice view of how your site is doing, with a very small footprint. The event code is just added to your site’s footer and voilà! It just works!

The problem is that your site’s footer doesn’t know anything. Slideshows, galleries, compilations, and one-page apps are good examples of cases where the content of the page changes via JavaScript with a click, or perhaps on a timer. As far as I know, WP Stats doesn’t have a solution for this issue. I, however, have a hack ready to go that works wonders.

NOTE: This code will not work on your site as is! but it will hopefully give you the general idea on how to do it.

Let’s make 2 assumptions: one, you have jQuery available (not necessary), and two, the page we have is a picture gallery and fires an event “pageChange” every time NEXT or PREV are clicked.

The part that does the pageview of course is:
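The snippet itself did not survive in this copy of the post, so here is a hedged sketch of the idea: Jetpack’s stats script keeps a global queue, _stq, and pushing a “view” entry onto it records another pageview. The blog id, function name, and event wiring below are placeholders, not values from any real site.

```javascript
// Record an extra "virtual" pageview via Jetpack's _stq queue.
function recordVirtualPageview( blogId, postId ) {
  window._stq = window._stq || [];
  window._stq.push( [ 'view', { v: 'ext', blog: String( blogId ), post: String( postId ) } ] );
}

// Wiring it to the gallery event from the assumptions above:
// jQuery( document ).on( 'pageChange', function( e, data ) {
//   recordVirtualPageview( 12345, data.postId );
// } );
```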

This code was added on July 24th. Notice the traffic increase after July 25th.

One of the sites I handle has a lot of ajax type slideshows. Traffic itself didn’t increase, it just wasn’t being recorded on wpstats (see chart above)!

Boom! Easy! The same concept applies to one-page applications. Make sure to pass the right URLs and post IDs to WordPress Stats.

Run javascript that depends on multiple remote assets

Async JS is the way of the present (and future). We MUST load all of our scripts async. They are getting heavy and slow, and the browser needs help, especially on mobile devices. Why wouldn’t you want your page to load faster (see this post)?

As web pages grow more and more complex, the idea of a single script that does everything is Jurassic. But what if we want to run code that depends on multiple scripts at the same time? If your app/site is complex you should look into requireJS. If you need something less daunting … I got you (with jQuery):
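The embedded snippet didn’t survive in this copy; its shape was something like the following. jQuery’s $.when joins several async loads and fires .done() once every one of them has finished. The URLs are placeholders, and the jQuery object is passed in as a parameter only to keep the sketch self-contained.

```javascript
// Run onReady only after BOTH scripts have loaded, using $.when to join
// the deferreds that $.getScript returns.
function runWhenScriptsLoaded( $, onReady ) {
  $.when(
    $.getScript( 'https://example.com/libA.js' ),
    $.getScript( 'https://example.com/libB.js' )
  ).done( onReady );
}
```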

Neat right? … Let’s kick it up a notch. Say we need the script AND some external data file:
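Again reconstructing the lost snippet: the same join, but waiting on the script AND a JSON file. One thing to know about $.when with multiple deferreds is that .done() receives one argument per deferred; for $.getJSON that argument is the [ data, statusText, jqXHR ] triple. File names are placeholders.

```javascript
// Run onReady with the parsed JSON once both the script and the data arrive.
function runWithScriptAndData( $, onReady ) {
  $.when(
    $.getScript( 'https://example.com/myscript.js' ),
    $.getJSON( 'https://example.com/myData.json' )
  ).done( function( scriptArgs, jsonArgs ) {
    onReady( jsonArgs[ 0 ] ); // jsonArgs is [ data, statusText, jqXHR ]
  } );
}
```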

And now the cherry on top … we want to load myData.json or yourData.json or just nothing conditionally:
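The cherry-on-top snippet is also missing here; the trick it showed is the classic one of building the dependency list conditionally and then joining whatever ended up in it with $.when.apply. Again, names and URLs are placeholders.

```javascript
// Always load the script; conditionally add one of two data files (or none),
// then wait on everything that was actually requested.
function runConditional( $, which, onReady ) {
  const deps = [ $.getScript( 'https://example.com/myscript.js' ) ];
  if ( which === 'mine' ) {
    deps.push( $.getJSON( 'https://example.com/myData.json' ) );
  } else if ( which === 'yours' ) {
    deps.push( $.getJSON( 'https://example.com/yourData.json' ) );
  } // else: just the script, nothing extra
  $.when.apply( $, deps ).done( onReady );
}
```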


I’m sure you can do something similar on vanilla JS … but if you need something like this, chances are you are running jQuery.

How to make SYNC javascript assets work ASYNC

Speed is everything. It always is … there is no such thing as “the page loaded too fast”. To make matters worse, today we have Google PageSpeed Insights to make our lives miserable (a whole different topic). It usually recommends loading your JS asynchronously (async) or in the footer.

Why?

JavaScript is render blocking, which means the browser can’t work on displaying the page while a script is loading and running. So, consider this:
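The example markup didn’t survive in this copy; it looked something like this (file and element names are placeholders):

```html
<!-- myscript.js defines changeText(); while it loads, rendering is blocked -->
<script src="myscript.js"></script>
<div id="message">Hello</div>
<script>
  changeText(); /* safe here: myscript.js has already run */
</script>
```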

If myscript.js takes 1 minute to load, the whole page will be delayed 1 minute, no matter how simple the page is. Not cool. To fix this, we just add async to the script tag. This makes the browser start loading the asset (myscript.js) but continue parsing HTML. The problem now is that, because the asset is still loading and will arrive in 1 minute, the function changeText() is not available by the time the browser gets to it. Your code will not run, but the page has loaded, which is good news.


How?

You can implement a queue or a callback*. Both ways work just as nicely, but they have different use cases. (* you can use Promises but that’s kind of a callback, just real fancy)

Queues work better if you need to do many calls to the same script in different parts of the page but all depend on the previous call one way or another. Here you go (Notice the change on the js file):
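The queue snippet is gone from this copy, but the pattern looks something like the following (names are placeholders, and globalThis stands in for window so the sketch also runs outside a browser):

```javascript
// Page side: an inline script, placed AFTER the async tag and the <div>,
// queues work for the not-yet-loaded file:
//   <script async src="myscript.js"></script>
//   <div id="message">Hello</div>
globalThis.myQueue = globalThis.myQueue || [];
myQueue.push( function() { /* e.g. changeText(); */ } );

// The "tiny change" at the end of myscript.js: run everything queued so far,
// then replace push so anything queued later runs immediately.
globalThis.myQueue = globalThis.myQueue || [];
myQueue.forEach( function( fn ) { fn(); } );
myQueue.push = function( fn ) { fn(); };
```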

Tiny change … works like a charm … once myscript.js loads 1 minute later, it’ll execute all the functions pushed to the queue.

Potential pitfall 
Let’s say that instead of loading slow, the script loads super fast. If you are working on the DOM, make sure the elements you want are in place by the time the queue executes. In the example, this is mitigated by pushing to the queue after the dom element ( <div> ).

Implementing a callback is a bit easier, and works better if you only need to run code once. Notice that no changes are needed in myscript.js from the original version.
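The simplest form is the script tag’s own onload attribute (same assumed names as above):

```html
<script async src="myscript.js" onload="changeText()"></script>
```

onload fires once, after the file has been downloaded and executed, so changeText() is guaranteed to exist by then.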

You can make a hybrid of these two approaches where the callback is the code that processes the queue … but it feels a bit hacky to me. I’m sure there is some good use case where this would be desirable. The same pitfall as above applies, but as long as you load the script after the DOM element you’ll be ok.

As always, there is a jQuery way to do this. It is in fact quite nice; what I don’t like about it is having to load jQuery synchronously. That being said, you can probably turn all your jQuery into a queue and now “we all happy“.

 

When caching is not enough: “Double Buffered” Remote Calls

One of the challenges of running WordPress at scale is dealing with API calls to (insert_external_service_here). Using wp_remote_get (or curl) is probably your go-to method for API calls, and it is a fine function for a low-traffic site. On a site that gets millions of pageviews, it is just not going to cut it. You will inevitably run into race conditions.

In case you don’t know, the race condition here goes like this: person1 is waiting on the server to finish the API call, then person2 triggers another call, then person3 … then personX, while person1 is still waiting. (Strictly speaking this is a cache stampede, also known as the thundering-herd problem.) If the API server is being slow, there could be a queue of thousands waiting, and at that point your server has crashed for sure.

Another reason for not using wp_remote_get on every request is API limiting. Some services do not allow more than X calls per second/minute/day. If you make a call for every visit, you will surely reach that limit extremely fast!

Simple Solution: Caching.

By caching your call the “traditional” way, you’ve come a long way from where you started. The API call will only happen every 5 minutes, and people will not have to wait for the results because you have them stored already! This is just perfect for medium-traffic sites and fast-response APIs.
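A minimal sketch using a WordPress transient — the endpoint URL and cache key are made up:

```php
function get_api_data() {
	$data = get_transient( 'my_api_data' );
	if ( false === $data ) {
		$response = wp_remote_get( 'https://api.example.com/data' ); // the slow call
		if ( ! is_wp_error( $response ) ) {
			$data = wp_remote_retrieve_body( $response );
			set_transient( 'my_api_data', $data, 5 * MINUTE_IN_SECONDS ); // cache for 5 min
		}
	}
	return $data;
}
```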

The problem with this approach is that at the 5-minute mark somebody still has to wait for the API to respond. If the response is slow you can run into the race condition again, because the cache has been invalidated. It goes like so: person1 triggers the cache invalidation (past 5 minutes) and calls the API; person2 calls the API too because the cache is not valid; person3, same … person100, same. Then person1’s call finishes and the cache is set again for the next 5 minutes, so person101 gets a cached result and everyone is happy from there on — but in the meantime, persons 2-100 are all still waiting on the slow response. We have somewhat mitigated the problem, but not completely solved it. If the traffic is really high and the API is really slow, your server could still crash.

If you have that kind of traffic, you are playing with the big boys. Lazy caching is not going to be enough.

Complex Solution: Double Caching

Instead of just caching the result, you can double cache it. To do so, we do the same thing as above, but twice, in a way where cache1 lives for 5 minutes and cache2 lives for 10 minutes. When cache1 invalidates, that one visitor makes the API call and sets a switch so everyone else uses cache2. Now only person1 is slow.

This is pretty great as is, but we can do better. Let’s say the API starts to malfunction for whatever reason. In that scenario, you’ll only have good data for the 5 minutes until the next call stores the bad response … not so awesome. Add a data consistency check (which you SHOULD have anyway), and now we are in business:
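A sketch of the idea — the cache key names, the 30-second lock, and is_valid_payload() are all assumptions:

```php
function get_api_data_double_cached() {
	$data = get_transient( 'api_data_primary' ); // 5-minute cache
	if ( false !== $data ) {
		return $data;
	}

	// Primary expired: let ONE request refresh it, everyone else uses the backup.
	if ( false === get_transient( 'api_data_refreshing' ) ) {
		set_transient( 'api_data_refreshing', 1, 30 ); // the "switch"
		$response = wp_remote_get( 'https://api.example.com/data' );
		$body     = is_wp_error( $response ) ? false : wp_remote_retrieve_body( $response );

		if ( $body && is_valid_payload( $body ) ) { // the consistency check
			set_transient( 'api_data_primary', $body, 5 * MINUTE_IN_SECONDS );
			set_transient( 'api_data_backup',  $body, 10 * MINUTE_IN_SECONDS );
			return $body;
		}
		// API failed or returned garbage: fall through to the backup.
	}

	return get_transient( 'api_data_backup' ); // 10-minute cache
}
```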

The only caveat with that approach is if your backup cache invalidates too. A neat alternative: save the backup in an option instead:
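Same sketch as before, but with the backup stored in an option, which never expires on its own:

```php
if ( $body && is_valid_payload( $body ) ) {
	set_transient( 'api_data_primary', $body, 5 * MINUTE_IN_SECONDS );
	update_option( 'api_data_backup', $body ); // survives cache eviction and API outages
	return $body;
}
// ...
return get_option( 'api_data_backup' );
```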

“Double Buffer”

Boom! Now even if the API fails, or nobody accesses the page and the cache dies, your site still has content. All you have to do now is find a way to alert yourself of the problem.

Overkill? yes … Works? … Really well!

Note: WordPress options have a limited size; if the data you are storing is too big, you may want to consider WP Large Options.

To $(document).ready() or not to $(document).ready() ? that is the question.

TLDR;

If you place your JS/jQuery below the elements, you don’t have to use $(document).ready.

 

Wrapping all your JavaScript in a $(document).ready() is what all the cool kids are doing, and it is safe to do so. However, it doesn’t come without drawbacks. Also, there are multiple considerations when it comes to loading jQuery itself, but that’s a different conversation.

jQuery SHOULD be loaded in the footer of your <html>. At that point, wrapping or not doesn’t really matter. The magic lies in the fact that by the time your JS runs, the elements are already on the page (aka above the script you are running).
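Something like this — the #menu element and the class name are placeholders:

```html
<div id="menu">…</div>
<!-- the rest of the page … -->
<script src="jquery.js"></script>
<script>
  // no wrapper needed: #menu was parsed long before this script runs
  $('#menu').addClass('ready-to-go');
</script>
```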

 

But let’s say you are forced to load jQuery in the <head> AND you are also forced (or want) to place your JS in the <head> as well … now you HAVE to use $(document).ready( function() {} ); because by the time the JS runs, the elements to be selected haven’t been seen by the browser yet.
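In that case it looks like this (same placeholder element):

```html
<head>
  <script src="jquery.js"></script>
  <script>
    $(document).ready(function () {
      // deferred until the whole DOM has been parsed
      $('#menu').addClass('ready-to-go');
    });
  </script>
</head>
```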

 

From https://learn.jquery.com/using-jquery-core/document-ready/

Code included inside $( document ).ready() will only run once the page Document Object Model (DOM) is ready for JavaScript code to execute. Code included inside $( window ).load(function() { ... }) will run once the entire page (images or iframes), not just the DOM, is ready.

Here’s where the issue lies. When you use $(document).ready() you still have to wait for the entire DOM to be parsed — and, because scripts are render-blocking, for every synchronous script to run first. So, if you want your JS to do some kind of effect, like a sticky sidebar, or transitions, or anything at all, it will not happen instantly. Sometimes this isn’t an issue, but the more markup and external scripts your page has, the longer it takes for your JS to run. This can be unacceptable: most of the time it causes your effect to start working at weird times, making the site … well … act weird.

The solution: instead of wrapping in .ready(), place the <script> just below the elements you want it to work on. Again, by the time your script runs, the elements have already been “seen” by the browser, and the effects will kick in even though the page hasn’t completely loaded yet. No need to wait.
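For example, with a hypothetical sticky-sidebar plugin (the plugin name is made up):

```html
<div id="sidebar">…</div>
<script>
  // runs immediately; #sidebar is already in the DOM, no .ready() needed
  $('#sidebar').stickySidebar();
</script>
```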

 

Foundation 6 + Block Grid + WordPress Gallery

If you are using Foundation, I’m sure you love block grids, but WordPress galleries output their own HTML. No worries: throw this snippet in your functions.php (or wherever it belongs in your theme’s structure) and the markup will change to a block grid instead. This works with Foundation 6 … for Foundation 5 you’ll need to change the output to UL and LIs instead of row/column DIVs.
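A sketch of such a filter using WordPress’s post_gallery hook — untested; the image size and column handling are assumptions, so adjust to your theme:

```php
add_filter( 'post_gallery', 'my_blockgrid_gallery', 10, 2 );
function my_blockgrid_gallery( $output, $attr ) {
	$ids = isset( $attr['ids'] ) ? explode( ',', $attr['ids'] ) : array();
	if ( empty( $ids ) ) {
		return $output; // let WordPress build the default markup
	}
	$columns = isset( $attr['columns'] ) ? intval( $attr['columns'] ) : 3;

	// Foundation 6 block grid: row/column divs
	$html = '<div class="row small-up-2 medium-up-' . $columns . '">';
	foreach ( $ids as $id ) {
		$html .= '<div class="column">' . wp_get_attachment_image( $id, 'medium' ) . '</div>';
	}
	return $html . '</div>';
}
```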

Easy. You are welcome!