Category Archives: Browser Gotchas

IE doesn’t have an emulation feature

“I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.”

Document Modes in Internet Explorer are probably one of the most misunderstood features of the browser itself. The confusion doesn’t lie exclusively with the end user, either; Microsoft has shipped a Document Mode switcher in the F12 Developer Tools under an Emulation tab for a while now. As a result, it’s not uncommon for people to think that Internet Explorer 11 can emulate Internet Explorer 8 for the purposes of testing simply by switching the Document Mode.

“I have a very bad feeling about this.”

So what the heck is this feature? Why does it exist, and what should we call it if not “emulation”? I recently discussed this on Twitter with a friend, and my frustration over the matter drove me to really sit and think about a proper way to illustrate the purpose of this feature. After some thought, I arrived at a comparison with an A99 Aquata Breather. You know what that is, right? No? Let me explain.

The A99 Aquata Breather (seen in Star Wars, episodes I and III) allowed Jedi to survive for up to 2 hours in an otherwise hostile environment. For instance, on their long swim to the underwater city on Naboo, Qui-Gon Jinn and Obi-Wan Kenobi used Aquata Breathers to stay alive long enough to enter the Hydrostatic Bubbles of Otoh Gunga (I really hope this is making sense…).

Let’s bring this back to Internet Explorer and Document Modes. Just like Qui-Gon and Obi-Wan weren’t able to breathe under water (unassisted, that is), a site constructed in/for Internet Explorer 8 may not necessarily work in Internet Explorer 11 unassisted. So to ensure a more enjoyable experience, and avoid certain death, Jedi swim with an assisted-breathing device, and web developers send x-ua-compatible headers with their HTTP responses (or a meta-tag, if that’s your thing).
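
For example, to ask a newer version of Internet Explorer to render a page in the IE8 document mode, a site can send an X-UA-Compatible HTTP header, or include the equivalent meta-tag in the document head:

X-UA-Compatible: IE=EmulateIE8

<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8" />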

“That’s no moon. It’s a space station!”

So that Document Mode switcher in Internet Explorer’s F12 Developer Tools is not for seeing how well your modern site will work in a legacy browser, but rather how your legacy site will work in a modern browser.

Suppose a developer built an intranet application for her client in late 2009, when the client was running Internet Explorer 8 internally on a thousand machines. Several years later, the client calls the developer and informs her that they are planning on moving to Windows 8.1, and Internet Explorer 11. At this point, the developer can open Internet Explorer, navigate to the application, and see how well it will work for her client in a modern version of IE. Should she encounter differences in layout or scripting, she could switch the Document Mode to that of Internet Explorer 8 and see if this resolves the problem. (See also Enterprise Mode for IE)

Note the directionality — Document Modes are for making older sites work in modern browsers by creating (to some degree) environmental configurations the site relies on. This doesn’t turn the browser into an older version of itself; it merely satisfies a few expectations for the sake of the application.

“This is some rescue! You came in here, but didn’t you have a plan for getting out?”

Like the A99 Aquata Breather, a legacy document mode is something a Jedi developer should only rely on for a short period of time. The x-ua-compatible approach should be seen as a splint to keep your broken leg together until you can get some serious medical attention for it. It’s meant to keep you upright with the intent that you should soon be able to stand alone, unassisted. Don’t use a breather unless you’re swimming to a place that has air to breathe, and don’t use x-ua-compatible headers or meta-tags without intending to fix whatever breaks your site in a modern browser.

If you develop in a modern version of Internet Explorer and do need to see how your site will render in an older version of IE, your best option is to download a free virtual machine from modern.ie, get an account on browserstack.com, or do your testing the next time you visit Grandma’s for dinner.

Simple Throttle Function

When you’re handling events like window resize, keystrokes, mouse movement, and scrolling, you need to be careful how many times you call expensive functions. For instance, if you’re leveraging JavaScript to create responsive UIs in legacy browsers, you probably don’t want to run complicated DOM-altering handlers 10 times a second when far fewer runs would be sufficient. Enter throttling.

Throttling is a means by which we can limit the number of times a function can be called in a given period. For instance, we may have a method that should be called no more than 5 times per second.

function callback () {
    console.count("Throttled");
}

window.addEventListener("resize", throttle( callback, 200 ));

In the above, our throttle function takes a callback and a rate limit. It will fire the callback at most once every 200 milliseconds, or 5 times per second. So what does the actual implementation of the throttle function look like?

function throttle (callback, limit) {
    var wait = false;                        // Initially, we're not waiting
    return function () {                     // We return a throttled function
        if (!wait) {                         // If we're not waiting
            callback.apply(this, arguments); // Execute the user's function, preserving context and arguments
            wait = true;                     // Prevent future invocations
            setTimeout(function () {         // After a period of time...
                wait = false;                // ...allow invocations once again
            }, limit);
        }
    };
}

The above is fairly straightforward, but let’s discuss it briefly anyway.

The overall solution is to replace the user’s function with our own function. This allows us to add extra logic to the mix, such as how frequently the function can be invoked. This is why our throttle function returns a new function.

We manage our state with a variable called wait. When wait is true, our door is closed. When wait is false, our door is open. So by switching this value from true to false, we can allow the user’s function to be invoked. And by switching wait from false to true, we prevent any calls to the user function.

Any time we close the door, we set up a timeout that toggles wait back to false after a period of time. This is where the 200 milliseconds comes in – we’re allowing the user to determine how long our door should remain shut.

You should also use throttling for functions that get called dozens, or even hundreds, of times. It’s rare that you actually want things to be called that often. The implementation in this post is fairly simple (I was shooting for conciseness), but more complete solutions exist, such as those in Lo-Dash, Underscore, and more.
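
For example, if Lo-Dash is already part of your project, its _.throttle method can be dropped in right where our simple version was used:

window.addEventListener("resize", _.throttle(callback, 200));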

I created a fiddle with a throttled and a non-throttled callback. Tying these to the window resize event, I adjusted my window width/height briefly and let the number of calls speak for themselves:

[Image: throttled results vs. non-throttled results]

Emulation Accuracy in Internet Explorer

Early preview versions of Internet Explorer 11 lacked the Emulation features we saw in Internet Explorer 9 and 10. When Windows 8.1 shipped, Internet Explorer 11 had them once again. These tools are helpful, though they can be misleading at times.

My general workflow when using these tools goes like this: a user reports an issue in Internet Explorer x. I instruct the latest version of Internet Explorer to emulate the reported version. If I encounter the same issue, I can debug using the modern version of Internet Explorer. If emulation does not reveal the same issue, I need to load up a virtual machine, or use a service like BrowserStack.

The problem with these tools is that it’s not entirely clear where the line of reliability resides. To what degree this emulation replicates the native experience is unknown (to me, at least). Due to this, I’ve decided to do a deep dive into Emulation and see just how reliable it is, and in which areas.

Computed Styles

The first dive was into Computed Styles. Does Internet Explorer generate the same computed styles as IE10 and IE9 when it is emulating those versions? Surprisingly, yes. Granted, I’m running instances of IE10 and IE9 in a Virtual Machine (compliments of modern.ie), so that should be considered. Another important thing to note is that this pass assumes Standards Mode.
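
For the curious, the computed styles for a given element can be gathered with a small script along these lines (a sketch; the element id is hypothetical):

var subject = document.getElementById("subject");
var computed = window.getComputedStyle(subject);
var styles = {};

for (var i = 0; i < computed.length; i++) {
    var property = computed[i];                             // e.g. "background-color"
    styles[property] = computed.getPropertyValue(property); // e.g. "rgb(255, 255, 255)"
}

console.log(JSON.stringify(styles));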

The comparison tables are being maintained in a Google Docs spreadsheet. Click the preview below for the full view.

[Image: preview of the computed styles comparison spreadsheet]

Window Object

My next focus was on cycling over the enumerable properties of the window object and laying those out for comparison. A cursory glance at this next table will reveal that Internet Explorer 11 emulates the window object from IE9 and IE10 very well. Granted, there are some very clear examples of where it brought along a couple of extra methods and properties.

[Image: window object comparison table]

It’s worth noting that this particular table had to be done a couple of times. Some of these members don’t attach themselves to the window object until certain actions are performed. For instance, this table is front-loaded with a bunch of BROWSERTOOLS members that are pushed onto the window object when various portions of the developer tools are opened. Other members, such as $0, don’t exist until you perform an action like selecting an element in the DOM Explorer.
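
The enumeration itself is nothing fancy; a minimal sketch:

var members = [];

for (var member in window) {
    members.push(member);
}

console.log(members.sort().join("\n"));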

More on GIFs and Painting in Internet Explorer

About a week ago I wrote a post demonstrating the use of the UI Responsiveness functionality in Internet Explorer 11 to determine how the browser handles animated GIFs in various states. That post was fairly well received, so I wanted to expand upon it a bit and cover a few more scenarios.

For the most recent round of testing, I set up a simple interval to change the className of the body element every few seconds. This, in turn, affects the placement and layout of a single image element within the document.

(function () {

  "use strict";

  var states = ["", "opacity", "visibility", "offscreen", "perpendicular"],
      container  = document.body,
      cycles = 0,
      nextState;

  function advanceState () {
    // Advance to next array index, or return to start
    nextState = states[++cycles % states.length];
    // Indicate a new performance mark in our developer tools
    performance.mark(nextState ? "Starting " + nextState : "Restarting");
    // Update the body class to affect rendering of image
    container.className = nextState;
  }

  setInterval(advanceState, 3000);

}());
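
The stylesheet driving these states isn’t shown above, but based on the descriptions that follow it would look something like this (a reconstruction; the exact values are assumptions):

.opacity img       { opacity: 0; }
.visibility img    { visibility: hidden; }
.offscreen img     { position: absolute; top: -100%; left: -100%; }
.perpendicular img { transform: rotateY(90deg); }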

I used the performance.mark method to add my own indicators in the performance graphs to help me identify when the demo was transitioning into a new state. These performance marks are represented in Internet Explorer by small upside-down orange triangles.

[Image: UI Responsiveness timeline with performance marks]

Let’s walk through each of these triangles left to right, discussing the state they represent.

GIF Untouched

This state is represented by all activity to the left of the first performance mark. Not much needs to be said – Internet Explorer continued to paint the GIF as it sat animated on the screen.

[Image: paint activity while the GIF sits untouched]

Setting GIF Opacity

This step is represented by all activity between the first and second performance marks. In this step, the image element has its opacity property set to 0. Even though the image is no longer visible, the browser continued repainting the region occupied by the image element.

[Image: paint activity with the image’s opacity set to 0]

Setting GIF Visibility

This step is represented by all activity between the second and third performance marks. In this step, the image element has its visibility property set to hidden. The browser made one final repaint (presumably to hide the element), and no further paint events took place for the duration of this state.

Of relevance here is that the hidden attribute on the image itself has the same effect. When this attribute is present on the element, Internet Explorer will cease to repaint that element’s occupied region.
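
In markup, that’s as simple as the following (the image source is hypothetical):

<img src="spinner.gif" hidden />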

[Image: paint activity with the image’s visibility set to hidden]

Setting GIF Outside of View

This step is represented by all activity between the third and fourth performance marks. In this step, the image element is positioned absolutely at top: -100%; left: -100%. Despite the element being positioned outside of the viewport itself, the browser continued to run paint cycles.

[Image: paint activity with the image positioned offscreen]

Setting GIF Orientation Perpendicular to Viewport

This step is represented by all activity between the fourth and fifth performance marks (the fifth mark is the ‘Restarting’ mark). In this step, the image is rotated using the transform property so as to set it at a right angle to the viewport, effectively hiding its content from the viewer. This orientation did not affect the browser paint cycle, and Internet Explorer continued repainting the region occupied by the image element.

[Image: paint activity with the image rotated perpendicular to the viewport]

Conclusion

As a general rule, it appears Internet Explorer will run paint cycles for every animated GIF in the document, unless that element has its visibility property set to hidden. This is fairly reasonable, since setting visibility to hidden is the only explicit way to tell the browser not to render the element. Keep this in mind when performance is of key importance.

After running through and investigating this further, I was curious what the same test would reveal in Chrome. I was pleased to see that Chrome would cease to paint for the opacity, visibility, and offscreen configurations. No performance marks are revealed in Chrome’s developer tools, but you can identify the timer functions by the presence of a small orange mark.

[Image: the same test in Chrome’s developer tools]

Tracking GIF Repaints with UI Responsiveness

I recently came across Paul Lewis’ article Avoiding Unnecessary Paints: Animated GIF Edition on HTML5 Rocks and wanted to contribute a bit to the message. If you aren’t a regular reader of HTML5 Rocks, I would definitely encourage you to visit often.

The summary of Lewis’ post is that animated GIFs can cause repaints even when obscured by other elements. So even though you may not see the GIF, the browser may continue to repaint every frame of it.

As far as browsers go, Lewis shared with us the “Show paint rectangles” feature in Chrome. When enabled, this feature will visually outline regions that are being repainted. This obviously makes it very easy to determine whether paints are happening on obscured elements. Chrome, Safari, and Opera all repaint. Firefox does not.

In his discussion of various browsers, Lewis calls Internet Explorer 11 a “black box,” suggesting it gives “no indication whether repaints are happening or not.” As a result, no statement was made speculating whether Internet Explorer was in the camp of Chrome, Safari, and Opera, or Firefox.

I quickly did a scan of the comments to see if anybody addressed these words regarding Internet Explorer, but at the time nobody had. I saw this as a great opportunity to introduce my friends to the new UI Responsiveness tool in Internet Explorer 11.

The UI Responsiveness tool in Internet Explorer’s Developer Tools will actually give us pretty granular information about frame-rates, script timing, styling changes, rendering, and even repaints. Given the novelty of this feature in Internet Explorer, I felt it would be good to provide a quick example of how Internet Explorer 11 can indeed tell you whether obscured GIFs cause repaints or not.

A Single Visible GIF

My first experiment consisted of nothing more than a single GIF in the body of my document. By profiling the page for a few seconds I could see that there was constant repainting. This was expected, of course; after all, there’s a GIF spinning on my screen.

Below is one second of activity. As you can see, constant repainting.

[Image: one second of constant repainting]

Toggling Visibility of a Single GIF

Next up, toggle the visibility of the GIF by way of its display property. I set up an interval to flip the display every second. The results were also as expected. For roughly one second, we had constant repainting. Following this we saw a set style event, followed by a single repaint (to hide the element). At this point, all was silent for a second before the element became visible again, and thus began causing additional repaints.
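
The interval itself was nothing more than a few lines (a sketch, assuming an image with an id of spinner):

var image = document.getElementById("spinner");

setInterval(function () {
    // Flip between hidden and visible once per second
    image.style.display = image.style.display === "none" ? "" : "none";
}, 1000);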

[Image: repainting pausing while the GIF is hidden]

Routinely Obscuring a Single GIF

The last experiment was to add another element, a div with a solid background-color in this case, and obscure the animated GIF on an interval. This was done by positioning both the div and the image absolutely within the body, and giving the div a higher z-index.
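
The layout for this step amounted to something like the following (a sketch; the dimensions and color are assumptions):

img, div {
    position: absolute;
    top: 0;
    left: 0;
}

div {
    width: 300px;
    height: 300px;
    background-color: steelblue;
    z-index: 1; /* Sits above the image, obscuring it */
}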

As can be seen in the image below, repainting did not cease even when the layout was adjusted every second. This demonstrates that when obscured, animated GIFs will continue to cause repaints in Internet Explorer 11. Whether this will be the case for future versions of Internet Explorer remains to be seen.

[Image: repainting continuing while the GIF is obscured]

Conclusion

So as we can see in the examples above, Internet Explorer 11 is capable of sharing vital information about costly repaints. Additionally, IE falls into the Chrome, Safari, and Opera camp. It sure would be nice if all browsers followed Firefox’s lead here and stopped repaints on obscured GIFs, but perhaps there is a good reason they haven’t. Who knows?

I hope you can see the enormous value the UI Responsiveness panel in Internet Explorer 11’s Developer Tools brings to the debugging experience, and use it to make your projects more responsive and performant in the future. The team behind Internet Explorer really has been doing amazing work lately, and the UI Responsiveness functionality is but one example of this.

UPDATE: More on GIFs and Painting in Internet Explorer

Hacking Windows 8 Snapped Mode

I recently updated four machines of mine to Windows 8.1. Love it, not a thing I would change (talking high-level here). But one of my favorite features continues to give me a slight headache.

In Windows 8 you have the ability to “snap” a web-browser to either side of your screen. This is great if you’re trying to keep documentation up, an in-browser chat visible, or even some productivity tools that are browser-based.

Sadly, many of the sites I snap wind up getting reduced so badly that I can no longer make out what is on them – kinda like browsing the web on a small mobile device when the site you’re viewing isn’t “responsive” in a cross-browser-compatible way.

As an example, here I am trying to use Wunderlist (awesome service) on my Acer Aspire R7.

[Image: Wunderlist in Snapped Mode]

Ideally I would just contact Wunderlist and have them fix their site for IE in Snapped Mode, but that isn’t always easy. Fortunately, there happens to exist a feature in IE that we can leverage – custom accessibility stylesheets.

To find this setting, click the Gear icon (or press Alt to reveal the menu bar, then click “Tools”) > Internet Options > General [tab] > Accessibility [button].

Internet Explorer on Windows 8 uses the same settings whether you’re in the Desktop mode or the Immersive (“Metro”) mode. As such, all we need to do is create our own custom stylesheet that will be applied to every website we visit. With this, we can set the new viewport size on our own.

[Image: setting a custom stylesheet in Internet Explorer]

I should note here that after setting (or updating) your stylesheet, you will need to close and re-open Internet Explorer (That’s a feature I’d like to see changed). This was really bugging me since I would make changes, and refresh, only to see no changes at all.

In the screenshot above you can see that I tossed in a basic media query to set the viewport to 320px wide anytime the browser itself was 400px or less in size. The result is immediately seen when you re-open Internet Explorer and pull up Wunderlist.
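
Reconstructed from that description, the custom stylesheet amounts to something like this:

@media screen and (max-width: 400px) {
    @-ms-viewport { width: 320px; }
}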

[Image: Wunderlist, with our custom stylesheet in place]

The end result is a far better user experience, and something I can now use in parallel to my daily work. I hope this works for you, and you find the Snapped Mode as enjoyable as I do.

You may have to add a bit more to your stylesheet to target certain websites, etc. This basic example was meant to address one site only. A real-world application would be ever-changing to accommodate sites and services you come across in your daily routine.

Hack on, hackers.

Browsers – Broken By Design

I set out a few hours ago to write about a problem I had experienced in Safari, Chrome, Firefox, and Internet Explorer. Opera worked as I expected, which had me convinced the other vendors merely dropped the ball, and shipped partial implementations for the :visited and :link pseudo-classes.

My CSS was very simple:

:link {
    color: white;
    background: red;
}

:link::before {
    content: "Pseudo: ";
}

Colors set for unvisited links, followed by some content in the ::before pseudo-element on all unvisited links. So when I dropped in two links (one visited, one not) I expected to see the following:

[Image: the expected result, as rendered by Opera]

Instead, what I saw was a litter of inconsistent results between Chrome, Safari, Internet Explorer, Firefox, and Opera. The above image came from Opera. It performed exactly as I suspected. The others, not so much.

Chrome and Firefox gave similar results; both set “Pseudo” as a virtual first-child of both anchors (though I only asked to have it on unvisited links), and leaked the background color from :link into :visited.

Internet Explorer did better than Chrome and Firefox; it preserved the “pseudo” text, but left the background of the visited link untouched. I was still very perplexed as to why it was spilling values from :link over into :visited.

You can observe something interesting in Internet Explorer – :visited links start visually identical to :link links, but are transitioned into their :visited state immediately thereafter. Setting a transition on all links reveals the change when the document loads:

a {
    transition: background 10s;
}

You’ll see the :link links remain unchanged but :visited links slowly adopt their end style. This is not the case for Chrome, Firefox, Opera, or Safari.

Safari appeared to be the most broken of them all. It duplicated everything. Further, I attempted to set additional styles to :visited, and it completely ignored them.

Discovering History

I found it incredible that all of these browsers, with the exception of Opera, would get this so wrong. So like any good developer I took to the web to see who else might be complaining about this, which is when I came across a Stack Overflow post suggesting this was somehow related to security concerns.

This offered another search vector: security issues. That was the catalyst that opened up a fascinating story about yet another creative attempt by developers to put their nose where it may not belong – in the user’s browsing history.

Browsers typically style links with a default color to indicate whether or not they have been visited. We’ve all seen those blue and purple links before – they’ve become quite at home on the web. Somebody got the (seriously) brilliant idea to create anchors ad hoc, and use getComputedStyle to see if the links represented visited or unvisited locations.

Mozilla reports that you could test more than 200,000 URLs every minute with this approach. As such, you could – with a great deal of accuracy – fingerprint the user visiting your site based on the other URLs they have visited; and browser history runs deep.
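
In rough terms, the exploit looked something like this (a sketch; the URL and color check are illustrative):

var guess = document.createElement("a");
guess.href = "http://example.com"; // candidate URL
document.body.appendChild(guess);

// Before the fix, a visited link would report its :visited color here
var visited = getComputedStyle(guess).color === "rgb(128, 0, 128)";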

Scorched Earth Solution

The solution implemented by Firefox (and apparently others) was to greatly reduce the presence of visited links. They started by instructing functions like getComputedStyle and querySelectorAll to lie (their words, not mine) about their results. Sure enough, a simple check confirms that though my :visited links have a different font color than :link links, getComputedStyle says they’re the same rgb value.

Mozilla’s Christopher Blizzard continues with a very key point (emphasis added):

You will still be able to visually style visited links, but you’re severely limited in what you can use. We’re limiting the CSS properties that can be used to style visited links to color, background-color, border-*-color, and outline-color and the color parts of the fill and stroke properties. For any other parts of the style for visited links, the style for unvisited links is used instead. In addition, for the list of properties you can change above, you won’t be able to set rgba() or hsla() colors or transparent on them.

And there it was, the explanation that tied all of the seemingly broken pieces together. The browser would instruct :visited to share property values with :link. The browsers aren’t broken; they were designed to fail.

Test Confirmations

I wanted to explore the now-understood implementation a bit further, so I tried a few things. My first step was querying the DOM for :visited links (expecting 1 of the 2). I also queried the DOM for :link – expecting 2, due to the new security measures:

Browser              Number of :visited    Number of :link
Chrome               0                     2
Firefox              0                     2
Internet Explorer    0                     2
Safari               0                     2
Opera                1                     2

Nearly all of the browsers report no :visited link, with the exception of Opera. All browsers report a total of 2 links when querying :link.
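
The queries themselves are one-liners:

document.querySelectorAll(":visited").length; // 0, despite one visited link
document.querySelectorAll(":link").length;    // 2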

So it seems like you can still get a list of visited sites in Opera. Internet Explorer, Firefox, Chrome, and Safari all prevented the exploit from being carried out. But one thing about Internet Explorer intrigued me: remember the transitioning I spoke of earlier?

Internet Explorer Still Vulnerable?

I noticed that if I set transition: background 10s on anchors in Internet Explorer, you would see all visited links slowly tween into their end state. Transitions fire the transitionend event when they complete, so could we still get the user’s visited links in Internet Explorer?

The CSS to test this is very simple:

a { transition: background .01s }
:link { background: red }

And the following JavaScript:

/* Anchor var, fragment, and array of guesses */
var anchor;
var fragment = document.createDocumentFragment();
var websites = [
    'http://bing.com',
    'http://google.com',
    'http://jquery.com',
    'http://appendto.com',
    'http://stackoverflow.com'
];

/* Listen for completed transitions */
document.addEventListener("transitionend", function (event){
    console.log(event.target.href);
}, false);

/* Create a link for each url */
websites.forEach(function(url){
    anchor = document.createElement("a");
    anchor.setAttribute("href", url);
    anchor.innerHTML = url;
    fragment.appendChild( anchor );
});

/* Add our document fragment to our DOM */
document.body.appendChild(fragment);

Immediately upon being added to the DOM our :visited anchors will start to transition away from looking like a :link anchor. Once that transition is complete, we learn what the URL is, and confirm that the user has visited that site.

Closing Thoughts

I was reminded today just how exciting our industry is. People are constantly experimenting, learning new things, and sharing that knowledge. Industry experts, developers, designers, and browser vendors are always working to shift and adjust in light of the ever-evolving web.

Although the CSS2.1 Specification says something, that doesn’t make it so – even if the feature is 15 years old. If the ends justify the means, browser vendors will (and did) fire-bomb their own code-base to protect the end user.

Finally, we’re all in this together; we ought to work together. Microsoft launched the @IEDevChat Twitter account not too long ago to assist developers who are attempting to follow web standards. They even organized a group of developers who wanted to volunteer and help build a better web; you can find (and join) it online at http://useragents.ie.

I’m sure there’s more history here that I’ve yet to discover, but what I have seen is enough to pique my interest. If you have anything else to share, or can contribute to the vulnerability test approach above, please leave a comment below. I may swing back around and check out the other browsers later on.

Taming the Lawless Web with Feature Detection

The Lawless Web

Over the past 15 years or so that I’ve been developing websites, I’ve seen various conventions come and go. Common practices arise, thrive for a while, and die. I, myself, spent a great portion of my past hammering out lines of tabular markup (and teaching classrooms to do the same) when working on my layout.

When I wasn’t building complicated table-based layouts littered with colspans, I was double-wrapping elements to iron out the differences between browser-interpretations of the box-model.

Fortunately, these practices have died, and others like them continue to die every day.

The Web Hasn’t Been Completely Tamed

While atrocities like table-based layouts are quickly becoming a thing of the past, other old habits have chosen not to go as peacefully. One that has managed to stick around for some time is user-agent sniffing: the practice of inferring a browser’s abilities by reading an identification token.

var browser = {
     vendor: navigator.userAgent.match(/Chrome|Mozilla|Opera|MSIE/),
    version: navigator.userAgent.match(/\d+/)
};

if ( browser.vendor == "Mozilla" && browser.version == 5 ) {
    // This matched Chrome, Internet Explorer, and a Playstation!
    // Nailed it.
}

Since shortly after the dawn of the epoch, developers have been scanning the browser’s navigator.userAgent string for bits and pieces of data. They then couple this with a knowledge of which browsers support which features, and from there make judgments about the user’s browser and machine. Sometimes their conclusions are correct, but the ends don’t justify the means.

Don’t Gamble With Bad Practices

Sniffing a browser’s user agent string is a bit like playing Russian Roulette. Sure, you may have avoided the loaded chamber this time, but if you keep playing the game you will inevitably increase your chances of making a rather nasty mistake.

The navigator.userAgent string is not, and has never been, immutable. It changes. It gets modified. As browsers are released, as new versions are distributed, as browsers are installed on different operating systems, as software is installed in parallel to the browser, and as add-ons are installed within the browser, this string gets modified. So what you sniff today, is not what you will sniff tomorrow.

Have a quick look at a handful of examples for Chrome, Firefox, Opera, and Internet Explorer. The bottom line is that you can’t be sure what you’ll get when you approach the user agent string buffet. As is immediately clear, the above approach would not even match version 12 of Opera consistently across its variations.

So, have I thoroughly convinced you that sniffing is bad? And have you sworn it off as long as you shall live? Let me give you something to replace it; something reliable that will nearly never let you down – feature detection.

Go With What’s Reliable

Feature detection is the practice of giving the browser a test, directly related to the feature you need, and seeing whether it passes or fails. For instance, we can see if the browser supports the <video> tag by quizzing it on whether it places the correct properties on a freshly-created instance of the element. If the browser fails to add the .canPlayType method to the element, we can assume support isn’t there, or isn’t sufficient to be relied upon.

This approach doesn’t require us to make any assumptions. We don’t couple poorly-scavenged data with naive assumptions about which browsers support which feature. Rather, we invite the browser out onto our stage, and we demand it do tricks. If it succeeds, we reward it. If it fails, we ditch it and take another approach.

What Does “Reliable” Look Like?

So how exactly do we quiz the browser? Feature-detections come in all shapes and sizes; some very small and concise, while others can be somewhat lengthy and verbose – it all depends on what feature you’re trying to test for. Working off of our example of checking for <video> support, we could do the following:

if ( document.createElement("video").canPlayType ) {
    // Your browser supports the video element
} else {
    // Aw, man. Gotta load Flash.
}

This approach creates a brand new video element for testing. We then check for the presence of the .canPlayType method. If this property exists, and points to a function, the expression will be truthy and our test will pass. If the browser doesn’t understand what a video element is, there won’t be a .canPlayType method, and this property will instead be undefined – which is falsy.

The Work Has Been Done For You

You could be rightly freaked out about now, thinking about all of the tests you would have to write. You might want to see whether localStorage, gradients, video, audio, or geolocation are supported. Who on Earth has time to write all of those tests? Let alone learn the various technologies intimately enough to do so? This is where the community has saved the day.
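
To appreciate the effort involved, even a lone localStorage check needs some care; a sketch (the try/catch guards against browsers that throw when storage is disabled, and the key name is arbitrary):

var hasLocalStorage = (function () {
    try {
        localStorage.setItem("__test__", "1");
        localStorage.removeItem("__test__");
        return true;
    } catch (e) {
        return false;
    }
}());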

Modernizr is an open-source library packed full of feature-detection tests authored by dozens of talented developers, and all bundled up in an easy to use solution that you can drop into any new project. You build your custom download of Modernizr, drop it into your project, and then conditionally deliver the best possible experience to all visitors.

Modernizr exposes a global Modernizr object with various properties. You can test for the support of certain features simply by calling properties on the Modernizr object.

if ( Modernizr.video ) {
    // Browser supports the video element
}

Additionally, it can optionally add classes to your <html> element so your CSS can get in on the action too! This allows you to make conditional adjustments to your document based upon feature support.

html.video {
    /* Woot! Video support. */
}

html.no-video #custom-video-controls {
    /* Bummer. No video support. */
    display: none;
}

Building A Better Web

By using feature-detection, and forever turning your back on user agent sniffing, you create a more reliable web that more people on more devices and browsers can enjoy. No longer are you bound to hundreds upon hundreds of user agent strings, or naive assumptions that will come back and bite you in the end. Feature-detection liberates you so that no babysitting of the browser landscape is necessary.

Code for features and APIs. Don’t code for specific browsers. Don’t get caught writing “Best viewed in WebKit” code – build something for everybody. When you focus on features, rather than ever-changing strings that don’t give you much insight, you help make the Internet better for us all. Do it for the children ;)

You’re Not Alone

When you set out to do things the right way, you will undoubtedly run into obstacles and complications. Let’s face it: quality work often requires more of an investment than the slop that is so often slung out onto the web. The great news, though, is that you’re not alone – the web is more open than ever before, full of those willing to help you along the way.

Drop in to Stack Overflow to catch up on the various feature detection questions being asked there. Or if you’re trying to get Chrome and Internet Explorer to play nice with one another, post a question on Twitter using the #IEUserAgents hashtag. This notifies an “agent” to come to your aid.

Just be sure to pay it forward. As you learn, and grow, help others to do so as well. Maybe even write a few feature-tests and contribute them back into Modernizr along the way.

Polyfill for Reversed Attribute on Lists

Ordered lists in HTML typically appear in decimal fashion, starting from 1 and counting up to n. Sometimes these are instructed to appear in reversed order via the reversed boolean attribute on the ol element.

<ol reversed>
    <li>I am number two.</li>
    <li>And I am number one.</li>
</ol>

Some browsers don’t understand the reversed attribute, and thus will not reverse the list. Although you cannot reverse the entire list in those browsers with this attribute, most browsers will let you manually override the value associated with each list item via the value attribute.

<ol>
    <li value="2">I am number two.</li>
    <li value="1">And I am number one.</li>
</ol>

Using jQuery, we can quickly create a short polyfill that will give us reversed functionality in browsers where it’s not natively understood.

(function () {
    if ( $("<ol>").prop("reversed") === undefined ) {
        $("ol[reversed]").each(function () {
            var $items = $(this).find("li");
            $items.attr("value", function ( index ) {
                return $items.length - index;
            });
        });
    }
}());

The code should be fairly straightforward, but let me explain what is going on just to be on the safe side.

We start with an IIFE (Immediately Invoked Function Expression) which runs our code instantly, as well as keeps all declared variables out of the global namespace.

Next we kick things off with a quick feature-test on a freshly-made ol. In browsers that understand the reversed property, the default value is false. In other browsers, the property is undefined.

We then grab a collection of all lists that have the reversed attribute. Next we cycle over this collection of lists, assigning the list items from each iteration to the variable $items.

We then take the $items jQuery collection and begin an implicit loop setting the value attribute of each item in the collection to collection.length - itemIndex. If there are five items in the list, and we’re on index 0 (first item), five will be returned as the value. When we enter the second iteration, and the index is 1, (5-1) will be returned, resulting in a value of 4.

Voila, we have polyfilled reversed. There are some shortcomings; this won’t work on lists created after this code runs. You could, however, put this logic into a function declaration and call it again after the DOM has been updated asynchronously, so as to handle any new lists that might have populated the document.
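
That might look something like this (a sketch; the function name is hypothetical):

function reverseLists () {
    // Bail early in browsers with native support
    if ( $("<ol>").prop("reversed") !== undefined ) { return; }
    $("ol[reversed]").each(function () {
        var $items = $(this).find("li");
        $items.attr("value", function ( index ) {
            return $items.length - index;
        });
    });
}

// Run on load, and again whenever lists are added to the document
$(reverseLists);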

One very important note when using jQuery as a means to polyfill older browsers: you will have to use jQuery 1.8.3 or below. Around the release of jQuery 1.9, a lot of antiquated code that once propped up older browsers was removed. jQuery 1.9 and forward is meant for modern browsers, and as such it will fail to carry your polyfilled functionality back to older browsers like Internet Explorer 6 and its contemporaries.

IE10 Gotcha: Animating Pseudo-Elements on Hover

I came across a question on Stack Overflow today asking about animating pseudo-elements. This is something I have lamented in the past, since it’s been a supported feature in Internet Explorer 10 for a while but was only recently implemented in Chrome, version 26. As it turns out, there was a small edge case surrounding this feature that I had yet to encounter in Internet Explorer 10.

The following code does two things; first it sets the content and transition properties of the ::after pseudo-element for all paragraphs (there’s only one in this demo). Next, in the :hover state (or pseudo-class), it changes the font-size property to 2em. This change will be transitioned over 1 second, per the instruction in the first block.

p::after {
    content: ", World.";
    transition: font-size 1s;
}

p:hover::after {
    font-size: 2em;
}

Though simple and straightforward, this demo (as-is) doesn’t work in Internet Explorer 10. IE10 supports :hover on any element, IE10 supports pseudo-elements, and IE10 supports animating pseudo-elements, but it does not support all three together out of the box.

If you change your markup from using a paragraph to using an anchor element, the code begins to work. This seems to suggest some type of regression has taken place, causing Internet Explorer 10 to behave similarly to Internet Explorer 6, where only anchors could be “hovered” — support for :hover on any element was added in version 7.

Upon playing with this a bit, there appear to be a few workarounds:

  1. Change your markup
  2. Use sibling combinators in your selector
  3. Buffer a :hover selector on everything

Let’s look at these one at a time.

Change Your Markup

Rather than having a paragraph tag, you could nest a span within the paragraph, or wrap the paragraph in a div. Either way, you’ll be able to modify your selector in such a way as to break up the :hover and ::after portions. When the user hovers over the outer element, the pseudo-element of the inner element is changed; see the sketch below.
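
For example, with a span nested inside the paragraph, the :hover and ::after portions land on different elements (a sketch):

/* Assumes markup like: <p><span>Hello</span></p> */
p span::after {
    content: ", World.";
    transition: font-size 1s;
}

p:hover span::after {
    font-size: 2em;
}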

I don’t like this option; it’s far too invasive and demanding.

Use Sibling Combinators in Your Selector

This was an interesting discovery. I found that if you further modify your selector to include consideration for sibling elements, everything is magically repaired. For instance, the following targets our paragraph only when it is preceded by a sibling paragraph:

p~p:hover::after {
    font-size: 2em;
}

This is interesting; it doesn’t break the connection between :hover and ::after, but it does modify the root of the selector, which somehow causes things to repair themselves.

What I don’t like about this approach is that it requires you to explicitly declare the sibling selector, as well as the element you’re wishing to target. Of course, we could fix this a bit by targeting a class or id, as well as going with a different sibling combinator:

*~.el:hover::after {}

This targets any element with the .el class that is a sibling of any other element. This gets us a little closer, but it’s still a bit messy. It requires us to modify every single selector that is part of a pseudo-element animation.

Buffering the :hover Selector

One solution to the problem was provided along with the question on Stack Overflow. As it turns out, if you provide an empty set of rules for the :hover state, this fixes all subsequent attempts to animate pseudo-element properties. What this looks like follows:

p:hover {}

p::after {
    content: ", World.";
    transition: font-size 1s;
}

p:hover::after {
    font-size: 2em;
}

Oddly enough, this small addition does indeed resolve the issue. Given the nature of CSS, you can even drop the p portion and just go with the following, fixing it for all elements:

:hover{}

This too results in a functioning Internet Explorer 10 when it comes to animating pseudo-element properties.

Experiment: http://jsfiddle.net/jonathansampson/N4kf9/