Category Archives: JavaScript

IE doesn’t have an emulation feature

“I felt a great disturbance in the Force, as if millions of voices suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.”

Document Modes in Internet Explorer are probably one of the most misunderstood features of the browser. The confusion doesn’t lie exclusively with the end user, either; Microsoft has shipped a Document Mode switcher in the F12 Developer Tools, under an Emulation tab, for a while now. As a result, it’s not uncommon for people to think that Internet Explorer 11 can emulate Internet Explorer 8 for testing purposes simply by switching the Document Mode.

“I have a very bad feeling about this.”

So what the heck is this feature? Why does it exist, and what should we call it if not “emulation”? I recently discussed this on Twitter with a friend, and my frustration over the matter drove me to really sit and think about a proper way to illustrate the purpose of this feature. After some thought, I arrived at a comparison with an A99 Aquata Breather. You know what that is, right? No? Let me explain.

The A99 Aquata Breather (seen in Star Wars, episodes I and III) allowed Jedi to survive for up to 2 hours in an otherwise hostile environment. For instance, on their long swim to the underwater city on Naboo, Qui-Gon Jinn and Obi-Wan Kenobi used Aquata Breathers to stay alive long enough to enter the Hydrostatic Bubbles of Otoh Gunga (I really hope this is making sense…).

Let’s bring this back to Internet Explorer and Document Modes. Just like Qui-Gon and Obi-Wan weren’t able to breathe under water (unassisted, that is), a site constructed in/for Internet Explorer 8 may not necessarily work in Internet Explorer 11 unassisted. So to ensure a more enjoyable experience, and avoid certain death, Jedi swim with an assisted-breathing device, and web developers send x-ua-compatible headers with their HTTP responses (or a meta-tag, if that’s your thing).
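
For reference, the meta-tag form looks like this (the equivalent HTTP response header is simply X-UA-Compatible: IE=8); here we’re asking Internet Explorer for its IE8 document mode:

<!-- Request the IE8 document mode for this page -->
<meta http-equiv="X-UA-Compatible" content="IE=8" />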

“That’s no moon. It’s a space station!”

So that Document Mode switcher in Internet Explorer’s F12 Developer Tools is not for seeing how well your modern site will work in a legacy browser, but rather how your legacy site will work in a modern browser.

Suppose a developer built an intranet application for her client in late 2009, when the client was running Internet Explorer 8 internally on a thousand machines. Several years later, the client calls the developer and informs her that they are planning on moving to Windows 8.1, and Internet Explorer 11. At this point, the developer can open Internet Explorer, navigate to the application, and see how well it will work for her client in a modern version of IE. Should she encounter differences in layout or scripting, she could switch the Document Mode to that of Internet Explorer 8 and see if this resolves the problem. (See also Enterprise Mode for IE)

Note the directionality — Document Modes are for making older sites work in modern browsers by creating (to some degree) environmental configurations the site relies on. This doesn’t turn the browser into an older version of itself; it merely satisfies a few expectations for the sake of the application.

“This is some rescue! You came in here, but didn’t you have a plan for getting out?”

Like the A99 Aquata Breather, a Jedi developer should only be expected to rely on a legacy document mode for a short period of time. The x-ua-compatible approach should be seen as a splint to keep your broken leg together until you can get some serious medical attention for it. It’s meant to keep you upright with the intent that you should be able to soon stand alone, unassisted. Don’t use a breather unless you’re swimming to a place that has air to breathe, and don’t use x-ua-compatible headers or meta-tags without intending to fix whatever breaks your site in a modern browser.

If you develop in a modern version of Internet Explorer and do need to see how your site will render in an older version of IE, your best option is to download a free virtual machine from modern.ie, get an account on browserstack.com, or do your testing the next time you visit Grandma’s for dinner.

Simple Throttle Function

When you’re handling events like window resize, keystrokes, mouse movement, and scrolling, you need to be careful how many times you call heavy-handed functions. For instance, if you’re leveraging JavaScript to create responsive UIs in legacy browsers, you probably don’t want to run complicated DOM-altering handlers 10 times a second when far fewer runs would be sufficient. Enter throttling.

Throttling is a means by which we can limit the number of times a function can be called in a given period. For instance, we may have a method that should be called no more than 5 times per second.

function callback () {
    console.count("Throttled");
}

window.addEventListener("resize", throttle( callback, 200 ));

In the above, our throttle function takes a callback, and a rate limiter. It will fire the callback at most once every 200 milliseconds, or at most 5 times per second. So what does the actual implementation of the throttle function look like?

function throttle (callback, limit) {
    var wait = false;                 // Initially, we're not waiting
    return function () {              // We return a throttled function
        if (!wait) {                  // If we're not waiting
            callback.apply(this, arguments); // Execute user's function, passing along context and arguments
            wait = true;              // Prevent future invocations
            setTimeout(function () {  // After a period of time
                wait = false;         // And allow future invocations
            }, limit);
        }
    }
}

The above is fairly straightforward, but let’s discuss it briefly anyway.

The overall solution is to replace the user’s function with our own function. This allows us to add extra logic to the mix, such as how frequently the function can be invoked. This is why our throttle function inevitably returns a new function.

We manage our state with a variable called wait. When wait is true, our door is closed. When wait is false, our door is open. So by switching this value from true to false, we can allow the user’s function to be invoked. And by switching wait from false to true, we prevent any calls to the user function.

Any time we close the door by setting wait to true, we also set up a timeout that will toggle it back to false after a period of time. This is where the 200 milliseconds comes in – we’re allowing the user to determine the length of time our door should be shut.

You should also consider throttling functions that get called dozens, or even hundreds, of times in quick succession – it’s rare that you actually want things to be called that often. The implementation in this post is fairly simple (I was shooting for conciseness), but more robust solutions exist, such as those in Lo-Dash, Underscore, and more.
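
If you’re already carrying one of those libraries around (here assuming Underscore or Lo-Dash is loaded as _), their implementations are a drop-in replacement for the sketch above:

// Underscore and Lo-Dash expose the same signature as our example
window.addEventListener("resize", _.throttle(callback, 200));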

I created a fiddle with a throttled and a non-throttled callback. Tying these to the window resize event, I adjusted my window width/height briefly and let the number of calls speak for themselves:

(Image: throttled results vs. non-throttled results)

Browsers – Broken By Design

I set out a few hours ago to write about a problem I had experienced in Safari, Chrome, Firefox, and Internet Explorer. Opera worked as I expected, which had me convinced the other vendors had merely dropped the ball and shipped partial implementations of the :visited and :link pseudo-classes.

My CSS was very simple:

:link {
    color: white;
    background: red;
}

:link::before {
    content: "Pseudo: ";
}

A color set for unvisited links, followed by some content in the ::before pseudo element on all unvisited links. So when I dropped in two links (one visited, one not) I expected to see the following:

(Image: the expected rendering)

Instead, what I saw was a litany of inconsistent results across Chrome, Safari, Internet Explorer, Firefox, and Opera. The above image came from Opera, which performed exactly as I expected. The others, not so much.

Chrome and Firefox gave similar results; both set “Pseudo” as a virtual first-child of both anchors (though I only asked to have it on unvisited links), and leaked the background color from :link into :visited.

Internet Explorer did better than Chrome and Firefox; it preserved the “pseudo” text, but left the background of the visited link untouched. I was still very perplexed as to why it was spilling values from :link over into :visited.

You can observe something interesting in Internet Explorer – :visited links start visually identical to :link links, but are transitioned into their :visited state immediately thereafter. Setting a transition on all links reveals the change when the document loads:

a {
    transition: background 10s;
}

You’ll see the :link links remain unchanged but :visited links slowly adopt their end style. This is not the case for Chrome, Firefox, Opera, or Safari.

Safari appeared to be the most broken of them all. It duplicated everything. Further, I attempted to set additional styles to :visited, and it completely ignored them.

Discovering History

I found it incredible that all of these browsers, with the exception of Opera, would get this so wrong. So like any good developer I took to the web to see who else might be complaining about this, which is when I came across a Stack Overflow post suggesting this was somehow related to security concerns.

This offered another search vector: security issues. That was the needed catalyst, opening up a fascinating story about yet another creative attempt by developers to put their noses where they may not belong – in the client browser’s history.

Browsers typically style links with default colors to indicate whether or not they have been visited. We’ve all seen those blue and purple links before – they’ve become quite at home on the web. Somebody got the (seriously) brilliant idea to create anchors ad hoc, and use getComputedStyle to see if the links represented visited or unvisited locations.
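
A rough sketch of that old exploit might have looked something like the following. The particular purple is a common default :visited color, though exact values varied by browser, so treat the details as illustrative:

// Create a throwaway link for the URL we want to test
var guess = document.createElement("a");
guess.href = "http://example.com/";
document.body.appendChild(guess);

// If the computed color matches the default :visited purple,
// the URL is (probably) in the visitor's history
var visited = getComputedStyle(guess).color === "rgb(85, 26, 139)";
document.body.removeChild(guess);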

Mozilla reports that you could test more than 200,000 URLs every minute with this approach. As such, you could – with a great deal of accuracy – fingerprint the user visiting your site based on the other URLs they have visited, and browser history runs deep.

Scorched Earth Solution

The solution implemented by Firefox (and apparently others) was to greatly reduce the presence of visited links. They started by instructing functions like getComputedStyle and querySelectorAll to lie (their words, not mine) about their results. Sure enough, a simple check confirms that though my :visited links have a different font color than :link links, getComputedStyle says they’re the same rgb value.
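
That check is as simple as it sounds – assuming the first link in the document points somewhere you’ve actually visited, something like:

// The link renders in the :visited color, yet reports the :link color
var link = document.links[ 0 ];
console.log( getComputedStyle( link ).color ); // same value as an unvisited link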

Mozilla’s Christopher Blizzard continues with a very key point (emphasis added):

You will still be able to visually style visited links, but you’re severely limited in what you can use. We’re limiting the CSS properties that can be used to style visited links to color, background-color, border-*-color, and outline-color and the color parts of the fill and stroke properties. For any other parts of the style for visited links, the style for unvisited links is used instead. In addition, for the list of properties you can change above, you won’t be able to set rgba() or hsla() colors or transparent on them.

And there it was, the explanation that tied all of the seemingly broken pieces together. The browser would instruct :visited to share property values with :link. The browsers aren’t broken; they were designed to fail.

Test Confirmations

I wanted to explore the now-understood implementation a bit further, so I tried a few things. My first test was to query the DOM for :visited links, expecting to find one of my two. I also queried the DOM for :link, expecting both to match given the new security behavior.
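
The checks themselves are one-liners; their results are tabulated below:

// One of the two links on the test page had been visited
document.querySelectorAll(":visited").length; // how many visited links are admitted to
document.querySelectorAll(":link").length;    // visited links get lumped in here instead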

Browser              :visited matches   :link matches
Chrome               0                  2
Firefox              0                  2
Internet Explorer    0                  2
Safari               0                  2
Opera                1                  2

Nearly all of the browsers report no :visited link, with the exception of Opera. All browsers report a total of 2 links when querying :link.

So it seems you can still get a list of visited sites in Opera. Internet Explorer, Firefox, Chrome, and Safari all prevented the exploit from being carried out. But one thing about Internet Explorer intrigued me: remember the transitioning I spoke of earlier?

Internet Explorer Still Vulnerable?

I noticed that if I set transition: background 10s on anchors in Internet Explorer, you would see all visited links slowly tween into their end state. Transitions fire the transitionend event when they complete, so could we still get the user’s visited links in Internet Explorer?

The CSS to test this is very simple:

a { transition: background .01s }
:link { background: red }

And the following JavaScript:

/* Anchor var, fragment, and array of guesses */
var anchor;
var fragment = document.createDocumentFragment();
var websites = [
    'http://bing.com',
    'http://google.com',
    'http://jquery.com',
    'http://appendto.com',
    'http://stackoverflow.com'
];

/* Listen for completed transitions */
document.addEventListener("transitionend", function (event){
    console.log(event.target.href);
}, false);

/* Create a link for each url */
websites.forEach(function(url){
    anchor = document.createElement("a");
    anchor.setAttribute("href", url);
    anchor.innerHTML = url;
    fragment.appendChild( anchor );
});

/* Add our document fragment to our DOM */
document.body.appendChild(fragment);

Immediately upon being added to the DOM, our :visited anchors will start to transition away from looking like :link anchors. Once that transition is complete, we learn what the URL is, and confirm that the user has visited that site.

Closing Thoughts

I was reminded today just how exciting our industry is. People are constantly experimenting, learning new things, and sharing that knowledge. Industry experts, developers, designers, and browser vendors are always working to shift and adjust in light of the ever-evolving web.

Although the CSS2.1 Specification says something, that doesn’t make it so – even if the feature is 15 years old. If the ends justify the means, browser vendors will (and did) fire-bomb their own code-base to protect the end user.

Finally, we’re all in this together; we ought to work together. Microsoft launched the @IEDevChat Twitter account not too long ago to assist developers who are attempting to follow web standards. They even organized a group of developers who wanted to volunteer and help build a better web; you can find (and join) it online at http://useragents.ie.

I’m sure there’s more history here that I’ve yet to discover, but what I have seen is enough to pique my interest. If you have anything else to share, or can contribute to the vulnerability test approach above, please leave a comment below. I may swing back around and check out the other browsers later on.

Taming the Lawless Web with Feature Detection

The Lawless Web

Over the past 15 years or so that I’ve been developing websites, I’ve seen various conventions come and go. Common practices arise, thrive for a while, and die. I, myself, spent a great portion of my past hammering out lines of tabular markup (and teaching classrooms to do the same) when working on my layout.

When I wasn’t building complicated table-based layouts littered with colspans, I was double-wrapping elements to iron out the differences between browser-interpretations of the box-model.

Fortunately, these practices have died, and others like them continue to die every day.

The Web Hasn’t Been Completely Tamed

While atrocities like table-based layouts are quickly becoming a thing of the past, other old habits have chosen not to go as peacefully. One that has managed to stick around for some time is user-agent sniffing: the practice of inferring a browser’s abilities by reading an identification token.

var browser = {
    vendor: navigator.userAgent.match(/Chrome|Mozilla|Opera|MSIE/),
    version: navigator.userAgent.match(/\d+/)
};

if ( browser.vendor == "Mozilla" && browser.version == 5 ) {
    // This matched Chrome, Internet Explorer, and a PlayStation!
    // Nailed it.
}

Since shortly after the dawn of the epoch, developers have been scanning the browser’s navigator.userAgent string for bits and pieces of data. They then couple this with knowledge of which browsers support which features, and draw conclusions about the user’s browser and machine. Sometimes their conclusions are correct, but the ends don’t justify the means.

Don’t Gamble With Bad Practices

Sniffing a browser’s user agent string is a bit like playing Russian Roulette. Sure, you may have avoided the loaded chamber this time, but if you keep playing the game you will inevitably increase your chances of making a rather nasty mistake.

The navigator.userAgent string is not, and has never been, immutable. It changes. It gets modified. As browsers are released, as new versions are distributed, as browsers are installed on different operating systems, as software is installed in parallel to the browser, and as add-ons are installed within the browser, this string gets modified. So what you sniff today, is not what you will sniff tomorrow.

Have a quick look at a handful of examples for Chrome, Firefox, Opera, and Internet Explorer. Bottom line is that you can’t be sure what you’ll get when you approach the user agent string buffet. As is immediately clear, the above approach would not even match version 12 of Opera consistently with its variations.

So, have I thoroughly convinced you that sniffing is bad? And have you sworn it off as long as you shall live? Let me give you something to replace it; something reliable that will nearly never let you down – feature detection.

Go With What’s Reliable

Feature detection is the practice of giving the browser a test, directly related to the feature you need, and seeing whether it passes or fails. For instance, we can see if the browser supports the <video> tag by quizzing it on whether it places the correct properties on a freshly-created instance of the element. If the browser fails to add the .canPlayType method to the element, we can assume support isn’t there, or isn’t sufficient to be relied upon.

This approach doesn’t require us to make any assumptions. We don’t couple poorly-scavenged data with naive assumptions about which browsers support which feature. Rather, we invite the browser out onto our stage, and we demand it do tricks. If it succeeds, we reward it. If it fails, we ditch it and take another approach.

What Does “Reliable” Look Like?

So how exactly do we quiz the browser? Feature-detections come in all shapes and sizes; some very small and concise, while others can be somewhat lengthy and verbose – it all depends on what feature you’re trying to test for. Working off of our example of checking for <video> support, we could do the following:

if ( document.createElement("video").canPlayType ) {
    // Your browser supports the video element
} else {
    // Aw, man. Gotta load Flash.
}

This approach creates a brand new video element for testing. We then check for the presence of the .canPlayType method. If this property exists, and points to a function, the expression will be truthy and our test will pass. If the browser doesn’t understand what a video element is, there won’t be a .canPlayType method, and this property will instead be undefined – which is falsy.

The Work Has Been Done For You

You could rightly be freaked out right about now, thinking about all of the tests you would have to write. You might want to test for support of localStorage, gradients, video, audio, or geolocation. Who on Earth has time to write all of those tests? Let alone learn the various technologies intimately enough to do so? This is where the community has saved the day.
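
To appreciate the scope of the problem, here’s what just one hand-rolled test might look like – a sketch for localStorage, which needs a try/catch because storage can exist and still be disabled or full:

// A hand-rolled localStorage test (Modernizr's version is more battle-tested)
function supportsLocalStorage () {
    try {
        var key = "__feature_test__"; // hypothetical probe key
        window.localStorage.setItem(key, key);
        window.localStorage.removeItem(key);
        return true;
    } catch (e) {
        return false; // disabled, quota exceeded, or unsupported
    }
}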

Modernizr is an open-source library packed full of feature-detection tests authored by dozens of talented developers, and all bundled up in an easy to use solution that you can drop into any new project. You build your custom download of Modernizr, drop it into your project, and then conditionally deliver the best possible experience to all visitors.

Modernizr exposes a global Modernizr object with various properties. You can test for the support of certain features simply by calling properties on the Modernizr object.

if ( Modernizr.video ) {
    // Browser supports the video element
}

Additionally, it can optionally add classes to your <html> element so your CSS can get in on the action too! This allows you to make conditional adjustments to your document based upon feature support.

html.video {
    /* Woot! Video support. */
}

html.no-video #custom-video-controls {
    /* Bummer. No video support. */
    display: none;
}

Building A Better Web

By using feature-detection, and forever turning your back on user agent sniffing, you create a more reliable web that more people on more devices and browsers can enjoy. No longer are you bound to hundreds upon hundreds of user agent strings, or naive assumptions that will come back and bite you in the end. Feature-detection liberates you so that no babysitting of the browser landscape is necessary.

Code for features and APIs. Don’t code for specific browsers. Don’t get caught writing “Best viewed in WebKit” code – build something for everybody. When you focus on features, rather than ever-changing strings that don’t give you much insight, you help make the Internet better for us all. Do it for the children ;)

You’re Not Alone

When you set out to do things the right way, you will undoubtedly run into obstacles and complications. Let’s face it, quality work often times requires more of an investment than the slop that is so often slung out onto the web. The great news though is that you’re not alone – the web is more open than ever before, full of those willing to help you along the way.

Drop in to Stack Overflow to catch up on the various feature detection questions being asked there. Or if you’re trying to get Chrome and Internet Explorer to play nice with one another, post a question on Twitter using the #IEUserAgents hashtag. This notifies an “agent” to come to your aid.

Just be sure to pay it forward. As you learn, and grow, help others to do so as well. Maybe even write a few feature-tests and contribute them back into Modernizr along the way.

Progress Dots

From time to time I need to throw together a small script to do something relatively simple. Today I had to write something that would animate a series of dots. You’ve seen it: those little lines of dots that grow and shrink to indicate that something, somewhere, is happening.

It’s a relatively straightforward script, so I’ll just drop it in here with comments:

(function () {

    "use strict";

    // Find #dots, run setDots every 500ms, define your dot, and set size limit
    var dots = document.getElementById("dots"),
        loop = setInterval(setDots, 500),
        _dot = ".",
        size = 3;

    function setDots () {
        // #dots will be truncated when limit is reached, otherwise grows by one
        dots.innerHTML = (dots.innerHTML.length >= size) ? _dot : (dots.innerHTML + _dot);
    }

}());

It’s worth noting that you could probably whip up something similar with CSS alone using pseudo-elements, animation, @keyframes, and content, or even animating a sprite’s location on a background. Of course browser support would be far more limited.
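
For the curious, here’s a rough sketch of one CSS-only take – it assumes a <span class="dots">...</span> in the markup, along with support for CSS animations and the ch unit:

.dots {
    font-family: monospace;   /* each dot occupies exactly 1ch */
    display: inline-block;
    overflow: hidden;
    white-space: nowrap;
    vertical-align: bottom;
    animation: reveal 1.5s steps(3, start) infinite;
}

@keyframes reveal {
    from { width: 0; }
    to   { width: 3ch; }
}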

Polyfill for Reversed Attribute on Lists

Ordered lists in HTML typically appear in decimal fashion, starting at 1 and counting up to n. Sometimes these are instructed to appear in reversed order via the reversed boolean attribute on the ol element.

<ol reversed>
    <li>I am number two.</li>
    <li>And I am number one.</li>
</ol>

Some browsers don’t understand the reversed attribute, and thus will not reverse the list. Although the attribute won’t reverse the entire list in those browsers, most browsers will let you manually override the number associated with each list item via the value attribute.

<ol>
    <li value="2">I am number two.</li>
    <li value="1">And I am number one.</li>
</ol>

Using jQuery, we can quickly create a short polyfill that will give us reversed functionality in browsers where it’s not natively understood.

(function () {
    if ( $("<ol>").prop("reversed") === undefined ) {
        $("ol[reversed]").each(function () {
            var $items = $(this).find("li");
            $items.attr("value", function ( index ) {
                return $items.length - index;
            });
        });
    }
}());

The code should be fairly straightforward, but let me explain what is going on just to be on the safe side.

We start with an IIFE (Immediately Invoked Function Expression) which runs our code instantly, as well as keeps all declared variables out of the global namespace.

Next we kick things off with a quick feature-test on a freshly-made ol. In browsers that understand the reversed property, the default value is false. In other browsers, the property is undefined.

We then grab a collection of all lists that have the reversed attribute. Next we cycle over this collection of lists, assigning the list items from each iteration to the variable $items.

We then take the $items jQuery collection and begin an implicit loop setting the value attribute of each item in the collection to collection.length - itemIndex. If there are five items in the list, and we’re on index 0 (first item), five will be returned as the value. When we enter the second iteration, and the index is 1, (5-1) will be returned, resulting in a value of 4.

Voila, we have polyfilled reversed. There are some shortcomings: this won’t work on lists created after this code runs. You could, however, put this logic into a function declaration and call it again after the DOM has been updated asynchronously, so as to handle any new lists that might have populated the document.
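
That refactor is trivial – a sketch, where the appended markup is purely hypothetical:

function polyfillReversedLists () {
    if ( $("<ol>").prop("reversed") === undefined ) {
        $("ol[reversed]").each(function () {
            var $items = $(this).find("li");
            $items.attr("value", function ( index ) {
                return $items.length - index;
            });
        });
    }
}

// Run once on load, and again whenever new lists arrive
polyfillReversedLists();
$("#content").append("<ol reversed><li>Two</li><li>One</li></ol>");
polyfillReversedLists();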

One very important note when using jQuery as a means to polyfill older browsers: you will have to use jQuery 1.8.3 or below. Around the release of jQuery 1.9, a lot of antiquated code that once propped up older browsers was removed. jQuery 1.9 and forward are meant for modern browsers, and as such they will fail to carry your polyfilled functionality back to older browsers like Internet Explorer 6 and its contemporaries.

Creating an autofocus Polyfill

In the ever-changing browser landscape, every so often the community comes to expect that some feature or API will be available in a browser, and it’s not.

We have an approach to mitigating these types of issues though, and we call it polyfilling. To quote Remy Sharp, “A polyfill, or polyfiller, is a piece of code (or plugin) that provides the technology that you, the developer, expect the browser to provide natively. Flattening the API landscape if you will.”

A question was recently asked on Stack Overflow about automatically applying focus to an input element when the page loads. The first responses suggested using a tool like jQuery to target the element, and invoke its focus method. While this works, it adds additional overhead in environments where it’s not needed.

My initial solution to the question was to use the autofocus boolean attribute on the input element of choice. Shortly after suggesting this, I realized this attribute was only introduced in Internet Explorer at version 10, meaning users of 6, 7, 8, and 9 wouldn’t get the behavior the developer expected – a perfect opportunity to author a polyfill.

Our goal in building a solid polyfill is to not make its presence known; we want the web developer to author their markup as they would normally, while we quietly teach the browser to make sense of the unknown autofocus attribute.

So we start with basic HTML:

<input name="name" placeholder="John Doe" autofocus />

In browsers like Internet Explorer 10, Chrome 26, or Firefox 20, focus will be immediately given to this element upon page load. But for Internet Explorer 6-9 and versions of Firefox prior to 4.0, we’ll have to add a bit of assistance.

We start by performing a feature-detect for the autofocus property on a freshly-created input element. In browsers that support this property, the default value is false:

// Proceed only if new inputs don't have the autofocus property
if ( document.createElement("input").autofocus === undefined ) {

}

This condition will only evaluate to true in browsers that don’t natively understand the autofocus attribute, and as such, this will be our sandbox to play in while constructing our polyfill. We’ll need to go through a few steps for this polyfill, so let’s have a look at what they are:

  1. Cycle over all forms
  2. Cycle over all elements in each form
  3. Check attributes of each element for autofocus
  4. If found, trigger focus, and break out of all loops (only one instance of autofocus should exist in a document, otherwise you’ll get inconsistent cross-browser results)

Our first step in the above list is to cycle over all the forms. We can use the document.forms collection to find all of the forms in the document. We’ll also create a variable to track our index in the collection of forms. Lastly, we’ll add a label to this loop so that we can break from it later:

// Get a reference to all forms, and an index variable
var forms = document.forms, fIndex = -1;

// Begin cycling over all forms in the document
formloop: while ( ++fIndex < forms.length ) {

}

Next, within our formloop, we need to get a collection of the current form’s elements. Additionally, we will be creating another index variable to help us navigate this new collection:

// Get a reference to all elements in form, and an index variable
var elements = forms[ fIndex ].elements, eIndex = -1;

// Begin cycling over all elements in collection
while ( ++eIndex < elements.length ) {

}

The third step in our process is to check for the autofocus attribute in each element. There are various ways you can do this, and I explored several of them. For instance, you could use the hasAttribute method, but this is only supported in Internet Explorer from version 8 (meaning you’d have to polyfill it in earlier versions).

In the end I wound up checking for the attribute as a key in the element.attributes collection. If not present, the result will be falsy, and the loop will skip to the next element in the collection:

// Check for the autofocus attribute
if ( elements[ eIndex ].attributes["autofocus"] ) {

}

Finally, we arrive at our last step. We have found an element that has the attribute we’re looking for, and it’s time to focus on it, break out of the loops, and call it quits.

// If found, trigger focus
elements[ eIndex ].focus();

// And break out of outer loop
break formloop;

This is where the formloop label comes in handy; a simple break would have broken the element loop, but not the forms loop, meaning subsequent form elements (and their children) would be evaluated. Since only one instance of autofocus should appear in a document, this is not the behavior we want.

The final result is wrapped in an IIFE (Immediately Invoked Function Expression), which runs our code immediately, as well as prevents our variables from appearing in the global namespace, potentially causing harm to the rest of the document or application.

(function () {

    // Proceed only if new inputs don't have the autofocus property
    if ( document.createElement("input").autofocus === undefined ) {

        // Get a reference to all forms, and an index variable
        var forms = document.forms, fIndex = -1;

        // Begin cycling over all forms in the document
        formloop: while ( ++fIndex < forms.length ) {

            // Reference all elements in form, and an index variable
            var elements = forms[ fIndex ].elements, eIndex = -1;

            // Begin cycling over all elements in collection
            while ( ++eIndex < elements.length ) {

                // Check for the autofocus attribute
                if ( elements[ eIndex ].attributes["autofocus"] ) {

                    // If found, trigger focus
                    elements[ eIndex ].focus();

                    // And break out of outer loop
                    break formloop;

                }

            }

        }

    }

}());

There you have it, a polyfill that gives autofocus behavior to browsers that don’t natively support it. By dropping this into your website, you can now carry on your merry way using great features of HTML like this, without worrying what negative effect it might have on your Internet Explorer 6, 7, 8, or even 9 visitors.

Phenomenal Day thanks to jQuery and Windows 8

Friday, March 29th, 2013, was a phenomenal day in this developer’s life.

In late 2012 my employer, appendTo, began working with Microsoft on an extremely exciting project – preparing a version of the web’s most beloved JavaScript library, jQuery, for Windows RT and use in Windows Store applications. This was particularly exciting for me since jQuery is one of my most active tags on Stack Overflow.

Towards the end of our work in preparing this special version of jQuery, I had the great pleasure of working with Elijah Manor on material that would be presented at //build/ 2012 by one of appendTo’s founders, Mike Hostetler.

We had successfully delivered a version of jQuery that worked with the new security model in Windows Store applications. But this wasn’t the end-goal; none of us wanted to maintain a clone of jQuery that was engineered specifically for Windows Store applications.

The project was a huge success, and I was on cloud nine, having gotten to work with such phenomenal developers, and fantastic partners, on such a game-changing project. But again, our work wasn’t done – we had merely whetted our appetites for far better results. We wanted jQuery itself to work in Windows Store applications, not some sufficiently-similar clone of jQuery.

Our focus was then turned to working more closely with jQuery core contributors, which resulted in me getting to meet even more amazing people, like the President of the jQuery Foundation, Dave Methvin. Dave is one of those old-school hackers that could keep you tuned to his every word for hours on end; such an amazing guy. With guys like him at the helm, it’s easy to see why jQuery is such a success.

Moving forward, I began testing jQuery builds within Windows Store applications. This required forking, cloning, building, authoring and modifying unit tests, and more; it was a smörgåsbord of geek indulgence. At this time jQuery core contributors were working hard on version 2.0, the highly-anticipated version of jQuery that would shed itself of legacy support like a cicada liberated from its shell.

The timing couldn’t have been more perfect; Dave and the others were carefully extracting massive chunks of code from jQuery’s core that existed for no other reason than to support over a decade of antiquated browsers. In parallel to their efforts, appendTo was diving into versions of jQuery from 1.8.3 to pre-builds of 2.0, addressing any and all patterns considered “unsafe” in the new non-browser environment.

In the end, it all paid off. jQuery 2.0 appears to be ready for Windows Store applications, and every web developer looking to try their hand in the lucrative market of native Windows 8 applications authored in JavaScript (and now jQuery) has a familiar gateway into the new stomping grounds. It’s been an exciting project, and I’m incredibly humbled to have played a role in all of it.

Everything peaked for me yesterday, though, when co-worker Ralph Whitbeck and I had our article Building Windows Store Applications With jQuery 2.0 published on Nettuts+. Immediately following that, I was mentioned on the Interoperability @ Microsoft blog. And soon thereafter, mentioned on none other than TechCrunch, a site with over 1,600,000 tech-loving subscribers.

I imagine this type of thing happens every day to various developers. You put your nose down into a project that you are personally excited to be a part of. You work countless hours researching, writing, and testing. You meet a few exciting people along the way, and then one day you lift up your eyes to realize that you just played a part in something truly amazing.

The web is such an exciting place, and contributing to open-source projects is an incredibly rewarding thing. Fortunately, for myself and all of my peers, getting your hands dirty with such amazing projects is easier today than it has ever been before thanks to services like GitHub.

As for me, I’m looking forward to seeing how jQuery 2.0 and beyond are used in Windows Store applications, and perhaps being fortunate enough to contribute further to this amazing project in the future. You can do the same.

Taking the Internet Explorer Challenge

I’m going to use Internet Explorer 10 as my primary browser for one week. That’s one week without browsing, tweeting, or listening to turntable in Chrome (current “Browser of Choice”). That’s one week deep inside the bowels of the browser that burned me, and so many of my peers, so badly over the last decade. One week in the heart of the beast.

Why? Isn’t Internet Explorer supposed to be a thing of the past? A bad phase in the history of the web that we’re slowly recovering from? The faint image of the broken internet from yesterday, replaced by far more standards-compliant browsers like Firefox and Chrome? Well, yes, and no.

If I’m honest with myself, and everybody else, it’s not the browser that burned me. Internet Explorer dominated the market back in the day, when I got excited to see 3KB/sec downloads. It was installed alongside Netscape Navigator, but won me over pretty quickly.

The browser won a lot of people over, including corporations who went on to develop internal applications that depended on its implementation of HTML, CSS, and J(ava)Script. And then the world changed around them; around all of us.

Dial-up was becoming a thing of the past, and new browsers were creeping into the scene. Firefox rose like a phoenix from the ashes of Netscape, and then Google got into the game with Chrome. These later browsers took advantage of faster and more consistent connections and offered streamlined updates that happened silently in the background.

Internet Explorer was still dominating in the global market, but these antiquated versions from yesteryear were still in circulation, and still being actively used. While they were once the apple of our eye, we quickly jumped from them to the new breed of browsers. It wasn’t that Internet Explorer 3-8 were bad – they weren’t. It was the fact that the world around them changed, and changed quickly.

Fast forward to Internet Explorer 10; it is new, and has great support for standards. Most importantly though, it hints at having the capacity to auto-upgrade like its competition. So I was curious, do I have any reason to dislike Internet Explorer any longer? Is it just as good as Google Chrome, Mozilla Firefox, or Opera? What better way to find out than to use it as my primary Browser of Choice for one week.

Sunday, Bloody Sunday

It’s really tough retraining my mind to click my Internet Explorer icon instead of my Chrome icon. The icon that was once relegated to testing and debugging my code in yesterday’s browser is now my go-to destination for all casual browsing, tweeting, and more.

So far I’ve been using Internet Explorer for Bootstrap work, blogging from WordPress, Tweeting with TweetDeck, Facebook, and casual browsing online. While I didn’t spend much time in the browser today, I did do some tweeting from @tweetdeck’s web-application, and noticed that the scrollbars are pretty horrible looking – so I fixed them. Left and right for before and after.

(Image: TweetDeck scrollbars – before on the left, after on the right)

Unfortunately, it appears Twitter neglected Internet Explorer when they developed their dark theme. While they fixed up the styles for the scrollbar in WebKit, they failed to do anything remotely similar for Internet Explorer. I’ve notified them (and might have gotten their attention), so let’s hope they get word and make these changes. You can make similar changes in your applications using scrollbar-face-color and related properties (see the “see also” section on the previous link).
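
For the curious, here’s a sketch of both vendors’ approaches side by side (selectors and colors are illustrative):

/* Internet Explorer: proprietary properties on the scrolling element */
.scrollable {
    scrollbar-face-color: #333333;  /* the draggable thumb */
    scrollbar-track-color: #111111; /* the track behind it */
    scrollbar-arrow-color: #999999; /* the arrow glyphs */
}

/* WebKit: proprietary pseudo-elements */
.scrollable::-webkit-scrollbar { width: 8px; }
.scrollable::-webkit-scrollbar-thumb { background: #333333; }
.scrollable::-webkit-scrollbar-track { background: #111111; }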

I must admit, it would be awesome if we could control the properties of the scrollbar by setting properties on a pseudo-element instead of on any element that scrolls. It’s worth noting that in this territory, there currently is no W3C-accepted standard.

IE uses a proprietary extension in the form of prefixed properties, and WebKit uses a proprietary extension in the form of prefixed pseudo-elements. One can only hope there will be consensus and convergence in implementation.

No Case of the Mundays

Today was my first actual day of work in Internet Explorer. I mean, I’ve opened it up here and there during work in the past, but today I spent all of my time in Internet Explorer – and it went well. Nothing broke (that I noticed), nothing was complicated, all was well.

I did work on a CSS example for somebody today only to have them say “the antialiasing sucks,” but as it turned out they were viewing in Chrome, and I was viewing in Internet Explorer. Sure enough, if you create a hard-edge on an angled CSS gradient, it looks better in Internet Explorer than it does in Chrome. Here’s a quick comparison between Chrome 25 and Internet Explorer 10.

(Image: a hard-edged, angled gradient rendered in Chrome 25 vs. Internet Explorer 10)

This particular gradient is 25deg – oddly enough, Chrome draws a beautifully-smooth line when the gradient is changed to 45deg rather than 25deg. Don’t ask me why – I haven’t the slightest clue.
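
For reference, a hard edge like the one being compared is just two color-stops meeting at the same position – a sketch:

/* The antialiasing of the 50% edge is what differs between browsers */
.sample {
    background: linear-gradient(25deg, #222 50%, #eee 50%);
}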

OMG, Tuesday!

Wow, so today was a big day. When I started this blog post I was a little bummed that the only people able to take the IE Challenge would be those who have purchased Windows 8, or a machine that came loaded with Windows 8. This morning at 6am PST, the IEBlog announced Internet Explorer 10 for Windows 7!

On to my day though – today was spent largely in Google Docs. I noticed there were some small layout differences between Chrome and IE. For instance, the number of characters you can fit on one line before a wrap occurs differs between Chrome and IE. This was particularly bothersome, since one of my templates features right-aligned text left-padded with enough spaces to complete a full line of characters. All of these characters are then set with a dark background color.

I wound up taking an alternative route, replacing this approach with a single-cell, single-row table, and setting the background color of the cell instead of the background color of the text. This was far better and gave far more consistent results between IE and Chrome. No clue who is to blame, or how the two browsers came to diverge, but Chrome appeared to hold itself together better overall when it came to Google Docs.

Wednesday, Thursday, and Friday

So at this point it’s just difficult to find things to blog about. I instinctively click on the Internet Explorer 10 icon and go straight to my browsing. I don’t experience any issues with my favorite sites. I tweet using TweetDeck, check in with Mom on Facebook, drop images into imgur, broadcast using Google Hangouts, manage a channel on YouTube, help out where possible on Stack Overflow, and blog about all of it here in WordPress – no issues. Everything just works.

I don’t mean to give the impression that there isn’t any work to be done – there is, a lot. Sadly, while the Internet Explorer developers have been doing an amazing job with their product, we (the community) need to step up our game as well. We’ve got to start writing better code, and paying attention to language specifications and APIs, as well as the ways in which they’re implemented in various browsers.

I came across another “my site doesn’t work in IE” thread today. The website popped up in Quirks mode. Changing it to Standards didn’t magically fix it (as it does from time to time). Instead, pushing it to Standards mode resulted in an even more damaged experience.

The problem here was written all over the Source: apathy, carelessness, and so much more. We aren’t teaching passion for our craft today as well as we should. We teach people how to hammer out some markup, and then encourage them to feed off of the visual presentation rather than compliance with the specification. New developers code, and refresh, code, and refresh. Rarely, if ever, making a trip to w3.org.

In early 2012 I came across a rather iconic website that was rendering well in Chrome and Firefox, but the side-bar navigation (made up of lists within lists) was severely broken in Internet Explorer 9. The problem wound up being an unclosed list item; I modified the response in Fiddler, issued a new request in IE9, and the page magically worked. This designer tested their markup in a browser that deviated from what the code explicitly requested (nasty nesting), and instead did what it thought the designer intended. While this resulted in the proper formatting, it breaks the web.

This was something I grew to appreciate in Internet Explorer 9 – it was brutally honest. You got what you asked for (generally speaking), and when your document was looking ugly, it was because your code was telling it to. Other browsers would inject a dose of speculation into their rendering processes, which adds far too much variability to the web.

Not Perfect, But Better

A week of using Internet Explorer as my primary browser convinced me of at least two things: 1) Microsoft has come a long way with their product, and it deserves a second look. And 2) there’s still work to be done. While surfing the web on Internet Explorer 10 doesn’t feel like sucking on broken glass, it still leaves some areas for improvement.

I try to be a little less critical of massive software, given the enormous complexity of creating, developing, and maintaining it, but there are areas where Internet Explorer 10 can be improved. I come across these items from time to time, and try to create simplified, reproducible examples when possible to share with others (and with whomever I have access to within Microsoft). One such issue is the :hover bug related to table rows. You can see this online at http://jsfiddle.net/jonathansampson/FCzyf/show/ (tested in Internet Explorer 10 on Windows 8 only).

Even with its flaws, Internet Explorer is still leading the way in other areas. It’s currently one of the only browsers to support pseudo-element animation, the Pointer model (though you can get ‘Pointium‘), and some CSS level 4 text properties.

My biggest concern lately is not with the browser itself, Microsoft has convinced me that their product is reliable. What concerns me lately is with the release cycles. Can they keep this new browser breathing, or are they going to continue resuscitating it on a bi-annual cycle? If so, we’ll quickly find the web moving out ahead of it again, and history will repeat itself. I find a glimmer of hope in the newest “About Internet Explorer” dialog.

(Image: the “Install new versions automatically” option in the About Internet Explorer dialog)

In my sincere opinion, Internet Explorer (in just a few years) went from being the bane of my existence (#somuchdrama) to being a bright luminary, back competing in the pack of modern browsers. Will it stay among the pack? Time will tell. Until then, welcome back, Internet Explorer.

Download Internet Explorer 10, give it a week, and post your results below.