Taming the Lawless Web with Feature Detection

The Lawless Web

Over the past 15 years or so that I’ve been developing websites, I’ve seen various conventions come and go. Common practices arise, thrive for a while, and die. I myself spent a great portion of my past hammering out lines of tabular markup (and teaching classrooms to do the same) to build page layouts.

When I wasn’t building complicated table-based layouts littered with colspans, I was double-wrapping elements to iron out the differences between browser-interpretations of the box-model.

Fortunately, these practices have died, and others like them continue to die every day.

The Web Hasn’t Been Completely Tamed

While atrocities like table-based layouts are quickly becoming a thing of the past, other old habits have chosen not to go as peacefully. One that has managed to stick around for some time is user-agent sniffing: the practice of inferring a browser’s abilities by reading an identification token.

var browser = {
    vendor: navigator.userAgent.match(/Chrome|Mozilla|Opera|MSIE/),
    version: navigator.userAgent.match(/\d+/)
};

if ( browser.vendor == "Mozilla" && browser.version == 5 ) {
    // This matched Chrome, Internet Explorer, and a PlayStation!
    // Nailed it.
}

Since shortly after the dawn of the epoch, developers have been scanning the browser’s navigator.userAgent string for bits and pieces of data. They then couple this with knowledge of which browsers support which features, and from that draw conclusions about the user’s browser and machine. Sometimes their conclusions are correct, but the ends don’t justify the means.

Don’t Gamble With Bad Practices

Sniffing a browser’s user agent string is a bit like playing Russian Roulette. Sure, you may have avoided the loaded chamber this time, but if you keep playing the game you will inevitably increase your chances of making a rather nasty mistake.

The navigator.userAgent string is not, and has never been, immutable. It changes. It gets modified. As browsers are released, as new versions are distributed, as browsers are installed on different operating systems, as software is installed in parallel to the browser, and as add-ons are installed within the browser, this string gets modified. So what you sniff today, is not what you will sniff tomorrow.

Have a quick look at a handful of examples for Chrome, Firefox, Opera, and Internet Explorer. The bottom line is that you can’t be sure what you’ll get when you approach the user agent string buffet. As is immediately clear, the approach above wouldn’t even match version 12 of Opera consistently across its variations.
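
To see just how badly the naive regex above misfires, here is a quick sketch using abbreviated, illustrative user agent strings (not complete real-world ones). Every modern browser leads its UA string with “Mozilla/5.0”, so the alternation matches “Mozilla” first, every time:

```javascript
// Same sniffing logic as the earlier snippet, wrapped in a function.
function sniff(ua) {
    return {
        vendor: ua.match(/Chrome|Mozilla|Opera|MSIE/)[0],
        version: Number(ua.match(/\d+/)[0])
    };
}

// Abbreviated UA strings for illustration:
var chrome = sniff("Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.17 Chrome/24.0");
var ie10   = sniff("Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2)");

// Both report the exact same thing:
// chrome → { vendor: "Mozilla", version: 5 }
// ie10   → { vendor: "Mozilla", version: 5 }
```

Two very different browsers, one indistinguishable result – which is exactly why the “Nailed it” comment above is a joke.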

So, have I thoroughly convinced you that sniffing is bad? And have you sworn it off as long as you shall live? Let me give you something to replace it; something reliable that will nearly never let you down – feature detection.

Go With What’s Reliable

Feature detection is the practice of giving the browser a test, directly related to the feature you need, and seeing whether it passes or fails. For instance, we can see if the browser supports the <video> tag by quizzing it on whether it places the correct properties on a freshly-created instance of the element. If the browser fails to add the .canPlayType method to the element, we can assume support isn’t there, or isn’t sufficient to be relied upon.

This approach doesn’t require us to make any assumptions. We don’t couple poorly-scavenged data with naive assumptions about which browsers support which feature. Rather, we invite the browser out onto our stage, and we demand it do tricks. If it succeeds, we reward it. If it fails, we ditch it and take another approach.

What Does “Reliable” Look Like?

So how exactly do we quiz the browser? Feature-detections come in all shapes and sizes; some very small and concise, while others can be somewhat lengthy and verbose – it all depends on what feature you’re trying to test for. Working off of our example of checking for <video> support, we could do the following:

if ( document.createElement("video").canPlayType ) {
    // Your browser supports the video element
} else {
    // Aw, man. Gotta load Flash.
}

This approach creates a brand new video element for testing. We then check for the presence of the .canPlayType method. If this property exists, and points to a function, the expression will be truthy and our test will pass. If the browser doesn’t understand what a video element is, there won’t be a .canPlayType method, and this property will instead be undefined – which is falsy.
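
The same idea generalizes into a tiny reusable helper. Here is a minimal sketch; the stub objects below stand in for DOM elements (since there’s no document outside a browser), but in real code you would pass document.createElement("video") instead:

```javascript
// Does this object expose the given method? That's the whole test.
function supportsMethod(el, method) {
    return typeof el[method] === "function";
}

// Stand-ins for what a modern and a legacy browser would hand you:
var videoLike = { canPlayType: function () { return "maybe"; } };
var legacyEl  = {}; // no canPlayType at all

supportsMethod(videoLike, "canPlayType"); // → true
supportsMethod(legacyEl, "canPlayType");  // → false
```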

The Work Has Been Done For You

You could be rightly freaked out about now, thinking about all of the tests you would have to write. You might want to see whether localStorage, gradients, video, audio, or geolocation are supported. Who on Earth has time to write all of those tests, let alone learn the various technologies intimately enough to do so? This is where the community has saved the day.

Modernizr is an open-source library packed full of feature-detection tests authored by dozens of talented developers, all bundled up in an easy-to-use solution that you can drop into any new project. You build your custom download of Modernizr, drop it into your project, and then conditionally deliver the best possible experience to all visitors.

Modernizr exposes a global Modernizr object with various properties. You can test for the support of certain features simply by calling properties on the Modernizr object.

if ( Modernizr.video ) {
    // Browser supports the video element
}

Additionally, it can optionally add classes to your <html> element so your CSS can get in on the action too! This allows you to make conditional adjustments to your document based upon feature support.

html.video {
    /* Woot! Video support. */
}

html.no-video #custom-video-controls {
    /* Bummer. No video support. */
    display: none;
}

Building A Better Web

By using feature-detection, and forever turning your back on user agent sniffing, you create a more reliable web that more people on more devices and browsers can enjoy. No longer are you bound to hundreds upon hundreds of user agent strings, or naive assumptions that will come back and bite you in the end. Feature-detection liberates you so that no babysitting of the browser landscape is necessary.

Code for features and APIs. Don’t code for specific browsers. Don’t get caught writing “Best viewed in WebKit” code – build something for everybody. When you focus on features, rather than ever-changing strings that don’t give you much insight, you help make the Internet better for us all. Do it for the children 😉

You’re Not Alone

When you set out to do things the right way, you will undoubtedly run into obstacles and complications. Let’s face it: quality work oftentimes requires more of an investment than the slop that is so often slung out onto the web. The great news, though, is that you’re not alone – the web is more open than ever before, full of those willing to help you along the way.

Drop in to Stack Overflow to catch up on the various feature detection questions being asked there. Or if you’re trying to get Chrome and Internet Explorer to play nice with one another, post a question on Twitter using the #IEUserAgents hashtag. This notifies an “agent” to come to your aid.

Just be sure to pay it forward. As you learn, and grow, help others to do so as well. Maybe even write a few feature-tests and contribute them back into Modernizr along the way.

Visualizing :hover Propagation

The :hover pseudo-class can be tossed into a CSS selector to target the state of the element when the user’s cursor is currently positioned over the element. Due to the hierarchical structure of the DOM, anytime you activate the :hover state of an element, you activate it for all ancestral elements too. Wanting a quick illustration for this, I took to JSFiddle.


IE10 Gotcha: Animating Pseudo-Elements on Hover

I came across a question on Stack Overflow today asking about animating pseudo-elements. This is something I have lamented over in the past since it’s been a supported feature in Internet Explorer 10 for a while, but only recently implemented in Chrome version 26. As it turns out, there was a small edge-case surrounding this feature that I had yet to encounter in Internet Explorer 10.

The following code does two things; first it sets the content and transition properties of the ::after pseudo-element for all paragraphs (there’s only one in this demo). Next, in the :hover state (or pseudo-class), it changes the font-size property to 2em. This change will be transitioned over 1 second, per the instruction in the first block.

p::after {
    content: ", World.";
    transition: font-size 1s;
}

p:hover::after {
    font-size: 2em;
}

Although simple and straightforward, this demo (as-is) doesn’t work in Internet Explorer 10. Although IE10 supports :hover on any element, and IE10 supports pseudo-elements, and IE10 supports animating pseudo-elements, it does not support all three, together, out of the box.

If you change your markup from using a paragraph to using an anchor element, the code begins to work. This seems to suggest some type of regression has taken place, causing Internet Explorer 10 to behave similarly to Internet Explorer 6, where only anchors could be “hovered” — support for :hover on any element was added in version 7.

Upon playing with this a bit, there appear to be a few work-arounds:

  1. Change your markup
  2. Use sibling combinators in your selector
  3. Buffer a :hover selector on everything

Let’s look at these one at a time.

Change Your Markup

Rather than having a paragraph tag, you could nest a span within the paragraph, or wrap the paragraph in a div. Either way, you’ll be able to modify your selector in such a way so as to break up the :hover and ::after portions. When the user hovers over the outer element, the content of the inner-element’s pseudo-element is changed.

I don’t like this option; it’s far too invasive and demanding.

Use Sibling Combinators in Your Selector

This was an interesting discovery. I found that if you further modify your selector to include consideration for sibling elements, everything is magically repaired. For instance, the following targets our paragraph based on some other sibling paragraph:

p~p:hover::after {
    font-size: 2em;
}

This is interesting; it doesn’t break the connection between :hover and ::after, but it does modify the root of the selector, which somehow causes things to repair themselves.

What I don’t like about this approach is that it requires you to explicitly declare the sibling selector, as well as the element you’re wishing to target. Of course, we could fix this a bit by targeting a class or id, as well as going with a different sibling combinator:

*~.el:hover::after {}

This targets any element with the .el class that is a sibling of any other element. This gets us a little closer, but still, a bit messy. It requires us to modify every single selector that is part of a pseudo-element animation.

Buffering the :hover Selector

Provided with the question on Stack Overflow was one solution to the problem. As it turns out, if you provide an empty set of rules for the hover state, this fixes all subsequent attempts to animate pseudo-element properties. What this looks like follows:

p:hover {}

p::after {
    content: ", World.";
    transition: font-size 1s;
}

p:hover::after {
    font-size: 2em;
}

Oddly enough, this small addition does indeed resolve the issue. Given the nature of CSS, you can even drop the p portion and just go with the following, fixing it for all elements:

:hover {}

This too results in a functioning Internet Explorer 10 when it comes to animating pseudo-element properties.

Experiment: http://jsfiddle.net/jonathansampson/N4kf9/

Taking the Internet Explorer Challenge

I’m going to use Internet Explorer 10 as my primary browser for one week. That’s one week without browsing, tweeting, or listening to turntable in Chrome (current “Browser of Choice”). That’s one week deep inside the bowels of the browser that burned me, and so many of my peers, so badly over the last decade. One week in the heart of the beast.

Why? Isn’t Internet Explorer supposed to be a thing of the past? A bad phase in the history of the web that we’re slowly recovering from? The faint image of the broken internet from yesterday, replaced by far more standards-compliant browsers like Firefox and Chrome? Well, yes, and no.

If I’m honest with myself, and everybody else, it’s not the browser that burned me. Internet Explorer dominated the market back in the day when I got excited to see 3kb/sec downloads. It was installed alongside Netscape Navigator, but won me over pretty quickly.

The browser won a lot of people over, including corporations who went on to develop internal applications that depended on its implementation of HTML, CSS, and J(ava)Script. And then the world changed around them; around all of us.

Dial-up was becoming a thing of the past, and new browsers were creeping into the scene. Firefox rose like a phoenix from the ashes of Netscape, and then Google got into the game with Chrome. These later browsers took advantage of faster and more consistent connections and offered streamlined updates that happened silently in the background.

Internet Explorer was still dominating in the global market, but these antiquated versions from yesteryear were still in circulation, and still being actively used. While they were once the apple of our eye, we quickly jumped from them to the new breed of browsers. It wasn’t that Internet Explorer 3-8 were bad – they weren’t. It was the fact that the world around them changed, and changed quickly.

Fast forward to Internet Explorer 10; it is new, and has great support for standards. Most importantly though, it hints at having the capacity to auto-upgrade like its competition. So I was curious, do I have any reason to dislike Internet Explorer any longer? Is it just as good as Google Chrome, Mozilla Firefox, or Opera? What better way to find out than to use it as my primary Browser of Choice for one week.

Sunday, Bloody Sunday

It’s really tough retraining my mind to click my Internet Explorer icon instead of my Chrome icon. The icon that was once relegated to testing and debugging my code in yesterday’s browser is now my go-to destination for all casual browsing, tweeting, and more.

So far I’ve been using Internet Explorer for Bootstrap work, blogging from WordPress, Tweeting with TweetDeck, Facebook, and casual browsing online. While I didn’t spend much time in the browser today, I did do some tweeting from @tweetdeck’s web-application, and noticed that the scrollbars are pretty horrible looking – so I fixed them. Left and right for before and after.


Unfortunately, it appears Twitter neglected Internet Explorer when developing their dark theme. While they fixed up the styles for the scrollbar in WebKit, they failed to do anything remotely similar in Internet Explorer. I’ve notified them (and might have gotten their attention), so let’s hope they get word and make these changes. You can make similar changes in your applications using scrollbar-face-color and related properties (see the “see also” section on the previous link).

I must admit, it would be awesome if we could control the properties of the scrollbar by setting properties on a pseudo-element instead of on any element that scrolls. It’s worth noting that in this territory, there currently is no w3c-accepted standard.

IE uses a proprietary extension in the form of prefixed-properties, and WebKit uses a proprietary extension in the form of prefixed-pseudo-elements. One can only hope there will be a consensus and convergence in implementation.

No Case of the Mundays

Today was my first actual day of work in Internet Explorer. I mean, I’ve opened it up here and there during work in the past, but today I spent all of my time in Internet Explorer – and it went well. Nothing broke (that I noticed), nothing was complicated, all was well.

I did work on a CSS example for somebody today only to have them say “the antialiasing sucks,” but as it turned out they were viewing in Chrome, and I was viewing in Internet Explorer. Sure enough, if you create a hard-edge on an angled CSS gradient, it looks better in Internet Explorer than it does in Chrome. Here’s a quick comparison between Chrome 25 and Internet Explorer 10.


This particular gradient is 25deg – oddly enough, Chrome draws a beautifully-smooth line when the gradient is changed to 45deg rather than 25deg. Don’t ask me why – I haven’t the slightest clue.

OMG, Tuesday!

Wow, so today was a big day. When I started this blog post I was a little bummed that the only people able to take the IE Challenge would be those who have purchased Windows 8, or a machine that came loaded with Windows 8. This morning at 6am PST, the IEBlog announced Internet Explorer 10 for Windows 7!

On to my day though – today was spent largely in Google Docs. I noticed there were some small layout differences between Chrome and IE. For instance, the number of characters you can fit on one line before a wrap occurs differs between Chrome and IE. This was particularly bothersome since one of my templates features right-aligned text left-padded with enough spaces to complete a full line of characters. All of these characters are then set with a dark background color.

I wound up taking an alternative route, replacing this approach with a single-cell, single-row table, and setting the background color of the cell instead of the background color of the text. This gave far more consistent results between IE and Chrome. No clue who is to blame, or how the two browsers came to diverge, but Chrome appeared to hold itself together better overall when it came to Google Docs.

Wednesday, Thursday, and Friday

So at this point it’s just difficult to find things to blog about. I instinctively click on the Internet Explorer 10 icon and go straight to my browsing. I don’t experience any issues with my favorite sites. I tweet using TweetDeck, check in with Mom on Facebook, drop images into imgur, broadcast using Google Hangouts, manage a channel on YouTube, help out where possible on Stack Overflow, and blog about all of it here in WordPress – no issues. Everything just works.

I don’t mean to give the impression that there isn’t any work to be done – there is, a lot. Sadly, while the Internet Explorer developers have been doing an amazing job with their product, we (the community) need to step up our game as well. We’ve got to start writing better code, and paying attention to language specifications and APIs, as well as the ways in which they’re implemented in various browsers.

I came across another “my site doesn’t work in IE” thread today. The website popped up in Quirks mode. Changing it to Standards didn’t magically fix it (as it does from time to time). Instead, pushing it to Standards mode resulted in an even more damaged experience.

The problem here was written all over the source: apathy, carelessness, and so much more. We aren’t teaching passion for our craft as well as we should. We teach people how to hammer out some markup, and then encourage them to judge their work by its visual presentation rather than by its compliance with the specification. New developers code and refresh, code and refresh – rarely, if ever, making a trip to w3.org.

In early 2012 I came across a rather iconic website that was rendering well in Chrome and Firefox, but the side-bar navigation (made up of lists within lists) was severely broken in Internet Explorer 9. The problem wound up being an unclosed list item; I modified the response in Fiddler, issued a new request in IE9, and the page magically worked. This designer had tested their markup in browsers that deviated from what the code explicitly requested (nasty nesting) and instead did what they thought the designer intended. While this resulted in the proper formatting, it breaks the web.

This was something I grew to appreciate in Internet Explorer 9 – it was brutally honest. You got what you asked for (generally speaking), and when your document was looking ugly, it was because your code was telling it to. Other browsers would implement a dose of speculation into their rendering processes, which adds far too much variability to the web.

Not Perfect, But Better

A week of using Internet Explorer as my primary browser has convinced me of at least two things: 1) Microsoft has come a long way with their product, and it deserves a second look. And 2) there’s still work to be done. While surfing the web in Internet Explorer 10 doesn’t feel like sucking on broken glass, it still leaves some areas for improvement.

I try to be a little less critical about massive software, given the enormous complexity to create, develop, and maintain it, but there are areas where Internet Explorer 10 can be improved. I come across these items from time to time, and try to create simplified reproducible examples when possible to share with others (and with whomever I have access to within Microsoft). One such issue is the :hover bug related to table-rows. You can see this online at http://jsfiddle.net/jonathansampson/FCzyf/show/ (Tested in Internet Explorer 10 on Windows 8 only).

Even with its flaws, Internet Explorer is still leading the way in other areas. It’s currently one of the only browsers to support pseudo-element animation, the Pointer model (though you can get ‘Pointium‘), and some CSS level 4 text properties.

My biggest concern lately is not with the browser itself, Microsoft has convinced me that their product is reliable. What concerns me lately is with the release cycles. Can they keep this new browser breathing, or are they going to continue resuscitating it on a bi-annual cycle? If so, we’ll quickly find the web moving out ahead of it again, and history will repeat itself. I find a glimmer of hope in the newest “About Internet Explorer” dialog.

Install New Versions Automatically

In my sincere opinion, Internet Explorer (in just a few years) went from being the bane of my existence (#somuchdrama), to being a bright luminary, back competing in the pack of modern browsers. Will it stay among the pack? Time will tell. Until then, welcome back Internet Explorer.

Download Internet Explorer 10, give it a week, and post your results below.

Repeating Lateral Text Shadows

I’ve wanted to do this effect for some time now. It consists of one or more lines of text with variable indentation, and seemingly ever-repeating washed out copies of the text on both sides running off screen.

I decided to use text-shadow to implement it, though support is not as broad as I’d like to see. The primary pain in this is with the offsets for the text-shadows. Due to the varying length of each line, you have to provide specific offsets for each line having the effect. This could be a lot less painful if it were built with a preprocessor.
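
As a sketch of what a preprocessor (or a few lines of script) could do for you, here’s a small function that generates the repeated lateral offsets instead of writing each one by hand. The step and color values here are illustrative, not the ones from the actual demo:

```javascript
// Build a comma-separated text-shadow value with `copies` washed-out
// copies of the line running off to both the left and the right.
// `step` is the horizontal distance (in px) between adjacent copies.
function lateralShadows(step, copies, color) {
    var shadows = [];
    for (var i = 1; i <= copies; i++) {
        shadows.push(-i * step + "px 0 " + color); // copy to the left
        shadows.push(i * step + "px 0 " + color);  // copy to the right
    }
    return shadows.join(", ");
}

lateralShadows(200, 1, "gray");
// → "-200px 0 gray, 200px 0 gray"
```

In a real stylesheet you’d assign the result to element.style.textShadow, with one call per line of text to account for each line’s width.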

I’ve uploaded the demo over at Dabblet, JSFiddle, and CodePen. And for those who don’t have a browser that supports text-shadow, here’s a screenshot of the result:


One really neat side-effect of this approach is that because it uses shadows, it inherently gives off a responsive feel; in the demos above you can see that it extends as far left and right as its parent permits.

Update: Now Animated using @keyframes

Sometimes Chrome is the broken browser (Or, How Chrome failed me twice in one night)

For several years it’s been generally accepted by the web-development community that Internet Explorer is nothing more than a means by which developers are subjected to a great deal of emotional and mental trauma. Well, that has changed in the last couple of versions, but most developers are still licking their slow-healing wounds.

One thing that bothers me though is how so many automatically feel as though these types of issues only exist with Internet Explorer, generally touting Google Chrome as the full-featured flawless alternative. Granted, Google has done an outstanding job with the Chrome browser, and I personally use it for most of my work, but Chrome is in no way special. It too is capable of causing a lot of upset – such was the case this evening for me.

I worked on a couple of marquee demos over the last few days which gave me another idea. I wanted to cover an element with its ::after pseudo-element, apply a transparent-to-black background on the pseudo-element, and then animate it off to the right using @keyframes. I didn’t want this to be visible as it moved off to the right, so I applied a parent element of the same dimensions, and set its overflow to hidden. Cue the tears.

Chrome 24 wouldn’t respond. It just sat there, frozen. I could have sworn I did something wrong, but the demo was so simple in its construction. Where was I going wrong? I ended up testing the same demo in Internet Explorer 10, and found that it immediately kicked off without any problems. So, back to Chrome – it turns out there was a question on Stack Overflow asked some time back regarding this very issue, which led me to news that Chrome had apparently fixed this in version 26 (unstable at the time of this writing).

Opening up Canary, I was pleased to see that my pseudo-elements were indeed being animated. Nice work Chrome! This was the first issue tonight where Internet Explorer 10 was working as expected, and Chrome was not. Next I noticed the pseudo-element bleeding out over the rounded edges of the parent; that’s not supposed to happen when you’ve got overflow:hidden set – right?

Back to Internet Explorer 10, I confirmed that overflow:hidden does as it advertises, and the pseudo-element is not visible outside of its parent’s rounded corners – way to go, Internet Explorer 10! But I still needed an unequivocal demonstration of this bug to confirm whether Chrome was indeed busted and misbehaving. That demo is now available online. As of today (January 20, 2013), this demo is broken in all versions of Chrome, but working in Internet Explorer 10 and Firefox.

So what’s the story here? The take-away is that Internet Explorer is no longer the browser it used to be. It’s a fully-qualified modern browser capable of some really killer things. It is well-built, and carries as much respect for standards as its competition. Chrome, on the other hand, did not come down to us from the gods of Mount Chrominus. It too is flawed in some ways, while brilliant in others.

Jumping on the one-browser-to-rule-them-all bandwagon doesn’t help the developer’s plight; it worsens it. Advocate standards, not browsers. Get behind good practices such as progressive enhancement, feature detection, and, when necessary, polyfills. Don’t champion a browser; champion the web.

Sometimes Chrome is the broken browser – it happens.


CSS3 Marquees

I came across an old question on Stack Overflow that I answered years ago. It was asking how to scroll text, vertically, in a marquee-like fashion. At the time, I had answered that you would need to use Flash, or JavaScript/jQuery. I thought to myself, you would most certainly need to animate the top value of a positioned element, or maybe increase a negative margin or something.

Well, we live in a very different world today. And today, we have access to @keyframes, animation, and so much more in modern browsers. This got me wondering what type of simple marquee I could whip up in an instant (wound up playing with this for a little over an hour). What I came up with is posted below.

Take some time to read through the embedded comments, and play with the code. If you have any questions please feel free to post them in the comments below and I’ll assist as I am able.



Use Vendor Prefixes Carefully

Vendor-prefixed properties and values in CSS are a beautiful thing; they allow you to do some really awesome stuff that would otherwise require a copy of Photoshop on your machine. In mere seconds you can whip up gradients, reflections, and even animations.

All of this is so appealing that people forget the fact that these are prefixed for a reason – they’re experiments, and should be viewed as such. That doesn’t mean you can’t use them, it just means you need to do so cautiously.

Don’t use these features in such a way that your page is severely broken without them – that’s not cool. Don’t use them in an unbalanced way either, providing prefixes for one browser but not for the others. And don’t use them in an anachronistic way, giving higher priority to prefixed experimental implementations over un-prefixed standardized implementations.

Oftentimes I come across CSS that looks like this:

.gradientBox {
  background-image: linear-gradient(0deg, red, green);
  background-image: -moz-linear-gradient(0deg, red, green);
  background-image: -webkit-linear-gradient(0deg, red, green);
}

There are a few problems with this. For starters, only Firefox and Chrome/Safari are accounted for in the prefixes. It may very well be that Internet Explorer and Opera have provided an implementation under their own prefixes, -ms- and -o- respectively. You should provide for those as well.

Additionally, the unprefixed (which would be the standardized implementation) property is at the top, meaning it will be evaluated first, followed by the others. This means if you visit the page in a browser that supports both the -moz- prefix and its unprefixed alternative, the unprefixed implementation will be tossed out in favor of the experimental prefixed version – this isn’t cool.

So how should it look?

.gradientBox {
  background-image: -webkit-linear-gradient(0deg, red, green);
  background-image: -moz-linear-gradient(0deg, red, green);
  background-image: -ms-linear-gradient(0deg, red, green);
  background-image: -o-linear-gradient(0deg, red, green);
  background-image: linear-gradient(0deg, red, green);
}

The browser evaluates the rules from top to bottom. Internet Explorer will ignore any -webkit- properties, as well as any -moz- or -o- properties. If it supports the -ms- prefix, it will implement this rule. When it comes to the un-prefixed version, it will determine whether that value is supported or not. If it is not, it will be ignored, and the -ms- implementation will remain. Otherwise, the un-prefixed version will take precedence over the prefixed version. The same flow exists for the other major browsers.

But that’s a lot of writing, right? Fortunately we have a lot of great tools today that can reduce the effort on your page. From pre-processors like SASS and Compass to entire editors like Microsoft’s Visual Web Developer 2012, many options exist that will fill-out these values for you. We even have a great JavaScript option in -prefix-free which lets you write only the un-prefixed version – it will determine the appropriate prefix (if necessary) and insert it when the page is loaded.
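
The detection idea such tools rely on can be sketched in a few lines. This is a rough illustration, not -prefix-free’s actual code; the stub object below stands in for a real element.style, since there’s no DOM here:

```javascript
// Probe a style object for the first property name the browser
// understands. List the un-prefixed name first so the standard wins.
function firstSupported(style, candidates) {
    for (var i = 0; i < candidates.length; i++) {
        if (candidates[i] in style) {
            return candidates[i];
        }
    }
    return null; // no variant supported at all
}

// A browser that only knows the -moz- prefixed form:
var mozOnly = { MozTransition: "" };
firstSupported(mozOnly, ["transition", "MozTransition", "webkitTransition"]);
// → "MozTransition"
```

Run once at load time, a helper like this tells you which variant (if any) to write, so your stylesheet or script never guesses.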

Bottom line is this: we can enjoy these experimental features without breaking the web for people. If you’re going to use prefixed properties and values in CSS, do so wisely.

Free Web-Development Course

I have recently started offering a free web-development course both in person and online. The title of this meetup is “Home-Brew Web Development” and it takes place every Thursday evening at 7:30pm Eastern Standard Time (unless otherwise stated). Each meetup is streamed live online at http://learn.sampson.ms, and archived on YouTube immediately afterwards.

We will start by learning HTML, followed by CSS. Eventually we will get into more advanced topics but I plan to keep it simple for now. This course is intended for those who have zero (or very little) experience in web-development, but are interested in it as a hobby, or a career-opportunity.

If you would like to receive regular updates regarding this course, future meetings, and more, you may subscribe to the newsletter via the link below. If at anytime you wish to stop receiving emails and stop attending the meetup, you’re totally free to go – it’s your call 🙂

Subscribe: http://eepurl.com/u_A61
Episodes: http://www.youtube.com/playlist?list=PL3IYnZmsleiXRVk1G-dcX4AJ_9kcSIO99

Of Dice, Dabblet, and CSS

I discovered dabblet.com some time back and never really spent much time there. Don’t get me wrong, it made a great first impression, however I am not super-talented in the CSS department, and it seems to be a tool for those who are.

I decided to return this evening and try my hand at creating some dice with nothing but CSS. I recently became a Potentate of the Rose, so this was a relevant and timely interest. After a couple hours of distracted back-and-forth, I finally had something pretty attractive.

While I don’t consider myself much of a CSS power-house, dabblet.com made the tedious process of building these dice super-fun and very palatable. If you find yourself giving dabblet a run (and I suggest you do), be sure to thank @leaverou for all of her hard work on such an awesome tool.

The final result can be seen here, or in the framed demo below.