Erie Design Partners
  • Adapting To A Responsive Design (Case Study)


    This is the story of what we learned during a redesign for our most demanding client — ourselves! In this article, I will explain, from our own experience of refreshing our agency website, why we abandoned a separate mobile website and will review our process of creating a new responsive design.

    At Cyber-Duck, we have been designing both responsive websites and adaptive mobile websites for several years now. Both options, of course, have their pros and cons. With a separate mobile website, you have the opportunity to tailor content and even interactions to the context of your users, whereas a responsive website means better content parity for users and a single website to maintain.

    Why Adapt To A Responsive Design?

    Our redesign story starts in August 2012. Until then, our previous strategy of having separate mobile, tablet and desktop websites didn’t exactly perform badly; they drove conversions, and user engagement appeared to be good relative to our desktop website. I should mention that this strategy was borne purely out of the need to quickly tailor our ageing desktop website to the increasing number of tablet and mobile users at the time.

    We used jQuery Mobile to create our previous mobile-optimized website as a quick fix for the increasing number of mobile users on our ageing desktop website.

    We produced our tablet and mobile websites specifically with users of these devices in mind — performance was our top priority. We wanted to improve on the loading time of our “desktop” website dramatically; the desktop home page was 2.2 MB, with 84 HTTP requests, and the mobile home page was still quite large, at 700 KB, with 46 HTTP requests. We had also designed the interfaces specifically with touch in mind, using jQuery Mobile to enhance the user experience with touch gestures.

    Changing Our Approach

    Despite this, several factors led us to decide that this approach was no longer sustainable for our own website:

    • having to support multiple code bases,
    • content management,
    • the emergence of new mini-tablets and “phablets.”

    The first two were not ideal, but at least manageable. The third, however, was a deal-breaker. OK, so we could have designed a website optimized for mini-tablets, but with so many more Web-enabled devices of all shapes and sizes entering the market every day, it would have been only a matter of time before we needed to think about optimizing for new form factors.

    We wanted our new website to be easier to maintain and more future-friendly for the inevitable influx of new form factors.

    It was at this point that we decided to completely overhaul all three websites and create a responsive design that would provide the best possible experience to all of our users, regardless of how they accessed our website.

    Setting Goals for the Responsive Design

    At the very start of this overhaul, we set ourselves some simple goals, or principles if you like, that we wanted to achieve with our responsive design:

    1. Speed
      Performance affects everyone.
    2. Accessibility
      It should work with no styles, backgrounds or JavaScript.
    3. Content parity
      The same content and functionality should be on all platforms.
    4. Device-agnostic
      Leave no platform behind.
    5. Future-friendly
      Cut down on maintenance.

    Based on these goals, our starting point for the design was to review our existing mobile website and to use it as a base for our responsive design. We explored how we could enhance for wider screens, rather than attempt to squeeze our previous desktop website down to mobile.

    We started by speaking to some of our trusted customers about what they liked about our website, what they didn’t really like, and what was important to them when searching for a digital agency.

    We also used analytics data from our previous website, using a mixture of Google Analytics, Lead Forensics and CrazyEgg to help us better understand what existing users wanted and needed from our website. As a result, we were able to streamline and prioritize a content strategy based on how our users actually interact with the website.

    Our design team used card-sorting exercises to reorganize our existing content for the new website.

    Making Performance A Priority

    A potential pitfall of responsive Web design, one you don’t find with a separate mobile website, is that performance can suffer, especially if you simply hide content using display: none at certain screen widths. We wanted to avoid this issue by putting the speed of our website at the heart of all design and technology decisions. The advantage is that a better-performing website benefits all users, not just mobile users.

    To achieve this, we set a performance budget — a set of targets to improve the speed and size of our new website. For mobile users, we wanted a website that performed at the very least comparably to our existing mobile website; so, we wanted to load no more than 40 HTTP requests and 500 KB of data for our mobile breakpoint. (This was just the start. Our next step was to reduce this to less than 100 KB.)

    Third-Party Scripts

    The easiest way to trim the fat was to strip down third-party scripts as much as possible. According to Zurb, “to load the Facebook, Twitter and Google social media buttons for a total of 19 requests takes 246.7 KB in bandwidth.” As a result, we replaced heavy social-media plugins with lightweight social media links.

    Replacing heavy third-party social buttons with simple social media links can significantly reduce HTTP requests and page-loading times.

    While some essential tracking scripts had to stay, we ensured that they would load after the content by putting them at the bottom of the body element in the HTML document and in an external scripts file.

    Did We Really Need A CMS?

    Early on in discussing the requirements for the new website, we considered whether we even needed a content management system (CMS). After all, as you’d expect in a digital agency, most of the team members are familiar with HTML, CSS and Git, so we could certainly manage our content without a CMS.

    By using server-side performance-monitoring tools such as New Relic, we could see that our previous CMS was a key factor in the slow page-loading times. Thus, we took the fairly drastic decision to entirely remove the CMS from our website. We made an exception for our blog, which, due to the volume and frequency of content being published, still required a CMS to be managed effectively.

    The previous home page queried the database server 1,459 times, for a total execution time of 2.34 seconds.

    Our old website was built with a model-view-controller (MVC) architecture connected to the WordPress CMS. To give you an example, a typical WordPress page takes around 600 to 1,500 queries to load; the database server is hit hundreds of times for every page view. By simply removing the CMS, we reduced this to zero in one fell swoop.

    The team developed early prototypes to see how we could improve performance and responsiveness.

    By removing the CMS for static pages, we eliminated the need for a database and dynamic templates. Using the popular PHP framework Laravel, we implemented a custom “dynamic route and static template” system. This means that each time a URL is called on our website, the Laravel router knows exactly which template to load by matching the URL to the template’s name, and the template already has all of the content laid out statically in HTML.
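The routing idea can be sketched as a pure mapping from URL to static template file. The snippet below is an illustration in plain JavaScript, not the actual Laravel PHP, and the `templates/` path and naming scheme are assumptions:

```javascript
// Illustrative sketch of the "dynamic route and static template"
// idea, not the actual Laravel code: every URL maps directly to a
// static template named after the path, so no database lookup is
// ever needed.
function templateForUrl(url) {
  // Trim leading and trailing slashes; an empty path means the home page.
  var path = url.replace(/^\/+|\/+$/g, '') || 'home';
  // e.g. '/about/team' -> 'templates/about.team.html'
  return 'templates/' + path.replace(/\//g, '.') + '.html';
}
```

Because the mapping is purely mechanical, adding a page is just a matter of dropping a new template file in place; no route or database entry is needed.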

    As a result of this alone, we managed to improve the processing speed of the website by over 3,900%. Taking the home page as an example, we improved server processing speeds from 2.2 seconds to 56 milliseconds on average.

    Server processing speed is now only 56 milliseconds, with zero database queries — approximately 40 times faster than before.

    Naturally, this approach wouldn’t suit everyone (nor indeed many of our clients), but we should ask ourselves at the beginning of each project which CMS is most suitable, and whether one is necessary at all. Other options are out there, of course, including file-based CMSes such as Kirby and Statamic, building or customizing a lightweight CMS such as Perch, or simply implementing better server-side caching, such as with Varnish.

    Ultimately, we decided to remove the CMS because even the most lightweight, highly optimized CMS with clever caching has overhead and cannot match the performance and server footprint of static files.

    Avoiding Off-The-Shelf CSS Frameworks

    While CSS frameworks such as Twitter Bootstrap and Foundation are great for quickly building interactive prototypes, they are often far more complex than we need for most projects. The reason is that these frameworks need to be sensitive to and cater to a wide variety of use cases and are not tailored to the particular requirements of your project.

    We reduced the size of our style sheets by creating a custom responsive grid system that was simple, fast and extremely flexible to our needs.

    We designed from the content out, meaning that the content shaped the layout and grid, as opposed to having the layout define the content.

    Clockwise from top: The layout is three columns on a desktop, becomes a single column stack on mobile, and takes advantage of the extra space on tablets by floating the image to the left of the content.

    @media only screen and (min-width: 120px) and (min-device-width: 120px) {
       // Uses mobile grid
       .container {
          width: 100%;
       }
       .col12, .col11, .col10, .col9, .col8, .col7, .col6, .col5, .col4, .col3 {
          width: 92%;
          margin: 0 4% 20px 4%;
       }
       .col2 {
          width: 46%;
          float: left;
          margin: 0 4% 20px 4%;
       }
    }

    @media only screen and (min-width: 600px) and (min-device-width: 600px) {
       // Uses custom grid to accommodate content
       .home-content {
          article {
             width: 92%;
             clear: both;
             margin: 0 4% 20px 4%;
          }
          .image {
             float: left;
             width: 40%;
          }
          .text {
             float: left;
             width: 50%;
             margin-left: 5%;
             .btn {
                @include box-sizing(content-box);
                width: 100%;
             }
          }
       }
    }

    @media only screen and (min-width: 1024px) and (min-device-width: 1024px) {
       // Uses regular desktop grid system
       .container {
          margin: 0 auto;
       }
       .col4 {
          width: 300px;
          float: left;
          margin: 0 10px;
       }
    }

    We used Sass for the front-end development to avoid any repetition of code, making sure every bit of CSS is actually being used. Sass can also minify the output to ensure that the CSS is as small as possible.

    $sass --watch --style compressed scss:css

    We also made use of functions within Sass to build our custom grid. Here is the code for the desktop grid:

    @import "vars";

    // Grid system
    $wrap: $col * 12 + $gutter * 11;

    @for $i from 2 through 12 {
       .col#{$i} {
          width: $col * $i + $gutter * $i - $gutter;
          float: left;
          margin: 0 $gutter/2 $vgrid $gutter/2;
       }
    }
    @for $i from 1 through 11 {
       .pre#{$i} {
          padding-left: $col * $i + $gutter * $i;
       }
    }
    @for $i from 1 through 11 {
       .suf#{$i} {
          padding-right: $col * $i + $gutter * $i;
       }
    }
    .container {
       width: $wrap + $gutter;
       margin: 0 auto;
       padding-top: 1px;
    }
    .colr {
       float: right;
       margin: 0 $gutter;
    }
    .alpha {
       margin-left: 0;
    }
    .omega {
       margin-right: 0;
    }

    From here, we could customize the width of columns and gutters within the grid simply by editing the vars configuration file.

    // Grid
    $vgrid:      20px;
    $col:        60px;
    $gutter:     20px;

    The grid basically calculates the width of a span of columns based on the number of columns in that span, making it flexible to any configuration of layout or grid. We’ve open-sourced this code on GitHub (we make no apologies for the duck puns), so please fork and adapt this flexible grid system to your own project’s requirements — and let us know how it goes!
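As a sanity check of the arithmetic, with the $col: 60px and $gutter: 20px values above, the span formula reproduces the 300px .col4 width and the 940px $wrap seen in the desktop grid. The helper below is an illustrative translation of the Sass maths into plain JavaScript, not part of the project itself:

```javascript
// The span formula from the Sass grid: a span of i columns is
// i column widths plus the (i - 1) gutters between them.
function spanWidth(col, gutter, i) {
  return col * i + gutter * i - gutter;
}

// With $col: 60px and $gutter: 20px:
// spanWidth(60, 20, 4)  -> 300 (the .col4 width in the desktop grid)
// spanWidth(60, 20, 12) -> 940 (the $wrap total: 60 * 12 + 20 * 11)
```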

    Conditionally Loading JavaScript

    To further improve the speed of our new website, we wanted to load JavaScript only when it’s needed or supported. We achieved this with RequireJS, which ensures that scripts are loaded only after checking that JavaScript is available in the requesting browser, and that the browser loads only the scripts it can support. RequireJS also works as a module loader, so any given script is called only if it’s needed on that page.

    RequireJS also contains a handy optimization tool that combines related scripts and minifies them via UglifyJS to reduce the file size of the JavaScript.
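A minimal r.js build profile for this kind of optimization might look like the following; the file names here are assumptions for illustration, not our actual configuration:

```javascript
// Hypothetical r.js build profile: trace the modules reachable from
// main.js, combine them into one file and minify it with UglifyJS
// (the r.js default).
({
   baseUrl: "js",
   name: "main",
   out: "js/main.min.js"
})
```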

    The optimization reduced the JavaScript’s size from 411 KB to 106 KB.

    Optimizing Image Assets

    In addition to JavaScript, images are among the heaviest assets that most websites have to download. We particularly wanted to improve in this area because our website is fairly image-heavy, with plenty of visuals showcasing our work.

    We manually optimized images throughout the website by selectively compressing areas of images using Adobe Fireworks’ selective quality options. We also reduced image file sizes through further granular control of compression, blur and desaturation.

    By desaturating and blurring parts of images that are not essential, we significantly reduced image sizes.

    We also used ImageOptim and TinyPNG to compress our images and sprites. These tools remove all unnecessary data without compromising the quality of an image. This reduced the weight of the main image sprite, for instance, from 111 KB to 40 KB.

    For the slideshow banner on the home page, we optimized for different screen sizes by using media queries to ensure that only appropriate-sized images are loaded.


    As you can see in the image above, on mobile, the slideshow items are far lighter.

    The CSS:

    @media only screen and (min-width: 120px) and (min-device-width: 120px) {
       .item-1 {
          background: $white url('carousel/dmd/background-optima-m.jpg') 50% 0 no-repeat;
          .computer, .tablet, .phone, .eiffel, .bigben, .train {
             display: none;
          }
       }
       /* Total loaded: 27 KB */
    }


    Meanwhile, on the desktop, we load more assets to make the most of the larger screen size available to us.

    The CSS:

    @media only screen and (min-width: 1024px) and (min-device-width: 1024px) {
       .item-1 {
          background: $white url('carousel/dmd/background.jpg') center -30px no-repeat;
          .computer {
             background: url('carousel/dmd/computer.png') center top no-repeat;
             div {
                background: url('carousel/dmd/sc-computer.jpg') center top no-repeat;
             }
          }
          .tablet {
             background: url('carousel/dmd/tablet.png') center top no-repeat;
             div {
                background: url('carousel/dmd/sc-tablet.jpg') center top no-repeat;
             }
          }
          .phone {
             background: url('carousel/dmd/phone.png') center top no-repeat;
             div {
                background: url('carousel/dmd/sc-mobile.jpg') center top no-repeat;
             }
          }
          .eiffel {
             background: url('#{$img}carousel/dmd/eiffel.png') center top no-repeat;
          }
          .bigben {
             background: url('#{$img}carousel/dmd/bigben.png') center top no-repeat;
          }
          .train {
             background: url('#{$img}carousel/dmd/train.png') center top no-repeat;
          }
       }
       /* Total loaded: 266 KB */
    }

    Delivering Content Faster

    Yahoo’s golden rule of performance states that “80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc.” In short, each request takes time to process; therefore, each request (such as to serve a file from the server) will inevitably increase the loading time.

    By using CloudFlare’s content delivery network (CDN), we have separated the file-serving task from the processing of the website, which means that our Web server concentrates on the application rather than on serving static files. We also moved all static assets to a separate subdomain, keeping the cookies sent with each request for an asset to a minimum, which in turn reduces the bandwidth required for each asset.

    The CDN also caches and ensures that files are delivered from the server nearest to the user’s location, minimizing network latency (because the data is transmitted over a shorter distance), further reducing loading times.

    In addition to the CDN, we used the Gzip rules and expires headers in the .htaccess file of HTML5 Boilerplate. This uses Apache’s mod_deflate module to compress the output of files to the browser and also sets an expiration on headers far into the future, to ensure better caching of the website for returning visitors.
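A condensed illustration of the kind of rules involved follows; HTML5 Boilerplate’s actual .htaccess covers far more MIME types and edge cases:

```apache
<IfModule mod_deflate.c>
   # Compress text-based responses before sending them to the browser
   AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

<IfModule mod_expires.c>
   # Far-future expiry for static assets so returning visitors hit their cache
   ExpiresActive on
   ExpiresByType text/css  "access plus 1 year"
   ExpiresByType image/png "access plus 1 month"
</IfModule>
```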

    Creating A Truly Responsive Design

    As set out in our initial goals, we wanted our new website to have content parity and to provide accessibility to all users, regardless of how they access it.

    In order to deliver a truly responsive design, we delegated all styling and display tasks to the CSS alone, using JavaScript to simply alter the “status” of elements by adding and removing CSS classes, as opposed to hiding and showing the elements with JavaScript directly.

    The Right Code for the Task

    Using this method, we could make mobile-specific optimizations, such as transforming the top menu on mobile to have telephone and map buttons so that mobile visitors can call or find our office quickly.

    We used this approach throughout the website to activate and deactivate dynamic elements, always ensuring that these elements are still present on the page when JavaScript is unavailable. This way, we can offer content parity to our users while avoiding duplicate markup for specific contextual enhancements, such as those for mobile. With this approach, we ensure that JavaScript is an enhancement to the user experience, rather than a necessity to view the website.

    On the right side of the top GUI, you can see the map and phone buttons, accompanied by the standard control to access the rest of the pages.

    Here is the JavaScript:
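A minimal sketch of the class-toggling approach: the class names match the CSS that follows, but the exact wiring is an assumption rather than the production script. All showing and hiding stays in the CSS, so the menu still renders when JavaScript is unavailable.

```javascript
// Pure helper: add the token to a space-separated class string if it
// is absent, or remove it if present. Keeping this logic pure means
// the styling itself remains entirely the CSS's responsibility.
function toggleToken(classAttr, token) {
  var classes = classAttr.split(/\s+/).filter(Boolean);
  var i = classes.indexOf(token);
  if (i === -1) { classes.push(token); } else { classes.splice(i, 1); }
  return classes.join(' ');
}

// Browser wiring (commented out; element and class names assumed):
// var menu = document.querySelector('.menu');
// document.querySelector('.btn-menu').addEventListener('click', function () {
//    menu.className = toggleToken(menu.className, 'closed');
// });
```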


    The CSS for desktops:

    .nav {
       display: block;
       float: right;
    }
    .btn-menu, .btn-call, .btn-map {
       display: none;
    }

    The CSS for mobile:

    .menu {
       display: block;
       height: auto;
       overflow: hidden;
    }
    .menu.closed {
       height: 0;
    }
    .btn-menu, .btn-call, .btn-map {
       display: block;
    }

    Animations as an Enhancement

    For the animated slideshow of our projects on the home page, we used SequenceJS, a plugin that gave us the freedom to create the slideshow using only HTML and CSS for the content. This way, whenever JavaScript is unavailable or the screen size is too small, we don’t have to download all assets for the animation, only those necessary for a smaller, lighter version.

    Elsewhere, we decided to use CSS3 for animations. These enhance the user experience for browsers that support CSS3 animations, while older browsers still get the functionality, if not the eye candy. For example, when a user is on a latest-generation smartphone and expands the menu or a portfolio item, it animates with CSS3 rather than with JavaScript.

    This improves the performance of these animations through hardware acceleration, offloading tasks from the central processing unit (CPU) to the graphics processing unit (GPU). For smartphone and tablet users, this can make a massive difference by reducing consumption of their already limited CPU resources.

    Delegating animation to the CSS enables us to make the most of hardware acceleration.

    .menu {
       height: auto;
       transition: height 200ms linear;
    }
    .menu.closed {
       height: 0;
       transition: height 200ms linear;
    }

    Breakpoints Based on Content and Design, Not Device

    For the breakpoints, we used multiple CSS media queries to responsively deliver the optimal presentation of content to screens both large and small.

    This device-agnostic approach ensures that we do not need to optimize the code later when other devices come to market. We included (though did not limit ourselves to) breakpoints at 120, 240, 600, 760, 980 and 1360 pixels, as well as targeted media queries for specific content on certain pages and for high-pixel-density screens.

    The website responds fluidly between each breakpoint.

    While we did not design breakpoints based on particular devices, in order to ensure further future-friendliness, we did test our website across as many devices and browsers as we could get our hands on, from the common (desktop browsers and a variety of phones and tablets) to the uncommon (Lynx, the PlayStation 3, the Kindle Paperwhite, the PlayStation Vita and others). We even tested the website on old Nokia devices, where it still performed well.

    Our designers and front-end team tested the new website on a wide variety of devices, including old models such as this Nokia X2.

    Being More Accessible

    Our responsibility as Web designers and developers is not only to make our websites more accessible, but also to educate our clients and colleagues about why they should care.

    Below are some quick wins for accessibility that we applied to our website.


    • Text is legible against backgrounds, with a contrast ratio of 3:1 for headings and 4.5:1 for body text.
    • The text is structured with appropriate headings and in a meaningful order, and it describes the topic or purpose of the content.
    • Text can be resized without losing content or functionality.


    • The purpose of all links is made clear with descriptive text and, when that isn’t practical, with alternative text.
    • The first link on every page bypasses the navigation to move straight to the content. This is hidden by default in a standard browser but is accessible in appropriate scenarios.
    • Page addresses (i.e. URLs) are human-readable and are permanent wherever possible.
    • We implemented access keys for quick navigation to important pages and features.

    Here is the HTML for the “skip” navigation link:

    <a href="#content" title="Skip to content" accesskey="s" class="btn-skip">Skip navigation</a>

    And the CSS:

    .btn-skip {
       position: absolute;
       left: -9999px;
    }


    • All content images have alternative text (with the alt attribute), which is shown where images are disabled or not supported.
    • Content is accessible and understandable when images are disabled or not supported.


    • All videos hosted on YouTube have captions (subtitles) if they include spoken words.


    • All form controls and fields are properly and clearly labelled.
    • Form inputs have been assigned types and attributes so that the correct keyboard is loaded on touchscreen devices.
    • All crucial form fields are checked for errors when the form is submitted.
    • Any error found is described to the user in text, along with suggestions on how to correct the error.
    • All forms have an appropriate focus order so that they can be navigated with the Tab key on the keyboard.
    • All forms can be submitted using the “Return” or “Enter” key.

    Using the proper input types and attributes, such as required and placeholder, is easy and makes the form more accessible.

    <input type="email" id="email" name="email" value="" required="" placeholder="Pop your email address in here">

    Just Getting Started

    Since we launched our new website a couple of weeks ago, the results have been impressive. Mobile traffic has increased by over 200% (with an 82% increase on average for all traffic); the average duration of a visit is up by 18%; and the exit rate on the home page for mobile users has decreased by over 4,000%. While statistics can tell us only so much, these indicate that the responsive website is performing better on mobile than our previous separate mobile website.

    According to Google Analytics, server-response times have decreased from an average of 1.21 seconds to 170 milliseconds. Similarly, page-loading times have decreased from an average of 9.19 seconds to 1.82 seconds.

    The important thing to remember here is that this is just the beginning. We know we can improve in some areas: pushing performance optimization much further, reducing file sizes, being more future-friendly with touch gestures across all breakpoints, using server-side solutions such as adaptive images for further contextual enhancement, conforming more closely to the Web Content Accessibility Guidelines’ “AA” standards.

    Going responsive is just the first step for our website.

    At 2012’s inaugural Smashing Conference, Brad Frost quoted Benjamin Franklin, who said, “When you are finished changing, you’re finished.” For anyone working in the Web industry, this statement will particularly ring true. We work in a medium that is both rapidly and constantly evolving. Keeping up to date with this ever-changing landscape is a challenge, but it’s what makes working with the Web so fantastic and exciting.

    We see the launch of our new website as the first improvement of many in our quest for a truly responsive design — and we can’t wait to see where it takes us.

    (al) (ea)

    © Matt Gibson for Smashing Magazine, 2013.

  • Five Ways To Prevent Bad Microcopy


    You’ve just created the best user experience ever. You had the idea. You sketched it out. You started to build it. Except you’re already in trouble, because you’ve forgotten something: the copy. Specifically, the microcopy.

    Microcopy is the text we don’t talk about very often. It’s the label on a form field, a tiny piece of instructional text, or the words on a button. It’s the little text that can make or break your user experience.


    If you think you’ve built the best user experience but didn’t make sure the microcopy was spot on, then you haven’t built the best user experience.

    With the adoption of agile development and lean UX, we’re often concerned about racing through iterations and getting our products in front of customers. But we can’t forget that design is still about words.

    Everyone frets about marketing copy — and they should — but communication doesn’t stop once you’ve sold the user. In some ways, you could argue that words become more important once the marketing experience is done. With most products, users have to be sold to only once — or once in a while — and then they’ll use the core product all the time.

    If your microcopy isn’t getting the job done, you’ll lose users — and all the marketing in the world might not get you a second chance.

    With that in mind, here are five ways to make sure your website’s microcopy doesn’t end up sinking your UX.

    1. Get Out Of Your Own Head And Get To Know The User.

    I’m willing to bet that your experience is plastered with internal terminology, especially your labels and navigation. Every company has its own language, which often sneaks onto the website when we’re not careful.

    Don’t let it happen. Never assume that what works for you will work for the user.

    Here’s a simple way to check whether your microcopy is too internal — or confusing, for that matter.

    Let’s assume that you’re running some form of usability testing. (If you’re not, there are about a thousand articles out there that will convince you you’re making a mistake, so you don’t need me for that.)

    When you’re testing, you probably get caught up in watching how the user interacts with your website and their facial expressions. But instead of simply watching, make a point to really listen to — and take notes on — the actual words the user says during testing. Listen closely to the phrases they utter when describing their actions. After all, you’ve told them to think out loud.

    Listen to the inflection in their voice as they read microcopy: Did they say that label or term with a question in their voice? Don’t hesitate to have your moderator follow up on copy. Have them go back and ask the user whether they’ve understood that label.

    Take it a step further: Listen to what users say from the moment they walk in the building. Listen to their banter with the moderator, the jokes they make and the words they use to express their frustration or enjoyment.

    You’d be surprised by what you can learn about a user and their language set from a comment they make about a cup of coffee. Everything someone says tells you something about them and can inform your copywriting process.

    2. The User Is A Person. Talk To Them Like One.

    Because brevity is essential on the Web, most of us tend to truncate everything, particularly labels. Labels are great for design: they keep essential parts of a UI, such as navigation and forms, organized and tidy.

    Unfortunately, labels have an inherent problem: They’re easily subject to a user’s personal context because they don’t provide explanation. They’re on an island in the user’s mind.

    Not too long ago, we encountered this problem with a label at TheLadders.

    TheLadders is a job-matching service. Like any matching service, we required information to match a user with the right job.

    TheLadders Job Goals

    We thought this form was very clear. “Job Goals” is the label we’ve used for our matching criteria for almost 10 years. It’s brief, which helped to keep the navigation neat. But in a recent redesign, we noticed that users kept stumbling when first arriving on the page.

    Turns out that people who don’t work in the job-search industry think of job goals as accomplishments they hope to achieve at their job, not as the details of their next job.

    (We also fell into the trap covered in the first point: internal terminology = bad.)

    So, we made it more conversational: “What job do you want?” Instantly, we could see that users no longer hesitated. Why? Because taking this new line of copy out of context was impossible.

    Instead of forcing a label on a form or field for the sake of the UI, use natural language. The experience should be a conversation with the user, not a filing cabinet for them to trudge through.

    Most of all, the labels in the navigation shouldn’t be more important than the user’s interaction with the pages that the labels represent.

    3. Use Copy As A Guide, Not A Crutch.

    “We can fix that with copy.”

    I’ve heard this too many times when the UX falls short, and I hate it. If there’s a problem with the design, then fix the design. The best experiences have minimal copy because they’re intuitive. When you’re designing the UX and find yourself writing a sentence or two to help the user take an action, step back.

    Readability and the optimum length of content for comprehension have been tested since the 1880s. With the rise of the Internet, the conversation turned to line length. Most sources net out between 45 and 75 characters as the ideal line length.

    To me, line length is moot, especially with responsive and mobile design. Besides, character counts seem tedious and not very lean.

    Instead, I subscribe to the original readability tables of Rudolf Flesch (pictured below), in which sentences with eight words or fewer are regarded as “very easy” to read.

    The readability tables of Rudolf Flesch.

    It may be an old standard, but it still may be the best measuring stick we have, and it’s the easiest for lean teams to follow. On the Web, we’re shooting for “very easy to read” every time, and we want to be able to communicate with as many people (93%) as possible.

    If you can’t explain what a user needs to do in eight words or fewer, then reconsider the design.
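    To make the eight-word rule actionable, you could run a trivial check over your UI strings. A minimal sketch (the function name is an illustration; the threshold is Flesch’s “very easy” band):

```javascript
// Sketch: flag UI copy that exceeds Flesch's "very easy" band of
// eight words per sentence. Name and usage are illustrative.
function tooLongForVeryEasy(copy) {
  var words = copy.trim().split(/\s+/).filter(Boolean);
  return words.length > 8;
}

tooLongForVeryEasy("What job do you want?"); // → false
tooLongForVeryEasy("Please tell us about the goals you have for your next job."); // → true
```

    Run something like this over your labels and instructional text during a build, and overly long copy surfaces before a user ever stumbles on it.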

    Once the user has gotten past the marketing portion of the experience, use copy as a guide to usher them through the product. The best copy on basic UI features, such as a form, will get read but not really noticed. The user absorbs the words and takes the desired action without a hiccup.

    4. Treat Every Moment Like A Branding Moment, Even When It’s Not.

    There are multiple definitions of a “branding moment.” When we talk about copy in a UX, I define it as a moment when you purposefully inject your brand’s tone and voice into what would normally be a straightforward user interaction.

    For example, Foursquare has a lot of great branding moments within its badging system. I unlocked the one below not too long ago. It’s fun and a bit edgy, on point with Foursquare’s brand.

    A good job of a branding moment with Foursquare’s brand.

    But getting carried away is easy. Think hard before using fun or quirky — or whatever your brand’s voice is — copy in a situation that the user wants and expects to be straightforward.

    Your brand’s tone and voice are essential to consider when writing all of your copy, but it should not get in the way of a user who is trying to take action.

    Avoid over-branding copy on:

    • navigation,
    • forms and field labels,
    • instructional text,
    • selection text (drop-downs, radio buttons),
    • buttons.

    Consider incorporating your brand’s voice in:

    • confirmation messaging,
    • rewards (badges, points),
    • 404 pages,
    • server errors,
    • error messaging.

    The difference between these lists is simple. In the first list, the user is attempting to take action; the second list covers the results of those actions.

    In the first list, you don’t want to risk confusing users as they try to accomplish something and cause them to abandon. Clarity is essential.

    In the second list, you have an opportunity to embrace the user’s success (Foursquare’s “You’re on fire!”) or mitigate a failure (TheLadders 404 page, below) by injecting your brand. You don’t need anything from the user at these points.

    TheLadders 404 page.

    This isn’t to say that you can’t brand that first list. But if you’re going to do it, test it first. With branding moments, execution is paramount. If you’re unsure, don’t risk it.

    By choosing not to brand parts of the experience to keep it simple and easy for the user, you’ll provide an enjoyable experience, which will make your brand stronger. So, every moment is a branding moment. Even when it’s not.

    5. If Content Is King, Then Treat Context Like A Queen.

    The hot saying right now is “Content is king.” Native advertising, or the integration of relevant content into a natural experience for the purpose of acquisition, is becoming a core offering of many agencies and has spawned a few popular startups.

    But without context, content is useless. (And if you’re big on Game of Thrones, then you’ll know that queens have all the real power!)

    Whether you’re labeling a form or writing a blog post, you have to either understand the user’s existing context or provide context for them.

    A user’s context will define how they interpret the copy on the page. That context could come from anywhere: an email they’ve just read, or something that happened to them when they were eight.

    When a user doesn’t have proper context, they get confused. When a user gets confused, they abandon.

    If you’re agile and iterative, accounting for a holistic experience adds an additional layer of complexity, in the form of consistency. A simple change to copy on one page could affect 10 other pages. One minute you’re calling something “Job Goals,” and the next you’re changing it to “What job do you want?” Well, where else have you used “Job Goals”?
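    One way to catch stragglers is a quick sweep over your templates whenever a label changes. A minimal sketch, with stand-in page contents (how you actually load your templates will differ):

```javascript
// Sketch: find every page that still uses an old label after a copy change.
// The pages object is a stand-in for however you load your templates.
function pagesUsingLabel(pages, label) {
  return Object.keys(pages).filter(function (name) {
    return pages[name].indexOf(label) !== -1;
  });
}

var pages = {
  "signup.html": "<h2>Job Goals</h2>",
  "profile.html": "<h2>What job do you want?</h2>",
  "email.html": "Update your Job Goals anytime."
};

pagesUsingLabel(pages, "Job Goals"); // → ["signup.html", "email.html"]
```

    A plain text search does the same job; the point is to make the consistency check a habit, not to rely on memory.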

    To better understand the user’s context — and to check for consistency — sit down at least once per iteration and experience your “contextual flow” as the user sees it.

    For example, if you have a subscription service, the flow might be something like this:


    That’s at least 10 distinct steps in which a user’s context could be created, confirmed or altered.

    Sit down, take a breath and wipe your mind of what you know is there. Then start with Google or your home page or wherever the first touch usually happens.

    Does your onboarding experience deliver the same promise as your Google ad? Have you described a feature using the same language throughout? Are your labels so subjective that the context gets lost? These are questions to answer as you go through the flow.

    Whatever Happens, Don’t Ignore Your Microcopy.

    Microcopy often falls victim to personal bias, internal terminology, poor branding, broken contextual flows, time crunches and other factors. Any of these can undermine even the most well-designed UX and the copy within.

    Here’s the thing about mistakes with microcopy: They’re so easy to make yet so hard to identify after you’ve made them.

    You have a much better chance of stopping the mistakes in advance than of identifying them after the fact. When you’re testing, how often do you think, “Hey, maybe we should change the label on the third field of this form?” You’re wrapped up in other UX mistakes that you know you’ve made. Unfortunately, a repeated pattern of noticeable failure is usually needed in order for microcopy to get updated or even tested.

    So, the next time you’re creating or improving an experience, I hope you employ some of the tactics provided here so that you avoid these “easy” mistakes and do right by your microcopy — and by your user.


    © Bill Beard for Smashing Magazine, 2013.

  • 200 Foodie Pack: A Free Set Of Food Icons


    Today we are pleased to feature a set of 200 useful and beautiful foodie icons. This freebie was created by the team behind Freepik, and at the time of writing it’s the largest set of food icons available on the web in one pack.

    The 200 Foodie Pack includes 200 customized icons available in PNGs (32×32px, 64×64px, 128×128px), as well as in AI, EPS and vector format. Perfect for any projects around gourmet, food, restaurant, gastronomy and the like. Enjoy!


    Download The Freebie!

    You may freely use it for both your private and commercial projects without any restrictions, including software, online services, templates and themes.


    Behind The Design

    Here are some insights from the design team:

    “At Freepik we love to make freebies and to develop free icon sets that make designers’ work easier. The pack was created to meet the growing demand for food icons.”

    Many thanks to the creative minds behind Freepik! We really appreciate your efforts!


    © The Smashing Editorial for Smashing Magazine, 2013.

  • What Leap Motion And Google Glass Mean For Future User Experience


    With the Leap Motion controller being released on June 27th and the Google Glass Explorer program already live, it seems inevitable that the mouse, and even the monitor, will eventually become obsolete as our means of interacting with the Web.

    The above statement seems like a given, considering that technology moves at such a rapid pace. Yet in 40 years of personal computing, our methods of controlling our machines haven’t evolved beyond using a mouse, keyboard and perhaps a stylus. Only in the last six years have we seen mainstream adoption of touchscreens.

    Given that emerging control devices such as the Leap Controller are enabling us to interact with near pixel-perfect accuracy in 3-D space, our computers will be less like dynamic pages of a magazine and more like windows to another world. To make sure we’re on the same page, please take a minute to check out what the Leap Motion controller can do:

    Introducing the Leap Motion

    Thanks to monitors becoming portable with Google Glass (and the competitors that are sure to follow), it’s easy to see that the virtual world will no longer be bound to flat two-dimensional surfaces.

    In this article, we’ll travel five to ten years into the future and explore a world where Google Glass, Leap Motion and a few other technologies are as much a part of our daily lives as our smartphones and desktops are now. We’ll be discussing a new paradigm of human-computer interface. The goal of this piece is to start a discussion with forward-thinking user experience designers, and to explore what’s possible when the mainstream starts to interact with computers in 3-D space.

    Please note: We’re exploring an entirely hypothetical scenario, and these are opinions, some of which you may not agree with. However, the opinions are based on current trends, statistics and existing technology. If you’re the kind of designer who is interested in developing the future, I encourage you to read the sources that are linked throughout the article.

    Setting The Stage: A Few Things To Consider

    Prior to the introduction of the iPhone in 2007, many considered the smartphone to be for techies and business folk. But in 2013, you’d be hard pressed to find someone in the developed world who isn’t checking their email or tweeting at random times.

    So, it’s understandable to think that a conversation about motion control, 3-D interaction and portable monitors is premature. But if the mobile revolution has taught us anything, it’s that people crave connection without being tethered to a stationary device.

    To really understand how user experience (UX) will change, we first have to consider the possibility that social and utilitarian UX will be taking place in different environments. In the future, people will use the desktop primarily for utilitarian purposes, while “social” UX will happen on a virtual layer, overlaying the real world (thanks to Glass). Early indicators of this are that Facebook anticipates its mobile growth to outpace its PC growth and that nearly one-seventh of the world’s population own smartphones.

    The only barrier right now is that we lack the technology to truly merge the real and virtual worlds. But I’m getting ahead of myself. Let’s start with something more familiar.

    The Desktop

    Right now, UX on the desktop cannot be truly immersive. Every interaction requires physically dragging a hunk of plastic across a flat surface, which approximates a position on screen. While this is accepted as commonplace, it’s quite unnatural. The desktop is the only environment where you interact with one pixel at a time.

    Sure, you could create the illusion of three dimensions with drop shadows and parallax effects, but that doesn’t change the fact that the user may interact with only one portion of the screen at a time.

    This is why the Leap Motion controller is revolutionary. It allows you to interact with the virtual environment using all 10 fingers and real-world tools in 3-D space. It is as important to computing as analog sticks were to video games.
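    To get a feel for what designing against this kind of input involves, consider mapping a palm position to a point on screen. This is a hypothetical sketch: the frame data is modelled loosely on the Leap Motion JavaScript API, and the coordinate ranges are illustrative, not the device’s real specification.

```javascript
// Hypothetical sketch: map a Leap-style palm position (x, y in millimetres,
// relative to the device) onto a screen. Ranges here are illustrative.
function palmToScreen(palmPosition, screenWidth, screenHeight) {
  var x = palmPosition[0]; // mm, assume roughly -200..200 left/right of device
  var y = palmPosition[1]; // mm, assume roughly 0..400 above the device
  return {
    x: Math.round((x + 200) / 400 * screenWidth),
    y: Math.round(screenHeight - (y / 400 * screenHeight)) // screen y grows downward
  };
}

// A hand centred 200 mm above the device maps to the middle of the screen:
palmToScreen([0, 200, 0], 1920, 1080); // → { x: 960, y: 540 }
```

    Even this toy version hints at the design questions ahead: dead zones, smoothing jittery input, and what “hover” even means in free space.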

    The Shift In The Way We Interact With Machines

    To wrap our heads around just how game-changing this will be, let’s go back to basics. One basic UX and artificial intelligence test for any new platform is a simple game of chess.

    Virtual Chess
    (Image: Wikimedia Commons)

    In the game of chess below, thanks to motion controllers and webcams, you’ll be able to “reach in” and grab a piece, as you watch your friend stress over which move to make next.

    Now you can watch your opponent sweat.
    Now you can watch your opponent sweat. (Image: Algernon D’Ammassa)

    In a game of The Sims, you’ll be able to rearrange furniture by moving it with your hands. CAD designers will use their hands to “physically” manipulate components (and then send their designs to the 3-D printer they bought from Staples for prototyping).

    While the lack of tactile feedback might deter mainstream adoption early on, research into haptics is already enabling developers to simulate physical feedback in the real world to correspond with the actions of a user’s virtual counterpart. Keep this in mind as you continue reading.

    Over time, this level of 3-D interactivity will fundamentally change the way we use our desktops and laptops altogether.

    Think about it: The desktop is a perfect, quiet, isolated place to do more involved work like writing, photo editing or “hands-on” training to learn something new. However, a 3-D experience like those mentioned above doesn’t make sense for social interactions such as on Facebook, or even for reading the news, which are better suited to mobile.

    With immersive, interactive experiences being available primarily via the desktop, it’s hard to imagine users wanting these two experiences to share the same screen.

    So, what would a typical desktop experience look like?

    Imagine A Cooking Website For People Who Can’t Cook

    With this cooking website for people who can’t cook, we’re not just talking about video tutorials or recipes with unsympathetic instructions, but rather immersive simulations in which an instructor leads you through making a virtual meal from prep to presentation.

    Interactions in this environment would be so natural that the real design challenge is to put the user in a kitchen that’s believable as their own.

    You wouldn’t click and drag the icon that represents sugar; you would reach out with your virtual five-fingered hand and grab the life-sized “box” of Domino-branded sugar. You wouldn’t click to grease the pan; you’d mimic pushing the aerosol nozzle of a bottle of Pam.

    The Tokyo Institute of Technology has already built such a simulation in the real world. So, transferring the experience to the desktop is only a matter of time.

    Cooking simulator will help you cook a perfect steak every time

    UX on the future desktop will be about simulating physics and creating realistic environments, as well as tracking head, body and eyes to create intuitive 3-D interfaces, based on HTML5 and WebGL.

    Aside from the obvious hands-on applications, such as CAD and art programs, the technology will shift the paradigm of UX and user interface (UI) design in ways that are currently difficult to fathom.

    The problem is that we currently lack a set of clearly defined 3-D gestures for interacting with a 3-D UI. Designing UIs will be hard without knowing what our bodies will have to do to interact.

    The closest we have right now to defined gestures are those created by Kinect hackers and by John Underkoffler of Oblong Industries (the team behind Minority Report’s UI).

    In his TED talk from 2010, Underkoffler demonstrates probably the most advanced example of 3-D computer interaction that you’re going to see for a while. If you’ve got 15 minutes to spare, I highly recommend watching it:

    John Underkoffler’s talk, “Pointing to the Future of UI”

    Now, before you start arguing, “Minority Report isn’t practical — humans aren’t designed for that!” consider two things:

    1. We won’t likely be interacting with 60-inch room-wrapping screens the way Tom Cruise does in Minority Report; therefore, our gestures won’t need to be nearly as big.
    2. The human body rapidly adapts to its environment. Between the years 2000 and 2010, a period when home computers really went mainstream, reports of Carpal Tunnel Syndrome dropped by nearly 8%.

    Graph of Carpal Tunnel Syndrome reports
    (Image: Minnesota Department of Health)

    However, because the Leap Motion controller is less than $80 and will be available at Best Buy, this technology isn’t just hypothetical, sitting in a lab somewhere, with a bunch of geeks saying “Wouldn’t it be cool if…”

    It’s real and it’s cheap, which really means we’re about to enter the Wild West of true 3-D design.

    Social Gets Back To The Real World

    So, where does that leave social UX? Enter Glass.

    It’s easy to think that head-mounted augmented reality (AR) displays, such as Google Glass, will not be adopted by the public, and in 2013 that might be true.

    But remember that people resisted the telephone when it came out, over many of the same privacy concerns. The same was true of mobile phones, and of smartphones around 2007.

    So, while first-generation Glass won’t likely be met with widespread adoption, it’s the introduction of a new phase. ABI Research predicts that the wearable device market will exceed 485 million annual shipments by 2018.

    According to Steve Lee, Glass’ product director, the goal is to “allow people to have more human interactions” and to “get technology out of the way.”

    First-generation Glass performs Google searches, tells time, gives turn-by-turn directions, reports the weather, snaps pictures, records video and does Hangouts — which are many of the reasons why our phones are in front of our faces now.

    Moving these interactions to a heads-up display, while moving important and more heavy-duty social interactions to a wrist-mounted display, like the Pebble smartwatch, eliminates the phone entirely and enables you to truly see what’s in front of you.

    (Image: Pebble)

    Now, consider the possibility that something like the Leap Motion controller could become small enough to integrate into a wrist-mounted smartwatch. This, combined with a head-mounted display, would essentially give us the ability to create an interactive virtual layer that overlays the real world.

    Add haptic wristband technology and a Bluetooth connection to the smartwatch, and you’ll be able to “feel” virtual objects as you physically manipulate them, both in the real world and on the desktop. While this might still sound like science fiction, with Glass reportedly priced between $299 and $499, Leap Motion at $80 and Pebble at $150, widespread affordability of these technologies isn’t entirely impossible.

    Social UX In The Future: A Use Case

    Picture yourself walking out of the mall when your close friend Jon updates his status. A red icon appears in the top right of your field of vision. Your watch displays Jon’s avatar, which says, “Sooo hungry right now.”

    You say, “OK, Glass. Update status: How about lunch? What do you want?” and keep walking.


    You say, “OK, Glass. Where can I get good Mexican food?” 40 friends have favorably rated Rosa’s Cafe. Would you like directions? “Yes.” The navigation starts, and you’re en route.

    You reach the cafe, but Jon is 10 minutes away. Would you like an audiobook while you wait? “No, play music.” A smart playlist compiles exactly 10 minutes of music that perfectly fits your mood.

    “OK, Glass. Play Angry Birds 4.”

    Across the table, 3-D versions of the little green piggies and their towers materialize.

    In front of you are a red bird, a yellow bird, two blue birds and a slingshot. The red bird jumps up, you pull back on the slingshot, the trajectory beam shows you a path across the table, you let go and knock down a row of bad piggies.

    Suddenly, an idea comes to you. “OK, Glass. Switch to Evernote.”

    A piece of paper and a pen are projected onto the table in front of you, and a bulletin board appears to the left.

    You pick up the AR pen, jot down your note, move the paper to the appropriate bulletin, and return to Angry Birds.

    You could make your game visible to other Glass wearers. That way, others could play with you — or, at the very least, would know you’re not some crazy person pretending to do… whatever you’re doing across the table.

    When Jon arrives, notifications are disabled. You push the menu icon on the table and select your meal. When it arrives, you take photos of your food, eat, and publish to Instagram.

    Before you leave, the restaurant gives a polite notification, letting you know that a coupon for 10% off will be sent to your phone if you write a review.

    How Wearable Technology Interacts With Desktops

    Later, having finished the cooking tutorial on the desktop, you decide it’s time to make the meal for real. You put on Glass and go to the store. The headset guides you directly to the brands that were advertised “in game.” After picking out your ingredients, you receive a notification that a manufacturer’s coupon has been sent to your phone and can be used at the check-out.

    When you get home, you lay a carrot on the cutting board and an overlay projects guidelines on where to cut. You lay out the meat, and a POW graphic is overlaid, showing you where to hit for optimal tenderness:

    Augmented Meat

    You put the meat in the oven; Glass starts the timer. You put the veggies in the pan; Glass overlays a pattern to show where and when to stir.

    While you were at the store, Glass helped you to pick out the perfect bottle of wine to pair with your meal (based on reviews, of course). So, you pour yourself a glass and relax while you wait for the timer to go off.

    In the future, augmented real-world UX experiences will be turned into real business. The more you enhance real life, the more successful your business will be. After all, is it really difficult to imagine this cooking experience being turned into a game?

    What Can We Do About This Today?

    If you’re the kind of UI designer who seeks to push boundaries, then the best thing you can do right now is think. Because the technology isn’t 100% available, the best you can do is open your imagination to what will be possible when the average person has evolved beyond the keyboard and mouse.

    Draw inspiration from websites and software that simulate depth to create dynamic, layered experiences that can be easily operated without a mouse. The website of agency Black Negative is a good example of future-inspired “flat” interaction. It’s easy to imagine interacting with this website without needing a mouse. The new Myspace is another.

    To go really deep, look at the different Chrome Experiments, and find a skilled HTML5 and WebGL developer to discuss what’s in store for the future. The software and interactions that come from your mind will determine whether these technologies will be useful.


    While everything I’ve talked about here is conceptual, I’m curious to hear what you think about how (or even if) these devices will affect UIs. I’d also love to hear your vision of future UIs.

    To get started, let me ask you two questions:

    1. How will the ability to reach into the screen and interact with the virtual world shape our expectations of computing?
    2. How will untethering content from flat surfaces fundamentally change the medium?

    I look forward to your feedback. Please share this article if you’ve enjoyed this trip into the future.


    © Tommy Walker for Smashing Magazine, 2013.

  • Building An App In 45 Minutes With Meteor


    The other day, I finally accomplished one of my long-standing goals: to go from one of those “Wouldn’t it be cool…” ideas to a working, live app in less than 1 hour. 45 minutes, actually.

    It all started with a design meet-up in San Francisco. I can honestly say this was the best meet-up I’ve ever been to: Even though it was announced only two days in advance, more than 200 people RSVPed, and a good number of them showed up. It was a great chance to put faces to familiar names, as well as to make new friends.

    But I got to talking with so many people that I didn’t have a chance to get contact info for everybody. So, the next day, I asked the organizers about it and they suggested that everyone who attended leave a link to their Twitter account in a shared Google Doc.

    That would work, but I was afraid it would prove to be too much effort. If I’ve learned one thing in my years as a designer, it’s that people are lazy. Instead, what if I built an app that lets the user add their Twitter account to a list in a single click?

    The app would work something like this:

    1. The user signs into Twitter,
    2. A link to their Twitter profile appears on the page,
    3. That’s pretty much it!

    With my list of requirements complete, I set to work to see how fast I could build this, and I thought it’d be interesting to walk you through the process.

    First, take a peek at what the final app looked like:

    Our final bare-bones (but working!) app.

    You can also see a demo of the finished product, and find the code on GitHub. (Note: Give it some time to load. Apps hosted on Meteor’s free hosting service often slow down under a lot of traffic.)

    A word of warning: This won’t be a traditional tutorial. Instead, it will be a play-by-play walkthrough of how I coded the app in one hour, including the usual dumb mistakes and wrong turns.

    Introducing Meteor

    I decided to build the app with Meteor. Meteor is a fairly young JavaScript framework that works on top of Node and has a few interesting characteristics.

    Meteor’s home page

    First, it’s all JavaScript, so you don’t need to deal with one language in the browser and another on the server. That’s right: the same language you use to set up jQuery slider plugins can also be used to query your app’s database! The added benefit of this is that your app now has only a single code base — meaning you can make the same code accessible from both the client and server if you need to.

    Meteor is also reactive, meaning that any change to your data is automatically reflected everywhere throughout the app (including the user interface) without the need for callbacks. This is a powerful feature. Imagine adding a task to a to-do list. With reactivity, you don’t need a callback to insert the new HTML element into the list. As soon as Meteor receives the new item, it automatically propagates the change to the user interface, without any intervention on your part!
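    The idea behind this reactivity can be sketched in a few lines of plain JavaScript. This is a toy illustration, not Meteor’s actual implementation (which lives in its Deps package):

```javascript
// A toy version of the dependency tracking behind Meteor's reactivity.
// Illustrative only -- Meteor's real implementation is far more complete.
function Dependency() {
  this.subscribers = [];
}
Dependency.prototype.depend = function (fn) { this.subscribers.push(fn); };
Dependency.prototype.changed = function () {
  this.subscribers.forEach(function (fn) { fn(); });
};

var dep = new Dependency();
var tasks = ["Buy milk"];
var rendered = "";

// A "computation" that renders the list; it registers itself as a dependent.
function render() {
  rendered = tasks.join(", ");
}
dep.depend(render);
render();

// When the data changes, every dependent computation reruns automatically:
tasks.push("Write article");
dep.changed();
// rendered is now "Buy milk, Write article" -- no callback wiring needed
```

    In Meteor, the framework registers these dependencies for you whenever a template reads reactive data, which is why the UI updates without explicit callbacks.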

    What’s more, Meteor is real time, so both your changes and the changes made by other users are instantly reflected in the UI.

    Like many other modern frameworks, Meteor also speeds up your Web app by transforming it into a single-page Web app. This means that instead of refreshing the whole browser window every time the user changes the page or performs an action, Meteor modifies only the part of the app that actually changes without reloading the rest, and then it uses the HTML5 pushState API to change the URL appropriately and make the back button work.

    Not having to update the whole page enables another very powerful feature. Instead of sending HTML code over the network, Meteor sends the raw data and lets the client decide how to render it.

    Finally, one of my favorite features of Meteor is simply that it automates a lot of boring tasks, such as linking up and minifying style sheets and JavaScript code. It also takes care of routine stuff for you on the back end, letting you add user accounts to the app with a single line of code.

    I’ve been experimenting with Meteor for the past six months, using it first to build Telescope (an open-source social news app), and then in turn using Telescope as a base to create Sidebar (a design links website), and I’ve just released a book about it. I believe that, more than any other framework, Meteor helps you get from idea to app in the shortest possible amount of time. So, if all of this has made you curious, I recommend you give it a try and follow along this short walkthrough.

    Step 0: Install Meteor (5 Minutes)

    First, let’s install Meteor. If you’re on Mac or Linux, simply open a Terminal window and type:

    curl https://install.meteor.com/ | /bin/sh

    Installing Meteor on Windows is a little trickier; you can refer to this handy guide to get started.

    Step 1: Create The App (1 Minute)

    Creating a Meteor app is pretty easy. Once you’ve installed Meteor, all you need to do is go back to the Terminal and type this:

    meteor create myApp

    You’ll then be able to run your brand new app locally with this:

    cd myApp
    meteor

    In my case, I decided to call my app twitterList, but you can call yours whatever you want!

    Once you run the app, it will be accessible at http://localhost:3000/ in your browser.

    Step 2: Add Packages (1 Minute)

    Because I want users to be able to log in with Twitter, the first step is to set up user accounts. Thankfully, Meteor makes this trivially easy as well. First, add the required Meteor packages, accounts-ui and (since we want users to log in with Twitter) accounts-twitter.

    Open up a new Terminal window (since your app is already running in the first one) and enter:

    meteor add accounts-ui
    meteor add accounts-twitter

    You’ll now be able to display a log-in button just by inserting {{loginButtons}} anywhere in your Handlebars code.
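    For instance, a minimal template using the widget might look like this (a sketch; the template name is arbitrary):

```html
<!-- Sketch: the accounts-ui widget drops into any Handlebars template -->
<template name="header">
  {{loginButtons}}
</template>
```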

    A more complex version of the accounts-ui widget, as seen in Telescope.

    I didn’t want to have to bother with styling, so I decided to also include Twitter Bootstrap with my app.

    I went to the Twitter Bootstrap website, downloaded the framework, extracted the ZIP file, copied it to my app’s Meteor directory, and then hooked up the required CSS files in the head of my app’s main file.

    Ha ha, not really. What is this, 2012? That’s not how it works with Meteor. Instead, we just go back to the Terminal and type:

    meteor add bootstrap

    Client Vs. Server

    I guess at this point I should briefly tell you more about how Meteor apps work. First, we’ve already established that a Meteor app’s code is all JavaScript. This JavaScript can be executed in the browser like regular JavaScript code (think a jQuery plugin or an alert() message), but can additionally be executed on the server (like PHP or Ruby code). What’s more, the same code can even be executed in both environments!

    So, how do you keep track of all this? It turns out Meteor has two mechanisms to keep client and server code separate: the Meteor.isClient and Meteor.isServer booleans, and the /client and /server directories.

    I like to keep things clean; so, unlike the default Meteor app that gets generated with meteor create (which uses the booleans), I’d rather use separate directories.

    Also, note that anything that isn’t in the /client or /server directories will be executed in both environments by default.
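To illustrate the boolean approach, here is a small sketch. It is illustrative only: in a real Meteor app, Meteor.isClient and Meteor.isServer are set by the framework itself; here we pass a simulated environment object to show how the booleans gate code paths.

```javascript
// Sketch only: Meteor provides the real isClient/isServer flags.
// Code guarded by isClient runs in the browser; isServer code runs on the server.
function runWhere(env, clientFn, serverFn) {
  if (env.isClient) clientFn();
  if (env.isServer) serverFn();
}

// Simulated environment object (Meteor supplies the real one):
var log = [];
runWhere({ isClient: true, isServer: false },
  function () { log.push('runs in the browser'); },
  function () { log.push('runs on the server'); });
// log now contains only the client message
```

The /client and /server directories achieve the same separation without the conditionals, which is why they tend to be cleaner as an app grows.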

    Since our app is pretty simple, we won’t actually have any custom server-side code (meaning that Meteor will take care of that part for us). So you can go ahead and create a new /client directory, and move twitterList.html and twitterList.js (or whatever your files are named) to it now.

    Step 3: Create the Markup (10 Minutes)

    I like to start from a static template and then fill in the holes with dynamic data, so that’s what I did. Just write your template as if it were static HTML, except replace every “moving part” with Handlebars tags. So, something like this…

 <a href="">Sacha Greif</a>

    … becomes this:

     <a href="{{userName}}">{{fullName}}</a>

    Of course, those tags won’t do anything yet and will appear blank. But we’ll match them up with real data pretty soon. Next, I deleted the contents of twitterlist.html and got to work on my HTML. This is the code I had after this step:

  <head>
    <title>Who Was There?</title>
  </head>
  <body>
    <div class="container">
      <div class="row">
        <div class="span6">
          <div class="well">
            <h4>Did you go to the <a href="">Designer Potluck</a>? Sign in with Twitter to add your name.</h4>
            {{loginButtons}}
          </div>
          <table class="table">
            <tr>
              <td>
                <a target="_blank" href="{{userName}}"><img src="{{image}}"/> {{fullName}}</a>
              </td>
            </tr>
          </table>
        </div>
      </div>
    </div>
  </body>
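The placeholder idea can be sketched in plain JavaScript. Note that this is a toy illustration, not Meteor’s actual template engine (Meteor uses Handlebars, with reactive data binding on top); it only shows how {{tag}} placeholders map to fields on a data object.

```javascript
// Toy substitution: replace each {{key}} with the matching field from `data`.
// NOT Meteor's rendering engine — just the idea behind the placeholders.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] != null ? String(data[key]) : '';
  });
}

var html = render('<a href="{{userName}}">{{fullName}}</a>',
                  { userName: 'SachaGreif', fullName: 'Sacha Greif' });
// html === '<a href="SachaGreif">Sacha Greif</a>'
```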

    Step 4: Configure Twitter Sign-In (3 Minutes)

    You’ll have noticed the {{loginButtons}} Handlebars tag, which inserts a log-in button on your page. If you try to click it right now, it won’t work, and Meteor will ask you for additional information.

    You need to fill in your app's Twitter credentials.
    You’ll need to fill in your app’s Twitter credentials. Larger view.

    To get this information, we first need to tell Twitter about our app. Follow the steps on the screen and create a new Twitter app; once you’re done, try logging in. If everything has worked right, you should now have a user account in the app!

    Creating a new Twitter app.
    Creating a new Twitter app. Larger view.

    To test this out, open your browser’s console (in the WebKit inspector or in Firebug) and type this:

    Meteor.user()

    This will retrieve the currently logged-in user, and, if everything has gone right, it will give you your own user object in return (something like Object {_id: "8ijhgK5icGrLjYTS7", profile: Object, services: Object}).

    Step 5: Split It Into Templates (5 Minutes)

    You’ll have noticed that our HTML has room to display only a single user. We’ll need some kind of loop to iterate over the whole list. Thankfully, Handlebars provides us with the {{#each xyz}}{{/each}} helper (where xyz are the objects you want to iterate on, usually an array), which does just that.

    We’ll also split the code into a few templates to keep things organized. The result is something like this:

  <head>
    <title>Who Was There?</title>
  </head>
  <body>
    <div class="container">
      {{> content}}
    </div>
  </body>

  <template name="content">
    <div class="row">
      <div class="span6">
        <div class="well">
          <h4>Did you go to the <a href="">Designer Potluck</a>? Sign in with Twitter to add your name.</h4>
          {{loginButtons}}
        </div>
        <table class="table">
          {{#each users}}
            {{> user}}
          {{/each}}
        </table>
      </div>
    </div>
  </template>

  <template name="user">
    <tr>
      <td>
        <a target="_blank" href="{{userName}}"><img src="{{image}}"/> {{fullName}}</a>
      </td>
    </tr>
  </template>

    Step 6: Hook Up Our Template (5 Minutes)

    Our template is all set up, but it’s iterating over empty air. We need to tell it what exactly this users variable in the {{#each users}} block is. This block is contained in the content template, so we’ll give that template a template helper.

    Delete the contents of twitterlist.js, and write this instead:

    Template.content.users = function () {
      return Meteor.users.find();
    };

    What we’re doing here is defining Template.content.users as a function that returns Meteor.users.find().

    Meteor.users is a special collection created for us by Meteor. Collections are Meteor’s equivalent of MySQL tables. In other words, they’re a list of items of the same type (such as users, blog posts or invoices). And find() simply returns all documents in the collection.

    We’ve now told Meteor where to find that list of users, but nothing’s happening yet. What’s going on?

    Step 7: Fix Our Tags (5 Minutes)

    Remember when we typed this?

    <a target="_blank" href="{{userName}}"><img src="{{image}}"/> {{fullName}}</a>

    The {{userName}}, {{image}} and {{fullName}} are just random placeholders that I picked for the sake of convenience. We’d be pretty lucky if they corresponded to actual properties of our user object! (Hint: they don’t.)

    Let’s find out the “real” properties with the help of our friend, the browser console. Open it up, and once more type this:


    The object returned has all of the fields we need. By exploring it, we can quickly find out that the real properties are actually these:

    • {{services.twitter.screenName}}
    • {{services.twitter.profile_image_url}}
    • {{}}

    Let’s make the substitutions in our template and see what happens.

    It works! Our first and only user (you!) should now appear in the list. We’re still missing some fields, though, and only the user’s full name appears. We need to dig deeper into Meteor to understand why.

    A Database On The Client

    We haven’t really touched on what Meteor does behind the scenes yet. Unlike, say, PHP and MySQL, with which your data lives only on the server (and stays there unless you extract it from the database), Meteor replicates your server-side data in the client and automatically syncs both copies.

    This accomplishes two things. First, reading data becomes very fast because you’re reading from the browser’s own memory, and not from a database somewhere in a data center.

    Secondly, modifying data is extremely fast as well, because you can just modify the local copy of the data, and Meteor will replicate the changes for you server-side in the background. But this new paradigm comes with a caveat: We have to be more careful with data security.
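The “modify locally, sync in the background” idea can be sketched as follows. This shows only the shape of the pattern, not Meteor’s implementation; the LocalStore name and the sendToServer callback are illustrative.

```javascript
// Sketch of optimistic local writes with background replication.
// `sendToServer` stands in for Meteor's automatic client/server sync.
function LocalStore(sendToServer) {
  this.items = [];
  this.sendToServer = sendToServer;
}
LocalStore.prototype.insert = function (doc) {
  this.items.push(doc);   // local copy updated immediately: the UI feels instant
  this.sendToServer(doc); // replication happens behind the scenes
};

var synced = [];
var store = new LocalStore(function (doc) { synced.push(doc); });
store.insert({ text: 'hello' });
// both the local copy and the (simulated) server now have the document
```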

    Step 8: Make the App Secure (1 Minute)

    We’ll address data security in terms of both writing and reading. First, let’s prevent people from writing whatever they want to our database. This is simple enough because all we need to do is remove Meteor’s insecure package:

    meteor remove insecure

    This package comes bundled with every new Meteor app to speed up development (letting you insert data client-side without having to set up all of the necessary checks and balances first), but it is obviously not meant for production. And because our app won’t need to write to the database at all (except for creating new users — but that’s a special case that Meteor already takes care of), we’re pretty much done!

    More On Security

    While we’re on the topic of security, Meteor apps also come with a second default package, autopublish, which takes care of sending all of the data contained in your server-side collections to the client.

    Of course, for a larger app, you probably won’t want to do that. After all, some of the information in your database is supposed to remain private, and even if all your data is public, sending all of it to the client might not be good for performance.

    In our case, this doesn’t really matter because we do want to “publish” (i.e. send from the server to the client) all of our users. Don’t worry, though — Meteor is still smart enough not to publish sensitive information, such as passwords and authentication tokens, even with autopublish on.

    Step 9: Add Follow Buttons (8 Minutes)

    While visitors can now click on a name to go to their Twitter profile, simply displaying follow buttons for each user would be much better. This step took a little tinkering to get right. It turns out that Twitter’s default follow button code doesn’t play nice with Meteor.

    After 15 minutes of unsuccessful attempts, I turned to the Google and quickly found that for single-page apps, Twitter suggests using an iframe instead.

    This worked great:

    <iframe style="width: 300px; height: 20px;" src="//{{services.twitter.screenName}}" height="240" width="320" frameborder="0" scrolling="no"></iframe>

    Step 10: Deploy (1 Minute)

    The last step is to deploy our app and test it in production. Once again, Meteor makes this easy. No need to find a hosting service, register, launch an instance, and do a Git push. All you need to do is go back to the Terminal and type this:

    meteor deploy myApp

    Here, myApp is a unique subdomain that you pick (it doesn’t have to be the same as the app’s name). Once you’ve deployed, your app will live at Go ahead and ask a few people to register: You’ll see their Twitter profiles added to the list in real time!

    Going Further

    Of course, I had to gloss over a lot of key Meteor concepts to keep this tutorial light. I barely mentioned collections and publications, and I didn’t even really talk about Meteor’s most important concept, reactivity. To learn more about Meteor, here are a few good resources:

    • Documentation, Meteor
      This is a required reference for any Meteor developer. And it’s cached, meaning you can even access it offline.
    • EventedMind
      Chris Mather puts out two Meteor screencasts every Friday. They’re a great help when you want to tackle Meteor’s more advanced features.
    • Discover Meteor
      I’m obviously biased, but I think our book is one of the best resources to get started with Meteor. It takes you through building a real-time social news app (think Reddit or Hacker News) step by step.
    • Blog, Discover Meteor
      We also make a lot of information available for free on our blog. We suggest looking at “Getting Started With Meteor” and “Useful Meteor Resources.”
    • Prototyping With Meteor
      A tutorial we wrote for NetTuts that takes you through building a simple chat app.

    I truly believe Meteor is one of the best frameworks out there for quickly building apps, and it’s only going to get better. Personally, I’m really excited to see how the framework evolves in the next couple of months. I hope this short tutorial has given you a taste of what Meteor’s all about and has made you curious to learn more!

    (il) (ea) (al)

    © Sacha G for Smashing Magazine, 2013.

  • Facing The Challenge: Building A Responsive Web Application


    We are talking and reading a lot about responsive Web design (RWD) these days, but very little attention is given to Web applications. Admittedly, RWD still has to be ironed out. But many of us believe it to be a strong concept, and it is here to stay. So, why don’t we extend this topic to HTML5-powered applications? Because responsive Web applications (RWAs) are both a huge opportunity and a big challenge, I wanted to dive in.

    Building a RWA is more feasible than you might think. In this article, we will explore ideas and solutions. In the first part, we will set up some important concepts. We will build on these in the second part to actually develop a RWA, and then explore how scalable and portable this approach is.

    Part 1: Becoming Responsible

    Some Lessons Learned

    It’s not easy to admit, but recently it has become more and more apparent that we don’t know many things about users of our websites. Varying screen sizes, device features and input mechanisms are pretty much RWD’s reasons for existence.

    From the lessons we’ve learned so far, we mustn’t assume too much. For instance, a small screen is not necessarily a touch device. A mobile device could be over 1280 pixels wide. And a desktop could have a slow connection. We just don’t know. And that’s fine. This means we can focus on these things separately without making assumptions: that’s what responsiveness is all about.

    Progressive Enhancement

    The “JavaScript-enabled” debate is so ’90s. We need to optimize for accessibility and indexability (i.e. SEO) anyway. It’s fair to claim that JavaScript is required for Web apps and, thus, that there is no real need to pre-render HTML (because SEO usually matters less for apps). But because we are going responsive, we will inherently pay a lot of attention to mobile and, thus, to performance as well. This is why we are betting heavily on progressive enhancement.

    Responsive Web Design

    RWD has mostly to do with not knowing the screen’s width. We have multiple tools to work with, such as media queries, relative units and responsive images. No matter how wonderful RWD is conceptually, some technical issues still need to be solved.

    Not many big websites have gone truly responsive since The Boston Globe. (Image credits: Antoine Lefeuvre)

    Client-Side Solutions

    In the end, RWD is mostly about client-side solutions. Assuming that the server basically sends the same initial document and resources (images, CSS and JavaScript) to every device, any responsive measures will be taken on the client, such as:

    • applying specific styles through media queries;
    • using (i.e. polyfilling) <picture> or @srcset to get responsive images;
    • loading additional content.

    Some of the issues surrounding RWD today are the following:

    • Responsive images haven’t been standardized.
    • Devices still load the CSS behind media queries that they never use.
    • We lack (browser-supported) responsive layout systems (think flexbox, grid, regions, template).
    • We lack element queries.

    Server-Side Solutions: Responsive Content

    Imagine that these challenges (such as images not being responsive and CSS loading unnecessarily) were solved on all devices and in all browsers, and that we didn’t have to resort to hacks or polyfills in the client. This would transfer some of the load from the client to the server (for instance, the CMS would have more control over responsive images).

    But we would still face the issue of responsive content. Although many believe that the constraints of mobile help us to focus, to write better content and to build better designs, sometimes it’s simply not enough. This is where server-side solutions such as RESS and HTTP Client Hints come in. Basically, by knowing the device’s constraints and features up front, we can serve a different and optimized template to it.

    Assuming we want to COPE, DRY and KISS and stuff, I think it comes down to where you want to draw the line: the more important performance and content tailored to each device are, the more necessary server-side assistance becomes. But then we would also have to bet on user-agent detection and on content negotiation. I’d say that this is a big threshold, but your mileage may vary. In any case, I can see content-focused websites getting there sooner than Web apps.

    Having said that, I am focusing on RWAs in this article without resorting to server-side solutions.

    Responsive Behavior

    RWD is clearly about layout and design, but we will also have to focus on responsive behavior. It is what makes applications different from websites. Fluid grids and responsive images are great, but once we start talking about Web applications, we also have to be responsive in loading modules according to screen size or device capability (i.e. pretty much media queries for JavaScript).

    For instance, an application might require GPS to be usable. Or it might contain a large interactive table that just doesn’t cut it on a small screen. And we simply can’t set display: none on all of these things, nor can we build everything twice.

    We clearly need more.

    Part 2: Building RWAs

    To quickly recap, our fundamental concepts are:

    • progressive enhancement,
    • responsive design,
    • responsive behavior.

    Fully armed, we will now look into a way to build responsive, context-aware applications. We’ll do this by declaratively specifying modules, conditions for loading modules, and extended modules or variants, based on feature detection and media queries. Then, we’ll dig deeper into the mechanics of dependency injection to see how all of this can be implemented.

    Declarative Module Injection

    We’ll start off by applying the concepts of progressive enhancement and mobile first, and create a common set of HTML, CSS and JavaScript for all devices. Later, we’ll progressively enhance the application based on content, screen size, device features, etc. The foundation is always plain HTML. Consider this fragment:

    <div data-module="myModule">
        <p>Pre-rendered content</p>
    </div>

    Let’s assume we have some logic to query the data-module attribute in our document, to load up the referenced application module (myModule) and then to attach it to that element. Basically, we would be adding behavior that targets a particular fragment in the document.

    This is our first step in making a Web application responsive: progressive module injection. Also, note that we could easily attach multiple modules to a single page in this way.
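A minimal sketch of that injection logic might look like the following. The attribute name matches the article’s examples, but the loader is an assumption: loadModule(name, callback) stands in for whatever module loader you use (an AMD require, for instance).

```javascript
// Sketch: find every [data-module] element, load the referenced module,
// and attach it to that element. `loadModule` is a placeholder loader.
function injectModules(doc, loadModule) {
  var nodes = doc.querySelectorAll('[data-module]');
  Array.prototype.forEach.call(nodes, function (node) {
    loadModule(node.getAttribute('data-module'), function (Module) {
      new Module({ el: node }); // behavior attaches to the pre-rendered fragment
    });
  });
}
```

Because the loop handles every matching element, attaching multiple modules to a single page falls out of this for free.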

    Conditional Module Injection

    Sometimes we want to load a module only if a certain condition is met — for instance, when the device has a particular feature, such as touch or GPS:

    <div data-module="find/my/dog" data-condition="gps">
        <p>Pre-rendered fallback content if GPS is unavailable.</p>
    </div>

    This will load the find/my/dog module only if the geolocation API is available.

    Note: For the smallest footprint possible, we’ll simply use our own feature detection for now. (Really, we’re just checking for 'geolocation' in navigator.) Later, we might need more robust detection and so delegate this task to a tool such as Modernizr or Has.js (and possibly PhoneGap in hybrid mode).
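Evaluating a data-condition value could be sketched like this. The function name and the detect map are illustrative; detect would hold booleans from whatever detection you use (e.g. { gps: 'geolocation' in navigator }), and the "!" prefix supports negated conditions such as the "!small" used later in the article.

```javascript
// Sketch: decide whether a data-condition is satisfied.
// `detect` maps condition names to booleans (feature or media-query results).
function conditionMet(condition, detect) {
  var negated = condition.charAt(0) === '!';
  var name = negated ? condition.slice(1) : condition;
  var result = !!detect[name];
  return negated ? !result : result;
}
```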

    Extended Module Injection

    What if we want to load variants of a module based on media queries? Take this syntax:

    <div data-module="myModule" data-variant="large">
        <p>Pre-rendered content</p>
    </div>

    This will load myModule on small screens and myModule/large on large screens.

    For brevity, this single attribute contains the condition and the location of the variant (by convention). Programmatically, you could go mobile first and have the latter extend from the former (or separated modules, or even the other way around). This can be decided case by case.
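The by-convention path resolution could be sketched as follows. The function name is illustrative, and currentSize would come from whatever size detection you use (such as the CSS-driven technique in the next section).

```javascript
// Sketch: load the base module by default, and "<module>/<variant>"
// when the current screen size matches the declared variant.
function resolveModule(moduleName, variant, currentSize) {
  if (variant && variant === currentSize) {
    return moduleName + '/' + variant; // e.g. 'myModule/large' on large screens
  }
  return moduleName;
}

// resolveModule('myModule', 'large', 'large') → 'myModule/large'
// resolveModule('myModule', 'large', 'small') → 'myModule'
```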

    Media Queries

    Of course, we couldn’t call this responsive if it wasn’t actually driven by media queries. Consider this CSS:

    @media all and (min-width: 45em) {
    	body:after {
    		content: 'large';
    		display: none;
    	}
    }
    Then, from JavaScript this value can be read:

    var size = window.getComputedStyle(document.body,':after').getPropertyValue('content');

    And this is why we can decide to load the myModule/large module from the last example if size === "large", and load myModule otherwise. Being able to conditionally not load a module at all is useful, too:

    <div data-module="myModule" data-condition="!small">
        <p>Pre-rendered content</p>
    </div>

    There might be cases for media queries inside module declarations:

    <div data-module="myModule" data-matchMedia="min-width: 800px">
        <p>Pre-rendered content</p>
    </div>

    Here we can use the window.matchMedia() API (a polyfill is available). I normally wouldn’t recommend doing this because it’s not very maintainable. Following breakpoints as set in CSS seems logical (because page layout probably dictates which modules to show or hide anyway). But obviously it depends on the situation. Targeted element queries may also prove useful:

    <div data-module="myModule" data-matchMediaElement="(min-width: 600px)"></div>

    Please note that the names of the attributes used here represent only an example, a basic implementation. They’re supposed to clarify the idea. In a real-world scenario, it might be wise to, for example, namespace the attributes, to allow for multiple modules and/or conditions, and so on.

    Device Orientation

    Take special care with device orientation. We don’t want to load a different module when the device is rotated. So, the module itself should be responsive, and the page’s layout might need to accommodate this.

    Connecting The Dots

    The concept of responsive behavior allows for a great deal of flexibility in how applications are designed and built. We will now look into where those “modules” come in, how they relate to application structure, and how this module injection might actually work.

    Applications and Modules

    We can think of a client-side application as a group of application modules that are built with low-level modules. As an example, we might have User and Message models and a MessageDetail view to compose an Inbox application module, which is part of an entire email client application. The details of implementation, such as the module format to be used (for example, AMD, CommonJS or the “revealing module” pattern), are not important here. Also, defining things this way doesn’t mean we can’t have a bunch of mini-apps on a single page. On the other hand, I have found this approach to scale well to applications of any size.

    A Common Scenario

    An approach I see a lot is to put something like <div id="container"> in the HTML, and then load a bunch of JavaScript that uses that element as a hook to append layouts or views. For a single application on a single page, this works fine, but in my experience it doesn’t scale well:

    • Application modules are not very reusable because they rely on a particular element to be present.
    • When multiple applications or application modules are to be instantiated on a single page, they all need their own particular element, further increasing complexity.

    To solve these issues, instead of letting application modules control themselves, what about making them more reusable by providing the element they should attach to? Additionally, we don’t need to know which modules must be loaded up front; we will do that dynamically. Let’s see how things come together using powerful patterns such as Dependency Injection (DI) and Inversion of Control (IOC).

    Dependency Injection

    You might have wondered how myModule actually gets loaded and instantiated.

    Loading the dependency is pretty easy. For instance, take the string from the data-module attribute (myModule), and have a module loader fetch the myModule.js script.

    Let’s assume we are using AMD or CommonJS (either of which I highly recommend) and that the module exports something (say, its public API). Let’s also assume that this is some kind of constructor that can be instantiated. We don’t know how to instantiate it, because we don’t know exactly what it is up front. Should we instantiate it using new? What arguments should be passed? Is it a native JavaScript constructor function, a Backbone view or something completely different? Can we make sure the module attaches itself to the DOM element that we provide it with?

    We have a couple of possible approaches here. A simple one is to always expect the same exported value — such as a Backbone view. It’s simple but might be enough. It would come down to this (using AMD and a Backbone view):

    var moduleNode = document.querySelector('[data-module]'),
        moduleName = moduleNode.getAttribute('data-module');

    require([moduleName], function(MyBackBoneView) {
        new MyBackBoneView({
            el: moduleNode
        });
    });
    That’s the gist of it. It works fine, but there are even better ways to apply this pattern of dependency injection.

    IOC Containers

    Let’s take the excellent wire.js library by cujoJS. An important concept in wire.js is “wire specs,” which are essentially IOC containers. A wire spec performs the actual instantiation of the application modules based on a declarative specification. Going this route, data-module should reference a wire spec (instead of a module) that describes which module to load and how to instantiate it, allowing for practically any type of module. Now, all we need to do is pass the reference to the spec and the viewNode to wire.js. We can simply define this:

    wire([specName, { viewNode: moduleNode }]);

    Much better. We let wire.js do all of the hard work. Besides, wire has a ton of other features.

    In summary, we can say that our declarative composition in HTML (<div data-module="">) is parsed by the composer, which consults the advisor about whether the module should be loaded (data-condition) and which module to load (data-module or data-variant), so that the dependency injector (DI, wire.js) can load and apply the correct spec and application module:

    Declarative Composition

    Detections for screen size and device features that are used to build responsive applications are sometimes implemented deep inside application logic. This responsibility should be laid elsewhere, decoupled more from the particular applications. We are already doing our (responsive) layout composition with HTML and CSS, so responsive applications fit in naturally. You could think of the HTML as an IOC container to compose applications.

    You might not like to put (even) more information in the HTML. And honestly, I don’t like it at all. But it’s the price to pay for optimized performance when scaling up. Otherwise, we would have to make another request to find out whether and which module to load, which defeats the purpose.

    Wrapping Up

    I think the combination of declarative application composition, responsive module loading and module extension opens up a boatload of options. It gives you a lot of freedom to implement application modules the way you want, while supporting a high level of performance, maintainability and software design.

    Performance and Build

    Sometimes RWD actually decreases the performance of a website when implemented superficially (such as by simply adding some media queries or extra JavaScript). But for RWA, performance is actually what drives the responsive injection of modules or variants of modules. In the spirit of mobile first, load only what is required (and enhance from there).

    Looking at the build process to minify and optimize applications, we can see that the challenge lies in finding the right approach to optimize either for a single application or for reusable application modules across multiple pages or contexts. In the former case, concatenating all resources into a single JavaScript file is probably best. In the latter case, concatenating resources into a separate shared core file and then packaging application modules into separate files is a sound approach.

    A Scalable Approach

    Responsive behavior and complete RWAs are powerful in a lot of scenarios, and they can be implemented using various patterns. We have only scratched the surface. But technically and conceptually, the approach is highly scalable. Let’s look at some example scenarios and patterns:

    • Sprinkle bits of behavior onto static content websites.
    • Serve widgets in a portal-like environment (think a dashboard, iGoogle or Netvibes). Load a single widget on a small screen, and enable more as screen resolution allows.
    • Compose context-aware applications in HTML using reusable and responsive application modules.

    In general, the point is to maximize portability and reach by building on proven concepts to run applications on multiple platforms and environments.

    Future-Proof and Portable

    Some of the major advantages of building applications in HTML5 are that they’re future-proof and portable. Write HTML5 today and your efforts won’t be obsolete tomorrow. The list of platforms and environments where HTML5-powered applications run keeps growing rapidly:

    • As regular Web applications in browsers;
    • As hybrid applications on mobile platforms, powered by Apache Cordova (see note below):
      • iOS,
      • Android,
      • Windows Phone,
      • BlackBerry;
    • As Open Web Apps (OWA), currently only in Firefox OS;
    • As desktop applications (such as those packaged by the Sencha Desktop Packager):
      • Windows,
      • OS X,
      • Linux.

    Note: Tools such as Adobe PhoneGap Build, IBM Worklight and Telerik’s Icenium all use Apache Cordova APIs to access native device functionality.


    You might want to dive into some code or see things in action. That’s why I created a responsive Web apps repository on GitHub, which also serves as a working demo.


    Honestly, not many big websites (let alone true Web applications) have gone truly responsive since The Boston Globe. However, looking at deciding factors such as cost, distribution, reach, portability and auto-updating, RWAs are both a huge opportunity and a big challenge. It’s only a matter of time before they become much more mainstream.

    We are still looking for ways to get there, and we’ve covered just one approach to building RWAs here. In any case, declarative composition for responsive applications is quite powerful and could serve as a solid starting point.

    (al) (ea)

    © Lars Kappert for Smashing Magazine, 2013.

  • Smashing Conference 2013: A Community Event That Will Change Everything


    Update (13.06.2013): SmashingConf 2013 sold out just 48 hours after ticket sales launched. However, some workshop tickets are still available. We can’t wait to welcome you, dear attendees, in September in Freiburg! In fact, we’ve got quite a few surprises waiting for you; please stay tuned. You won’t be disappointed.

    Guess what? The Smashing Conference is coming! 2 single-track conference days, 3 full-day workshops, 16 excellent speakers, and only 300 available seats. We’d be honoured to welcome you in our home town Freiburg, on September 9–11th 2013, at the foot of the legendary, beautiful Black Forest in Southern Germany.

    You can enjoy the well-known “Münsterwurst” opposite the conference venue, also known as a “lange Rote”. Of course, there are also vegetarian tofu sausages available. Image credit.

    SmashingConf 2013: We Are Building The Web

    We want the Smashing Conference to be a unique, valuable and friendly event for everybody involved. Not only do we want to provide new perspectives into Web design in general; more importantly, we want the event to focus on how exactly we, designers and developers, work, design and code to solve real-life problems.

    Tickets can now be purchased via the SmashingConf 2013 page.

    We’ll explore in detail what techniques, strategies and tools we use, but also what lessons we can learn from our personal experiences, successes and failures. Smashing Conference is a conference about how we work and play — and how we build the Web today.

    Get your ticket!

    Speakers and Workshops

    We’ve invited excellent speakers who all have something to share, be it personal experiences or case studies on large projects. We’re very happy to welcome Jason Santa Maria, Dan Mall, Luke Wroblewski, Dan Rubin, Inayaili de Leon, David Březina, Andy Hume, Tim Kadlec and — of course — the Mystery Speaker among the first confirmed speakers. You can find more information about the talks on the SmashingConf 2013 Speakers page.

    Taking notes the old-fashioned way. Image credit.

    We are also happy to announce 3 full-day and 1 half-day practical, hands-on workshops that will take place after the two main conference days: Dan Mall will be teaching how to be the typographer you were born to be, Dan Rubin will be exploring how to design for User Experience, Luke Wroblewski will be exploring how we can improve Web forms for mobile, and Addy Osmani will be teaching how to become the real front-end warrior.

    Does it all sound good to you? Well, it gets even better: if you purchase a Conference Ticket + Workshop ticket, you can save 15% off the regular price! So what are you waiting for?

    Date, Location, Prices

    The conference will take place on September 9–11th, 2013, in the Historic Merchant Hall in our lovely home town Freiburg, Germany. We have only 300 seats, so the number of tickets is very limited. Also, we’ve blocked every single hotel in town for you — you can select your hotel on the SmashingConf 2013 Location page (“Smashing Conference” is the magic word for hotel booking!).


    Airbnb could be a good option as well, but be quick — there aren’t many rooms available. Freiburg is known as a vacation spot, so perhaps you’d like to combine the conference with a relaxing family vacation on the outskirts of the Black Forest? We’ve prepared some information about the location of Freiburg, so you don’t get lost in the suburbs of Southern Germany while travelling here. Now if that isn’t the experience of a lifetime, what is? (No, seriously, you will enjoy it!)

    The price per ticket is €369 (incl. VAT). We’re organizing 3 full-day (€349) and 1 half-day (€229) workshops. Make sure to consider booking a workshop as well — if you buy both a conference ticket and a workshop, you save 15% off the price right away. Three days of learning, sharing and networking — it just keeps getting better! No lunch will be provided, but drinks and snacks will be, of course.

    Get your ticket!

    Sponsors, Dear Sponsors

    We keep the ticket prices affordable for everyone, and we’re happy to welcome sponsors who help us make the conference smashing in every possible way. If you’re interested in sponsoring the event, please contact Vitaly at hello [@] smashingconf [dot] com. We love our sponsors; you make the event possible, and we’d be honoured to have you involved!

    Snapshot of SmashingMag’s 2012 Conference badge and lanyard.

    • “This conference was well worth the money and I hope we will make it next year. Well done Smashing Magazine!” (Source)
    • “The two days of talks offered a very good balance of design-, philosophy-, typography-, and development-based topics. [...] Thanks to all who attended, organized, cared, spoke and made this an overall very great event!” (Source)
    • “It was an immensely successful conference [...] The talks, the beer and the interesting people definitely made me want to go visit Freiburg again next year!” (Source)


    Follow us at @smashingconf and get more details on the SmashingConf 2013 website. Also, more speakers will be announced soon, so get ready to be smashed with a few exciting surprises and announcements!

    Questions? Shoot us an email anytime — we’d love to assist you in every possible way and would be humbled and happy to welcome you in our lovely home town Freiburg this September!

    Auf Wiedersehen!

    Get your ticket!

    © Vitaly Friedman for Smashing Magazine, 2013.

  • Front-End Ops


    When a team builds a complex application, there is often a common breakdown of roles. Specifically on the back end, there are database engineers, application engineers and operations engineers, or something close to this. In recent years, more and more application logic is being deferred to the client side. For some reason, though, operations folks aren’t going with it.

    I recently wrote an article on “Deploying JavaScript Applications.” It was largely well received, and I was happy with the content, but one negative comment stuck out to me. I probably didn’t have the reaction that the commenter was intending, but it pointed out something to me nonetheless.

    “With all due respect, may I ask if you actually enjoy your job? I am a dev, and I do enjoy using tech to do stuff to a point. If your role is to squeeze every last second of performance out of your app, then yea, all this stuff must be cool. BUT if you are a coder doing something else and then come back to all of this as well, then wow, I don’t know how you haven’t gone mad already. I’d be sick to the stomach if I had to do all of this, in addition to my usual work.”

    See, I had written my article with a few too many assumptions. I understood ahead of time that a few of my solutions weren’t globally applicable, and that many people wouldn’t have the time or energy to implement them. What I didn’t fully grasp was how different the role in that article is from the picture that people have of a front-end developer in their head. Up to this point, a front-end developer has had just a few operations duties lumped into their role, and even then, many people have chosen to skip those steps (that’s why Steve Souders is constantly yelling at you to make your pages faster).

    I think things are about to shift, and I’d (humbly) like to help guide that shift, because I think it’ll be great for the Web.

    The Front-End Operations Engineer

    “Front-end operations engineer” is not a title you’ve likely come across, but hopefully it’s one that you will. Such a person would need to be an expert at serving and hosting front-end resources. They’d need to be pros at Grunt (or something similar) and have strong opinions about modules. They would find the best ways to piece together the parts of a Web application, and they’d be pros at versioning, caching and deployment.

    A front-end operations engineer would own external performance. They would be critical of new HTTP requests, and they would constantly be measuring file size and page-load time. They wouldn’t necessarily always worry about the number of times that a loop can run in a second — that’s still an application engineer’s job. They own everything past the functionality. They are the bridge between an application’s intent and an application’s reality.

    A front-end operations engineer would be very friendly with the quality assurance team, and they would make sure that “performance” is a test that comes up green. They’d monitor client-side errors and get alerts when things go wrong. They’d make sure that migrations to new versions of the application go smoothly, and they’d keep all external and internal dependencies up to date, secure and stable. They are the gatekeepers of the application.


    We have reached a point where there is enough work to be done in the operations space that it often no longer serves us to have an application engineer do both jobs. When the application’s features are someone’s priorities, and that person has a full plate, they will typically deprioritize the critical steps in delivering their application most successfully to the end users.

    Not every company or team can afford this person, but even if someone puts on the “front-end operations” hat for one day a week and prioritizes their work accordingly, users win. It doesn’t matter how many features you have or how sexy your features are if they aren’t delivered to the user quickly, with ease, and then heavily monitored. Front-end operations engineers are the enablers of long-term progress.

    Builds And Deployment

    If you were to ask most back-end engineers which person on their team has traditionally worried about builds and deployment, I’m sure you’d get a mixed bag. However, a very sizeable chunk of engineers would tell you that they have build engineers or operations engineers who handle these things. In that world, this often entails generating an RPM file, spinning up EC2 instances, running things through continuous integration tools, and switching load balancers over to new machines. Not all of this will necessarily go away for a front-end operations engineer, but there will be new tools as well.

    A front-end operations engineer will be a master of the build tool chain. They’ll help run and set up the continuous integration (or similar) server but, more specifically, they’ll set up the testing instances that their application runs on and then, eventually, the deployment instances. They’ll integrate Git post-commit hooks into the application and run the tests (either in Node.js and PhantomJS or against something like Sauce Labs, Testling or BrowserStack) before anything gets merged into the master. They’ll need to make sure that those servers can take the raw code and, with a few commands, build up the resulting application.

    This is where many people use Grunt these days. With a quick grunt build, these machines could be serving the built version of an application in order to enable proper testing environments. The front-end operations engineer would be in charge of much that’s behind that command as well. grunt build could call out to RequireJS’ r.js build tool, or a Browserify process, or it could simply minify and concatenate a list of files in order. It would also do similar things to the CSS (or your favorite preprocessed CSS dialect), in addition to crushing images, building sprites and reducing requests in any other way necessary or possible.

    Front-end operations engineers would make sure that all of this stuff works on people’s local machines. A quick grunt test should be able to build everything locally, serve it and test it (likely with some WebDriver API-compatible server). They’d make sure that team members have the power to push their applications into the continuous integration environment and test them there. And they’d remove single points of failure from deployment (GitHub being down during launch wouldn’t scare them).

    They’d facilitate internal deployments of feature branches and future release branches. They’d make sure that the quality assurance team has an easy time of testing anything and that the managers have an easy time of demoing things that aren’t ready.

    They’d help build multiple versions of an application to best suit each of their core sets of users. This could mean builds for mobile or for old versions of Internet Explorer, but all of it should be relatively transparent to those who are programming against those feature, browser or device tests.

    They’d facilitate the process of taking a release, building it, uploading it to a static edge-cached content delivery network, and flipping the switch to make it live. And they’d have a documented and fast roll-back mechanism in place.

    Perhaps most importantly, they’d automate everything.

    (Image credits: Rube Goldberg)

    Tracking Speed

    The metric by which a front-end operations engineer would be judged is speed: the speed of the application, the speed of the tests, of the builds and deployment, and the speed at which other teammates understand the operational process.

    A front-end operations engineer would live in a dashboard that feeds them data. Data is king when it comes to speed. This dashboard would integrate as much of it as possible. Most importantly, it would constantly be running the team’s app in multiple browsers and tracking all important metrics of speed. This space currently doesn’t have a ton of options, so they’d likely set up a private cloud of WebPageTest instances. They’d put them in multiple zones around the world and just run them non-stop.

    They’d run against production servers and new commits and pull requests and anything they can get their hands on. At any given point, they’d be able to tell when, where, and what the surrounding circumstances were behind a slow-down. A decrease in speed would be directly correlated to some change, whether a new server, a diff of code, a dependency or third-party outage, or something similar.

    They’d have a chart that graphs the number of HTTP requests on load. They’d also have a chart that tells them the Gzip’ed and minified payload of JavaScript, CSS and images that are delivered on load. And they’d also go crazy and have the unGzip’ed payload of JavaScript so that they can measure the effect of code parsing, because they know how important it can be on mobile. They’d instrument tools like mod_pagespeed and nginx_pagespeed to catch any mistakes that fall through the cracks.

    They’d be masters of the latest development and measurement tools. They’d read flame graphs and heap snapshots of their apps from their development tools (in each browser that has them). They’d measure frames per second on scrolling and animations, prevent layout thrashing, build memory profiles, and keep a constant eye on compositing, rendering and the overall visual performance of the application. They’d do all of this for desktop and mobile devices, and they’d track trends in all of these areas.

    They’d religiously parallelize tasks. They’d track the application via waterfalls and .har data to make sure that all serial operations are necessary or intentional.

    They’d chart the average run time of the tests, builds and deploys. And they’d fight to keep them low. They’d chart their external dependencies in size and speed. They may not have control over slow API requests, but they’d want to be able to point to the reasons why their numbers are increasing.

    They’d set an alarm if any of these numbers rose above an acceptable limit.
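    That alarm can start out as something very simple. A sketch, with made-up metric names and limits:

```javascript
// Return the names of all metrics that exceed their budgets.
function overBudget(metrics, budgets) {
  return Object.keys(budgets).filter(function (name) {
    return metrics[name] > budgets[name];
  });
}

// Hypothetical numbers: only the request count is past its limit.
var violations = overBudget(
  { requests: 52, gzippedKb: 410, buildSeconds: 95 },
  { requests: 40, gzippedKb: 500, buildSeconds: 120 }
);
// violations === ['requests']; time to notify someone.
```

    Hooked up to the dashboard’s data feed, a check like this is all that’s needed to turn passive charts into active alerts.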

    Monitoring Errors And Logs

    Managing logging is a critical job of a normal operations engineer. The data that is generated from running an application is vital to understanding where things go wrong in the real world. A front-end operations engineer would also instrument tools and code that allow the same level of introspection on the client side.

    This would often manifest itself as an analytics tool. Application engineers would be encouraged to log important events and any errors at certain levels to a logging service. These would be appropriately filtered and batched on the client and sent back as events to an internal or external analytics-style provider. The engineer would have enough information to identify the circumstances, such as browser name and version, application deployment version, screen size and perhaps a bit of other data. (Though they’d want to avoid storing personally identifiable information here.)
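    As a sketch of the batching side (the transport and field names are hypothetical), such a logger might queue events and flush them to the provider in groups:

```javascript
// A client-side logger that batches events and hands them off in
// groups. The `send` function is the transport (e.g. a POST to a
// logging endpoint); a real implementation would also flush on a
// timer and on page unload.
function ErrorLogger(send, batchSize) {
  this.send = send;
  this.batchSize = batchSize;
  this.queue = [];
}

ErrorLogger.prototype.log = function (level, message, context) {
  this.queue.push({
    level: level,
    message: message,
    context: context, // browser, app version, screen size...
    time: Date.now()
  });
  if (this.queue.length >= this.batchSize) {
    this.flush();
  }
};

ErrorLogger.prototype.flush = function () {
  if (this.queue.length) {
    this.send(this.queue.splice(0)); // empty the queue, send the batch
  }
};
```

    Batching keeps the logging itself from becoming a performance problem: one request per batch of events rather than one per error.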

    Logging stack traces can be very helpful in browsers that support them. You can integrate third-party services that do this for you.

    The front-end operations engineer would encourage a very small tolerance for errors. Any error that happened would be investigated and either fixed or logged differently. With the data that comes back, they should be able to visualize groups of errors by browser or by state information from the application. A threshold of errors would be allowed to occur, and when it is passed, engineers would be notified. Severities would be assigned, and people would be responsible for getting patches out or rolling back as necessary (with quick patches being heavily favored over roll-backs).

    Much like today’s operations people focus on the security of the systems they manage, a front-end operations engineer would have probes for XSS vulnerabilities and would constantly be looking for holes in the app (along with the quality assurance team).

    A front-end operations engineer would have an up-to-date picture of the state of the application in production. This is challenging in the front-end world, because your application doesn’t run on your machines — but that makes it even more necessary.

    Keeping Things Fresh and Stable

    The thing that the good operations people I’ve worked with in the past were best at was keeping things up to date. For some applications, stability and security are so deeply necessary that caution is the larger priority; but in most cases, failure to keep dependencies and environments up to date is what causes applications to go stale over time. We’ve all worked on a four-year-old project where all of the tools are very old versions of the ones we know, and getting good performance out of it is impossible.

    A front-end operations engineer would be effective at keeping dependencies up to date and at removing cruft in systems. When the next version of jQuery is released, they’d use their skills to switch out the dependency in the application to work with the new version and then test it to validate the change. They’d keep Grunt up to date (and Node.js along with it). When WebP becomes viable, they’d automate moving the application’s images over to that format.

    They’d work closely with the more architecture-oriented application engineers to make sure that the entire system still feels viable and is not lagging behind in any one area. They would keep on top of this stuff as often as possible. Updating a dependency here and there as you build is far easier than having a big “Update Everything” day. It encourages application developers to loosely couple dependencies and to build good, consistent interfaces for their own modules.

    A front-end operations engineer makes it viable and fun to work on a project long after it’s new.

    The Future

    I’m sure plenty of commenters will tell me that these tasks have been going on for years, and plenty will tell me that they should be the concern of all developers on a team. I would agree with both statements. I am not introducing new concepts; I’m compiling tasks we’ve all been doing for years and giving them a name. I think this will help us build better tools and document better processes in the future.

    The addition of this role to a team doesn’t absolve the other members of performance responsibilities. It’s just that right now, front-end operations are no one’s explicit priority on most of the teams that I’ve encountered, and because of that, they often get skipped in crunch time. I think there’s enough to be done, especially in the configuration and monitoring of these tools, outside of the normal job of a front-end engineer, to justify this role.

    Most importantly, regardless of whether a new job comes from these tasks, or whether we solve the problem in a different way, we all need to be conscious of the importance of solving these problems in some way. You simply can’t ignore them and still deliver reliable, robust applications with a great user experience. Addressing these concerns is critical to the stability and longevity of our applications and to the happiness of programmers and users.

    If we build with that in mind, it helps the Web win, and we all want the Web to win.


    © Alex Sexton for Smashing Magazine, 2013.

  • Gone In 60 Frames Per Second: A Pinterest Paint Performance Case Study


    Today we’ll discuss how to improve the paint performance of your websites and Web apps. This is an area that we Web developers have only recently started looking at more closely, and it’s important because it could have an impact on your user engagement and user experience.

    Frame Rate Applies To The Web, Too

    Frame rate is the rate at which a device produces consecutive images to the screen. A low frames per second (FPS) means that individual frames can be made out by the eye. A high FPS gives users a more responsive feel. You’re probably used to this concept from the world of gaming, but it applies to the Web, too.

    Long image decoding, unnecessary image resizing, heavy animation and data processing can all lead to dropped frames, which reduces the frame rate, resulting in janky pages. We’ll explain what exactly we mean by “jank” shortly.

    Why Care About Frame Rate?

    Smooth, high frame rates drive user engagement and can affect how much users interact with your website or app.

    At EdgeConf earlier this year, Facebook confirmed this when it mentioned that in an A/B test, it slowed down scrolling from 60 FPS to 30 FPS, causing engagement to collapse. That said, if you can’t do high frame rates and 60 FPS is out of reach, then you’d at least want something smooth. If you’re doing your own animation, this is one benefit of using requestAnimationFrame: the browser can dynamically adjust to keep the frame rate normal.

    In cases where you’re concerned about scrolling, the browser can manage the frame rate for you. But if you introduce a large amount of jank, then it won’t be able to do as good a job. So, try to avoid big hitches, such as long paints, long JavaScript execution times, long anything.

    Don’t Guess It, Test It!

    Before getting started, we need to step back and look at our approach. We all want our websites and apps to run more quickly. In fact, we’re arguably paid to write code that runs not only correctly, but quickly. As busy developers with deadlines, we find it very easy to rely on snippets of advice that we’ve read or heard. Problems arise when we do that, though, because the internals of browsers change very rapidly, and something that’s slow today could be quick tomorrow.

    Another point to remember is that your app or website is unique, and, therefore, the performance issues you face will depend heavily on what you’re building. Optimizing a game is a very different beast to optimizing an app that users will have open for 200+ hours. If it’s a game, then you’ll likely need to focus your attention on the main loop and heavily optimize the chunk of code that is going to run every frame. With a DOM-heavy application, the memory usage might be the biggest performance bottleneck.

    Your best option is to learn how to measure your application and understand what the code is doing. That way, when browsers change, you will still be clear about what matters to you and your team and will be able to make informed decisions. So, no matter what, don’t guess it, test it!

    We’re going to discuss how to measure frame rate and paint performance shortly, so hold onto your seats!

    Note: Some of the tools mentioned in this article require Chrome Canary, with the “Developer Tools experiments” enabled in about:flags. (We — Addy Osmani and Paul Lewis — are engineers on the Developer Relations team at Chrome.)

    Case Study: Pinterest

    The other day we were on Pinterest, trying to find some ponies to add to our pony board (Addy loves ponies!). So, we went over to the Pinterest feed and started scrolling through, looking for some ponies to add.

    Addy adding some ponies to his Pinterest board, as one does.

    Jank Affects User Experience

    The first thing we noticed as we scrolled was that scrolling on this page doesn’t perform very well — scrolling up and down takes effort, and the experience just feels sluggish. When they come up against this, users get frustrated, which means they’re more likely to leave. Of course, this is the last thing we want them to do!

    Pinterest showing a performance bottleneck when a user scrolls.

    This break in consistent frame rate is something the Chrome team calls “jank,” and we’re not sure what’s causing it here. You can actually notice some of the frames being drawn as we scroll. But let’s visualize it! We’re going to open up Frames mode and show what slow looks like there in just a moment.

    Note: What we’re really looking for is a consistently high FPS, ideally matching the refresh rate of the screen. In many cases, this will be 60 FPS, but it’s not guaranteed, so check the devices you’re targeting.

    Now, as JavaScript developers, our first instinct is to suspect a memory leak as being the cause. Perhaps some objects are being held around after a round of garbage collection. The reality, however, is that very often these days JavaScript is not a bottleneck. Our major performance problems come down to slow painting and rendering times. The DOM needs to be turned into pixels on the screen, and a lot of paint work when the user scrolls could result in a lot of slowing down.

    Note: HTML5 Rocks specifically discusses some of the causes of slow scrolling. If you think you’re running into this problem, it’s worth a read.

    Measuring Paint Performance

    Frame Rate

    We suspect that something on this page is affecting the frame rate. So, let’s go open up Chrome’s Developer Tools and head to the “Timeline” and “Frames” mode to record a new session. We’ll click the record button and start scrolling the page the way a normal user would. Now, to simulate a few minutes of usage, we’re going to scroll just a little faster.

    Using Chrome’s Developer Tools to profile scrolling interactions.

    Up, down, up, down. What you’ll notice now in the summary view up at the top is a lot of purple and green, corresponding to painting and rendering times. Let’s stop recording for now. As we flip through these various frames, we see some pretty hefty “Recalculate Styles” and a lot of “Layout.”

    If you look at the legend to the very right, you’ll see that we’ve actually blown our budget of 60 FPS, and in many cases we’re not even hitting 30 FPS. It’s performing quite poorly. Now, each of these bars in the summary view corresponds to one frame — i.e. all of the work that Chrome has to do in order to be able to draw an app to the screen.

    Chrome’s Developer Tools showing a long paint time.

    Frame Budget

    If you’re targeting 60 FPS, which is generally the optimal frame rate to target these days because it matches the refresh rate of the devices we commonly use, then you’ll have a 16.7-millisecond budget in which to complete everything — JavaScript, layout, image decoding and resizing, painting, compositing — everything.

    Note: A constant frame rate is our ideal here. If you can’t hit 60 FPS for whatever reason, then you’re likely better off targeting 30 FPS, rather than allowing a variable frame rate between 30 and 60 FPS. In practice, this can be challenging to code because when the JavaScript finishes executing, all of the layout, paint and compositing work still has to be done, and predicting that ahead of time is very difficult. In any case, whatever your frame rate, ensure that it is consistent and doesn’t fluctuate (which would appear as stuttering).
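    The budget itself is just arithmetic: one second divided by the target frame rate.

```javascript
// One second divided by the target frame rate gives the per-frame
// budget in milliseconds.
function frameBudgetMs(fps) {
  return 1000 / fps;
}

frameBudgetMs(60); // ≈ 16.7 ms to do everything for one frame
frameBudgetMs(30); // ≈ 33.3 ms
```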

    If you’re aiming for low-end devices, such as mobile phones, then that frame budget of 16 milliseconds is really more like 8 to 10 milliseconds. This could be true on desktop as well, where your frame budget might be lowered as a result of miscellaneous browser processes. If you blow this budget, you will miss frames and see jank on the page. So, you likely have somewhere nearer 8 to 10 milliseconds, but be sure to test the devices you’re supporting to get a realistic idea of your budget.

    An extremely costly layout operation of over 500 milliseconds.

    Note: We’ve also got an article on how to use the Chrome Developer Tools to find and fix rendering performance issues that focuses more on the timeline.

    Going back to scrolling, we have a sneaking suspicion that a number of unnecessary repaints are occurring on this page with onscroll.

    One common mistake is to stuff just way too much JavaScript into the onscroll handlers of a page — making it difficult to meet the frame budget at all. Aligning the work to the rendering pipeline (for example, by placing it in requestAnimationFrame) gives you a little more headroom, but you still have only those few milliseconds in which to get everything done.

    The best thing you can do is just capture values such as scrollTop in your scroll handlers, and then use the most recent value inside a requestAnimationFrame callback.
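    That pattern can be sketched as follows. The frame scheduler is injected here purely so the sketch can be exercised outside a browser; in the real thing you would pass `window.requestAnimationFrame`.

```javascript
// Record the latest scroll offset cheaply in the handler, and do the
// expensive work at most once per frame inside a rAF callback.
function makeScrollHandler(raf, work) {
  var latest = 0;
  var ticking = false;

  function frame() {
    ticking = false;
    work(latest); // heavy lifting happens here, once per frame
  }

  return function onScroll(scrollTop) {
    latest = scrollTop; // cheap: just store the value
    if (!ticking) {
      ticking = true; // only one frame callback pending at a time
      raf(frame);
    }
  };
}
```

    In the browser, this would be wired up with `var onScroll = makeScrollHandler(window.requestAnimationFrame.bind(window), render);` and `window.addEventListener('scroll', function () { onScroll(window.scrollY); });`, so that however often the scroll event fires, `render` runs at most once per frame with the most recent value.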

    Paint Rectangles

    Let’s go back to Developer Tools → Settings and enable “Show paint rectangles.” This visualizes the areas of the screen that are being painted with a nice red highlight. Now look at what happens as we scroll through Pinterest.

    Enabling Chrome Developer Tools’ “Paint Rectangles” feature.

    Every few milliseconds, we experience a big bright flash of red across the entire screen. There seems to be a paint of the whole screen every time we scroll, which is potentially very expensive. What we want to see is the browser just painting what is new to the page — so, typically just the bottom or top of the page as it gets scrolled into view. The cause of this issue seems to be the little “scroll to top” button in the lower-right corner. As the user scrolls, the fixed header at the top needs to be repainted, but so does the button. The way that Chrome deals with this is to create a union of the two areas that need to be repainted.

    Chrome shows freshly painted areas with a red box.

    In this case, there is a rectangle from the top left to top right, but not very tall, plus a rectangle in the lower-right corner. This leaves us with a rectangle from the top left to bottom right, which is essentially the whole screen! If you inspect the button element in Developer Tools and either hide it (using the H key) or delete it and then scroll again, you will see that only the header area is repainted. The way to solve this particular problem is to move the scroll button to its own layer so that it doesn’t get unioned with the header. This essentially isolates the button so that it can be composited on top of the rest of the page. But we’ll talk about layers and compositing in more detail in a little bit.
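    The union behaviour is easy to see with a little geometry. This is a simplification of what the browser actually does, with made-up viewport numbers, but it shows why two small damage rectangles in opposite corners force a near-full-screen repaint:

```javascript
// A simplified illustration of rectangle union: the result is the
// bounding box of both inputs, not their combined area.
function union(a, b) {
  var left = Math.min(a.left, b.left);
  var top = Math.min(a.top, b.top);
  var right = Math.max(a.left + a.width, b.left + b.width);
  var bottom = Math.max(a.top + a.height, b.top + b.height);
  return { left: left, top: top, width: right - left, height: bottom - top };
}

// A short header strip plus a small button in the opposite corner...
var header = { left: 0, top: 0, width: 1280, height: 90 };
var button = { left: 1200, top: 660, width: 60, height: 40 };

// ...union to a rectangle covering essentially the whole viewport.
var damage = union(header, button);
```

    Promoting the button to its own layer removes it from this union, so only the thin header strip needs repainting on scroll.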

    The next thing we notice has to do with hovering. When we hover over a pin, Pinterest paints a bar containing “Repin,” “Comment” and “Like” buttons — let’s call this the action bar. When we hover over a single pin, it paints not just the bar but also the elements underlying it. Painting should happen only on those elements that you expect to change visually.

    A cause for concern: full-screen flashes of red indicate a lot of painting.

    There’s another interesting thing about scrolling here. Let’s keep our cursor hovered over this pin and start scrolling the page again.

    Every time we scroll through a new row of images, this action bar gets painted on yet another pin, even though we don’t mean to hover over it. This comes down more to UX than anything else, but scrolling performance in this case might be more important than the hover effect during scrolling. Hovering amplifies jank during scrolling because the browser essentially pauses to go off and paint the effect (the same is true when we roll out of the element!). One option here is to use a setTimeout with a delay to ensure that the bar is painted only when the user really intends to use it, an approach we covered in “Avoiding Unnecessary Paints.” A more aggressive approach would be to measure the mouseenter or the mouse’s trajectory before enabling hover behaviors. While this measure might seem rather extreme, remember that we are trying to avoid unnecessary paints at all costs, especially when the user is scrolling.
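    A sketch of that setTimeout approach follows. The timer functions are injected only so the sketch can be exercised outside a browser; in the real thing you would pass `setTimeout` and `clearTimeout` directly.

```javascript
// Delay hover work so the action bar is painted only when the user
// genuinely dwells on a pin, not merely scrolls past it.
function makeHoverIntent(show, delayMs, setTimer, clearTimer) {
  var pending = null;

  return {
    enter: function (pin) {
      pending = setTimer(function () {
        pending = null;
        show(pin); // only now do we pay the paint cost
      }, delayMs);
    },
    leave: function () {
      if (pending !== null) {
        clearTimer(pending); // user moved on before the delay elapsed
        pending = null;
      }
    }
  };
}
```

    Wired to `mouseenter`/`mouseleave` with a delay of 100–200 milliseconds, this avoids painting the action bar on every row of pins that drifts under a stationary cursor during scrolling.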

    Overall Paint Cost

    We now have a really great workflow for looking at the overall cost of painting on a page; go back into Developer Tools and “Enable continuous page repainting.” This feature will constantly paint to your screen so that you can find out what elements have costly paint times. You’ll get this really nice black box in the top corner that summarizes paint times, with the minimum and maximum also displayed.

    Chrome’s “Continuous Page Repainting” mode helps you to assess the overall cost of a page.

    Let’s head back to the “Elements” panel. Here, we can select a node and just use the keyboard to walk the DOM tree. If we suspect that an element has an expensive paint, we can use the H shortcut key (something recently added to Chrome) to toggle visibility on that element. Using the continuous paint box, we can instantly see whether this has a positive effect on our pages’ paint times. We should expect it to in many cases, because if we hide an element, we should expect a corresponding reduction in paint times. But by doing this, we might see one element that is especially expensive, which would bear further scrutiny!

    The “Continuous Page Repainting” chart showing the time taken to paint the page.

    For Pinterest’s website, we can do this to the categories bar or to the header, and, as you’d expect, because we no longer have to paint these elements at all, we see a drop in the time it takes to paint to the screen. If we want even more detailed insight, we can go right back to the timeline and record a new session to measure the impact. Isn’t that great? Now, while this workflow should work great for most pages, there might be times when it isn’t as useful. In Pinterest’s case, the pins are actually quite deeply nested in the page, which makes it hard for us to measure paint times in this workflow.

    Luckily, we can still get some good mileage by selecting an element (such as a pin here), going to the “Styles” panel and looking at what CSS styles are being used. We can toggle properties on and off to see how they affect the paint times. This gives us much finer-grained insight into the paint profile of the page.

    Here, we see that Pinterest is using box-shadow on these pins. We’ve optimized the performance of box-shadow in Chrome over the past two years, but in combination with other styles and when heavily used, it could cause a bottleneck, so it’s worth looking at.

    Pinterest has reduced continuous paint mode times by 40% by moving box-shadow to a separate element that doesn’t have border-radius. The side effect is slightly fuzzy-looking corners; however, it is barely noticeable due to the color scheme and the low border-radius values.
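    The technique Pinterest used can be approximated as follows. The markup and class names here are illustrative, not Pinterest’s actual code; the idea is simply that the rounded corners and the shadow live on different elements, so the browser never has to combine border-radius and box-shadow in one expensive paint.

    ```css
    /* Wrapper carries the shadow only — no rounded corners. */
    .pin-shadow {
      box-shadow: 0 1px 3px rgba(0, 0, 0, 0.3);
    }

    /* Inner element carries the rounded corners only — no shadow. */
    .pin-shadow > .pin {
      border-radius: 2px;
      background: #fff;
      overflow: hidden;
    }
    ```

    The trade-off, as noted above, is that the shadow’s corners stay square while the pin’s are rounded — barely noticeable at low border-radius values.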

    Note: You can read more about this topic in “CSS Paint Times and Page Render Weight.”

    Toggling styles to measure their effect on page-rendering weight.

    Let’s disable box-shadow to see whether it makes a difference. As you can see, it’s no longer visible on any of the pins. So, let’s go back to the timeline and record a new session in which we scroll the same way as we did before (up and down, up and down, up and down). We’re getting closer to 60 FPS now, and that’s just from one change.

    Public service announcement: We’re absolutely not saying don’t use box-shadow — by all means, do! Just make sure that if you have a performance problem, you measure correctly to find out what your own bottlenecks are. Always measure! Your website or application is unique, and so will any performance bottleneck be. Browser internals change almost daily, so measuring is the smartest way to stay up to date on the changes, and Chrome’s Developer Tools makes this really easy to do.

    Using Chrome Developer Tools to profile is the best way to track browser performance changes.

    Note: Eberhard Grather recently wrote a detailed post on “Profiling Long Paint Times With DevTools’ Continuous Painting Mode,” which you should spend some quality time with.

    Another thing we noticed: if you click on the “Repin” button, you’ll see the animated effect and the lightbox being painted — and a big red flash of repaint in the background. It’s not clear from the tooling whether the paint is the white cover or some other area being affected. Be sure to double-check that the paint rectangles correspond to the element or elements that you think are being repainted, and not just to what it looks like. In this case, it looks like the whole screen is being repainted, but it could well be just the white cover, which might not be all that expensive. It’s nuanced; the important thing is to understand what you’re seeing and why.

    Hardware Compositing (GPU Acceleration)

    The last thing we’re going to look at on Pinterest is GPU acceleration. In the past, Web browsers have relied pretty heavily on the CPU to render pages. This involved two things: firstly, painting elements into a bunch of textures, called layers; and secondly, compositing all of those layers together to the final picture seen on screen.

    Over the past few years, however, we’ve found that getting the GPU involved in the compositing process can lead to some significant speed-ups. The premise is that, while the textures are still painted on the CPU, they can be uploaded to the GPU for compositing. Assuming that all we do on future frames is move elements around (using CSS transitions or animations) or change their opacity, we simply provide these changes to the GPU and it takes care of the rest. We essentially avoid having to give the GPU any new graphics; rather, we just ask it to move existing ones around. This is something that the GPU is exceptionally quick at doing, thus improving performance overall.

    There is no guarantee that hardware compositing will be available and enabled on a given platform, but where it is, Chrome will switch it on the first time you use, say, a 3D transform on an element. Many developers use the translateZ hack to do just that. The other side effect of using this hack is that the element in question will get its own layer, which may or may not be what you want. It can be very useful to effectively isolate an element so that it doesn’t affect others as and when it gets repainted. It’s worth remembering, though, that uploading these textures from system memory to video memory is not necessarily very quick. The more layers you have, the more textures need to be uploaded and the more layers need to be managed, so it’s best not to overdo it.
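    As a sketch, the hack itself is a one-liner. The selector below is illustrative — apply it only to elements whose paint cost you have actually measured:

    ```css
    /* Promotes the element to its own composited layer in Chrome (and other
       browsers with hardware compositing). Visually a no-op, but the
       element's texture can then be moved or faded on the GPU without
       triggering a repaint of the surrounding page. */
    .expensive-element {
      transform: translateZ(0);
      /* Vendor prefixes such as -webkit-transform were still needed at the
         time of writing. */
    }
    ```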

    Note: Tom Wiltzius has written about the layer model in Chrome, which is a relevant read if you are interested in understanding how compositing works behind the scenes. Paul has also written a post about the translateZ hack and how to make sure you’re using it in the right ways.

    Another great setting in Developer Tools that can help here is “Show composited layer borders.” This feature will give you insight into those DOM elements that are being manipulated at the GPU level.

    Switching on composited layer borders will indicate Chrome’s rendering layers.

    If an element is taking advantage of GPU acceleration, you’ll see an orange border around it with this setting on. As we scroll through, though, we don’t really see any use of composited layers on this page — not when we click “Scroll to top” or otherwise.

    Chrome is getting better at automatically handling layer promotion in the background; but, as mentioned, developers sometimes use the translateZ hack to create a composited layer. Below is Pinterest’s feed with translateZ(0) applied to all pins. It’s not hitting 60 FPS, but it is getting closer to a consistent 30 FPS on desktop, which is actually not bad.

    Using the translateZ(0) hack on all Pinterest pins. Note the orange borders.

    Remember to test on both desktop and mobile, though; their performance characteristics vary wildly. Use the timeline in both, and watch your paint time chart in Continuous Paint mode to evaluate how fast you’re busting your budget.

    Again, don’t use this hack on every element on the page — it might pass muster on desktop, but it won’t on mobile. The reason is that there is increased video memory usage and an increased layer management cost, both of which could have a negative impact on performance. Instead, use hardware compositing only to isolate elements where the paint cost is measurably high.

    Note: In the WebKit nightlies, the Web Inspector now also gives you the reasons for layers being composited. To enable this, switch off the “Use WebKit Web Inspector” option and you’ll get the front end with this feature in there. Switch it on using the “Layers” button.

    A Find-and-Fix Workflow

    Now that we’ve concluded our Pinterest case study, what about the workflow for diagnosing and addressing your own paint problems?

    Finding the Problem

    • Make sure you’re in “Incognito” mode. Extensions and apps can skew the figures that are reported when profiling performance.
    • Open the page and the Developer Tools.
    • In the timeline, record and interact with your page.
    • Check for frames that go over budget (i.e. that take longer than the ~16.7 ms needed to sustain 60 FPS).
    • If you’re close to budget, then you’re likely way over the budget on mobile.
    • Check the cause of the jank. Long paint? CSS layout? JavaScript?
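    The “budget” in the checklist above is simple arithmetic, worth keeping in your head while reading the timeline (this snippet is ours, not part of DevTools):

    ```javascript
    // At 60 FPS, each frame has a budget of 1000 / 60 ≈ 16.7 ms. Any frame
    // whose total cost (script + layout + paint) exceeds that budget means
    // a dropped frame and visible jank.
    const TARGET_FPS = 60;
    const frameBudgetMs = 1000 / TARGET_FPS; // ≈ 16.67 ms

    // A frame that takes 25 ms to produce caps the page at:
    const effectiveFps = 1000 / 25; // 40 FPS
    ```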

    Spend some quality time with Frame mode in Chrome Developer Tools to understand your website’s runtime profile.

    Fixing the Problem

    • Go to “Settings” and enable “Continuous Page Repainting.”
    • In the “Elements” panel, hide anything non-essential using the hide (H) shortcut.
    • Walk through the DOM tree, hiding elements and checking the FPS in the timeline.
    • See which element(s) are causing long paints.
    • Uncheck styles that could affect paint time, and track the FPS.
    • Continue until you’ve located the elements and styles responsible for the slow-down.

    Switch on extra Developer Tools features for more insight.

    What About Other Browsers?

    Although at the time of writing, Chrome has the best tools to profile paint performance, we strongly recommend testing and measuring your pages in other browsers to get a feel for what your own users might experience (where feasible). Performance can vary massively between them, and a performance smell in one browser might not be present in another.

    As we said earlier, don’t guess it, test it! Measure for yourself, understand the abstractions, and get to know your browser’s internals. In time, we hope that the cross-browser tooling for this area improves so that developers can get an accurate picture of rendering performance, regardless of the browser being used.
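    Until that tooling arrives, a rough frame-rate probe can be scripted in any browser that supports requestAnimationFrame. The helper below is our own sketch, not from the article: collect frame timestamps and average them.

    ```javascript
    // Average frame rate over a set of frame timestamps (in milliseconds),
    // such as those passed to requestAnimationFrame callbacks.
    function averageFps(timestamps) {
      if (timestamps.length < 2) return 0;
      const elapsedMs = timestamps[timestamps.length - 1] - timestamps[0];
      return ((timestamps.length - 1) * 1000) / elapsedMs;
    }

    // In a browser, you might collect a couple of seconds of samples:
    // const samples = [];
    // (function tick(t) {
    //   if (t !== undefined) samples.push(t);
    //   if (samples.length < 120) requestAnimationFrame(tick);
    //   else console.log(averageFps(samples).toFixed(1) + ' FPS');
    // })();
    ```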


    Performance is important. Not all machines are created equal, and the fast machines that developers work on might not have the performance problems encountered on the devices of real users. Frame rate in particular can have a big impact on engagement and, consequently, on a project’s success. Luckily, a lot of great tools out there can help with that.

    Be sure to measure paint performance on both desktop and mobile. If all goes well, your users will end up with snappier, more silky-smooth experiences, regardless of the device they’re using.

    About the Authors

    Addy Osmani and Paul Lewis are engineers on the Developer Relations team at Chrome, with a focus on tooling and rendering performance, respectively. When they’re not causing trouble, they have a passion for helping developers build snappy, fluid experiences on the Web.


    © Addy Osmani for Smashing Magazine, 2013.
