Category Archives: eLearning

How to Apply Design Thinking to L&D (Part 1)

To create meaningful learning experiences, it helps to have a deep understanding of your users and their lives. Most of us rely on the ADDIE process to gain this understanding. ADDIE has served us well; however, there are three key aspects of what is referred to as Design Thinking that will enable you to gain an even deeper, richer understanding of your users’ performance problems:

  • Engaging in radical collaboration with your internal team, stakeholders and users
  • Iterating quick solutions internally and externally
  • Designing, developing and testing smaller prototypes “in the field” more often

Design Thinking is a 5-step process to help you design more useful, human-centered learning experiences. Its pioneers, David Kelley and Tim Brown of IDEO, define Design Thinking as:

a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.

Design Thinking has been primarily used in product design, but as you’ll see in this series of posts, there are parts that can enhance the process of training design.

Design Thinking consists of five “modes” or steps:

  1. Empathize
  2. Define
  3. Ideate
  4. Prototype
  5. Test

As we explore each mode, you’ll find several similarities to ADDIE. Design Thinking also has an inherent connection to the Agile methodology, which you can carefully leverage throughout your process transformation. Agile methods are great for software development, but tread carefully when applying them wholesale to L&D: some aspects of training are difficult to iterate on.

The main point I want to stress about Design Thinking is its focus on human-centered design. This graphic shows the three key components of human-centered design:

  • Business viability
  • People
  • Technology

[Figure: The Human-Centered Design Process]

You can see that they all intersect. You start with the needs of your audiences. Once you understand their needs, you can envision the opportunities your solution can offer, and then you can respond. Design Thinking as a practice evolved from human-centered design.

Mode 1: Empathy

When thinking about Empathy, let’s be clear: empathy is not sympathy! Empathy is placing yourself in another person’s shoes and feeling what the other person is feeling. You seek empathy to discover people’s explicit and implicit needs so that you can meet those needs through your design solutions. Seeking empathy for those you support fosters in you a personal commitment to their success.

To gain empathy, focus on real people and their stories. Not use cases, surveys, or inauthentic personas – real people YOU have observed in their context. A lot of this mode is about tearing down the barrier between you and the performer. Too often, we sit in our cubicles and gather information from secondary sources on what our audiences’ lives are like. If I’m chartered to design a training solution for FedEx drivers, I will need to go on a ride-along, and not just for an hour. I’ll need to ride along for the whole day, and maybe longer. I need to observe what their life is like across the workday, and experience multiple situations where their knowledge and skills are put into play as they perform their job. If you interview a person outside the context of real-time observation, you will get a “flavored response”: people rarely lie, they just don’t always tell the whole truth about what they do. You want to capture their real-life situations so your design can be informed by the truth in their tasks.

Gaining empathy involves these steps:

  • Observing
    • View users and their behavior in the context of their work.
  • Engaging
    • Interact with and interview users through both scheduled and unscheduled encounters.
  • Immersing
    • Experience what your user experiences.

The Empathy mode is not entirely different from the Analysis phase in ADDIE. However, the basic tenet of Design Thinking requires you to experience your user’s situation as deeply as you can. This leads to primary research, which is more useful for you in the long run: you won’t rely solely on secondary information to inform your design solution.

What resonates the most with you about the difference between Empathy in Design Thinking and Analysis in ADDIE?

In my next post, I’ll discuss the Define mode.

Applying Persuasive Expression in Learning Design

Research shows that people often perform tasks and make decisions after only minimal information processing; however, in persuasive settings such as online learning, most cognitive effort is first placed on the media presented. The human brain comprehends images almost instantaneously; text, by contrast, prompts the brain to systematically process its meaning first, which is usually slower. Images also ignite a more visceral emotional response, which makes it easier to motivate the viewer than plain text does. Advertising is an industry that relies primarily on the visceral response inherent in imagery to motivate viewers to purchase a product or service.

In online learning, we too often “miss the boat” when using imagery. We tend to rely on imagery as ornamentation to text, possibly because we lack the resources or time to effectively design imagery that conveys the entire and/or appropriate message. Instead, we often highlight specific portions of available stock imagery to achieve a desired message (via cropping, selective editing, zooming, etc.).

Ian Bogost has done considerable work in the area of imagery as persuasive rhetoric. He argues that visual fidelity implies authority and, likewise, that simplistic or unrefined graphics are often an indication of mediocrity. Consider low-resolution or “poorly designed” product packaging, for example; it often fails to attract the consumer. His argument should resonate with those of us in learning design, simply because we are facing a more media-mature audience, and are tasked with attracting, engaging and motivating that audience through a visual medium they have grown accustomed to and are often masters of. Poorly designed or inappropriate imagery will force this audience to exert considerable cognitive effort to process the lower-resolution media, or worse, will fail to attract and motivate them at all.

One thought, especially for resource-drained learning functions, is to rely on the heuristic model developed by Dr. Shelly Chaiken. The heuristic model relies on less systematic processing in the persuasive setting and requires less cognitive effort on the part of the learner. Since heuristic processing is relatively effortless for the learner, you are likely to have more success attracting and engaging them than with the systematic model. Source-of-truth elements are key to the heuristic model: a short video or an interactive content object delivered or designed by a subject matter expert will likely be accepted as valid by the learner more quickly.

For those of us lacking deep craft skill in media design on staff, consider reducing the number of media elements that serve only as ornaments to other elements, and instead rely on the heuristic model to design more credible visual content objects. You do need to consider the resolution of the media elements you display, but you do less harm with immediately perceived expertise than with ornamentation.

[Figure: The heuristic model for media design supporting the idea of persuasive rhetoric]

Key Elements of Your Learning Content Strategy, Pt. 2

In Part 1 of this three-part post, I discussed content organization and structure when formulating your overall learning content strategy. In this part, I’d like to discuss the role of authoring and delivery platforms, and their impact on your strategic and tactical implementation.

Too often instructional designers move straight to authoring and begin “assembling” their course right away. This is not surprising, because the businesses we support are often moving at a fast pace, and many of us juggle multiple projects simultaneously. Sometimes we want to “just get it done”. Working in this manner, however, can create a firehose of course content that becomes redundant and leads to fragmentation and lost productivity for you and your audiences.

Your focus, instead, should be on creating less content — content that is clear, simple, succinct and “elastic” — able to bend to the learner’s context. This content should also be in a format digestible by anyone in your target audience, anywhere, and on whatever device they have with them.

Easier said than done. Especially with looming deadlines always on the horizon. One of the biggest drivers affecting how you design and develop content revolves around your available resources. We can preach all day about the “right thing to do”, but if you don’t have access to much beyond what you can get done yourself, you live with what you have. Like we were told years ago, “You go to war with the army you have.”

This is the primary reason so many L&D teams are creating training in a non-elastic fashion — creating reams of content married to proprietary systems, and/or content that is siloed away from similar content that already exists. You are only the sum of the skills of your team. Years ago, a typical “CBT” development team consisted of skill-specific resources: graphic artists for both user interface and production graphics, instructional designers focused on designing content for learning, rich-media experts for animation, rendering, video and audio, and an editor and a quality assurance person. Running the show were a dedicated project manager and a technical liaison for the implementation. Now, with the proliferation of “rapid authoring tools”, many L&D functions are “teams of one” — in many orgs, access to a graphic artist and/or media producer is a luxury that’s just not available.

In light of this, and with constrained budgets, there are tweaks you can make to “level up” and start producing training content that gives your audiences what they really need. Before you think I’m about to suggest the latest version of a new authoring tool, step back. I’m not suggesting that at all. An authoring tool is the last component in your toolbelt you need to worry about. I’d like to hear that you’re close to becoming “tool agnostic” — flexible enough to pivot and change tools when your needs change. Let’s step back and consider how you can change your overall content strategy to free yourself from the chains of proprietary authoring systems and move beyond redundant, non-elastic content.

First, it’s important to consider how your team(s) perform their work. You should begin by investigating these processes:

  • System and systemic requirements

    • There is no “holy grail” when it comes to systemic constraints; each system has its own unique challenges. How your system functions is itself a design challenge. You need to work within that system to effect the change you’ll need so that you can implement it appropriately. You may need to change how you think about what you do before you can begin to change how the system operates.

  • Development workflows

    • Take a deep dive into how your team goes about their development process. Pinpoint the areas that cause the most pain, and look at who the main players are in those areas. Do you need different skillsets there? Do you need to add or subtract people from the mix to move things along more quickly? What’s working well across the process flow? How do you take what works well and duplicate it at other key points?

  • Content and app versioning control

    • Tricky, yes. But you need to reach outside your silo and look across the landscape of your organization to see where you can leverage other systems and infrastructure to make it easier to create, share and distribute content. There’s probably a lot going on that is duplicative, and there’s probably a lot you can easily cut out. Building relationships with other functions that also generate content can help you leverage what’s already being created. For the assets your team is creating, do the due diligence to add appropriate metadata and version information. Stop the train for a little bit so that you can build a process that will comfortably make your content more usable as you add to it.

  • Modifying and updating content

    • If your team members are working locally on their own hard drives creating content, ask yourself the fundamental question of “why”. The cloud is too convenient and easy not to be leveraging it. At the very least, shared content repositories are critical. I urge you to consider a content management system (CMS) — but one that can integrate well into your team’s workflow.

  • Searchability

    • Successful content development relies on the ability to curate “source of truth” content, develop new content, and integrate contextually relevant information to provide deeper meaning. The ability for your team (and, ultimately, your users) to discover and trust content is critical.

  • Metadata

    • I mentioned this above, but I recommend you take the time to create a tagging system that works not only for your team and your organization, but also takes into consideration wider industry-specific standards. A minimal sketch of what such a tagging scheme might look like follows this list.
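
To make the metadata recommendation more concrete, here is a minimal sketch of what a tagging scheme for a single learning content asset might look like, written in TypeScript. The field names (owner, sourceOfTruth, audience, and so on) are hypothetical placeholders rather than any standard; align them with whatever taxonomy your team, organization, and industry actually use.

```typescript
// A minimal, hypothetical metadata record for one learning content asset.
// Field names are illustrative only; adapt them to your own taxonomy.
interface ContentAssetMetadata {
  id: string;              // stable identifier used across systems
  title: string;
  version: string;         // content revision, e.g. semantic versioning
  owner: string;           // team or person accountable for updates
  sourceOfTruth: boolean;  // is this the canonical copy, or a derivative?
  audience: string[];      // roles or segments the asset targets
  tags: string[];          // searchable, controlled-vocabulary keywords
  format: string;          // e.g. "video", "html", "pdf", "scorm-package"
  lastReviewed: string;    // ISO 8601 date of the last accuracy review
}

// Example record: a made-up product-training video tagged for reuse and search.
const exampleAsset: ContentAssetMetadata = {
  id: "asset-00421",
  title: "Returns Processing Walkthrough",
  version: "2.1.0",
  owner: "field-ops-enablement",
  sourceOfTruth: true,
  audience: ["retail-associate", "store-manager"],
  tags: ["returns", "point-of-sale", "policy"],
  format: "video",
  lastReviewed: "2016-04-12",
};
```

Even a simple scheme like this, applied consistently and stored alongside the assets in a shared repository or CMS, makes content far easier to search, version, and reuse.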

Although you’re an instructional designer, you’re also going to have to wear different hats that span many domains if you want to appropriately establish workable methods for content creation and dissemination. Those may include IT, editorial, social community moderator, even digital curation — which is an entire profession on its own. If you have these available resources, you’re several steps ahead of the game. If not, it’s OK, but you’re no longer going to be a one-trick pony. Oh, and did I forget visual design? Yeah, there’s that. And let us not forget accessibility, multiple devices, and data tracking. Whew. And you get paid how much?

The good thing is, our elders gave us a workable model. It’s called ADDIE. I know a lot of us moan and groan about “old-school ADDIE”, but every profession needs a methodology, a way forward. A framework. Although what you do when “creating learning” often varies, you move among three big buckets:

  • Analysis

  • Design & Development

  • Implementation

Yeah, you really do. Too few of us are focused on the E (evaluation), and that’s too bad. More than likely, though, you have chosen tools and platforms that help you work within your framework. Which is probably the ADDI one. Right? You may think you’re Agile. You may think you’re neither. The reality is, what we do is a lot like making biscuits: you pinch them from the dough, one by one, put them in the oven, and wait till they’re edible. It takes what it takes. You may not sequence your steps the way others do, but there are certain steps you have to perform. I don’t advocate ADDIE, Agile, or any other method that consultants or academics come up with; whatever works for you is good enough. When I do this work, I create prototypes, iterate on them, and try to get feedback and make things better before I go forward with the “final output”. That’s a little bit of Agile sprinkled in. You do what you can do. Regardless of the authoring tool and/or platform, think of this: every deliverable you create encompasses two things: the strategy behind WHY you’re creating it, and the strategy behind making it consumable by those you’re creating it for.

Take those two elements as your foundation, using whatever framework or process you have, and then break down your tool or app into what it does to help you deliver. It may be Microsoft Word (or Google Docs) for storyboarding, PowerPoint for prototyping, Lectora for assembly, and so on. Focus on what the tool or app brings to the game and leverage its strengths.

Authoring is the act of assembly. You’re bringing together multiple media types into a cohesive experience. Delivery is the act of enabling your audience to consume the experience. Maybe that’s via an LMS? A webserver? Inherent in this duality are your needs and your learner’s needs. It’s a balancing act to preserve a usable experience between the two. Off to the side is the role of the CMS, or the system which serves the content (or makes it available to you). It’s kind of a trifecta if you will. At the end of the day, you want sustainable, flexible content objects that resonate for the businesses you support, while at the same time providing a meaningful learning experience.

Landing on the right combination of tools, apps, and platforms requires removing ambivalence about what you really need to get done, obtaining a deep understanding of the limitations of what you can actually achieve given your constraints (and we all have them), and recognizing the basics of how each element in the framework you work within functions.

How to Become an eLearning Pro

Recently, Christopher Pappas of the eLearning Industry Network asked me to contribute to an eBook he was putting together about how to become an eLearning Pro. The tips and tricks gathered there include advice from some of the industry's top practitioners, and I am humbled to be included in this motley crew. The eBook is free, and I highly recommend it. Some of the more salient tips include:

  • Cammy Bean discussing her list of top books and references that apply to eLearning
  • Connie Malamed listing the top areas of interest for new "learning pros" to focus on
  • And Joel Gardner's three strategies for becoming an eLearning pro

Check out the eBook here.

When Multiple Choice is Not Enough (Anatomy of a Bad Assessment Item)

[Screenshot 1: a typical multiple choice knowledge check]

One of the traditional tasks in instructional design is the creation of “knowledge checks” — standard quiz items usually placed in context with content. The thought behind these assessment items is to give the learner an opportunity to self-assess in sequence, immediately after information acquisition. A common item type is multiple choice. Multiple choice items can work if you have a great deal of time, a solid understanding of the material, and the ability to construct items that probe higher levels of reasoning. Too often, however, the reality is that instructional designers without domain expertise write multiple choice items that are irrelevant to real learning and, in many situations, cause more harm than good. Let’s dissect a multiple choice item and probe a bit deeper into why these items can be dangerous, and how to make them better.

[Screenshot 2: a multiple choice item with four radio-button options and a Submit button]

The inherent issue with multiple choice items is that they test only recall, and in this case the recall is requested seconds or minutes after the content is provided. To impede the process even further, the learner can’t progress until they answer. In this item, the learner is exposed to wrong responses as well as the correct one. The standard four options are offered with radio-style buttons: one correct answer and three distractors. The learner scans the list, makes a choice by clicking a radio button, and then clicks a “Submit” button to receive a response, such as this one:

[Screenshot 3: incorrect-response feedback with a large red “X”]

In this instance, feedback is displayed immediately because the instructional designer has chosen to allow only one try for the item. On an incorrect response, a “Sorry, that’s incorrect…” statement appears next to a large red “X”. Visual metaphors are strong, and in this case the type of visual reinforcement and its placement are critical to how useful the item is for learning. When the learner later attempts to recall the information this item supports, that unhelpful visual may be what comes to mind.

[Screenshot 4: revised feedback with the correct answer highlighted next to the learner’s selection]

For this to be good instructional design, the red “X” should appear over (or beside) the wrongly selected radio button, and the correct answer should be highlighted. Otherwise, the learner never has a correct visual to overlay the incorrect one; they leave with a powerful, incorrect visual instead of a bolder, corrective one. Proper feedback is critical as well. Sometimes learners are just told that the answer is wrong, without being shown the correct one. They then return to the item knowing only that whatever thought process or strategy they used was wrong, and can get stuck trying to remember which wrong response they chose while trying to pick the correct answer. If they have to repeat the process multiple times, they do not come away with a strong sense of knowing the correct answer; instead they feel relief that they finally guessed right and were able to progress. In the revised example above, the incorrect feedback statement is close to the selection, and the correct choice is highlighted with a feedback confirmation next to it on the screen. Additional feedback that adds context may also be appropriate.

When creating multiple choice items, ask yourself these questions:

  • Is your goal just to have learners take courses, or are you trying to ensure they learn something?
  • If you need someone to demonstrate mastery of procedures, multiple choice items may be an inappropriate mechanism for demonstrating that they can perform.

If you decide to use multiple choice items to promote learning, here are our recommendations (a rough sketch of this feedback logic follows the list):

  • Provide the correct answer after a wrong response.
  • Replace the incorrect visual with a bolder visual of the correct response.
  • Use a tracking system that requires the learner to answer a certain percentage (or all) of the questions correctly.
  • Provide personalized, meaningful feedback in-place on-screen whenever possible.
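
As a rough illustration of these recommendations, here is a minimal sketch of the feedback logic in TypeScript. The type and function names are hypothetical, and the rendering details (how you highlight the correct choice or place the “X”) depend entirely on your authoring environment; the point is the behavior: mark the learner’s selection, always surface the correct answer, give item-specific feedback, and track results against a mastery threshold.

```typescript
// Hypothetical data shape for a single multiple choice item.
interface ChoiceItem {
  prompt: string;
  options: string[];
  correctIndex: number;
  feedbackCorrect: string;    // item-specific confirmation and context
  feedbackIncorrect: string;  // explains why the correct answer is right
}

interface ItemResult {
  correct: boolean;
  markers: { optionIndex: number; marker: "incorrect-x" | "correct-highlight" }[];
  feedback: string;
}

// Evaluate a response and build the feedback the learner should see.
// The correct option is highlighted even when the learner is wrong,
// so the corrective visual overlays the incorrect one.
function evaluateResponse(item: ChoiceItem, selectedIndex: number): ItemResult {
  const correct = selectedIndex === item.correctIndex;
  const markers: ItemResult["markers"] = [];

  if (!correct) {
    // Place the "X" beside the option the learner actually chose...
    markers.push({ optionIndex: selectedIndex, marker: "incorrect-x" });
  }
  // ...and always highlight the correct answer.
  markers.push({ optionIndex: item.correctIndex, marker: "correct-highlight" });

  return {
    correct,
    markers,
    feedback: correct ? item.feedbackCorrect : item.feedbackIncorrect,
  };
}

// Track results against a mastery threshold (e.g., 80% of items correct).
function meetsMastery(results: ItemResult[], threshold = 0.8): boolean {
  const correctCount = results.filter((r) => r.correct).length;
  return results.length > 0 && correctCount / results.length >= threshold;
}
```

In most rapid authoring tools you would wire this behavior up through the tool’s own feedback and variable settings rather than in code, but the sequence is the same: mark the selection, surface the correct answer, and report mastery only once the threshold is met.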

We all think knowledge checks are innocent enough — important segues in the content sequence — little “breaks” that let the learner pause and think about what they just consumed. This can be a good thing as long as you make sure you’re putting forth the appropriate test item for both the learner and the business. At the end of the day, you don’t want to waste the learner’s time, and you really don’t want to spend precious resources designing learning experiences that don’t have a demonstrable return to the business.

This post was cowritten by Dolly Joseph of Wiggle Learning.