PROFESSIONAL POSTS
Instructional Design
Learners in today's fast-paced global climate require regular upskilling to meet the needs of the organization and to grow personally. Every learner has a preferred way of acquiring knowledge, which often calls for some form of blended learning.
Blended learning typically leverages a variety of methods to meet the needs of learners, including instructor-led training and eLearning, delivered synchronously or asynchronously. Whether using instructor-led training, eLearning, or a blended approach, an instructional design model is typically used to maximize efficiency and ensure engaging learning experiences.
Instructional design, also known as Instructional Systems Design, is the systematic practice of designing digital and physical resources to create learner experiences that result in the acquisition and application of knowledge and skills.
Fit for Purpose Instructional Frameworks
Measuring Training Effectiveness
I wanted to take a moment to discuss one of the key themes that is top of mind for most organizations today as more and more funds are funneled into learning and development. Often, I am asked, "How do we prove that the funds we are investing in training are accomplishing what we intended or set out to do as an organization?" Measuring the effectiveness of training can be challenging, but several training evaluation models used throughout the Learning and Development industry can assist by providing a framework. These include the Phillips ROI Model, Kaufman's Five Levels of Evaluation, Anderson's Model of Learning Evaluation, and summative vs. formative evaluation, to name a few. One model used widely in Learning and Development is Kirkpatrick's Four-Level Training Evaluation Model, whose four levels are Reaction, Learning, Behavior, and Results.
Although there are four levels to this model, implementing all of them is typically not considered practical or cost-effective. Most companies reserve Levels 3 and 4 for large, company-wide initiatives, while implementing Key Performance Indicators (KPIs) associated with Levels 1 and 2, Reaction and Learning, respectively. A balance of quantitative and qualitative metrics at Levels 1 and 2 is essential to mine Voice of the Customer (VoC) insight, whether from internal learners or external customers.
At Level 1, Reaction, data points should include participation rates, completion rates, satisfaction (CSAT) scores, and Net Promoter Scores (NPS), for example. Typically, NPS is reserved for a longer-term view of training, while CSAT is more often used at the micro level to assess individual training interactions. A qualitative question at Level 1 might be, "What topic/section did you find the most valuable?" This not only provides invaluable insight into learners' reactions but also gives learners an opportunity to reflect on what they have learned. At Level 2, Learning, data points should include pre- and post-assessment scores, formative quizzes, and pass rates, for example. Pre-assessments offer learners an opportunity to test out of content they already know, while also giving stakeholders a glimpse of what learners have gained by comparing pre-assessment scores to post-training assessment scores. A qualitative question at this level might be, "What did you learn from your training to help you perform at a higher level in your role?"
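As a rough illustration, the Level 1 and Level 2 quantitative metrics above reduce to simple arithmetic over survey and assessment data. The sketch below assumes hypothetical inputs: 0-10 NPS responses, 1-5 CSAT ratings, and matched pre/post assessment scores.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(ratings, threshold=4):
    """CSAT: percentage of 1-5 ratings at or above the satisfaction threshold."""
    return round(100 * sum(1 for r in ratings if r >= threshold) / len(ratings))

def learning_gain(pre, post):
    """Level 2: average improvement from pre- to post-assessment scores."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical survey and assessment data
print(nps([10, 9, 7, 6, 8, 9]))          # 33 (3 promoters, 1 detractor, 6 responses)
print(csat([5, 4, 3, 5, 4]))             # 80 (4 of 5 ratings at or above 4)
print(learning_gain([60, 70], [85, 90])) # 22.5 points average gain
```

The same pattern extends to a Customer Effort Score, which is typically just an average of per-task ease ratings.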
Metrics to show training effectiveness may also include a much wider array of data points, including, for example, likes/dislikes, engagement rates, time spent learning, learner drop-off rates, and badges earned through gamified learning. Typically, these data points are housed within the Learning Management System (LMS) and ultimately provide additional context to stakeholders. A Customer Effort Score (CES) might also be employed to gauge the ease with which trainees are able to learn. For example, points are awarded for the ease of signing on to the LMS, navigating assigned programs and courses, accessing additional supporting resources, completing the overall training pathway, or asking questions and receiving answers from facilitators.
Additionally, using a Learning Record Store (LRS) to collect experiences as Experience API (xAPI) data points can provide a much broader view of all learning. This approach captures data on a wide range of experiences a learner has online. Activities are recorded using simple, secure statements in the form "noun, verb, object," or "I did this." These additional data points can also help measure the overall effectiveness of training by looking at increases in self-directed learning experiences.
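To make the "noun, verb, object" structure concrete, here is a minimal sketch of an xAPI statement as it might be sent to an LRS. The learner, activity ID, and names below are hypothetical; a real integration would authenticate and POST this JSON to the LRS's statements endpoint.

```python
import json

# A minimal xAPI statement: actor ("I"), verb ("did"), object ("this").
# The learner and activity details are hypothetical examples.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/courses/hard-drive-replacement",
        "definition": {"name": {"en-US": "Hard Drive Replacement"}},
    },
}

# Serialize to the JSON payload an LRS would receive
print(json.dumps(statement, indent=2))
```

Because every statement shares this shape, an LRS can aggregate experiences from many sources (courses, videos, on-the-job tasks) into one record of learning.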
In closing, there are several approaches to measuring the success of training. The key is to decide what your organization's goals are, in terms of learning and retention, and which measures your organization believes will provide the most comprehensive picture with the most reasonable lift to achieve the desired outcomes.
Increasing Learner Responsibility
Increasing learner responsibility through a scaffolded framework establishes trust and encourages learners to experiment or try activities on their own.
An instructional model I like to use to increase learner responsibility is the facilitation strategy Gradual Release of Responsibility. This model encourages learners to take control of their own learning by moving them through a continuum of responsibility:
"I do - You watch"
"I do - You help"
"You do - I help", and
"You do - I watch"
Although often associated with K-12 education, this model may be particularly useful for onboarding and everboarding, with modifications to accommodate adult learners and learning online.
As an example, during the first stage, you might listen to a briefing on how to replace a hard drive in your computer, including a detailed breakdown of the steps. In the second stage, you might watch a video in which a facilitator explains in detail how to replace your current hard drive with a new Solid-State Drive. This could be followed by an opportunity for guided practice, working with a peer to replace the hard drive while the facilitator is on hand to answer questions and provide additional instruction as needed. Last, you could create a checklist, shared with peers, that illustrates your understanding of the steps to complete the replacement. This final step helps reinforce what you learned and builds confidence.
And you don't necessarily need to start at the beginning of the continuum; you can start anywhere within the framework depending on the needs of your learners.
The key to increasing learner responsibility is to meet learners where they are and provide scaffolding to promote proficiency in their newly acquired skill set.
Promoting Authentic Learning
Promoting authentic learning through increased technology integration, using an eLearning framework like the SAMR instructional model, offers opportunities for applying real-world experiences.
There is an increased push for online technology assisted learning. SAMR, an acronym for Substitution, Augmentation, Modification, and Redefinition, has roots in K-12 education. This model is easily adapted to adult learning, giving a framework by which to gauge whether we are using the right eLearning tool for the outcome we are trying to achieve.
At the Substitution level, you typically see more traditional activities using a digital delivery method. Examples include reading online and watching self-paced videos.
Augmentation offers learners the opportunity to begin incorporating other forms of technology into their learning. Adding hyperlinks, commenting, creating formative quizzes, and multimedia presentations are all great examples used to enhance learning.
Modification typically includes using a Learning Management System (LMS) to answer questions intended to promote discussion, interact with the facilitator, work with other learners online to encourage additional dialogue, and track completion of modules.
And Redefinition fundamentally transforms learning, enabling traditional activities that were previously impractical to do online. Examples include writing and practicing your own Hypertext Markup Language (HTML) or Cascading Style Sheets (CSS) code, using code readers or viewers, and sharing it with other learners via an online learning experience platform.
Promoting authentic learning using the SAMR model is not focused on using the most sophisticated tool, but rather how we can improve learning outcomes, how we can engage and empower learners through technology, and how eLearning can more closely resemble authentic, real-world learning.
Delivering Training Sequentially
Delivering training sequentially helps to increase learning outcomes. One instructional model that uses this step-by-step approach is ROPES, an acronym for Review, Overview, Presentation, Exercise, and Summary: a methodical framework for teaching a new topic or concept that may be applied to in-person or virtual training.
During the first step, facilitators conduct a review, tying background knowledge to the new topic they intend to introduce. The overview describes the topic and highlights the importance of the content; typically, the learning objectives are stated in this step as well. In the Presentation step, the facilitator covers the heart of the material, discussing and demonstrating steps and giving examples. In step 4, Exercise, learners take part in activities that let them apply the concept. And in the last step, Summary, the facilitator highlights key points and reviews the learning objectives.
Delivering training sequentially, using the ROPES facilitator-based instructional model, is a great way to organize the distribution of content, whether for in-person or virtual instructor-led training, and to help increase learning outcomes.
Instructional Systems Design Models
Creating Effective Training
ADDIE, developed in the 1970s at Florida State University for the military, is an acronym for a five-stage process: Analysis, Design, Development, Implementation, and Evaluation. It is arguably one of the most important models, providing a universal framework for instructional design. The framework was originally developed as a guideline for creating effective training and instructional materials.
In this model, often very little time is allocated to the first stage, Analysis. During this stage, learner requirements are detailed. However, this stage may be extended to provide a lens focused on performance criteria related to the business. Business requirements addressed at this stage may include:
Who is your audience?
What are the behavioral changes expected?
What medium will be used for delivery?
What are the eLearning considerations?
What is the project timeline?
The Design and Development phases may be lumped together. Design focuses on how you will learn, with a priority on the experience. The design is systematic: defining objectives, designing content and exercises, and mining subject matter experts' knowledge. It includes storyboarding to guide development. Development centers on what you will learn and the creation of materials. Developers bring the storyboard to life, assembling content from the design phase. Testing also takes place at this stage to assure quality.
During Implementation, facilitators are trained in preparation for instructor-led delivery. This is also the step where all supporting elements are established, including handouts, software, equipment, or tools, for example. If the content is an eLearning asset, resources are deployed to the Learning Management System (LMS), or other platform. Implementation may also include a combination of instructor-led delivery or eLearning, offering a blended learning approach.
And the last stage, Evaluation, although intended to be conducted continually throughout the entire process, focuses on assessing the quality and effectiveness of the training. This stage includes both formative and summative evaluation.
Many subsequent models have roots in the ADDIE model, and for this reason ADDIE will likely always be a required tool in any Learning Experience Designer's toolkit. The ADDIE model is often represented as a linear approach, with each stage finished and perfected before the next begins, which makes revising content more difficult and gives the impression that the model is more time-consuming than other, more flexible models. However, in recent years, the linear representation has been replaced by a cyclical approach to offer more flexibility.
Starting with the Desired Results
Backward Design, a comparatively recent instructional design model originally published in the 1998 book Understanding by Design by Wiggins and McTighe, starts the process with what learners should know by the end of the learning cycle, rather than, as in traditional models, designing and developing content (instruction, activities, and exercises) first.
This approach suggests that, in phase three, learning experiences should be planned with the final assessment in mind. The difference in the approach to traditional methods is quite significant.
At its core, Backward Design is quite simple, focusing on three key stages:
Identify learning outcomes
Determine learning assessments
Plan learning experiences
In the first phase, identify learning outcomes, the big ideas, skills, or concepts are determined. The goals identified should be categorized in order of importance. Goals fall into three main priorities:
Concepts worth being familiar with
Concepts that are important to know and do, and
Big ideas worth understanding
Typically, a Taxonomy of Learning is applied when writing and establishing goals, which helps organize learning outcomes from basic to more advanced cognitive priorities. Strategically categorizing priorities is important, as there will likely be limits to what learning outcomes can be achieved in one resource, or asset.
Phase two is marked by determining what criteria will be used to evaluate learners' progress. There are a variety of assessment types to choose from, including more traditional evaluations such as knowledge checks, quizzes, or tests.
However, there are also experiential assessments available, like open-ended questions, projects, and portfolios, for example. Each type has its place and should be representative of the level of expected knowledge transfer identified while prioritizing goals during the first phase.
Performance assessments ask learners to apply past experience to a new situation as a means of assessing understanding. In adult learning, this is ideal as it promotes trust and an opportunity for learners to use their past expertise and skill set to show what they already know.
After defining goals and deciding how learners will be evaluated, the design of content becomes the focus in phase three.
In traditional design models, instruction or content is developed before the assessment. With a traditional design approach, the assessment is an afterthought, often developed just prior to being given. The same is often true of eLearning assessments if emphasis isn't placed on their creation as part of an overall process.
Ideally the instructional strategies used to develop content should offer opportunities for learners to explore, gain insight, and new skills through practice. Exercises, activities, and interactions should permit learners to apply newly acquired skills through practice, giving them a chance to master new information.
There is one major criticism of Backward Design: the concern that it encourages instructing to the test. However, if your goal is to effect behavioral change, whether you are developing eLearning content or facilitator-led training, then identifying the desired learning outcomes and instructing to them gives you a better chance of achieving your goals.
Arguably, instructing to the test itself is not what Wiggins and McTighe had in mind; however, filtering out the noise, or the concepts that are lower priority, improves learners' chances of coming away successful.
The Backward Design approach may be used with other models but is readily integrated with more flexible frameworks like SAM or Rapid Prototyping, providing a more robust and agile instructional design process.
Iterating eLearning Deployment
SAM, or the Successive Approximation Model, was introduced in 2012 by Dr. Michael Allen. It is an alternative to the more traditional instructional design process, ADDIE, which takes a more rigid approach to the development of learning content.
Rather than giant, perfectly executed steps, SAM, takes a different approach to the instructional design process, by iterating eLearning deployment. This process enables instructional designers to be more flexible and agile, when deploying new or updated offerings.
SAM consists of three main iterative phases: Preparation, Iterative Design, and Iterative Development. SAM prioritizes developing rough prototypes, offering customers and stakeholders an opportunity to provide feedback early in the process. As the project progresses, prototypes are polished as each topic is reviewed and approved.
The first phase of SAM, Preparation, is intended to be quick and succinct, rather than starting out with a long, drawn-out analysis. Instead, it focuses on gathering information. SAM utilizes feedback from stakeholders and customers to get started quickly and adjust accordingly in future stages.
Although intended to be conducted quickly, this stage focuses on background information: collecting learning styles, understanding the intended audience, documenting previous experience or skill sets, and defining project goals and intended business outcomes.
Phase two, Iterative Design, is marked by the Savvy Start. Often this step is included as part of the first phase, Preparation. Whether included in the first or second phase, it serves as the kickoff, bringing the project team and stakeholders together.
This collaborative session is where the broader team, including Subject Matter Experts (SMEs) and Learning Experience Designers (LXDs), comes together to brainstorm ideas. Rough storyboards or sketches are often a result of this first meeting, considered the Savvy Start.
At this stage, the design begins, followed by repeated prototypes and reviews of the design before transitioning to the Iterative Development phase. The first drafts, or prototypes, are an approximation of the final resource.
In the final phase, Iterative Development, the design proof is developed, implemented, and evaluated. This design proof represents the project team's consensus and is used to verify the design and functionality of the resource.
The asset moves through various stages, including Alpha, Beta, and Gold versions, before the resource is finally rolled out. This approach limits risk and allows the instructional product to move quickly through creation while still permitting evaluation and review before deployment.
The alpha release is fully functional. The beta version incorporates feedback, correcting any errors or omissions from the alpha release. And the gold version is the final release, deployed at rollout. During each of these versions, the asset is evaluated, and feedback is implemented quickly to meet the project timeline.
SAM challenges linear approaches like ADDIE. SAM also addresses obstacles faced during the design process, like quality control, scope creep, timeline constraints, budgets, and managing Subject Matter Experts, by reducing risk and by employing successive approximations of the final solution. Rather than slow and steady, SAM is agile and built to address the need for performance-driven learning. Ideally, this model enables a philosophy of failing fast, with early testing and alternative solutions.
Some Learning Experience Designers are not comfortable with the review process, specifically the number of revisions. However, the beauty of this approach is that it enables designers to gather feedback early and often, which produces a final product that, more often than not, is just what the client ordered. Successive approximations allow the client to visualize the final solution and provide feedback from the outset of the project.