PROGRAM DESIGN
Introduction
The following, although not exhaustive, is meant to show the breadth and depth of my experience and expertise, as well as my overall approach and thought process in developing a Professional Learning program. My goal is to provide a real-world, practical, cost-efficient, and customer-centered approach to deploying and delivering professional development that enhances learning, increases retention, and improves outcomes.
I started the process by conducting an initial inventory of the current n2y professional learning offerings. This inventory is not exhaustive; it is based on what I could uncover by mining the n2y digital footprint. It enabled me to view the available resources and content with a broader lens as I developed the program and responded to the various task exercises.
I also made assumptions regarding n2y's technology capabilities, which may or may not be available currently. Consider these assumptions to be future-state feature recommendations or considerations. I believe in maximizing the available feature sets, leveraging them in ways they were not originally envisioned, and adding new features based on evaluation and potential return on investment.
I did not set this program up as one-size-fits-all, or as a prescribed set of steps each area must follow to achieve success. Rather, it is one perspective that will help drive initial discussions around my thought processes, and it is intended to open a dialogue about options that would be explored through investigation with various stakeholders.
Inventory
Needs Assessment
I typically prefer a rapid-prototyping approach to the design process. However, I have used a number of methodologies over the years, depending on the requirements and needs of the situation. With that said, conducting a needs assessment is vital to the success of the program. So, whether conducted at the outset or throughout the design process, it is an essential component of any program.
A needs assessment should be multi-tiered, and focus not only on the wants or desires of the administration but on the needs of the learners themselves, i.e., educators. To gain buy-in at all levels, and most importantly with adult learners, the program must contain opportunities for learners to take learning into their own hands, to own their learning. For example, this might include the ability for learners to choose their own pathway programs, enabling all learners to reach a common completion of the program or certification while providing various avenues for learners to show what they know by completing what interests them or pertains to their specific needs and goals for learning.
The results of the needs assessment will vary from partner to partner. The methods and tools used will be similar and contain core elements, with the ability to refine tools based on the business need. However, there are likely common themes from partner to partner which will enable operational efficiencies and still provide a focus on the unique needs of learners, at scale. The first step should involve identifying the specific business need of the partner, followed by a detailed gap analysis, an assessment of options, and finally a program recommendation.
The gap analysis should include core surveys, questionnaires, self-assessments, and observations. Self-assessments are critical, as they provide data points for comparison pre- and post-program completion. This first tier should be automated for efficiency and should fuel the next steps in the analysis. Next steps should include interviews and focus groups with a variety of audiences, based on the feedback elicited from the surveys, questionnaires, and observations.
Based on the list of needs generated by the gap analysis, an assessment is conducted to determine the best course of action based on the business need identified previously, the total cost of Professional Learning, and the anticipated Return on Investment. This assessment provides a list of options that are then positioned with the partner for consideration.
Key stakeholders should be involved during the entire process. This includes not only the teachers (the learners), administrators, parents, and students (if appropriate) but also the core n2y sales, product, and customer success teams that are managing the account. For example, external focus groups should make up a cross-functional team of stakeholders including administrators, teachers, and parents. It may be beneficial for these stakeholders to be separated into individual focus groups depending on the dynamics of each partner.
Program Design
Generally, the structure of a professional learning program should offer a combination of
self-paced online, asynchronous,
blended (self-paced online, asynchronous, and live online or in-person on-site), and
live online or in-person on-site,
professional learning to meet the diverse needs of learners, school/district size, budgets, and learning preferences. Although the program example below focuses on delivering the program online to provide a budget-friendly option for cost-sensitive partners, the modality may be changed to support partners who wish to have blended or in-person professional learning. These additional options would also support a broader set of learning preferences. Programs should also offer self-paced, semi-synchronous learning to enable online cohorts with a set duration, facilitated by assigned coaches who assist discussions, offer guidance, and provide research-grounded strategies and insight. Cohorts can be cross-functional within the school/district or multi-school/district for greater diversity. The duration should be 3-6 weeks based on the preference of partners or, if a cross-functional multi-district initiative, based on the goals of the program. This program is not intended to be a one-and-done professional development program; however, it also is not anticipated to be a year-long implementation. It should be finite, yet still be customer-centered and provide opportunities for partners to take their learning into their own hands during the course of the program.
To drive active learning and motivate learners, gamified learning should accompany all levels of the program, providing opportunities for learners to earn badges and achieve micro-credentials at various stages. Micro-credentials should be automated and transferable. Using LinkedIn credentials or the like is most appropriate, as credentials should carry from one organization to another, ensuring longevity. The more automated the process, the more consistent the experience. This will improve Customer Effort Scores (CES) --- more on CES in "Evaluation and Feedback". Credentials should include date of issue, expiration date (if applicable), credential ID, credential URL, certificate of completion, and badge. The program should not only offer badging and micro-credentials but should lead to certification, if desired by partners or learners.
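As a sketch of the credential fields listed above, the record might be modeled as a small data structure. All names, IDs, and URLs here are hypothetical illustrations; an actual badging platform would define its own schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MicroCredential:
    """One earned micro-credential, mirroring the fields listed above.
    Field names are illustrative, not an actual platform schema."""
    learner: str
    title: str
    credential_id: str
    credential_url: str
    issued_on: date
    expires_on: Optional[date] = None  # None if the credential never expires

    def is_valid(self, on: date) -> bool:
        """A credential is valid from its issue date until it expires (if ever)."""
        if on < self.issued_on:
            return False
        return self.expires_on is None or on <= self.expires_on

# Hypothetical example record:
badge = MicroCredential(
    learner="Jane Doe",
    title="Inclusive Instruction Fundamentals",
    credential_id="MC-2024-0042",
    credential_url="https://example.org/credentials/MC-2024-0042",
    issued_on=date(2024, 9, 1),
    expires_on=date(2026, 9, 1),
)
print(badge.is_valid(date(2025, 1, 15)))  # True
```

Keeping the expiration optional matches the "if applicable" wording above, and a structured record like this is what makes automated, transferable issuance practical.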
To ensure the program caters to the diverse needs of partners working with students with different disabilities, educators have the option to choose their own pathway, including selecting interactive webinars that pertain to their focus area and goals for learning during the program. Two additional topics have been added to the existing portfolio of interactive webinars: Kinesthetic Learning with Smart Boards, and Promoting Interactive, Systematic, and Active Learning with the Cornell System. These two sample topics address the growing implementation of front-of-classroom interactive displays in schools and offer an alternative to traditional notetaking approaches used in the classroom, promoting a more inclusive classroom. Topics should include solution-specific learning, demonstration of effective use of various instructional strategies that promote inclusion, and holistic approaches to incorporating supporting solutions which may be present in partner schools.
The use of journaling is intended to help drive retention of learning, activate collaborative engagement, provide more dynamic discussion and insights, and model additional instructional strategies among learning partners. By leveraging excitEd and interactive webinars as part of a holistic approach, viewership will grow, learners will gain additional insights related to solutions, additional opportunities for learning will emerge, and a broader set of experience and expertise across the n2y organization can be leveraged to enhance learning.
Demand forecasting should include sound predictions based on quantitative historical data and the seasonality fluctuations expected at peak times of the school year. The forecast should also include qualitative data, like the anticipated deployment of a new solution, a new professional learning bundle being offered, etc. The focus of this program requires predominantly online rather than onsite resources, in some ways simplifying demand forecasting and capacity planning. However, should onsite delivery be requested, this would require additional resources, possibly at peak times. Should demand exceed the capacity of the team, contract consultants should be utilized.
For capacity planning, I prefer to use simple visual tools like Asana, Jira, monday.com, MS Planner, or Trello. The basic steps needed to effectively and efficiently project staffing levels are to estimate anticipated demand (what tasks need to be conducted), determine required capacity (hours), calculate the team's availability (hours), measure the capacity gap, and align capacity with demand. Capacity planning can be made more predictive with an estimating tool that identifies the number of hours required to complete repetitive tasks. For example,
Deliver Interactive Webinars (2 hours)
Prep for Interactive Webinars (1 hour)
Create Interactive Webinar content (40 hours)
Coach partner cohorts (4 hours)
Research education insights (20 hours)
Develop implementation video (40 hours)
Design interactive course (200 hours)
There are industry-standard averages that may be leveraged to estimate some tasks, while others may require honing over time to document the average number of hours to complete the task. An example of an industry-standard average for creating a training video: for every 1 minute of polished video, it takes roughly 40 hours of research, development, and editing to produce the final product. With these estimates you can then begin to measure the gap or excess and make appropriate project, program, and portfolio adjustments to better anticipate lead and lag capacity of the program.
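The capacity-gap arithmetic described above can be sketched in a few lines. The per-task hours come from the example list; the demand counts and team availability below are purely hypothetical illustrations, not actual n2y figures.

```python
# Hours per task, from the example estimates above.
task_hours = {
    "Deliver interactive webinar": 2,
    "Prep interactive webinar": 1,
    "Create interactive webinar content": 40,
    "Coach partner cohort": 4,
    "Design interactive course": 200,
}

# Anticipated demand for the planning period (counts are hypothetical).
demand = {
    "Deliver interactive webinar": 12,
    "Prep interactive webinar": 12,
    "Create interactive webinar content": 2,
    "Coach partner cohort": 6,
    "Design interactive course": 1,
}

# Required capacity = sum of (hours per task x number of times performed).
required_hours = sum(task_hours[t] * n for t, n in demand.items())

# Team availability: 3 staff x 6 weeks x 30 productive hours/week (hypothetical).
available_hours = 3 * 6 * 30

# Positive gap = shortfall (consider contractors); negative = excess capacity.
gap = required_hours - available_hours
print(f"Required: {required_hours} h, available: {available_hours} h, gap: {gap} h")
```

With these illustrative numbers the team has excess capacity; as real averages are honed over time, the same arithmetic flags when contract consultants would be needed at peak times.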
Structure
The structure of the current n2y Professional Learning offerings is represented in the table, with additional assumptions reflecting my recommendations. The concept that less is more applies here. The intent should not necessarily be to add more offerings; more offerings than needed tend to cause CSAT, NPS, and CES scores (more on these scores later) to plummet. Rather, the intent is to develop new hybrid programs that bridge the current portfolio and add more depth and breadth to the offerings. This might include cohorts, community ambassador programs, etc. However, this does not preclude creating new content within the programs, new approaches, or new interactive webinar topics to meet the needs of partners. The overall structure of the program would bridge Smart Start, Essentials, and Essentials Pro, with each building on the previous, preparing partners to be trainers or coordinators and community advocates or ambassadors of the n2y solutions.
Program
Interactive Webinars
Instructional Strategies
The instructional strategies I leverage to deliver effective professional learning sessions vary widely and depend on the specific goals and objectives established through partnership with customers. Promoting active learning to engage adult learners often comes down to showcasing techniques that they can in turn use in their own classrooms with their learners. Sharing data points or metrics while conducting an activity also helps to affirm the quality of the specific strategy. For example, I might employ spaced repetition, active recall, mind mapping, or interleaved learning throughout a session, whether online, blended, or in-person, to study, pause, and repeat; test frequently; create visual connections; and mix it up, potentially improving retention rates by as much as 50%.
Every facilitator has their standout instructional methods that help them deliver high-quality, engaging learning to audiences. Often these instructional strategies help facilitators and Learning Experience Designers organize content to enhance the overall delivery. Typically, these models are used to augment different components of the design process, including how you will measure training effectiveness, or how you will deliver facilitator-led training, for example.
Although not a specific learning methodology (at least that I know of), one of the most important strategies I like to employ during any session is the art of remembering everyone's name in the first 10-15 minutes of a training. It is definitely not an easy task, but in an in-person session I try to meet all learners at the door, where I introduce myself and, hopefully, they willingly introduce themselves too. If not, I elicit their name by asking directly. Then I repeat their name back to them, saying something like, "Nice to meet you," chat briefly, and repeat their name one more time before they move on to their seat, or coffee, or whatever might be next. This is repeated 30-40 times as each person enters the room. At the outset of the session, of course, I make introductions to the group, speak briefly about my background, and then move a little closer to the group, where I say, in order throughout the room, "I am super excited to have each of you join us today ... John, Mary, Lorenzo, Paul, Jake, Dawn, Jeni," etc. If I get hung up, I say, "I will be back, please don't tell me your name." I finish the room and then come back to anyone I missed. This pause allows my brain to process the names, and then I apologize and say, "Thank you, Jerry, it is a pleasure to have you join us as well." This one act shows that they are important to me as learners and that I truly want the group to participate and be active in the session. Throughout the session I then address each learner by name, which builds a level of trust that allows each learner to be at their best, ask those hard questions, and believe that I will help find the answers through a day of facilitation.
I'd like to dive into a few standout instructional models that I use regularly. Each plays a vital role in the development and facilitation of learning and development, whether used online, blended, or in-person.
Increasing Learner Responsibility
Increasing learner responsibility through a scaffolded framework establishes trust and encourages learners to experiment or try activities on their own.
An instructional model I like to use to increase learner responsibility is the facilitation strategy Gradual Release of Responsibility. This model encourages learners to take control of their own learning by moving them through a continuum of responsibility:
"I do - You watch"
"I do - You help"
"You do - I help", and
"You do - I watch"
As an example, during the first stage, you might listen to a briefing on how to replace a hard drive in your computer, including a detailed breakdown of the steps. In the second stage, you might watch a video in which a facilitator explains in detail how to replace your current hard drive with a new Solid-State Drive. This could be followed by an opportunity for guided practice while working with a peer to replace the hard drive, with the facilitator on hand to answer questions and provide additional instruction as needed. Last, you could create a checklist to be shared with peers that illustrates your understanding of the steps to complete the replacement. This final step helps reinforce what you learned and builds confidence.
And you don't necessarily need to start at the beginning of the continuum; you can start anywhere within the framework depending on the needs of your learners.
The key to increasing learner responsibility is to meet them where they are, and provide scaffolding to promote proficiency in their newly acquired skill set.
Promoting Authentic Learning
Promoting authentic learning through increased technology integration, using an eLearning framework, like the SAMR instructional model, offers opportunities for applying real-world experiences.
There is an increased push for online technology assisted learning. SAMR, an acronym for Substitution, Augmentation, Modification, and Redefinition, has its roots in K-12 education. This model is easily adapted to adult learning, giving a framework by which to gauge whether we are using the right eLearning tool for the outcome we are trying to achieve.
At the Substitution level, you typically see more traditional activities using a digital delivery method. Examples include reading online and watching self-paced videos.
Augmentation offers learners the opportunity to begin incorporating other forms of technology into their learning. Adding hyperlinks, commenting, creating formative quizzes, and multimedia presentations are all great examples used to enhance learning.
Modification typically includes using a Learning Management System (LMS) to answer questions intended to promote discussion, interact with the facilitator, work with other learners online (which encourages additional dialogue), and track completion of modules.
And Redefinition fundamentally transforms learning, enabling completion of traditional activities that were previously not practical to do online. Examples include practicing and writing your own Hyper Text Markup Language (HTML) or Cascading Style Sheets (CSS) code, using code readers or viewers, and sharing it with other learners via an online learning experience platform.
Promoting authentic learning using the SAMR model is not focused on using the most sophisticated tool, but rather how we can improve learning outcomes, how we can engage and empower learners through technology, and how eLearning can more closely resemble authentic, real-world learning.
Delivering Training Sequentially
Delivering training sequentially helps to increase learning outcomes. An instructional model that uses this step-by-step approach is ROPES. ROPES, an acronym for Review, Overview, Presentation, Exercise, and Summary, is a methodical framework for teaching a new topic or concept to learners that may be applied to in-person or virtual training.
During the first step, facilitators conduct a review, tying background knowledge to the new topic they intend to introduce. The overview describes the topic and highlights the importance of the content; typically the learning objectives are stated in this step as well. In the presentation step, the facilitator covers the heart of the material, discussing and demonstrating steps and giving examples. In step 4, Exercise, learners take part in activities that permit them to apply the concept. And in the last step, Summary, the facilitator highlights key points and reviews the learning objectives.
Delivering training sequentially, using the ROPES facilitator-based instructional model, is a great way to organize the distribution of content, whether in-person or virtual instructor-led training, to help increase learning outcomes.
Summary
Each of these strategies may be used in various capacities to support learning and development. For example, these instructional models can be used to encourage learners to take charge of their own learning, help to promote more authentic, real world learning, and increase retention in instructor-led delivery of learning, which are all critical components for the successful deployment and subsequent acquisition of knowledge by learners.
Collaboration and Support
Research shows that peer-to-peer learning is a powerful tool to enhance the learning experience and increase retention of new learning. Fostering collaboration and support among educators participating in professional learning programs will vary from partner to partner. So, one size does not fit all, but an arsenal of tools at your disposal is the key to a successful implementation. Often in a school setting, especially in such a tight-knit group like special education, team teaching is a common practice and a means of providing a more inclusive approach while still providing individualized instruction. This may include not only teachers but also coordinators and education technicians.
There are a number of strategies to consider which would encourage peer-to-peer learning, reflection, and sharing of best practices. With learning platforms becoming more common in every school, and serving as the delivery mechanism for learner (student) content, their use and implementation is a must and should be at the core of all interaction. This is where micro-credentialing and badging are important components to leverage as well, as they will drive increased usage and retention. Not all interactions should be online, but the collection of communication, content, and data should be centrally located and shared, for all learners to mine.
Peer learning in practice should be multi-tiered as well and include Train-the-Trainer programs, Learning Communities, collaborative projects, and cohorts, among others. Each of these can be powerful in its own way, and each can be centrally tied to one learning platform. Single sign-on (SSO) using federated sign-in from either a common platform like Gmail or Microsoft, or a partner's school email (which is often Gmail or Microsoft), is ideal. This enables ease of use and ensures CES (Customer Effort Scores) are kept low. More on this in "Evaluation and Feedback".
Each program has its own merit and should be leveraged based on the needs of the partner and the business outcomes. Train-the-Trainer models have existed for a number of years. In some cases, this model is dated, requiring a new lens to be applied. Often it's the requirements placed on becoming a trainer, rather than the actual goal or outcome being achieved, that is the issue. At the core, when a trainer has "mastered" the content and then returns to the district or school, the trainer becomes the expert, with the objective that other learners seek out the trainer for coaching. Maine has an established 1:1 program that has existed for more than 20 years now. Its train-the-trainer program did not label experts as trainers but rather coordinators. These coordinators attended a number of professional learning opportunities that focused on collaborative, pick-your-own-path projects and cohorts. These MLTI (Maine Learning Technology Initiative) coordinators became the core experts that all learners came to, whether student, teacher, technician, or administrator. So, not all Train-the-Trainer models need to look the same, nor have the same objectives.
Learning Communities should drive discussion and collaboration and should offer opportunities for celebration through various rewards. This might include opportunities for earning points, becoming the recognized expert, badging, or cohort gatherings. The sky is the limit, but rewards should cater to the audience and the needs of the partner.
Collaborative projects can have disadvantages, including unequal contributions, the quality of interactions, and level of comfort some learners have in this type of setting. However, typically the advantages far outweigh the disadvantages including improved retention, increased motivation and engagement, diverse perspectives, improved communication, and the development of lifelong partnerships. Collaborative projects should offer choose your own path options for learning and for sharing what you know or have learned.
Last, cohorts can enable peer learning across schools, districts, cross-functionally with various faculty, teachers, technicians, and administrators, or even cross-functional support with very diverse groups, including learners in other parts of the country or globally. The more diverse the cohort, the more likely perspectives will vary and provide greater insight and learning from each other. Some learners may find solace and comfort in a less diverse group. However, perspectives may not be as diverse. The use of a learning platform for blended delivery of content and communication may help these learners and provide a broader perspective.
Evaluation and Feedback
Measuring the effectiveness of professional learning programs, not too dissimilar to conducting a needs assessment, should involve a multi-level approach. There are a number of models for measuring training effectiveness used throughout the Learning and Development industry, including the Phillips ROI Model, Kaufman's Five Levels of Evaluation, Anderson's Model of Learning Evaluation, and summative vs. formative evaluation, to name a few. The model itself is not the key to successfully measuring training effectiveness; rather, the key is employing a multi-level approach. For the purposes of illustrating a multi-tiered approach, I will leverage Kirkpatrick's Four-Level Training Evaluation Model, whose four levels are Reaction, Learning, Behavior, and Results.
Although there are four levels to this model, implementing all of them is often viewed as impractical or not cost-effective. Most companies reserve Levels 3 and 4, Behavior and Results respectively, for large company-wide initiatives, choosing instead to implement Key Performance Indicators (KPIs) associated with Levels 1 and 2, Reaction and Learning. However, Levels 3 and 4 should be employed in some capacity, in whatever way appropriately meets the needs of the partner, to assure the most complete view of training effectiveness. Collection of Behavior and Results data may be achieved by employing newer, more sophisticated and automated avenues; more on this later. A balance of quantitative and qualitative metrics at Levels 1 and 2 is essential to mine Voice of the Customer (VoC) insight, whether from internal learners or external customers.
At the Reaction level (1), data points should include participation rates, completion rates, satisfaction rates (CSAT), and Net Promoter Scores (NPS), for example. Typically, NPS is reserved for a longer-term view of training, while CSAT is more often used at the micro level to assess training interactions. A qualitative question at Level 1 might be, "What topic/section did you find the most valuable?" This provides invaluable insight into learners' reactions and also offers learners an opportunity to reflect on what they have learned. At Level 2, Learning, data points should include pre- and post-assessment scores, formative quizzes, and pass rates, for example. Pre-assessments offer an opportunity for learners to test out of content they already know, while also giving stakeholders a glimpse of what learners have learned by comparing pre-assessment scores to post-training assessment scores. A qualitative question at this level might be, "What did you learn from your training to help you perform at a higher level in your role?"
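The Level 1 and 2 quantitative metrics above reduce to simple arithmetic. The sketch below uses standard definitions of CSAT and NPS; the survey responses and assessment scores are hypothetical.

```python
def csat(ratings, satisfied_threshold=4):
    """CSAT: % of respondents rating satisfied (e.g. 4 or 5 on a 1-5 scale)."""
    return 100 * sum(r >= satisfied_threshold for r in ratings) / len(ratings)

def nps(scores):
    """NPS on a 0-10 scale: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

def learning_gain(pre, post):
    """Level 2: average post-training score minus pre-assessment score."""
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical responses from one training interaction:
print(csat([5, 4, 3, 5, 4]))                      # 80.0
print(nps([10, 9, 8, 6, 10, 7]))                  # about 33.3
print(learning_gain([55, 60, 70], [80, 85, 88]))  # about 22.7 points
```

Automating these calculations inside the LMS keeps the Level 1/2 dashboard current without manual tallying, leaving staff time for the harder Level 3/4 collection.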
Metrics to show training effectiveness should also include a much wider array of data points, including, for example, likes/dislikes, engagement rates (time spent learning), learner drop-off rates, and badges earned through gamified learning. Typically, these data points are housed within the Learning Management System (LMS) and ultimately provide additional context to stakeholders. Although not tied directly to training effectiveness, a Customer Effort Score (CES) should also be employed to gauge the ease with which trainees are able to learn. For example, points are awarded for the ease of signing on to the LMS, navigating assigned programs and courses, accessing additional supporting resources, completing the overall training pathway, or asking questions and receiving answers from facilitators. The more difficult professional learning is to complete, the lower the success rates and effectiveness of the training.
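One common way to operationalize CES, assumed here for illustration, is to have learners rate the ease of each touchpoint on a 1-7 scale (7 = very easy) and average the ratings. The touchpoints mirror the examples above; the ratings are hypothetical.

```python
# Hypothetical ease ratings (1-7, 7 = very easy) from four learners
# across the touchpoints named in the text above.
touchpoint_ratings = {
    "Sign on to the LMS": [7, 6, 7, 5],
    "Navigate assigned programs and courses": [6, 6, 5, 6],
    "Access supporting resources": [5, 4, 6, 5],
    "Get answers from facilitators": [6, 7, 6, 6],
}

def ces(ratings_by_touchpoint):
    """Overall CES: mean of all ease ratings across every touchpoint."""
    all_ratings = [r for rs in ratings_by_touchpoint.values() for r in rs]
    return sum(all_ratings) / len(all_ratings)

overall = ces(touchpoint_ratings)
print(f"Overall CES: {overall:.2f} / 7")

# Per-touchpoint averages flag specific friction points to fix first.
for name, rs in touchpoint_ratings.items():
    print(f"  {name}: {sum(rs) / len(rs):.2f}")
```

Breaking the score out per touchpoint shows where friction lives (here, resource access scores lowest), which is what makes CES actionable rather than just another dashboard number.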
Levels 3 and 4 of the Kirkpatrick model, although more time-consuming to document and collect for further analysis, are a major component of understanding overall training effectiveness. Level 3, Behavior, requires observations and reporting by administrators or department/chair leaders. Observers report the degree to which a new skill has been applied as a result of the training. This is not overly difficult, but it does require an avenue for observers to quickly report their findings, and they must be present to make the observation. Reporting can be done through a simple follow-up survey conducted at a specific interval after the completion of professional learning. Level 4, Results, is the most difficult to record or analyze, especially at the district, school, classroom, or unique learning needs level. A true measure would include an evaluation of standardized student assessment scores. This is definitely a delicate subject but is an avenue for understanding the effectiveness of training. As an alternative, during the reporting of observations at Level 3, the degree to which a student's productivity or learning has increased due to the implementation of the new skill may be recorded using the same survey tool.
Today's modern learner learns on the fly, anywhere, anytime. Research shows learning rates as high as 56% at the point of need (mobile), 48% on evenings and weekends, 41% at their desks, 30% during breaks and at lunch, and 28% on the way to or from work. This poses a tracking challenge, but if captured it provides a more fluid image of the overall success of a program initiative. With the advent of improved standard automated data collection, alternative methods for collecting data for Levels 3 and 4 are now possible, or automations might be used in conjunction with all levels to provide a more complete view of overall training effectiveness. The addition of a Learning Record Store (LRS) to collect various experiences using Experience API (xAPI) data points can provide a much broader view of all learning. This additional metric collects data on a wide range of experiences that a learner has online. Activities are recorded using simple secure statements in the form of "noun, verb, object" or "I did this." These additional data points can also help measure the overall effectiveness of training by looking at increases in self-directed learning experiences. However, the addition of an LRS requires a capital investment as well as ongoing support costs, but it can be invaluable in providing a complete picture of training effectiveness.
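A minimal xAPI "actor-verb-object" statement of the kind an LRS collects looks like the sketch below. The structure follows the xAPI specification's statement shape; the learner, activity ID, and names are hypothetical.

```python
import json
import uuid
from datetime import datetime, timezone

# One "I did this" statement: actor (noun), verb, object.
# Learner identity and activity URL are hypothetical examples.
statement = {
    "id": str(uuid.uuid4()),
    "actor": {
        "objectType": "Agent",
        "name": "Jane Doe",
        "mbox": "mailto:jane.doe@example.org",
    },
    "verb": {
        # A standard verb from the ADL verb registry.
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/activities/interactive-webinar-101",
        "definition": {"name": {"en-US": "Interactive Webinar 101"}},
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# An LRS would receive this as JSON via its statements endpoint.
print(json.dumps(statement, indent=2))
```

Because every statement, whether from the LMS, a mobile app, or a webinar platform, shares this shape, counting "completed" or "experienced" statements over time gives the increase in self-directed learning the text describes.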
The feedback provided during the various stages of evaluation is invaluable to the ongoing success of the program. Feedback will be integrated, as appropriate, when the program lifecycle is updated. Critical errors or omissions should be addressed immediately, not deferred to the next update. Some updates may be completed on the fly when they don't adversely affect operational efficiency. The customer's insight is critical and should be implemented as soon as it makes sense; this increases the overall effectiveness of the professional learning and shows the customer that their feedback matters.
There are several approaches to measuring the success of training. The key to success is to decide what your goals are for your partners, in terms of learning and retention, and what measures your partners believe will provide the most comprehensive picture, with the most reasonable lift to achieve the desired outcomes.