Challenge #6: Analyze the Characteristics of Existing and Emerging Technologies and Their Potential Use
Artifact: Scorecard Matrix
Reflection:
I worked as a Learning and Development Specialist for an organization that was a large distributor of HVAC, plumbing, and industrial products. When I joined the organization, one of my primary projects was to investigate and later oversee the LMS integration of an online learning provider. The vendor’s online learning content would be incorporated into the company’s LMS, where employees could complete courses for professional development. An online learning provider is an organization that offers educational content on a cloud-based platform; examples include LinkedIn Learning, OpenSesame, Udemy, Coursera, and Skillsoft Percipio. In addition to exchanging many emails and calls with the various vendors, I developed a scorecard matrix to aid my leader’s and director’s decision-making process.
What is a Scorecard Matrix?
A scorecard matrix can be used in various ways; I developed mine in Excel. I used the scorecard to assess the online learning providers against a set of predetermined criteria (the “wish list”) such as cost, SSO integration, LMS pairing capabilities, and language options. As I received answers from the vendors to my questions about their platforms, each vendor received a score from 1 to 3 on each criterion based on how well the platform satisfied the organization’s needs (1 = does not meet needs, 2 = partially meets needs, 3 = meets needs). Each criterion was also assigned an area weight, expressed as a percentage; the higher the percentage, the more heavily that criterion counted. At the end, each vendor received a total score, which I used as the basis for my recommendation to my leader.
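To make the scoring arithmetic concrete, below is a minimal sketch of how a weighted total could be computed. The criteria, weights, and scores are illustrative placeholders, not the actual values from my matrix.

# Illustrative scorecard arithmetic: weights are percentages that sum to 100%,
# and each criterion score is 1 (does not meet), 2 (partially meets), or 3 (meets).
weights = {
    "Language options": 0.20,
    "SSO integration": 0.25,
    "HRIS/LMS integration": 0.30,
    "Cost": 0.25,
}

# Hypothetical scores for one vendor on the 1-3 scale.
vendor_scores = {
    "Language options": 3,
    "SSO integration": 2,
    "HRIS/LMS integration": 3,
    "Cost": 2,
}

# Total score = sum of (weight x score) across all criteria; higher is better.
total = sum(weights[c] * vendor_scores[c] for c in weights)
print(f"Weighted total: {total:.2f} out of 3.00")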
Scorecard Matrix Criteria
My scorecard matrix included the following criteria:
Language options: The company is a Canadian company with an office in Quebec, which required that all training, emails, and videos created in English also be produced in Canadian French. Therefore, each vendor was assessed on its ability to cater to the company’s language needs and to ensure that courses available in English were also available in Canadian French.
Single-Sign-On (SSO) Integration: Like many companies, the company used SSO to enhance user experience as well as maintain security. This criterion focused on understanding the following:
How easy is it to enable SSO?
Is SSO enablement included in the integration or is this an additional cost?
Integration with Human Resources Information System (HRIS): This criterion focused on how compatible the company’s HRIS/LMS was with each of the online learning platforms. The idea here was to ensure that employees could access the learning content from a single source of truth (the HRIS) rather than visiting multiple websites for different content.
Cost: The company was working with a set budget, so I needed to ensure that the learning technology fell within that budget. Each vendor was assessed on the competitiveness of its pricing.
How I Determined the Benefits and Limitations of Each Learning Technology
Developing the scorecard was the final step. Prior to developing it, I conducted my own research on each of the online learning providers under consideration.
A great place to start my research was the website of each learning provider, which helped me learn more about the platform, initial pricing or plan information, and the available courses and catalogues, so I could understand whether what they offered aligned with the organization’s needs. Once I had foundational knowledge about an online learning provider, I reached out to a primary contact, who scheduled a meeting with an assigned Customer Success Manager (CSM). The meetings with the CSMs, in particular, informed me of the benefits and limitations of each learning technology. Through email exchanges and a series of calls and question-and-answer sessions, I developed my scorecard and presented my findings and final recommendations to my manager and to the director of Learning and Talent Development.
Challenge #7: Use Appropriate Techniques to Analyze Various Types and Sources to Validate Content
Artifact: EDCI 531 Research Paper
Reflection:
As part of my final assignment requirement for EDCI 531, I submitted a paper on the topic of learning theories along with my job aid. To write my research-based paper, I had to use various validation techniques to ensure that the sources I was using were strong and authentic.
Research Validation Techniques
Author’s Credibility
In this reflection I will share the research validation techniques I use to evaluate the credibility of sources and authors when writing academic or professional papers. First, it is very important to understand the author’s background and expertise. What is their educational background? What is their expertise in the field? For example, an article by a professor in a peer-reviewed journal is more reliable than one by an anonymous blogger or an article on Wikipedia.
To validate this information, if details about the author aren’t already readily available on the website, I google their name and the site on which their article has been published. In my paper “Learning Theories”, I cited Saul McLeod, PhD, and his article “Behaviorism in Psychology”. At first glance, the article was published on a website called “Simply Psychology”, which I had never heard of. However, after researching McLeod I learned that he has a BSc in Honors Psychology, a Master of Research in Experimental Psychology with Data Science from the University of Manchester, and a PhD in Psychology from the University of Manchester. This alone told me he has an impressive and credible educational background. Moreover, he referenced over 15 publications, and many of his citations are journal articles, which adds to the article’s credibility.
Publication Source
Next, I review the publication source. Is the article published in a reputable journal, a media outlet, or an academic press? For example, Forbes and The New York Times are held to higher editorial standards than Wikipedia or personal blogs. The way to validate the publication is by reviewing whether the source is peer-reviewed or editorially reviewed. Simply Psychology, the website where I found McLeod’s article, has an About section, which further validated the resource. Upon researching, I learned that the website is aimed at psychology students, with the purpose of publishing informative articles in an academic style. As I mentioned earlier, each topic article also provides links to relevant journals, academic studies, and related research and topics, offering a wealth of additional information to aid learning. Simply Psychology also has a “Why you should trust us” section on the website that explains the credibility and trustworthiness of its content.
Citation Metrics
Finally, it is crucial to check how often the author or article has been cited by other credible sources, as high citation counts suggest influence and recognition in the field. Simply Psychology’s website highlights that the site has roughly four million monthly visitors worldwide and has been linked to by thousands of schools and universities, professional associations and research organizations, reference sources and other information authorities, and newspapers, magazines, and other news services. To name a few:
University of California, Berkeley
Harvard University
Cambridge University Press
University of Toronto
London School of Economics
University of Manchester
The Guardian newspaper
Through this challenge, I realized even more that it is essential to evaluate the credibility of sources in research. Source credibility is essential for maintaining academic integrity, supporting valid conclusions, and building trust in the accuracy and reliability of the information presented.
My artifact for this challenge is my EDCI 528 performance assessment, which covers my analysis for Company X, an edtech start-up company.
Gap Analysis
A gap analysis is an approach used to identify the difference between the current state of performance and the desired future state. There are three key components of a gap analysis: the current state, which recognizes what is happening at the present time; the future state, which identifies where an organization wants to be; and the performance gap, which compares the current state to the future state (a brief sketch of this comparison follows the list below).
Identifying the Learning or Performance Problem
As part of the performance assessment, I developed the Human Performance System Diagram to complete an environmental and system analysis. This helped me identify the external and internal environmental factors, outside and inside the organization, that support or hinder employee learning and customer satisfaction. To identify the learning problem, I:
Defined what the current-state performance was with regard to the two goals I highlight in my performance assessment - ineffective employee training and low customer satisfaction scores. This would be done by conducting observations, analyzing training workflows, and assessing employee engagement survey results, employee training survey results, and customer satisfaction surveys.
Defined what the future state should be - This would include setting goals and benchmarks for performance as well as developing an overall strategic plan, which I have done in the performance assessment by showcasing the interventions I would take.
Assessed the potential root cause(s) - This was done by developing the Human Performance System Diagram as well as by facilitating focus group discussions with employees to identify the gaps in the existing company culture, organizational processes, structure, and training processes.
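Below is a minimal sketch of the current-versus-future comparison at the heart of a gap analysis; the metrics and numbers are hypothetical and are not taken from the Company X assessment.

# Illustrative gap analysis: compare current-state metrics to desired future-state
# targets and report the performance gap for each. Metric names and values are
# hypothetical placeholders.
current_state = {
    "training completion rate (%)": 62,
    "customer satisfaction score (1-5)": 3.1,
}
future_state = {
    "training completion rate (%)": 90,
    "customer satisfaction score (1-5)": 4.5,
}

for metric, target in future_state.items():
    actual = current_state[metric]
    gap = target - actual  # performance gap = desired state minus current state
    print(f"{metric}: current {actual}, target {target}, gap {gap:.1f}")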
Demonstrating Competence
In developing my performance assessment report, I demonstrated competence across several key performance statements by applying structured methodologies and solution-oriented thinking. To identify and analyze performance gaps, I used the Human Performance System Diagram, which helped me map out the external and internal environmental factors influencing decisions and performance. Additionally, my report identifies targeted interventions designed to address the identified learning and performance gaps. These include strategies such as conducting root cause analysis and streamlining content, shifting to a blended learning approach with scalable digital resources, and integrating post-training performance support and feedback loops that closely align with organizational goals. This demonstrates my ability not only to assess performance challenges but also to propose actionable, evidence-based solutions.
Prior Experience and Future Growth
As a former L&D Specialist, I regularly conducted gap analyses, also referred to as “needs analyses,” to identify learner knowledge and skill gaps, which informed the design of targeted learning solutions. Using the ADDIE framework, I developed and delivered blended trainings that directly addressed those gaps, driving measurable improvements in employee performance. As I transition into an Instructional Designer role within a tech/software company serving the sales organization, this experience positions me to continue assessing learner needs and identifying product knowledge gaps for the projects I am involved in, allowing me to develop scalable and meaningful learning experiences that align with agile, fast-paced development cycles.
When I worked for a start-up company, I was tasked with building a New Manager Onboarding Program for new people leaders joining the company and for individuals newly promoted from an individual contributor role to a people leader role. The purpose of this program was to provide training and resources so these leaders could better support their direct reports, especially in situations where individual contributors with no prior leadership experience were promoted into leadership positions. Before designing the program, I conducted focus group interviews as my primary data collection method.
Data and Data Collection Method
My method of data collection was a series of virtual focus group interviews with groups of 5-10 people at a time, for a total sample size of 30. I did not conduct any formal surveys, partly to avoid survey fatigue, but more importantly because I wanted authentic answers and a forum in which leaders and individual contributors could share their personal experiences and expectations, which can be difficult to capture in a limited-question survey.
How I Determined my Target Population
I engaged tenured managers who had not received formal onboarding, as well as individual contributors who regularly interacted with managers, to gather insights into knowledge and skill gaps. Through these discussions, I identified key characteristics of the target population, including varying degrees of leadership experience, diverse and global team structures resulting in varying team and individual needs across multiple time zones, and differing expectations around managerial performance. Individual contributors emphasized gaps in communication and a lack of coaching, while tenured managers highlighted the lack of resources available when they were asked questions and the lack of knowledge needed to navigate basic tools like the HRIS to further support their teams. These insights had a direct impact on my design decisions, leading to the development of a program that responded to the needs of both groups. The final design included two key components: targeted soft skill development and structured training on essential resources and tools.
Reflection of Previous Experiences
As I shared in my reflection, I have experience collecting data through focus groups, which are another type of surveying method. As an instructional designer, I can continue refining my surveying skills by incorporating more structured frameworks and expanding my use of surveys and performance metrics. This will allow me to design programs that are not only learner-centered but also measurable and strategically impactful.
In EDCI 577, I completed a brief proposal to share with Lucy from jetBlue as part of a case study students were asked to evaluate and then provide recommendations on. Students were asked to review the case study from a consultant’s perspective, offering Lucy recommendations for developing a program evaluation of the POL training.
Process for Determining Subordinate and Pre-Req Skills/Knowledge for an Audience
To determine subordinate and prerequisite skills and knowledge required for the jetBlue POL training audience, I emphasize the importance of goal analysis, learner analysis, data analysis, and evaluation analysis.
Goal Analysis:
This is twofold:
What are your goals for the training program and what do you hope to achieve through the program? How do these goals align with the strategic imperatives of the organization?
Clear learning outcomes need to be identified as well – what should the learners be able to accomplish after they complete the training?
This helps define the desired outcomes and performance expectations. Breaking down these goals allows IDs to pinpoint the subordinate skills or competencies that support each learning objective.
Learner Analysis:
Conduct a learner analysis to understand the POL trainees’ prior experience, current competencies, and learning preferences:
Who the learners are (background, roles, experience)
What they already know (prior knowledge)
What gaps exist (learning needs)
How they prefer to learn (modalities, constraints)
This helps identify gaps between what learners already know and what they need to know, revealing the prerequisite knowledge and skills necessary for successful participation in the training.
Data Analysis:
Conduct “data mining” by reviewing all the feedback and determining whether there are any patterns or trends in what was shared across the stakeholder groups.
Map the stakeholder feedback against your list of evaluation criteria – are there any alignments?
Begin thinking about next steps or possible solutions. Do you have everything you need to make an informed recommendation, or do you need to have a secondary call with each or a select stakeholder group?
Support decision-making, e.g., determining whether the program needs to be changed in the future based on the results of the data analysis.
For this analysis, it is critical to review available performance data, feedback from previous training sessions, and any relevant metrics. This provides insight into common areas of struggle or success, helping validate which skills are truly foundational versus those that could be developed during the training itself.
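Below is a minimal sketch of what such a pattern check could look like once feedback has been coded into themes; the stakeholder groups, themes, and counts are hypothetical and are not drawn from the jetBlue case.

# Illustrative "data mining" pass over stakeholder feedback: tally how often each
# theme appears across groups to surface patterns. All data here is hypothetical.
from collections import Counter

feedback = {
    "Pilots": ["scheduling", "scenario practice", "scheduling"],
    "Instructors": ["scenario practice", "assessment clarity"],
    "Ops managers": ["scheduling", "assessment clarity", "assessment clarity"],
}

theme_counts = Counter(theme for comments in feedback.values() for theme in comments)

# Themes mentioned by multiple groups are strong candidates for evaluation criteria.
for theme, count in theme_counts.most_common():
    groups = [g for g, comments in feedback.items() if theme in comments]
    print(f"{theme}: {count} mentions across {len(groups)} stakeholder group(s)")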
Evaluation Analysis:
Determine success criteria.
What does success look like?
Determine what evaluation model to use, e.g., Kirkpatrick.
Develop a criterion matrix based on what you know about the training program and the organizational goals/objectives.
Completing this task first will also ensure that the input is unbiased, as the next stage is to collect feedback and discuss with stakeholders.
Consider how the training would be assessed and what criteria would be used to measure success. This helps ensure that the identified prerequisite and subordinate skills align with both learning outcomes and evaluation benchmarks.
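As an illustration only, below is a minimal sketch of a criterion matrix assuming the Kirkpatrick model is selected; the criteria, data sources, and targets are hypothetical examples rather than the actual jetBlue POL evaluation plan.

# Illustrative criterion matrix: each evaluation criterion is mapped to a
# Kirkpatrick level, a data source, and a success threshold. All entries are
# hypothetical placeholders.
criterion_matrix = [
    {"criterion": "Learner satisfaction", "level": "1 - Reaction", "source": "Post-course survey", "target": ">= 4.0 / 5"},
    {"criterion": "Knowledge of POL concepts", "level": "2 - Learning", "source": "End-of-module assessment", "target": ">= 80% score"},
    {"criterion": "On-the-job application", "level": "3 - Behavior", "source": "Manager observation", "target": "Observed within 90 days"},
    {"criterion": "Business impact", "level": "4 - Results", "source": "Operational metrics", "target": "Improvement vs. baseline"},
]

# Print the matrix as a simple table for review with stakeholders.
for row in criterion_matrix:
    print(f"{row['criterion']:<28} | {row['level']:<13} | {row['source']:<26} | {row['target']}")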
Reflection of Previous Experiences
In previous roles, I’ve regularly conducted goal analysis to align training objectives with organizational outcomes, and used learner analysis to assess audience backgrounds, skill levels, and learning preferences to tailor instruction effectively. I’ve also applied data analysis to identify performance trends and inform instructional improvements. However, I’ve had limited direct experience with full-scale evaluation analysis, largely because organizational priorities have focused more on content development and delivery than on formal program evaluation. As an instructional designer, I see evaluation analysis as a valuable opportunity to strengthen the impact of training by connecting learning outcomes to measurable performance results.