Ideas on How to Evaluate Your Training Programs

Using Learning-Transfer-Evaluation Model (LTEM)

The Problem
Reports show that training evaluation isn’t carried out effectively in most companies. Several reasons have been cited:
1) lack of resources
2) inadequate evaluation skills
3) time constraints
4) lack of management support
5) not using appropriate assessments to measure learning
Read More…

How to Increase Training Transfer to the Workplace

Research shows that a variety of factors cause much of what is learned in training to go to waste. These factors include poor learning design and delivery, inadequate or absent training evaluation, personal characteristics (such as motivation, confidence, or self-efficacy), and the work environment (e.g., supervisor support and peer mentors). This article draws on existing research and my doctoral dissertation to outline a practical learning blueprint for increasing training transfer. You can use this blueprint to enhance your training practices and maximize training transfer. Read More…

A Brief History of Human Learning & How it All Started

“History is for human self-knowledge. The only clue to what man can do is what man has done. The value of history, then, is that it teaches us what man has done and thus what man is.” ~R. G. Collingwood

Throughout the centuries, humans have always wanted to learn about the world and about how we think and behave. The efforts of the early philosophers, psychologists, and scientists have resulted in significant progress in our understanding of how we learn. Let’s see what we can learn from the history of human learning. Read More…

Some Journals for L&D professionals

In a recent L&D 2022 booklet compiled by an esteemed fellow learning professional, I highlighted the importance of using evidence-based practices and of having access to rigorous journals. Although it occurred to me to add a list, I chose not to at the time: I didn’t want to omit worthy journals and inadvertently portray only some as the good ones.

Some fellow learning professionals have reached out to me for recommended journals. Here’s a list that I’d like to share. This list is not exhaustive (please feel free to add to it):
Read More…

How to Tell the Difference between Good and Poor Research

Applying research to practice is an integral part of being a learning professional. However, this often lags in practice for a variety of reasons. To use evidence-informed learning design, we need to be aware of our own confirmation biases, and then recognize them in researchers and stakeholders. We also need to remember that a recommendation backed by a citation is not automatically accurate or effective for performance outcomes. In this article, I briefly touch on the issue of distinguishing between good research and poor research. Because…

Not all science is ‘good science’!

To tell the difference between good and poor research, you should be able to assess the credibility of a paper. Below, I share some tips for evaluating a research paper (an empirical study1) and welcome others’ ideas as well.
1. Start with the Abstract: See if it is clear and unambiguous, and whether it describes the key research elements, such as the research design, participants, and findings or scope. See an example of a good abstract here.
2. Goal of the study: Is the purpose of the study clearly stated at the beginning? This will give you a clear understanding of the focus of the paper and the gap it intends to bridge. A reader needs to know why they should read it, how it benefits them, and whether the paper is coherent.
3. Check the Methods: See if the authors have clearly described the research design (such as the participants, the approach, how samples were recruited or randomly assigned, what participants were tasked to do, and how data were collected). See an excerpt of a Method section below:

4. Research Questions/Hypotheses: Are the questions/hypotheses clearly stated and specific enough to be measurable? Does the paper even have research questions or hypotheses, and are they linked to previous research? Very general questions that are hard to measure can introduce bias as well. Check out the following examples:

Hypotheses in one study: According to the previous studies, we hypothesized that: (a) the psychological well-being of mothers of normal-functioning children is higher than that of mothers of autistic and blind children, and (b) mothers of blind and autistic children are different in terms of their psychological well-being.

Hypothesis in another study: Based on this analysis, the proposed research broadly predicts that students who view an instructor draw diagrams during a concurrent oral explanation will perform better on a transfer test than students who view the equivalent static (i.e., already-drawn) diagrams while listening to the same oral explanation.

Research Questions in one study:
When solicited during an interview, are eighth graders able to express epistemic reflection along the four dimensions identified in the literature?
Are patterns of epistemic metacognition identifiable when students’ answers are clustered for their levels of sophistication?
Is epistemic metacognition related to individual differences such as prior knowledge, study approach, and domain-specific beliefs about science?
Is learning online information from multiple sources influenced by epistemic metacognition in context and the individual differences examined?

Research Questions in another study:
Which level(s) of learner factors – epistemology (epistemological beliefs), attitudes (attitudes toward technology use), and strategies (approach to learning – deep learning and surface learning) influence higher-order thinking?
How do these factors directly or indirectly affect higher-order thinking?

5. Tally the Sample Size and Results to detect bias: Sampling error is a common issue that can lead to bias. Do the authors clearly explain how they sampled the population? Also, how large is the sample? A sample should represent its population. Read here to learn more about sampling errors.
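To see why sample size matters when judging a study’s results, recall that the standard error of a sample mean shrinks only with the square root of the sample size. Here is a minimal sketch with illustrative numbers (not taken from any of the studies mentioned above):

```python
import math

def standard_error(sd, n):
    """Standard error of the sample mean: SE = sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Hypothetical test scores with a standard deviation of 10:
print(standard_error(10, 25))   # 2.0 -> with n = 25, the mean is uncertain by ~2 points
print(standard_error(10, 100))  # 1.0 -> quadrupling n only halves that uncertainty
```

The takeaway: a small sample leaves a lot of room for sampling error, so confident claims built on one should raise a flag.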

6. Check out the Results: Do you easily understand what the authors are reporting as their findings? Are data and numbers consistent throughout the paper? Do you see complicated equations or confusing figures/tables/graphs? Are they using too many arcane technical terms that are hard to understand (poor writing style)? Did they report the effect size2?

While some knowledge of statistics helps in verifying statistical reports and the appropriateness of the analysis, you won’t need to rely fully on your own judgment (the reviewers of a rigorous journal should catch problems). After all, a poor paper tends to avoid reporting statistics or uses very complicated and misleading ones (and yet such papers do get published!). See an excerpt of a good Results section below (d refers to effect size):
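If you want a concrete sense of what the d in a Results section means, Cohen’s d is simply the difference between two group means divided by their pooled standard deviation. A minimal sketch, using made-up numbers for illustration:

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: standardized difference between two group means."""
    return (m1 - m2) / pooled_sd(s1, n1, s2, n2)

# Hypothetical study: treatment mean 80 (SD 10), control mean 75 (SD 10), 30 per group
d = cohens_d(80, 10, 30, 75, 10, 30)
print(round(d, 2))  # 0.5, conventionally read as a "medium" effect
```

A paper that reports only a p-value tells you an effect probably exists; the effect size tells you whether it is big enough to matter.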

7. Look for sweeping claims in the Discussion: These might hint at confirmation bias: the authors may have generalized their findings in line with what they were hoping to find.

I admit that the above tips are not exhaustive, and I would recommend reading as many papers as possible and comparing them with one another. Here are some tips to identify a poor journal article.

Some More Examples
More examples of good abstracts: https://psycnet.apa.org/record/2018-58542-001

Final Thoughts
Lastly, we need to admit that one of the major challenges for most learning professionals is a lack of access to online databases and journal articles. Unfortunately, many organizations do not appreciate that learning professionals should have access to up-to-date research. It would be great if companies subscribed to a few rigorous journals and made them available to their L&D staff.

Book Recommendations
Here are a couple of suggested reads if you are interested in learning more about biases and judgments:
1. “When Can You Trust the Experts?” by Daniel Willingham. I wrote an overview of it a few years ago, but it’s worth reading the book on your own. In this book, Willingham points out how persuaders might use “research” to sell their ideas and how our judgments are influenced by different factors.
2. “The Structure of Scientific Revolutions” by Thomas Kuhn. In this book, Kuhn highlights what ‘normal science’ is and how it can be affected by people’s views (or biases) and go off track. He introduces ‘revolutionary science’, which leads to paradigm shifts and changes in the direction of scientific research.
3. “Thinking, Fast and Slow” by Daniel Kahneman. Kahneman introduces the two systems in our brain that influence our decision-making and judgments. System 1 thinking is intuitive, operating automatically with no sense of voluntary control, and System 2 thinking is complex, relating to our conscious, attentive, and reasoning side.
4. “Noise” by Daniel Kahneman (with Olivier Sibony and Cass Sunstein). The authors make a distinction between ‘noise’ and ‘bias’ and dive deep into how each occurs in organizations and in our personal judgments.

I end this with a quote from Daniel Kahneman’s book, Noise:
“If there is conclusion bias or prejudgment, the evidence will be selective and distorted: because of confirmation bias and desirability bias, we will tend to collect and interpret evidence selectively to favor a judgment that, respectively, we already believe or wish to be true.”

1 An empirical study is a type of research that gathers evidence through observation or experience.
2 The effect size is a measure of the magnitude of an effect or relationship (for example, the standardized difference between two group means), rather than merely whether it is statistically significant.

#showyourwork - Building a Community of Practice

With the onset of the pandemic, all universities had to shift to an online format. For some, in-person courses had always been the main format, and the sudden transition to online courses posed a huge challenge to those involved, especially faculty and students. When I was asked to provide support to faculty, I saw this as a great opportunity to put research into practice and build a community of practice, rather than being the sole person expected to save them. The following is what I did to support highly anxious faculty, some of whom were intimidated by technology. Read More…