A central PRACCIS scaffold is the MEL Matrix, which encourages students to systematically weigh and evaluate all the evidence as they develop or decide among models.
Central to the practice of modeling is the coordination of models with evidence. When students create their own models, they should ensure that their models are consistent with the evidence. When they choose among models, they should take all of the evidence into account and judge which model the evidence better supports. When evaluating models against evidence, students should revise their models so that they better explain the evidence.
These modeling practices pose several challenges for students:
- When there are more than one or two pieces of evidence, it can be difficult to keep track of them all.
- Students may give precedence to one or two pieces of evidence, ignoring other important and relevant evidence.
- Students may have difficulties evaluating the quality of evidence.
- Students may be unsure how the evidence relates to the models under consideration.
To help students develop competence in coordinating theories with evidence, we have developed a scaffold that we call the Model-Evidence Link (MEL) Matrix. Students typically work on completing the MEL Matrix in pairs or small groups, although teachers may sometimes ask individuals to complete one as well. Students are encouraged to try to reach consensus on their arrows; even when they cannot, working toward consensus helps them fully grasp the arguments for the different positions.
An example of a MEL Matrix is shown at right. It has three salient features:
- Arrows designating the relationship between a piece of evidence and the model. For each piece of evidence and each model, students determine whether the evidence has one of these five relationships to the model:
  - Strongly supports
  - Supports
  - Is irrelevant
  - Contradicts
  - Strongly contradicts
Students must think systematically about how each piece of evidence is related to each model. We include the “irrelevant” option to reflect the reality that not all information brought forward on a topic is actually relevant; some of it should be dismissed.
We include both “strongly supports” and “supports” (and both “strongly contradicts” and “contradicts”) as options to encourage more discussion among students. Whereas students may readily agree that a particular piece of evidence supports Model A, they may not so readily agree on whether it supports or strongly supports the model, and so more discussion is needed to explore the exact relationship between the evidence and the model and to consider arguments bearing on how strong that relationship is.
- The matrix format. The matrix enables students to easily keep track of how all the evidence is related to each model under consideration. It also provides a handy summary so that students can easily evaluate which model (if either) is better supported by the evidence.
An earlier version of the MEL scaffold employed diagrams with criss-crossing arrows. But the criss-crossing arrows can become confusing, and we have found that the matrix format works better for helping students and teachers keep track of the evidence-model relations.
- Evidence evaluation. Inside each evidence “cell” is a small square in which students enter a number (0, 1, 2, or 3). The number indicates the quality of the evidence. Very high quality evidence (with good measures, careful procedures, large sample sizes, etc.) rates a “3.” Very poor evidence that should not be considered at all (perhaps just a random anecdote) rates a “0.” Evidence that falls between these two endpoints in quality is rated “1” or “2.”
The use of these numbers can be supported by having students develop class criteria for what counts as “good evidence” (see the page on Epistemic Criteria for examples).
The evidence evaluation component of the MEL matrix encourages students to consider and discuss evidence quality as part of making judgments about which model is better supported by evidence. Students learn to give less weight to lower-quality evidence (and no weight to zero-quality evidence), and the matrices help them keep track of all this information.
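For readers who find it helpful to see the structure of a completed matrix spelled out explicitly, here is a minimal sketch in Python. It is purely illustrative: the evidence descriptions, model names, and quality ratings are hypothetical examples of our own, and the numeric tally at the end is not part of the MEL scaffold itself (the MEL Matrix is a paper-and-pencil tool, and students compare models through discussion, not arithmetic). The sketch simply makes explicit how each cell pairs one of the five relationships with a 0–3 quality rating, and how low-quality evidence can be given less weight.

```python
# Illustrative sketch of a completed MEL Matrix (hypothetical content).
# Each row is a piece of evidence with a 0-3 quality rating; each column
# is a model; each cell holds one of the five relationship categories.

# The five relationship categories, mapped to signed strengths for a
# simple tally (the mapping to numbers is our own illustration).
RELATIONSHIPS = {
    "strongly supports": 2,
    "supports": 1,
    "is irrelevant": 0,
    "contradicts": -1,
    "strongly contradicts": -2,
}

# All evidence statements and models below are made up for illustration.
mel_matrix = {
    "Evidence 1 (large, careful study)": {
        "quality": 3,
        "Model A": "strongly supports",
        "Model B": "contradicts",
    },
    "Evidence 2 (small classroom sample)": {
        "quality": 2,
        "Model A": "supports",
        "Model B": "is irrelevant",
    },
    "Evidence 3 (single anecdote)": {
        "quality": 0,  # zero-quality evidence gets no weight
        "Model A": "supports",
        "Model B": "supports",
    },
}

def weighted_tally(matrix, model):
    """Sum relationship strength x evidence quality for one model."""
    return sum(
        RELATIONSHIPS[row[model]] * row["quality"]
        for row in matrix.values()
    )

for model in ("Model A", "Model B"):
    print(model, weighted_tally(mel_matrix, model))
# Output: Model A 8, Model B -3 -> Model A is better supported here.
```

The point of the sketch is only to make the row/column structure and the role of the quality ratings concrete; in the classroom, students reach these comparative judgments by discussing the arrows and ratings they have entered.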
In short, the MEL matrix provides a context for extensive discussions among students about model-evidence coordination and evidence evaluation, and it gives them a framework for thinking systematically about all the evidence and its quality. For these reasons, our project has found it to be a very valuable modeling tool.