Epistemic Criteria

In PRACCIS, epistemic criteria are shared, public standards that can be used to judge the quality of models, evidence, arguments, and so on. For example, epistemic criteria for evaluating models include that good models fit the evidence and that good models specify clear mechanisms. Classes develop these criteria collaboratively and use them to evaluate their own and others’ work.

Among the core PRACCIS scaffolds are scaffolds of epistemic criteria of quality. These are the criteria students use to judge the relative quality of the scientific artifacts they consider, and they include:

Criteria for the quality of explanatory models
Scientists use a variety of criteria for evaluating explanatory models—e.g., the scope and diversity of evidence explained, the extent to which there is inconsistent evidence, the degree of fit with evidence, the coherence of the model with accepted theories, its fruitfulness, and the extent to which it can solve the empirical problems confronting its rivals (Donovan, Laudan, & Laudan, 1988; T. Kuhn, 1977).

Criteria for strength or quality of evidence
Scientists also deploy criteria for evaluating evidence (e.g., Kitcher, 1993)—e.g., evidence from studies that use accepted methods is viewed as strong, as is evidence that is relevant and verifiable. Evidence that discriminates sharply between two theories may be viewed as stronger than evidence that is more ambiguous (Mayo, 1988).

Criteria for good arguments
Researchers who have studied scientists’ rhetoric have identified criteria that guide the construction of good scientific arguments (e.g., Latour, 1987; Bazerman, 1988). Examples include the anticipatory rebuttal of likely objections from opponents, the linking of a theory to the empirical and theoretical work of others, the highlighting of methodological strengths and weaknesses of evidence, and the elaboration of evidential and theoretical connections.

In the PRACCIS program, we have found public epistemic criteria to be a powerful scaffold. Students work as a class to develop criteria, such as criteria for good models, and then use these agreed-upon criteria to evaluate their own and their peers’ models. Figure 1 shows an example of criteria for good models developed by one class soon after they had begun learning to reason with models, near the beginning of the school year:

Figure 1. One class’s public criteria for good models

The list below presents a different set of criteria, developed this past year by a class of seventh graders: criteria for what makes “good evidence.” Students used these criteria to evaluate the quality of the evidence they were considering in order to decide how much weight to give that evidence when developing and deciding among alternative models.

  • It explains.
  • It relates.
  • It does not point to anything else.
  • It requires few inferences.
  • It is detailed and relevant.
  • It disproves another model.
  • Its procedure is good and accurate.
  • It has a good sample size.
  • It is a controlled experiment.

Criteria like the ones shown above become public norms for discourse and reasoning. Students refer to these criteria in class discussions, group discussions, and individual work.



Figure 2. Claim, Evidence and Reasoning