Analyzing Informational Text Unit Test
Do you mean you want help analyzing an existing informational-text unit test (reviewing items, scoring, alignment), or that you want me to create a unit test for informational text? Tell me the grade level and any standards (e.g., CCSS) and whether you’ll give me the test to analyze or want me to build one. Meanwhile, here’s a compact, practical guide and tools I use when analyzing or building an informational-text unit test.
Quick decision options
- I will analyze an existing test you provide: paste the items/passages and I’ll run an item-by-item analysis, flag problems, give revision wording, and produce an item-analysis table.
- I will design a unit test: tell me grade, standards, length (number of passages, items, time) and I will draft passages and items plus scoring rubrics.
Checklist for analyzing an informational-text unit test
1. Standards alignment
- Each item mapped to a specific standard (e.g., RI.4.1, RI.5.2).
- Coverage matches your unit’s taught standards and weightings (a quick coverage-check sketch follows this checklist).
2. Text selection and complexity
- Texts are grade-level appropriate (quantitative measures, qualitative features, reader-task match).
- Variety of informational text types (articles, charts, manuals, essays).
- Text features (headings, captions, charts) are used and tested appropriately.
3. Cognitive demand and depth
- Items include a balance of literal (recall), inferential, and analytical (text-evidence, synthesis) questions.
- Questions are text-dependent: they cannot be answered without reading the passage.
- Presence of performance/constructed-response items that require citing evidence.
4. Item quality and clarity
- Clear stem and single correct answer (for selected-response).
- Avoid ambiguous phrasing, negative stems, double negatives, or clues to the correct answer.
- Distractors are plausible and aligned to common student misconceptions.
5. Bias and fairness
- Items/passages free of cultural, socioeconomic, or gender bias.
- Items require no background knowledge beyond what is expected at grade level.
6. Scoring and rubrics
- Clear analytic rubric for constructed responses (criteria: claim, evidence, reasoning, organization, language).
- Answer keys with evidence citations and scoring exemplars.
7. Practical considerations
- Total length and time are appropriate.
- Layout/readability (font, spacing, accessible images).
- Accommodations available (read-aloud, extended time, translations).
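To make the alignment check in item 1 concrete, here is a minimal Python sketch that compares an item-to-standard map against a blueprint. The blueprint targets, item IDs, and standards shown are invented for illustration, not taken from any particular test.

```python
from collections import Counter

# Hypothetical blueprint: how many items the unit should devote to each standard.
blueprint = {"RI.4.1": 4, "RI.4.2": 3, "RI.4.5": 2}

# Hypothetical item map: item ID -> the standard it was written to assess.
item_map = {
    "Q1": "RI.4.1", "Q2": "RI.4.1", "Q3": "RI.4.2",
    "Q4": "RI.4.2", "Q5": "RI.4.5", "Q6": "RI.4.9",
}

counts = Counter(item_map.values())
for standard, target in blueprint.items():
    actual = counts.get(standard, 0)
    if actual != target:
        status = "gap" if actual < target else "overweight"
        print(f"{standard}: {status}, {actual}/{target} items")

# Flag items mapped to standards outside the taught blueprint.
for item, standard in item_map.items():
    if standard not in blueprint:
        print(f"{item}: mapped to untaught standard {standard}")
```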
Item-analysis table (use after pilot/administering)
For each item include columns:
- Item ID | Standard | Item type (SR/CR) | Correct answer | Distractor rationale | p-value (difficulty) | Discrimination (point-biserial or biserial) | % choosing each option | Common wrong answer patterns | Revision suggestion
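If you have pilot data, the statistical columns of this table can be computed with a short script. Below is a minimal sketch in plain Python (3.10+ for statistics.correlation), assuming responses keyed by student and item; the answer key and all student data are invented.

```python
import statistics
from collections import Counter

# responses[student][item] = option chosen; key[item] = correct option.
key = {"Q1": "B", "Q2": "D", "Q3": "A"}
responses = {
    "s1": {"Q1": "B", "Q2": "D", "Q3": "C"},
    "s2": {"Q1": "B", "Q2": "A", "Q3": "A"},
    "s3": {"Q1": "C", "Q2": "D", "Q3": "A"},
    "s4": {"Q1": "B", "Q2": "D", "Q3": "A"},
}

students = list(responses)
totals = {s: sum(responses[s][q] == key[q] for q in key) for s in students}

for q in key:
    scores = [int(responses[s][q] == key[q]) for s in students]
    # Rest-score (total minus this item) avoids inflating the correlation.
    rest = [totals[s] - scores[i] for i, s in enumerate(students)]
    p = sum(scores) / len(scores)  # difficulty: proportion correct
    # Point-biserial = Pearson r between the 0/1 item score and the rest-score.
    r = (statistics.correlation(scores, rest)
         if len(set(scores)) > 1 and len(set(rest)) > 1 else float("nan"))
    pct = {opt: round(100 * n / len(students))
           for opt, n in Counter(responses[s][q] for s in students).items()}
    print(f"{q}: p={p:.2f}, r_pb={r:.2f}, % per option={pct}")
```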
Interpreting common item stats
- Difficulty (p-value): the proportion of students answering the item correctly (a classical difficulty index, not a significance test). Useful items typically fall between 0.3 and 0.9 depending on purpose: for mastery checks, aim above 0.7; for tests meant to discriminate, 0.3–0.8.
- Discrimination (point-biserial): ranges from -1 to +1. Above 0.2 is acceptable; above 0.3 is good. Negative values mean high scorers missed the item more often than low scorers; investigate the content or check for a miskeyed answer.
- Distractor analysis: any distractor chosen by <5% is likely nonfunctioning — rewrite.
Sample rubric for short constructed response (0–4)
4 — Clear main idea or claim, 2+ specific pieces of textual evidence, explicit explanation of how evidence supports claim, clear organization and precise vocabulary.
3 — Claim, 1–2 pieces of evidence with reasonable explanation, mostly clear organization.
2 — Partial claim or general idea, limited or vague evidence, weak explanation.
1 — Minimal attempt; vague reference to text but no clear explanation.
0 — No response or irrelevant.
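For scoring consistency, one option is to encode the rubric levels alongside the answer key and spot-check agreement between two raters on a sample of responses. A minimal sketch; the rater scores below are invented for illustration.

```python
# The 0-4 levels encoded as a lookup for the scoring sheet.
RUBRIC = {
    4: "clear claim, 2+ evidence, explicit reasoning, precise language",
    3: "claim, 1-2 pieces of evidence with reasonable explanation",
    2: "partial claim, limited or vague evidence, weak explanation",
    1: "minimal attempt, vague text reference, no clear explanation",
    0: "no response or irrelevant",
}

rater_a = [4, 3, 2, 4, 1, 0, 3]
rater_b = [4, 2, 2, 3, 1, 0, 3]

n = len(rater_a)
exact = sum(a == b for a, b in zip(rater_a, rater_b)) / n
adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / n
print(f"exact agreement {exact:.0%}; exact-or-adjacent {adjacent:.0%}")
```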
Common item-writing problems & quick fixes
- Problem: Question answerable without reading the passage. Fix: Reword so the item cannot be answered without a specific detail or reference from the passage.
- Problem: Distractors implausible (easy elimination). Fix: Base distractors on common student errors or subtle misreadings.
- Problem: Items test vocabulary in isolation with rare words. Fix: Test vocabulary in context and prefer words taught in the unit.
- Problem: Negative stem (“Which is NOT…”) causing confusion. Fix: Rephrase positively when possible.
Sample revision workflow
1. Map items to standards and mark gaps/overlaps.
2. Pilot test with a representative sample. Collect item stats and student responses.
3. Review low-discrimination or very easy/hard items; check for ambiguous wording or miskeyed answers (a flagging sketch follows this workflow).
4. Revise items and distractors; update rubrics.
5. Re-administer or flag items for exclusion from score reports.
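Step 3 can be semi-automated by flagging items against the thresholds given earlier. A minimal sketch, assuming each row carries the statistics computed from the pilot; the numbers here are invented.

```python
# Thresholds from the interpretation notes above: p outside 0.3-0.9,
# point-biserial below 0.2, distractors chosen by fewer than 5% of students.
stats = [
    {"item": "Q1", "p": 0.95, "r_pb": 0.10, "dead_distractors": ["D"]},
    {"item": "Q2", "p": 0.55, "r_pb": 0.35, "dead_distractors": []},
    {"item": "Q3", "p": 0.20, "r_pb": -0.05, "dead_distractors": ["B", "C"]},
]

for row in stats:
    flags = []
    if row["r_pb"] < 0:
        flags.append("negative discrimination (check key and content)")
    elif row["r_pb"] < 0.2:
        flags.append("low discrimination")
    if not 0.3 <= row["p"] <= 0.9:
        flags.append("difficulty outside 0.3-0.9")
    if row["dead_distractors"]:
        flags.append("nonfunctioning distractors: " + ", ".join(row["dead_distractors"]))
    if flags:
        print(row["item"] + ": " + "; ".join(flags))
```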
What I can do next
- If you paste your test, I’ll produce an item-by-item analysis using the table above and give revision wording for problematic items.
- If you want a ready-to-use unit test, tell me grade, standards, number of passages, number and types of items, and I will draft passages and items plus rubrics.
Which would you like me to do?