Original wording preserved; lightly grouped into universal sections so an agent can deep‑link by purpose (context, assessments, supports, implementation, compliance, troubleshooting). Minimal, clearly labeled updates cite IDEA regs and digital accessibility guidance.
Converted from DOCX • Grouped on 2025-09-24
Assistive Technology Screening & Evaluation — Practical Guide
Update (2025-09-24): Distinguish accessible formats (same content, different form: braille, large print, audio, accessible digital text) from content modifications. See AEM Center guidance.
Purpose-built structure combining a SETT/WATI core, a Learning-Impact layer (grade-level & construct), and UDL feature-menu choice where appropriate.
How to use this guide: Start with the One-Page Flow. If you choose trials, apply the Learning-Impact layer and (when the construct allows) a small UDL feature-menu. Then copy IEP language from Part VII.
Update (2025-09-24): IEP Teams must consider assistive technology when developing, reviewing, or revising an IEP (34 CFR §300.324(a)(2)(v)). AT must be provided when required for FAPE and may include home use if needed (34 CFR §300.105). AT services include evaluation, acquisition, customization, coordination, and training for students, families, and staff (34 CFR §300.6).
1) Screening (SETT + AIM quick check) → If clear need & known solution, document features + train; else go to trials/eval.
2) Trials (in real class tasks at the same cognitive demand): gate check (grade-level? construct intact? metrics defined?) → plan (Barrier→Feature→Routine→Evidence) → 10–15 school days with checkpoints → collect 6–8 data points. Always include student data chats to learn how the student perceives the AT's impact, its usefulness, and its effect on academic progress.
3) Decision rules: if participation ↑ and rubric/accuracy ↑ and prompts fade → adopt & scale; if access ↑ but academics flat → adjust routine/features/tasks; if still flat with fidelity → alternative features or broaden hypothesis / comprehensive eval.
4) IEP/504 write-up & Implementation: features (brand-agnostic), training, maintenance, AIM/AEM path, home use if needed.
Update (2025-09-24): When materials are digital, align with current Web Content Accessibility Guidelines (WCAG 2.2, Level AA) to reduce access barriers.
5) Review & transition: periodic effectiveness checks; carry forward feature sets and training artifacts.
Grade-Level Anchor & Construct.
Name the grade-level standard(s) and the construct you are measuring/teaching. If a tool changes the construct (e.g., read-aloud on a decoding test), use it for instruction but not on that assessment.
Barrier → Feature → Routine → Evidence.
Write a one-line hypothesis (barrier) → specify features (brand-agnostic) → define the classroom routine (model→guided→independent with fade) → name the evidence you will collect.
Success metrics.
Participation (independence % on grade-level tasks), Quality/Achievement (rubric/accuracy tied to the standard), Efficiency (time on real tasks), and Generalization (number of classes/contexts used without prompts).
Decision rules (after 4–6 data points): apply the adopt / adjust / escalate rules from step 3 of the One-Page Flow.
Present Levels (example):
“With text-to-speech (sentence rewind + heading nav) and a two-column note template, Student completes 90% of grade-level informational text tasks independently and earns 3/4 on a structure rubric (RI.7.5) across ELA and Science.”
Supplementary Aids/Services (example):
“Access to TTS with synchronized highlighting, digital graphic organizers; training for student/family/staff; home use as needed for FAPE.”
Goal tie-in (example):
“Given grade-level articles, Student will produce structure maps scoring ≥3/4 in 4/5 opportunities using AT features over 9 weeks.”
□ The trial uses a current unit task at the same cognitive demand as peers.
□ The tool keeps the construct intact for this task/assessment.
□ Participation and achievement metrics are defined (independence %, rubric/accuracy).
Barrier: _________________________________
Feature(s) (brand-agnostic): ______________
Routine (model→guided→independent + fade): __
Evidence (participation %, rubric/accuracy, time/effort, satisfaction): __
Anchor: RI.7.2, RI.7.5 (central ideas; text structure).
Hypothesis: Visual reading rate limits comprehension on long articles; with TTS + organizer, student can analyze structure.
Features: TTS (sentence/paragraph rewind, speed control), headings navigation; two-column notes template.
Routine: Model once daily; student self-runs TTS; exit ticket = structure map + main idea.
Success: ≥80% tasks completed independently; structure-map rubric ≥3/4; cycle time down 25%.
Decision: Metrics met → adopt + home use for longer readings; decoding instruction continues separately.
Anchor: 5.NF.B.3
Hypothesis: Place-value/representation errors block success; with CRA + bar models, student can interpret and solve.
Features: Base-ten blocks & place-value chart, bar/tape diagram templates, number line.
Routine: 10-min CRA warm-up → 15-min problem set; prompt fade plan.
Success: ≥80% of problem types solved with correct representation; rubric ≥3/4; fewer reversals; independence across word problems.
Decision: Metrics met → adopt; calculator allowed for computation during multi-step tasks to preserve reasoning construct.
This guiding outline draws on the most solid, school-friendly guidance available (WATI, QIAT, SETT, and the 2024 U.S. Dept. of Ed AT guidance). It offers a brainstorm outline and guiding ideas for an Assistive Technology (AT) screening & evaluation process you can build on.
Assistive Technology Screening & Evaluation — Guiding Outline
1) Purpose & guardrails
Why AT? Ensure meaningful access/participation and progress in the general curriculum; when needed, AT must be provided for FAPE (IDEA 34 CFR §300.105).
When to consider? Every IEP meeting must consider the child's need for AT devices/services (34 CFR §300.324(a)(2)(v)).
What "AT service" includes: Functional evaluation in the student's customary environments, device acquisition, coordination, and training for student, family, and staff (34 CFR §300.6).
Fresh federal guidance (2024): OSERS/OET Dear Colleague + "Myths & Facts" clarify obligations, dispel misconceptions (e.g., training is part of AT services; home use when needed), and reinforce considering AT at every IEP meeting.
2) Screening vs. evaluation (quick distinctions)
Screening = a quick look to flag potential AT needs and decide whether trials or a formal evaluation is warranted; it's not diagnostic, has high sensitivity, and must be followed by deeper assessment if concerns arise.
Evaluation (functional) under AT services = a team-based, student-in-context, task-focused process with data from real environments; may require parent consent if it is part of a broader special ed evaluation protocol in your district.
3) Team & frameworks
Team: Student + family, general/special ed teacher(s), related services (SLP/OT/PT, etc.), AT/AIM specialist, admin as needed.
Framework for thinking: SETT (Student, Environments, Tasks, Tools) to drive feature-matching and avoid "device first" thinking. Use it from intake through implementation and review.
Quality benchmarks: QIAT indicators across the service lifecycle—Consideration, Assessment, AT in the IEP, Implementation, Evaluation of Effectiveness, Transition, Admin Support, PD. Use the matrices to self-audit practice.
4) Triggers for AT screening
Access barriers (reading/writing/communication/executive function/vision/hearing/motor).
Persistent accommodation needs not solved by Tier 1 UDL supports.
New courses/contexts (e.g., lab science, geometry, CTE) that introduce novel task demands.
Transition points (grade band changes, school moves, postsecondary planning).
Requests from student/family/teachers.
5) Rapid screening flow (1–2 pages max)
Brief intake using SETT prompts (what tasks are hard, where, with what materials/platforms?).
Review of prior supports & AIM (are materials accessible today?); note immediate “no-tech/low-tech” trials that can start this week.
Decision point:
Clear need & known solution? → document device/service features in IEP and start implementation/training.
Unclear? → initiate structured trials or refer for AT evaluation. (Many states recommend documenting a time-bound trial plan when consideration is inconclusive.)
6) AT evaluation (functional) — process map (WATI-style)
Step 1: Information gathering (student profile, environments, task demands, existing tools, AIM status; interviews/observations).
Step 2: Feature matching (define features, not brands, needed to accomplish prioritized tasks; align with SETT).
Step 3: Structured trials (1–3 candidate solutions): write a trial plan with tasks, settings, success metrics, training supports, duration, and data sheets. WATI provides ready-to-use forms (Decision-Making Guide, Trial Use Summary, Observation Guides).
Step 4: Analyze results and decide (adopt, continue trial with adjustments, or try alternatives).
Step 5: Implementation plan (responsibilities, training, maintenance, AIM procurement, home use if needed for FAPE).
7) What to measure during trials (QIAT “Effectiveness”)
Outcome-tied data: accuracy, speed, independence, quality, participation, satisfaction—across natural routines and multiple environments.
Both quantitative & qualitative evidence; schedule regular review and be ready to adjust tools, tasks, or supports.
8) Documenting in the IEP/504 (don’t bury the lede)
Present levels include AT/AIM access needs and current performance with tools.
Supplementary aids/services & related services list features (e.g., "text-to-speech with synchronized highlighting," "speech-to-text with custom vocabulary"), training for student/family/staff, and environmental supports.
Home/community use noted when required for FAPE (case-by-case per §300.105(b)).
AIM procurement path (NIMAS/NIMAC, Bookshare, state AEM sources) and delivery timelines. (Many state AEM pages reference the 2024 AT guidance; align locally.)
9) Implementation & training plan (QIAT)
Written plan: who trains whom, on what, by when; initial supports; troubleshooting; maintenance.
Learning opportunities are ongoing for the student, family, and staff, not one-and-done. Adjust based on performance data.
10) Review & transition
Progress monitoring: periodic checks against IEP goals and task performance (update tools/strategies as tasks evolve).
Transitions (school changes, post-secondary): carry forward feature sets, vendor-agnostic descriptions, training artifacts, and agency connections. QIAT includes specific indicators for AT in transition.
11) Common pitfalls to avoid (from QIAT & 2024 guidance)
Choosing brands before defining features and tasks (flip it).
Skipping training and implementation plans.
Collecting data only in one setting or with vague outcomes.
Treating screening as a diagnosis; not following up with evaluation when flags appear.
Failing to provide home use or AIM when needed for FAPE.
12) Ready-to-use pieces you can adopt
WATI: Consideration → Assessment flow, trial forms, observation guides, decision-making guides.
SETT prompts & report templates (many districts publish narrative templates).
QIAT matrices for self-assessment + common errors (great for PD and district policy alignment).
State AT/AEM sites with updated (2024) consideration guides aligning to federal guidance.
AT for Learning Impact — add-on to the AT screening/evaluation flow
0) Prime directive (what success means)
Success = access + academic progress. Regular tech becomes assistive when the IEP team identifies it as necessary for FAPE; “necessary” means it enables the student to participate and make progress in grade-level standards, not only to “do the task.”
1) Grade-Level Anchor (start here, always)
For the content area in question, name the grade-level standard(s) and the construct you’re actually measuring/teaching.
Example (ELA 6): RL.6.2 determine theme and summarize. Construct = comprehension of literary text, not oral reading rate.
Example (Math 5): 5.NF.B.3 interpret fractions as division. Construct = reasoning about quantities/operations, not handwriting neatness.
If the tool would change the construct being assessed, use it for instructional access, but not on construct-bound assessments.
2) Barrier → Learning Hypothesis (not just “access problem”)
Write a one-line hypothesis that ties the barrier to the grade-level task:
“Word-level decoding is slow → comprehension collapses on long texts → can’t meet RL.6.2 without alternate input (TTS) while decoding is remediated.”
“Place-value confusions → multi-step problems fail despite good reasoning → needs CRA tools and representation scaffolds to meet 5.NBT/5.NF.”
3) Academic Impact Targets (what will change because of AT)
Define measurable outcomes tied to the standard, not only the tool:
Participation: % of grade-level tasks completed independently (with the AT) in the core class.
Quality: rubric score on the standard (e.g., summary rubric, problem-solving rubric).
Efficiency: time-on-task/words-per-minute for the grade-level text or problem set, not just random materials.
Generalization: # of classes/contexts where the workflow is used without prompts.
4) Design Trials inside core instruction (not side quests)
Write the trial plan so data come from authentic classroom tasks at the same cognitive demand peers face.
Trial plan must include:
Tasks & settings: which unit lessons/assignments, which platforms (LMS, Docs, Desmos…), and where (whole class, labs).
Feature set (brand-agnostic): e.g., “text-to-speech with synchronized highlighting,” “digital outline + graphic organizer,” “CRA place-value kit.”
Instructional routine: who models, how many reps/day, what prompts/fade plan.
Success metrics: the academic targets above + satisfaction.
Duration/decision date: e.g., 10–15 school days with 2 checkpoints.
5) Decision Rules (learning impact, not just usability)
If participation ≥ 80% and rubric/accuracy improves and routine fades → adopt and scale.
If access improved but no academic lift → change the instructional routine, the features, or the task design (often the issue is insufficient modeling, wrong text structure, or not enough time in core tasks).
If progress is still flat after fidelity fixes → try alternative features or broaden the hypothesis (e.g., add morphology instruction with TTS for ELA; add representation translation mini-lessons with CRA for Math).
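If the team logs checkpoints in a simple spreadsheet or script, these decision rules can be written down literally so every reviewer applies them the same way. The Python sketch below is illustrative only; the dataclass, field names, and exact thresholds (participation ≥ 80%, rubric above baseline, prompts fading) are assumptions to adapt to your own trial plan.

```python
# Minimal sketch of the adopt / adjust / escalate decision rules above.
# Field names and thresholds are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class TrialCheckpoint:
    participation_pct: float   # % of grade-level tasks completed independently
    rubric_score: float        # rubric score tied to the standard (e.g., 0-4 scale)
    prompts_per_task: float    # average adult prompts needed per task

def decide(baseline_rubric: float, checkpoints: list[TrialCheckpoint]) -> str:
    latest = checkpoints[-1]
    participation_ok = latest.participation_pct >= 80
    rubric_improved = latest.rubric_score > baseline_rubric
    prompts_fading = latest.prompts_per_task < checkpoints[0].prompts_per_task

    if participation_ok and rubric_improved and prompts_fading:
        return "adopt and scale"
    if participation_ok and not rubric_improved:
        return "adjust the routine, features, or task design"
    return "try alternative features or broaden the hypothesis"

# Example: access is up but the rubric is flat, so the call is adjust, not abandon.
data = [TrialCheckpoint(60, 2.0, 3.0), TrialCheckpoint(85, 2.0, 1.0)]
print(decide(baseline_rubric=2.0, checkpoints=data))
```

Whatever format the team uses, the order of checks matters: participation and academic quality first, then prompt fading; speed alone never drives the adopt call.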
Quick tools you can drop into forms
A) Grade-Level Alignment Check (gate before any trial)
Is the trial task from the current unit at the same cognitive demand? (Y/N)
Does the tool keep the construct intact? (Y/N)
Is there a participation metric (completion/independence) and an achievement metric (rubric/accuracy) defined? (Y/N)
Any “No” → fix before starting. This is how we avoid the “ramp only” trap.
B) “Barrier → Feature → Routine → Evidence” chain (fill once per target)
Barrier: (e.g., decoding fatigue on >400 words)
Feature(s): (e.g., TTS with sentence-level rewind, heading nav)
Routine: (e.g., teacher models 2 mins; student runs TTS + notes; exit prompt = one-sentence gist)
Evidence: (participation %, rubric, time, independence)
C) Minimal data table (use during class, fast to score)
Collect 6–8 data points across real lessons; that’s enough to see slope + level.
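For teams that keep those points in a spreadsheet or script, "slope + level" is just an average and an ordinary least-squares slope over the 6–8 scores. A minimal Python sketch, assuming a made-up series of daily independence percentages as the metric:

```python
# Quick check of "slope + level" from 6-8 in-class data points.
# The scores below are made up; substitute whatever metric your minimal table tracks.
scores = [55, 60, 70, 65, 80, 85, 90]  # e.g., daily independence % on real unit tasks

n = len(scores)
level = sum(scores) / n                 # average performance across the trial
xs = range(n)
x_mean = sum(xs) / n
# Ordinary least-squares slope: average gain per data point (roughly, per lesson)
slope_num = sum((x - x_mean) * (y - level) for x, y in zip(xs, scores))
slope_den = sum((x - x_mean) ** 2 for x in xs)
slope = slope_num / slope_den

print(f"level = {level:.1f}%   slope = {slope:+.1f} points per lesson")
```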
Two concrete scenarios (how this looks)
Reading (Grade 7, informational text)
Anchor: RI.7.2, RI.7.5 (central ideas; text structure)
Hypothesis: Slow visual reading blocks comprehension on long articles; with TTS + organizer, student can analyze structure.
Trial features: TTS (sentence/paragraph rewind, speed control), digital headings nav, 2-column notes template.
Routine: Model once daily; student self-runs TTS; exit ticket = “structure map + main idea.”
Success: ≥80% tasks completed independently; structure map score ≥3/4 on rubric; article cycle time reduced 25%.
Decision: After 10 days, metrics met → adopt; add home use for longer readings; keep decoding instruction separate.
Math (Grade 5, fractions as division)
Anchor: 5.NF.B.3
Hypothesis: Place-value/representation errors, not reasoning, block success; with CRA + bar models, student can interpret and solve.
Trial features: Base-ten blocks & place-value chart, bar/tape diagram templates, number line.
Routine: 10-min CRA warm-up → 15-min problem set; prompt fade plan.
Success: ≥80% of problem types solved with correct representation; rubric ≥3/4; fewer reversals; stable independence across word problems.
Decision: Metrics met → adopt; allow calculator for computation during multi-step tasks to preserve reasoning construct.
How this writes into the IEP/504 (example snippets)
Present Levels: “When using text-to-speech with synchronized highlighting and a two-column note template, Student completes 90% of grade-level informational text tasks independently and earns 3/4 on a structure rubric (RI.7.5) across science and ELA.”
Supplementary Aids/Services: “Access to TTS with sentence rewind & heading navigation; digital graphic organizers; training for student & staff on workflow; home use for assignments as needed to access grade-level text.”
Goal tie-in: “Given grade-level articles, Student will produce structure maps scoring ≥3/4 in 4/5 opportunities using AT features, over 9 weeks.”
Common failure modes (and fixes)
Trials on ‘practice’ materials → you can’t see standard-level impact. Fix: trial inside current unit tasks.
Only timing data collected → speed ≠ learning. Fix: add rubric/accuracy tied to the standard.
Great tool, no routine → usage collapses in class. Fix: scripted model → guided practice → independent with fade plan.
Brand first, feature second → mismatched tools. Fix: define features from the hypothesis, then pick tools.
UDL-infused AT consideration (choice done right)
1) Where UDL fits in the flow
Tier 1/2 (UDL first): Offer feature-equivalent options so students can choose how to engage/express (e.g., type, dictate, or handwrite). Log what they choose and how it impacts participation/quality.
AT screening/eval: If choice among UDL options still leaves a barrier, escalate to AT trials—but keep the choice mindset when feasible (a small, pre-approved tool menu) and tie it to grade-level outcomes.
2) “Feature menu” > “brand choice”
Define the function, then list 2–3 acceptable tools that deliver it. This keeps classrooms sane and data comparable.
Example — Writing output
Function/feature set: produce legible text efficiently; supports spelling/grammar as needed.
Menu (feature-equivalent):
Keyboarding + word prediction
Speech-to-text with live editing
Handwriting with pencil grip + scan/OCR to text
Not a free-for-all: pre-install/train only these; document where/when each is appropriate.
3) When to allow choice vs. prescribe one tool
Use a quick decision matrix weighing construct integrity, safety, and the need for consistency (see the guardrails in section 8 and the exclusions in section 9 below).
4) Teach the selection routine (student agency)
Give the student a tiny “IF–THEN” card and practice it:
IF short response ≤ 3 sentences → try keyboarding first; IF fatigue or pain → switch to speech-to-text.
IF timed writing → default to speech-to-text; IF noisy room → switch to keyboard + word prediction.
IF math explanation → type with equation editor or record brief audio + typed summary (per task rules).
Goal: over time the student self-selects and can explain why.
5) Data you collect (keeps choice accountable)
A 10-second data row during class (task, tool chosen, independence, rubric score, time) is enough.
Look for patterns: “With speech-to-text, quality↑ and time↓ on essays; with keyboard only, time↑ and rubric flat.” That drives your adopt/adjust call.
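If those rows are logged digitally, the pattern check is a few lines of code. The sketch below is a hedged example; the row fields (tool, rubric, minutes, independent) and sample values are assumed for illustration, not a required schema.

```python
# Sketch of turning quick class rows into the per-tool patterns described above.
# Row fields and sample values are illustrative assumptions only.
from collections import defaultdict

rows = [
    {"tool": "speech-to-text", "rubric": 3, "minutes": 18, "independent": True},
    {"tool": "keyboard+prediction", "rubric": 2, "minutes": 30, "independent": True},
    {"tool": "speech-to-text", "rubric": 4, "minutes": 15, "independent": True},
    {"tool": "keyboard+prediction", "rubric": 2, "minutes": 28, "independent": False},
]

by_tool = defaultdict(list)
for row in rows:
    by_tool[row["tool"]].append(row)

for tool, trials in by_tool.items():
    avg_rubric = sum(t["rubric"] for t in trials) / len(trials)
    avg_minutes = sum(t["minutes"] for t in trials) / len(trials)
    pct_independent = 100 * sum(t["independent"] for t in trials) / len(trials)
    print(f"{tool}: rubric {avg_rubric:.1f}, {avg_minutes:.0f} min, "
          f"{pct_independent:.0f}% independent")
```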
6) Trial plan language (UDL + AT together)
Tasks: Grade-level assignments from the current unit (not side worksheets).
Menu: “Student may use (a) keyboard + prediction, (b) speech-to-text, or (c) handwriting + scan/OCR.”
Routine: Teacher models both A and B; student chooses using IF–THEN card; fade prompts by day 5.
Metrics: Participation %, rubric score on the standard, independence, and time.
Decision: After 8–10 data points, adopt the tool(s) that improve rubric + independence, not just speed.
7) IEP/504 write-ups (example snippets)
Present Levels: “With a feature-equivalent menu (keyboard+prediction or speech-to-text), Student completes 90% of ELA writing tasks independently at a rubric score ≥3/4.”
Supplementary Aids/Services: “Access to feature-equivalent writing tools: (1) keyboard with word prediction, (2) speech-to-text with live editing, (3) handwriting + scan/OCR when appropriate. Staff and student receive training on selection routines; menu may narrow based on data.”
Assessment accommodations: Specify per construct (e.g., “speech-to-text allowed on content writing; not on spelling assessments measuring handwriting/orthography”).
8) Guardrails so UDL choice doesn’t become chaos
Menu size = 2–3 max per function.
Pre-flight: install, whitelist, and train only those tools.
Default + fallback: pick a default workflow (the one that wins most often) and one pre-taught fallback.
Anchor to standards: every trial uses real tasks from the unit and includes a rubric/accuracy metric.
Construct integrity: if the tool changes the construct, it’s for instructional access, not for that assessment.
9) Where UDL choice doesn’t apply
Dedicated AAC (dynamic display, eye-gaze, switch scanning): consistency and motor learning trump choice; you may still keep a backup for some contexts (e.g., a partner-assisted alphabet board).
Safety-critical mobility/access: one prescribed method, with an emergency backup.
10) Quick exemplars
Reading access (grade-level articles)
Function: comprehend complex text.
Menu: (1) TTS with synchronized highlighting, (2) human read-aloud per policy, (3) magnification with heading nav.
Data: structure-map rubric + independence.
Decision: adopt TTS as default; magnification as fallback for short texts.
Math problem solving
Function: represent & reason.
Menu: (1) bar/tape diagram templates, (2) virtual manipulatives, (3) algebra tiles.
Data: correct representation rate + problem-solving rubric.
Decision: adopt bar models; tiles as fallback for factoring units.
Written expression
Function: produce/edit text.
Menu: (1) keyboard+prediction, (2) speech-to-text, (3) handwriting+scan/OCR (short tasks).
Data: writing rubric + independence/time.
Decision: speech-to-text default for extended drafts; keyboard for revisions.