Assessments in the Hot Spot

Testing Interface Design
The ethics of assessment accommodations deserve further research, with individualized variations examined as a valid and reliable alternative to mainstream testing.

Metric variables can affect the validity of the assessment phase of evaluation. Intrinsic factors reflect the individual: amount of sleep, health, emotional state, learning preference, and cultural background can all generate varied results in testing.

An open-accommodation e-test would be valid and reliable if all persons shared the same opportunities to access accommodations. Students would be encouraged to access accommodations as a test-taking strategy, and e-tests would record which accommodations were accessed. Students who relied heavily on accommodations would have the opportunity to receive remediation in study skills if necessary. Many disabled students would be able to take their tests alongside their peers without being pulled out to have the tests read to them by special education teachers.

The following assessment strategy outlines a proposal to open accommodations to all students. A sample assessment prototype was constructed to illustrate this purpose; screen shots from the prototype, which incorporates hot spots, appear throughout the article.

The following screen shots illustrate the various layouts the test-taker can select for using hot spots rather than multiple choice: the test-taker answers by clicking directly on the most appropriate answer. Hot spots eliminate the need for alpha-linear answer lists that can distract the test-taker.
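The click-scoring behind a hot-spot question can be sketched simply: each candidate answer is a clickable region on the screen, and the response is judged by which region (if any) the click lands in. The class and coordinates below are illustrative, not taken from the prototype.

```python
# Sketch of hot-spot answer checking: each answer is a rectangular region,
# and a click is scored by the region it falls inside.

from dataclasses import dataclass

@dataclass
class HotSpot:
    x: int
    y: int
    width: int
    height: int
    is_correct: bool

    def contains(self, click_x: int, click_y: int) -> bool:
        # True when the click lands inside this rectangle.
        return (self.x <= click_x < self.x + self.width and
                self.y <= click_y < self.y + self.height)

def score_click(hot_spots, click_x, click_y):
    """True for a correct region, False for an incorrect one, None for a miss."""
    for spot in hot_spots:
        if spot.contains(click_x, click_y):
            return spot.is_correct
    return None

# Two answer regions stacked vertically; only the second is correct.
spots = [HotSpot(10, 10, 100, 50, is_correct=False),
         HotSpot(10, 70, 100, 50, is_correct=True)]
```

A miss (a click outside every region) returns `None` rather than counting as wrong, so the interface can simply ignore stray clicks.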


School districts can lose government funding or, worse, administrative control due to poor test scores. Federal law mandates that schools provide testing accommodations to students with disabilities. With so much at stake, school districts spend much of the school year teaching multiple-choice test strategies in order to raise test scores.

As educators, we have an ethical obligation to teach students how to access their individual learning styles rather than funnel all student learning into linear, text-based, multiple-choice thinking. Studies of mainstreamed assessments with testing accommodations for all populations should be conducted as a high priority.


An example exam has been constructed to demonstrate a model for open-accommodation standardized e-testing. Online assessments should be purposefully planned using proven learning models to reduce metric variables among learners. Learner perception is influenced by an intuitive interface. Before the test, the learner selects a web interface to optimize test performance. The selected interface and all optional accommodations the learner accesses are recorded to give educators more information about the learner. National standardized tests should be returned to their original intent: tools for building a clearer picture of the learner’s profile.

Linear testing strategies are emphasized even though studies show that human brains are not linear in structure (U.S. Department of Education, 1997). The following e-test prototype’s design gives the learner a choice of testing interface: linear or non-linear. The linear interface is traditional in design; the test author fixes the order of the sub-categories in which the learner tests.

Emotional well-being, test anxiety and stress can greatly affect metric variables. The e-test illustrates improvements for visual learners who prefer information organized and presented in a non-linear, central-graphic format.

The tester can select subject subsections in their individual preferred order. For the learner who prefers to tackle more familiar questions first, this can help reduce stress.
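Learner-chosen section ordering can be sketched as follows; the helper records the learner's preference while making sure no required subsection is skipped. The section names and function are illustrative, not from the prototype.

```python
# Sketch: a non-linear interface lets the learner choose the order of
# subject subsections; any subsections the learner does not pick are
# appended afterward so the whole test is still covered.

def order_sections(available, learner_choices):
    """Return sections in the learner's chosen order, then any remaining ones."""
    chosen = [s for s in learner_choices if s in available]
    remaining = [s for s in available if s not in chosen]
    return chosen + remaining

# Illustrative subsections; the learner starts with the two they know best.
sections = ["Reading", "Math", "Science", "Writing"]
plan = order_sections(sections, ["Math", "Reading"])
```

The chosen order (`plan`) could itself be logged, giving educators one more data point about the learner's preferences.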

A person who experiences test anxiety can have relaxing music played through a headset. A mental relaxation tutorial can be accessed before the test at the user’s option. With authoring technology, a person who is losing concentration can choose to take breaks with stimulating computer brain games between test sections.

The e-test offers a variety of accommodation options for learners with visual or auditory processing difficulties. Test takers can listen to auditory responses to hot spots through a headset.

A dyslexic learner can choose a cursive font to help prevent mistakes due to letter reversal during testing.

Inclusion of English language learners on large-scale assessments is a critical issue nationwide (Butler & Stevens, 1997). Examples, prompts, questions and directions can be accessed through a computer-generated voice in English or in the student’s native language.

An accommodation that allows English language learners to select key test words in their native tongue can easily be constructed.

Such assessments can further be authored to track the number of times the English language learner reverted to their language of origin, as a measure of English proficiency and confidence.
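The translate-and-track accommodation described above can be sketched as a small class: tapping a key test word returns its native-language equivalent, and every lookup is counted so reliance can be reported afterward. The glossary contents and class name are illustrative.

```python
# Sketch: key-word translation for English language learners, with a
# per-word counter so the test can report how often the learner relied
# on the native-language accommodation.

from collections import Counter

class TranslationAccommodation:
    def __init__(self, glossary):
        self.glossary = glossary   # English word -> native-language word
        self.lookups = Counter()   # reliance count per word

    def translate(self, word):
        # Count and return the translation; fall back to the English word.
        if word in self.glossary:
            self.lookups[word] += 1
            return self.glossary[word]
        return word

    def total_lookups(self):
        return sum(self.lookups.values())

# Illustrative English-to-Spanish glossary and usage during a test.
acc = TranslationAccommodation({"sum": "suma", "divide": "dividir"})
acc.translate("sum")
acc.translate("sum")
acc.translate("divide")
```

The per-word counts also show *which* terms the learner needed translated, not just how many times, which could guide vocabulary remediation.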

Test questions that specifically measure text reading ability should be offered in both traditional linear style and with context cues to measure the differences in reader comprehension.

In other words, how often does the reader rely on context cues in order to comprehend?

The e-test proposal incorporates content at three grade levels in three different subjects. Reading content illustrates e-testing at the elementary level, math content at the middle school level, and content from a Psychology 101 course at the adult education level.

Test questions in each subject and age group have been further broken down into a Bloom’s Taxonomy grid to demonstrate testing at each level. One question on each test represents each of Bloom’s levels in each subject. Each of the three categories incorporates hot spots on the first five levels of Bloom’s.

The sixth level of Bloom’s, evaluation, gives students an opportunity to provide written responses that demonstrate higher-order cognitive skills.
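The subject-by-level grid described above can be sketched as a simple mapping: one question per (subject, Bloom level) pair, with the first five levels answered via hot spots and the evaluation level via written response. The names below are illustrative labels, not the prototype's actual content.

```python
# Sketch of the prototype's question grid: three subjects crossed with the
# six Bloom's Taxonomy levels, with the question format determined by level.

BLOOM_LEVELS = ["knowledge", "comprehension", "application",
                "analysis", "synthesis", "evaluation"]
SUBJECTS = ["elementary reading", "middle school math", "adult psychology"]

def question_format(level):
    # Evaluation uses written response; the first five levels use hot spots.
    return "written response" if level == "evaluation" else "hot spot"

# One entry per (subject, level) cell: 3 subjects x 6 levels = 18 questions.
grid = {(subject, level): question_format(level)
        for subject in SUBJECTS for level in BLOOM_LEVELS}
```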

Data Collection and Analysis

Instructional technologists will collaborate with educators and state employees to develop a more comprehensive prototype. Measurable objectives for standardized testing are to be identified from state educational departments. Potential eligible questions may be modified from previously authored tests. Quantitative data will be collected for analysis from the assessments at pilot schools across the nation. Qualitative responses will also be included in the pilot findings.

Program Costs
An analysis of program costs is requested. If a committee approves the test prototype for further research, a grant committee can be selected to analyze costs and request appropriate educational funding opportunities from the government and private institutions for the development of future studies on open-accommodation style, standardized e-testing.

Conclusions and Recommendations
The open-accommodation e-test proposal should be studied in an appropriate assessment research environment for further evaluation. Currently, K-12 assessment accommodations are only available to students with Individualized Education Plans, and students must demonstrate a significant discrepancy from their mainstream peers in order to qualify for one.

Test accommodation strategies should be taught instead of funneling and rewiring brains to be linear and standardized. Individuals should be directed toward personal learning-style accommodations when tackling tests. The viewpoint that testing must be as identical as possible to be reliable and fair should be challenged; all individuals should have access to test accommodations. Currently, educators teach our children multiple-choice strategies throughout the school year. Instead, we should teach students to study according to their individual learning preferences and provide testing in their individualized style. If everyone has the same access to these accommodations, the test remains valid and reliable.

Further test developments could provide testers with individualized levels and alternate accommodation views on a “per question” basis. For example, if a tester has difficulty comprehending a particular question, the tester can select an alternate view such as a non-linear mind-map visual or an auditory reading of the question. By tracking the tester’s accommodation choices and their frequency during testing, educators have an opportunity to pinpoint specific learning strategies that may need remediation. Entire school districts that rely heavily on certain accommodations when testing could receive funding to retrain staff in more appropriate teaching strategies.
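Per-question accommodation tracking can be sketched as a small log: each time the tester selects an alternate view, the question and view are recorded, and frequencies are tallied afterward for educators. The view names and class are illustrative.

```python
# Sketch: log which alternate view (mind map, audio, etc.) a tester selects
# on each question, then tally frequencies so heavy reliance on a given
# accommodation stands out.

from collections import Counter

class AccommodationLog:
    def __init__(self):
        self.events = []   # (question_id, view) in selection order

    def select_view(self, question_id, view):
        self.events.append((question_id, view))

    def frequency(self):
        # How many times each alternate view was chosen across the test.
        return Counter(view for _, view in self.events)

# Illustrative session: one mind-map view and two audio readings.
log = AccommodationLog()
log.select_view("q3", "mind map")
log.select_view("q7", "audio")
log.select_view("q9", "audio")
```

Because the raw events keep the question IDs, the same log could also reveal which individual questions drove the reliance, not just the overall counts.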

In conclusion, keep in mind that assessments in general should always be treated as a tool that provides a snapshot of how a learner’s mind assimilates and retains information. Test reliability should be a research priority at the evaluation phase, as test analysts need more clearly defined methods for interpreting metrics.

HUST 8/11/10 by Angela Rupert, Graduate Student, LTMS 520 Assessment, Dr. Gerry Post, Instructor, Andy Petroski, Department Chair


Shaw, S. (2008). Essay Marking On-Screen: Implications for Assessment Validity. E-Learning, 5(3), 256-274. Retrieved July 10, 2010.
Butler, F. A., & Stevens, R. (1997). Accommodation Strategies for English Language Learners on Large-Scale Assessments: Student Characteristics and Other Considerations (CSE Technical Report 448). Retrieved July 10, 2010.
Benson, P. J. (2001). The More Things Change. . . Paper Is Still With Us. The Journal of Electronic Publishing, 7(2). Retrieved July 10, 2010.
Kordel, R. (2008). Information Presentation for Effective E-Learning. EDUCAUSE Quarterly, 31(4). Retrieved July 10, 2010.
Corr, G., & Ryder, R. (2007). Monitoring, TA and Enforcement [Presentation]. U.S. Department of Education, Office of Special Education Programs. Retrieved July 10, 2010.
Building the Legacy: IDEA 2004. IDEA Partnership. Retrieved July 10, 2010.
Making Connections: How Children Learn. In Read With Me: A Guide for Student Volunteers Starting Early Childhood Literacy Programs (1997). U.S. Department of Education. Retrieved July 10, 2010.


About Angela Rupert

Angela Rupert is a freelance instructional designer and development consultant. Contact Angela Rupert for a free consultation at (717) 480-1747.