CRDG’s Paul Brandon and George Harrison are serving in a technical advisory role for the development of the Hawaiian Language Immersion Assessments, which, when completed, will measure student learning on the mathematics and language arts concepts articulated in the Common Core State Standards (CCSS) and the science concepts articulated in the Next Generation Science Standards (NGSS). The tests, currently in development for students in grades 3 and 4 in Hawaiian immersion schools, are not translations of existing tests aligned with the CCSS and NGSS but are being developed specifically for the Hawaiian immersion context. The primary test development team is headed by faculty in the University of Hawai‘i at Mānoa’s College of Education. Additional contributors are drawn from the Hawai‘i Department of Education; the University of Hawai‘i at Mānoa’s Hawai‘inuiākea School of Hawaiian Knowledge; the University of Hawai‘i at Hilo’s College of Hawaiian Language, Ka Haka ‘Ula O Ke‘elikōlani; and the private sector. Three advisory committees are helping to guide the project: the Project Advisory Group (PAG) for the project as a whole; the Hawaiian Advisory Committee (HAC) for issues related to culture and language; and the Technical Advisory Committee (TAC) for technical issues involved in assessment.
Test developers must consider both stakeholder concerns and feasibility and must incorporate them into a test instrument that is psychometrically sound; that is, a test with high reliability that measures the content it is intended to measure, so that it can reliably show whether students have learned the material. Brandon and Harrison are serving on the Technical Advisory Committee, where their role is to review the assessment development process and to offer advice and probing questions that help the developers build a strong validity argument. Some types of validity evidence, for example, strengthen the case that (a) the instrument measures a representative sample of the intended content; (b) students think through the test items in a manner consistent with what the developers had in mind; (c) the test items function consistently with one another (that is, items developed to measure the same intended construct “stick together” in their patterns of responses, while items measuring different constructs do not); and (d) there are as few instances as possible of construct-irrelevant variance; for example, a student who has the ability to solve a math problem should not be prevented from responding correctly by the difficulty of the language in the question, since the language, in this instance, is irrelevant to the math construct being tested. Other areas of discussion include predicting possible sources of item bias; including multiple stakeholders in the development of the assessment, along with the level of stakeholder buy-in needed for the assessments to succeed; and planning for the assessments’ continued development.
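To make the idea of items “sticking together” concrete, one common index of internal consistency is Cronbach’s alpha, which compares the variance of individual items to the variance of students’ total scores. The Python sketch below is purely illustrative, using hypothetical response data; it is not the project’s actual psychometric procedure, and the function and data names are invented for this example.

```python
# Illustrative sketch of Cronbach's alpha, a common index of internal
# consistency. All names and data here are hypothetical, not from the
# Hawaiian Language Immersion Assessments project.

def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def cronbach_alpha(scores):
    """scores: one row per student; each row holds that student's
    score on each item intended to measure the same construct."""
    k = len(scores[0])                        # number of items
    columns = list(zip(*scores))              # item-wise score columns
    item_var_sum = sum(variance(col) for col in columns)
    total_var = variance([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / total variance)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical responses: 5 students x 4 items, scored 0 or 1.
responses = [
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
]
alpha = cronbach_alpha(responses)  # ~0.79 for this toy data
```

Higher alpha values (closer to 1) suggest the items are measuring the same underlying construct; low or negative values suggest they are not “sticking together” as intended.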
The project is developing three assessments: one for Hawaiian language arts (HLA), one for mathematics, and one for science, addressing the NGSS. The HLA and mathematics assessments are slated to be field tested in the immersion schools in spring 2015, with the primary purpose of reviewing how the items function. Operational use of the instruments cannot proceed until this type of information is collected and a stable, acceptably functioning instrument is available. An important first step in the process is identifying and prioritizing the intended student learning outcomes. Although the same or equivalent outcomes have been written into the CCSS and NGSS, the outcomes for this context must be adapted to address a different language and culture. Because the validity argument stems from these intended outcomes, technical advice is especially important in these early efforts.